Computer Networks – Module 2
Presented By,
Asst Prof Sandhya C,
SNGCE
Table of Contents
⚫ Principles of Reliable Data Transfer
⚫ Stop-and-Wait and Go-Back-N: design and evaluation
⚫ Connection-oriented transport: TCP
⚫ Connectionless transport: UDP; Principles of congestion control - efficiency and fairness
Principles of Reliable data transfer
⚫ Transport-layer protocols are a central piece of the layered architecture; they provide logical communication between application processes
⚫ Data is transferred in the form of packets, but the challenge lies in transferring the data reliably
⚫ The problem of reliable data transfer arises not only at the transport layer, but also at the application layer and the link layer
⚫ Reliable data transfer means that the data are received in order and error-free
Principles of Reliable data transfer[contd..]
⚫ For example, TCP (Transmission Control Protocol) is a reliable data transfer protocol that is implemented on top of an unreliable network layer: the Internet Protocol (IP), an end-to-end network layer protocol.
Figure: Study of Reliable Data Transfer
Principles of Reliable data transfer [contd..]
⚫ In this model, we design the sender and receiver sides of a protocol so that the service offered to the layer above appears as a reliable channel. On the sending side, the layer receives data from the layer above, breaks the message into segments, puts a header on each segment, and transfers them; the layer below receives each segment and makes it into a packet by adding its own header.
⚫ With a reliable data transfer protocol, none of the transferred data bits are corrupted or lost, and all are delivered in the same sequence in which they were sent. This is the service model offered by TCP to the Internet applications that invoke this transfer of data.
Principles of Reliable data transfer[contd..]
Principles of Reliable data transfer [contd..]
⚫ TCP uses checksums, sequence numbers, acknowledgements, timers, and retransmission to ensure correct and in-order delivery of data to the application processes.
Sliding Window Protocols
⚫ The sliding window is a technique for sending multiple
frames at a time
⚫ It controls the data packets between the two devices where
reliable and gradual delivery of data frames is needed. It is
also used in TCP (Transmission Control Protocol)
⚫ Each frame is sent with a sequence number. The sequence numbers are used to find the missing data at the receiver end. The sliding window technique also uses the sequence numbers to avoid accepting duplicate data
⚫ There are 2 types of Sliding Window Protocol
⚫ Go-Back N ARQ
⚫ Selective Repeat ARQ
Working Principle of Sliding Window
⚫ The sender’s buffer is the sending window, and the
receiver’s buffer is the receiving window in these
protocols
⚫ Sequence numbers are assigned to frames in the range 0 to 2^n - 1, where n is the number of bits in the sequence number field
⚫ The sequence numbers wrap around using modular arithmetic. For example, with a sequence number space of 3, the sequence numbers will be 0, 1, 2, 0, 1, 2, 0, 1, 2 and so on (see the sketch after this list)
⚫ The receiving window's size is the maximum number of frames the receiver can accept at one time; the sending window's size establishes the maximum number of frames a sender can send before receiving an acknowledgement
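As a minimal illustration of the wrap-around numbering above, the following Python sketch assigns sequence numbers modulo 2^n (the 3-bit field size is an assumption chosen only for this example):

# Sketch: sequence-number wrap-around with an n-bit sequence field (n = 3 is assumed).
SEQ_BITS = 3
SEQ_SPACE = 2 ** SEQ_BITS            # sequence numbers run from 0 to 2^n - 1 = 7

def seq_num(frame_index: int) -> int:
    """Return the sequence number carried by the frame at position frame_index."""
    return frame_index % SEQ_SPACE

# The first 12 frames are numbered 0, 1, ..., 7, 0, 1, 2, 3
print([seq_num(i) for i in range(12)])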
Sender Side
⚫ The sequence number of the frame occupies a field in
the frame. So, the sequence number should be kept to
a minimum
⚫ Sequence numbers assigned to frames lie in the range 0 to 2^n - 1
⚫ For example, if the sequence number field allows 4 bits, the window size is 2^4 - 1 = 16 - 1 = 15.
⚫ The sender has a buffer with the same size as the
window
Receiver Side
⚫ On the receiver side, the size of the window is always
1.
⚫ The receiver acknowledges a frame by sending an ACK
frame to the sender, along with the sequence number
of the next expected frame.
⚫ The receiver declares explicitly that it is ready to
receive N subsequent frames, starting with the
specified number.
⚫ We use this scheme in order to acknowledge multiple
frames.
⚫ The receiver requires a buffer size of one.
Go-Back N ARQ
⚫ Go-Back-N Automatic Repeat Query (ARQ)
protocol is also referred to as Go-Back-N
Automatic Repeat Request
⚫ It is a data link layer protocol that uses a sliding window method. In this protocol, if any frame is corrupted or lost, that frame and all subsequent frames have to be sent again
⚫ In this protocol, the sender window size is N.
The size of the receiver window is always one.
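A minimal Python sketch of this Go-Back-N sender behaviour (the window size, frame payloads, and class name are illustrative assumptions): send up to N unacknowledged frames, slide the window on a cumulative ACK, and on timeout resend every outstanding frame.

# Sketch of a Go-Back-N sender; frame transmission is simulated with print().
N = 4                                  # sender window size (assumption for illustration)

class GoBackNSender:
    def __init__(self, frames):
        self.frames = frames           # payloads to send
        self.base = 0                  # oldest unacknowledged frame
        self.next_seq = 0              # next frame to transmit

    def send_window(self):
        # Transmit while the window (at most N outstanding frames) is not full.
        while self.next_seq < len(self.frames) and self.next_seq < self.base + N:
            print(f"send frame {self.next_seq}: {self.frames[self.next_seq]}")
            self.next_seq += 1

    def on_ack(self, ack_num):
        # Cumulative ACK: everything up to and including ack_num is acknowledged.
        self.base = max(self.base, ack_num + 1)
        self.send_window()             # the window slides, so more frames may go out

    def on_timeout(self):
        # Go back N: resend every frame that is still unacknowledged.
        for seq in range(self.base, self.next_seq):
            print(f"resend frame {seq}: {self.frames[seq]}")

sender = GoBackNSender(["a", "b", "c", "d", "e", "f"])
sender.send_window()                   # sends frames 0-3
sender.on_ack(0)                       # ACK 0 arrives: window slides, frame 4 is sent
sender.on_timeout()                    # frames 1-4 are resent (Go-Back-N behaviour)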
Go-Back N ARQ - Example
First, the sender sends the first four frames in the window
(here the window size is 4).
Go-Back N ARQ - Example
Then, the receiver sends the acknowledgment for the 0th
frame.
Go-Back N ARQ - Example
The sender then slides the window over and sends the
next frame in the queue.
Go-Back N ARQ - Example
Accordingly, the receiver sends the acknowledgement for the 1st
frame, and upon receiving that, the sender slides the window again
and sends the next frame. This process keeps on happening until all
the frames are sent successfully.
Go-Back N ARQ [contd..]
Two Scenarios to be dealt with
⮚ When the acknowledgment is lost
⮚ When the packet is lost
Go-Back N ARQ- packet is lost
Go-Back N ARQ- acknowledgment is lost
Summary : Go-Back N ARQ
Stop and Wait vs Sliding Window
Protocol
Selective Repeat ARQ
⚫ Selective Repeat ARQ is also referred to as Selective
Repeat Automatic Repeat Request
⚫ A sliding window method is used in this data link
layer protocol. If frame errors are rare, Go-Back-N
ARQ works well
⚫ However, if the frame contains a lot of errors,
sending the frames again will result in a lot of
bandwidth loss. As a result, we employ the
Selective Repeat ARQ method.
⚫ The size of the sender window is always equal to
the size of the receiver window in this protocol.
The sliding window’s size is always greater than 1.
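A minimal Python sketch of the receiver side of Selective Repeat (the window size and names are illustrative assumptions): frames that arrive out of order but fall inside the window are buffered and individually acknowledged, and buffered frames are delivered as soon as the gap is filled.

# Sketch of a Selective Repeat receiver with a window of N frames (N = 4 is assumed).
N = 4

class SelectiveRepeatReceiver:
    def __init__(self):
        self.expected = 0              # lowest sequence number not yet delivered
        self.buffer = {}               # out-of-order frames held inside the window

    def on_frame(self, seq, data):
        if self.expected <= seq < self.expected + N:
            print(f"ACK {seq}")        # each correctly received frame is ACKed individually
            self.buffer[seq] = data
            # Deliver any in-order run starting at 'expected' to the layer above.
            while self.expected in self.buffer:
                print(f"deliver frame {self.expected}: {self.buffer.pop(self.expected)}")
                self.expected += 1
        elif seq < self.expected:
            # Frame was already delivered earlier: re-ACK it, but do not deliver again.
            print(f"duplicate frame {seq}, re-ACK")

rx = SelectiveRepeatReceiver()
rx.on_frame(0, "a")                    # delivered immediately
rx.on_frame(2, "c")                    # buffered (frame 1 is missing), still ACKed
rx.on_frame(1, "b")                    # fills the gap: frames 1 and 2 are delivered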
Selective Repeat ARQ
Selective Repeat ARQ [contd..]
Selective Repeat ARQ- packet lost
Selective Repeat ARQ- Ack lost
Selective Repeat ARQ- Ack lost [contd..]
Summary : Selective Repeat
Go-Back-N vs Selective Repeat
TRANSMISSION CONTROL PROTOCOL (TCP)
TCP Services
1. Process-to-Process Communication
As with UDP, TCP provides process-to-process communication using port
numbers.
2. Stream Delivery Service
TCP is a stream-oriented protocol. It allows the sending process to deliver
data as a stream of bytes and allows the receiving process to obtain data as a
stream of bytes. TCP creates an environment in which the two processes
seem to be connected by an imaginary “tube” that carries their bytes
across the Internet.
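As an illustration of this byte-stream service, a minimal Python client using the standard socket module might look like the sketch below (the host, port, and request are assumptions chosen only for the example):

# Minimal sketch of a TCP client that treats the connection as a byte stream.
import socket

HOST, PORT = "example.com", 80         # assumed endpoint, for illustration only

with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
    s.connect((HOST, PORT))            # the three-way handshake happens here
    s.sendall(b"GET / HTTP/1.0\r\nHost: example.com\r\n\r\n")
    # recv() returns whatever bytes have arrived so far; TCP preserves byte order,
    # but it does not preserve message boundaries.
    chunk = s.recv(4096)
    print(len(chunk), "bytes received")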
TRANSMISSION CONTROL PROTOCOL (TCP)
TCP Services
3. Sending and Receiving Buffers
Because the sending and the receiving processes may not necessarily
write or read data at the same rate, TCP needs buffers for storage. There
are two buffers, the sending buffer and the receiving buffer, one for each
direction. These buffers are also necessary for flow- and error-control
mechanisms used by TCP.
TRANSMISSION CONTROL PROTOCOL (TCP)
TCP Services
4. Full-Duplex Communication
TCP offers full-duplex service: data can flow in both directions at the same
time. Each TCP endpoint then has its own sending and receiving buffer, and
segments move in both directions.
5. Multiplexing and Demultiplexing
TCP performs multiplexing at the sender and demultiplexing at the
receiver.
6. Connection-Oriented Service
7. Reliable Service
TCP is a reliable transport protocol. It uses an acknowledgment
mechanism to check the safe and sound arrival of data.
Connection Establishment and Termination
Connection establishment
Data Transfer
Connection Termination
Connection Termination
⚫ In a common situation, the client TCP, after
receiving a close command from the client
process, sends the first segment, a FIN segment
in which the FIN flag is set to 1. This informs the
server that the client has finished sending, so resources
such as CPU, buffers, and bandwidth can be released (half-close)
⚫ The server TCP, after receiving the FIN segment,
informs its process of the situation and sends
the second segment, a FIN+ACK segment, to
confirm the receipt of the FIN segment from the
client and at the same time to announce the
closing of the connection in the other direction
Connection Termination [contd..]
⚫The client TCP sends the last segment, an
ACK segment, to confirm the receipt of the
FIN segment from the TCP server
⚫ This segment contains the
acknowledgment number, which is one plus
the sequence number received in the FIN
segment from the server
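A hedged sketch of how an application can trigger this kind of half-close from Python (the host and port are assumptions): shutdown(SHUT_WR) causes the FIN to be sent while the socket can still receive the peer's remaining data.

# Sketch: half-close on a TCP socket. shutdown(SHUT_WR) causes the FIN to be sent,
# but the socket can still receive data until the peer closes its direction too.
import socket

with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
    s.connect(("example.com", 7))      # assumed echo-style service, for illustration
    s.sendall(b"last piece of data")
    s.shutdown(socket.SHUT_WR)         # "I am done sending": the FIN goes out (half-close)
    while True:                        # keep reading until the server closes its side
        data = s.recv(1024)
        if not data:                   # empty read: the server's FIN has arrived
            break
        print("still receiving:", data)
# leaving the 'with' block closes the socket; the final ACK completes termination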
Two – Army Problem
(a)Normal Scenario –
Connection Release
Issues with Connection
Release
(b) Final ACK lost (c) Response lost
Issues with Connection
Release [contd..]
(d) Response lost and
subsequent DRs lost
Three-Way Handshaking
SYN Flooding Attack
The connection establishment procedure in TCP is susceptible to a serious
security problem called SYN flooding attack.
This happens when one or more malicious attackers send a large number of
SYN segments to a server pretending that each of them is coming from a
different client by faking the source IP addresses in the datagrams.
The server, assuming that the clients are issuing an active open, allocates the
necessary resources.
The TCP server then sends the SYN + ACK segments to the fake clients, which
are lost. When the server waits for the third leg of the handshaking process,
however, resources are allocated without being used. If, during this short period
of time, the number of SYN segments is large, the server eventually runs out of
resources and may be unable to accept connection requests from valid clients.
This SYN flooding attack belongs to a group of security attacks known as
denial-of-service attacks, in which an attacker monopolizes a system with so
many service requests that the system overloads and denies service to valid
requests.
Three-Way Handshaking
SYN Flooding Attack
Some implementations of TCP have strategies to
alleviate the effect of a SYN attack.
Some have imposed a limit of connection requests
during a specified period of time.
Others try to filter out datagrams coming from
unwanted source addresses.
One recent strategy is to postpone resource
allocation until the server can verify that the
connection request is coming from a valid IP
address, by using what is called a cookie.
TCP (Transmission control protocol)
⚫Transmission control protocol is one that offers a
reliable, connection-oriented, byte-stream service
⚫TCP services
⚫ Process-to-Process Communication: TCP provides
process-to-process communication using port
numbers
⚫TCP is a stream-oriented protocol: TCP allows the
sending process to deliver data as a stream of bytes
and allows the receiving process to obtain data as a
stream of bytes
TCP (Transmission control protocol)[contd.]
⚫The sending process produces (writes to) the stream of
bytes and the receiving process consumes (reads from)
them
⚫TCP groups a number of bytes together into a packet
called a segment. TCP adds a header to each segment
(for control purposes) and delivers the segment to the IP
layer for transmission
⚫TCP offers full-duplex service, where data can flow in
both directions at the same time
⚫TCP performs multiplexing at the sender and
demultiplexing at the receiver
TCP (Transmission control protocol)[contd.]
⚫Connection-oriented service: when a process at site A
wants to send to and receive data from another process
at site B, the following three phases occur: 1. The two
TCPs establish a virtual connection between them. 2.
Data are exchanged in both directions. 3. The
connection is terminated. Note that this is a virtual
connection, not a physical connection
⚫Reliable Service: TCP is a reliable transport protocol.
It uses an acknowledgment mechanism to check the
safe and sound arrival of data
⚫ Error control, flow control and congestion control
Questions
• Find the applications of TCP
• Explain pipelining and its importance in the
transport layer
• Explain piggybacking with an example.
Do piggybacking and pipelining improve
the resource utilization of networks? Justify.
Internet Transport Protocols - UDP
⮚ Connectionless transport protocol
⮚ UDP is basically just IP with a short header added
⮚ Provides a way for applications to send encapsulated
IP datagrams without having to establish
a connection
⮚ UDP transmits segments consisting of an 8-byte
header followed by the payload
⮚ Ordered delivery of data is not guaranteed: in UDP
there is no guarantee that datagrams are received in the
same order in which they were sent, because the
datagrams are not numbered.
⮚ If a process wants to send a small message and does
not care much about reliability, it can use UDP.
Internet Transport Protocols - UDP
⮚ UDP uses special address to identify the target
process – PORT numbers
⮚ Ports: UDP uses port numbers so that the data can
be sent to the correct destination process. Port numbers
are 16-bit values, so 65,536 ports (0 to 65535) are
available; the well-known ports are 0 to 1023.
⮚ Stateless: UDP is a stateless protocol, which means
that the sender does not get an acknowledgement for
the packets it has sent
⮚ Unreliable protocol
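A minimal UDP send/receive sketch in Python (the loopback address and port are assumptions for illustration); note that there is no connection setup and no delivery guarantee:

# Sketch: connectionless UDP exchange using the standard socket module.
import socket

ADDR = ("127.0.0.1", 9999)             # assumed address and port, for illustration

# Receiver: bind to a port and wait for one datagram.
rx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
rx.bind(ADDR)
rx.settimeout(2.0)                     # datagrams may be lost, so do not wait forever

# Sender: no connection setup is needed; each datagram is independent.
tx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
tx.sendto(b"hello over UDP", ADDR)

try:
    data, peer = rx.recvfrom(2048)
    print("received", data, "from", peer)
except socket.timeout:
    print("datagram lost (UDP gives no delivery guarantee)")
finally:
    tx.close()
    rx.close()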
UDP Services
1. Process-to-Process Communication
⚫ Use socket addresses, a combination of IP addresses
and port numbers for process to process
communication.
2. Connectionless Services
⚫ Connectionless service means that each user
datagram sent by UDP is an independent datagram.
⚫ There is no relationship between the different user
datagrams even if they are coming from the same
source process and going to the same destination
program.
⚫ There is no connection establishment and no
connection termination.
⚫ Each user datagram can travel on a different path.
UDP Services [contd..]
• One consequence of being connectionless is that a
process using UDP cannot send a stream of data to UDP.
• Each request must be small enough to fit into one user datagram,
so UDP suits only processes sending short messages, i.e. messages less
than 65,507 bytes.
Flow Control and Error control
There is no flow control, and hence no window mechanism.
There is no error control mechanism in UDP except for the
checksum. This means that the sender does not know if a message
has been lost or duplicated. When the receiver detects an error
through the checksum, the user datagram is silently discarded.
UDP Services [contd..]
Flow Control and Error control : Checksum
Optional Inclusion of Checksum
The sender of a UDP packet can choose not to calculate the
checksum.
In this case, the checksum field is filled with all 0s
before being sent.
In the situation where the sender decides to calculate
the checksum, but it happens that the result is all 0s,
the checksum is changed to all 1s before the packet
is sent.
UDP Services [contd..]
•The pseudo header is the part of the header of the IP
packet in which the user datagram is to be encapsulated
with some fields filled with 0s . The protocol field is
added to ensure that the packet belongs to UDP, and not
to TCP.
• If a process can use either UDP or TCP, the destination
port number can be the same.
•The value of the protocol field for UDP is 17.
•If this value is changed during transmission, the
checksum calculation at the receiver will detect it and
UDP drops the packet. It is not delivered to the wrong
protocol.
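A hedged sketch of the checksum computation described above (the addresses, ports, and payload are made-up example values): the 16-bit one's-complement sum is taken over the pseudo header, the UDP header with the checksum field set to 0, and the data.

# Sketch: UDP checksum over pseudo header + UDP header + data (illustrative values).
import struct, socket

def ones_complement_sum16(data: bytes) -> int:
    if len(data) % 2:                          # pad to an even number of bytes
        data += b"\x00"
    total = 0
    for (word,) in struct.iter_unpack("!H", data):
        total += word
        total = (total & 0xFFFF) + (total >> 16)   # wrap the carry back in
    return total

def udp_checksum(src_ip, dst_ip, src_port, dst_port, payload: bytes) -> int:
    udp_len = 8 + len(payload)
    # Pseudo header: source IP, destination IP, zero byte, protocol (17 = UDP), UDP length.
    pseudo = struct.pack("!4s4sBBH",
                         socket.inet_aton(src_ip), socket.inet_aton(dst_ip),
                         0, 17, udp_len)
    header = struct.pack("!HHHH", src_port, dst_port, udp_len, 0)   # checksum field = 0
    csum = 0xFFFF & ~ones_complement_sum16(pseudo + header + payload)
    return csum if csum != 0 else 0xFFFF       # an all-0s result is transmitted as all-1s

print(hex(udp_checksum("192.0.2.1", "192.0.2.2", 12345, 53, b"hello")))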
UDP Services [contd..]
⚫ Congestion Control
⚫ UDP does not provide congestion control.
⚫ Encapsulation and Decapsulation
⚫ To send a message from one process to another, the
UDP protocol encapsulates and decapsulates
messages.
⚫ In UDP, queues are associated with ports.
⚫ At the client site, when a process starts, it requests
a port number from the operating system. Some
implementations create both an incoming and an
outgoing queue associated with each process. Other
implementations create only an incoming queue
associated with each process.
UDP Services [contd..]
Multiplexing and Demultiplexing
In a host running a TCP/IP protocol suite, there is
only one UDP but possibly several processes that
may want to use the services of UDP.
The only difference is that UDP provides an
optional checksum to detect corrupted packets
at the receiver site. If the checksum is added to
the packet, the receiving UDP can check the
packet and discard the packet if it is corrupted
The size of the data that the UDP packet can carry would
be 65,535 minus 8 bytes for the header of the UDP
packet and 20 bytes for IP header.
The UDP header contains four fields
⚫ Source port number: It is 16-bit information that identifies
which port is going to send the packet
⚫ Destination port number: It identifies which port is going to
accept the information. It is 16-bit information which is used
to identify application-level service on the destination machine
⚫ If the source host is the client (a client sending a request), the
port number, in most cases, is an ephemeral port number
requested by the process and chosen by the UDP software
running on the source host
⚫ An ephemeral port number is greater than 1023. If the source
host is the server (a server sending a response), the port
number, in most cases, is a well-known port number. Universal
port numbers are assigned for servers. The range of well
known port number is from 0 to 1023.
The UDP header contains four fields
⚫ The destination IP address defines the host among the different hosts.
After the host has been selected, the port number defines one of the
processes on this particular host. The combination of IP address and
port number is called a socket address
⚫ Length: It is a 16-bit field that specifies the entire length of the UDP
packet, including the header. The minimum value is 8
bytes, since the size of the header is 8 bytes
⚫ Checksum: It is a 16-bit field, and it is optional. The
checksum field checks whether the information is accurate, since
there is a possibility that the information gets corrupted during
transmission. Being optional means that it depends upon
the application whether it wants to write the checksum or not. If it does
not want to write the checksum, then all 16 bits are zero; otherwise,
it writes the checksum. In UDP, the checksum is computed over the
entire packet, i.e., the header as well as the data part, whereas in IP the
checksum is applied to the header only
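To make the four-field layout concrete, a small sketch follows (the port numbers and payload are arbitrary examples) that packs the 8-byte UDP header with Python's struct module:

# Sketch: building the 8-byte UDP header (source port, destination port, length, checksum).
import struct

def build_udp_header(src_port: int, dst_port: int, payload: bytes, checksum: int = 0) -> bytes:
    length = 8 + len(payload)                  # header (8 bytes) + data
    # Four 16-bit fields in network byte order; checksum 0 means "not computed".
    return struct.pack("!HHHH", src_port, dst_port, length, checksum)

header = build_udp_header(50000, 53, b"example query")   # ephemeral port -> well-known port 53
print(len(header), "byte header:", header.hex())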
Applications
⚫Client-server: the client sends a short request to the
server and expects a short reply back, e.g. DNS,
RPC, RTP
Slow Start Phase
⚫ Initially, sender sets congestion window size =
Maximum Segment Size (1 MSS)
⚫ After receiving each acknowledgment, sender
increases the congestion window size by 1 MSS.
⚫ In this phase, the size of congestion window
increases exponentially
⚫ Formula :
Congestion window size = Congestion window size +
Maximum segment size
•After 1 round trip time, congestion window size = 2^1 = 2 MSS
•After 2 round trip times, congestion window size = 2^2 = 4 MSS
•After 3 round trip times, congestion window size = 2^3 = 8 MSS and so on
This phase continues until the congestion window size reaches the
slow start threshold.
Threshold
= Maximum number of TCP segments that receiver window can
accommodate / 2
= (Receiver window size / Maximum Segment Size) / 2
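A small sketch of the exponential growth per round-trip time described above (the receiver window of 16 MSS is an assumption chosen only for illustration):

# Sketch: slow start doubles the congestion window every RTT until the threshold.
MSS = 1                        # measure the window in units of MSS
rwnd = 16                      # receiver window = 16 MSS (assumed for illustration)
ssthresh = rwnd // 2           # threshold = half the receiver window, as on the slide

cwnd = 1 * MSS
rtt = 0
while cwnd < ssthresh:
    print(f"after {rtt} RTT: cwnd = {cwnd} MSS")
    cwnd *= 2                  # every outstanding segment is ACKed -> window doubles per RTT
    rtt += 1
print(f"after {rtt} RTT: cwnd = {min(cwnd, ssthresh)} MSS (threshold reached, switch to congestion avoidance)")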
Congestion Avoidance Phase
⚫ After reaching the threshold,
Sender increases the congestion window size
linearly to avoid the congestion
⚫ On receiving each acknowledgement, sender
increments the congestion window size by 1
⚫ This phase continues until the congestion
window size becomes equal to the receiver
window size
⚫ Formula:
Congestion window size = Congestion window
size + 1
Congestion Detection Phase
⚫ When sender detects the loss of segments, it
reacts in different ways depending on how the
loss is detected
⚫ Case-01: Detection On Time Out
⚫ The timeout timer expires before receiving the
acknowledgement for a segment.
⚫ This case suggests the stronger possibility of
congestion in the network.
⚫ There are chances that a segment has been
dropped in the network.
Congestion Detection Phase [contd..]
In this case, sender reacts by-
⚫ Setting the slow start threshold to half of the
current congestion window size.
⚫ Decreasing the congestion window size to 1
MSS.
⚫ Resuming the slow start phase.
Congestion Detection Phase [contd..]
Case-02: Detection On Receiving 3
Duplicate Acknowledgements-
• Sender receives 3 duplicate acknowledgements for a segment.
• This case suggests the weaker possibility of congestion in
the network.
• There are chances that a segment has been dropped, but a few
segments sent later may have reached the receiver
Congestion Detection Phase [contd..]
Case-02: Detection On Receiving 3
Duplicate Acknowledgements-
In this case, sender reacts by-
•Setting the slow start threshold to half of the current
congestion window size.
•Decreasing the congestion window size to slow start
threshold.
•Resuming the congestion avoidance phase.
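The two reactions can be summarised in a short Python sketch (this mirrors the rules on the slides in a simplified model; it is not a full TCP implementation, and the starting values are assumptions):

# Sketch: how the congestion window reacts to the two loss signals described above.
class CongestionState:
    def __init__(self, cwnd, ssthresh):
        self.cwnd = cwnd            # congestion window, in MSS
        self.ssthresh = ssthresh    # slow start threshold, in MSS

    def on_timeout(self):
        # Stronger congestion signal: halve the threshold, restart slow start from 1 MSS.
        self.ssthresh = max(self.cwnd // 2, 1)
        self.cwnd = 1

    def on_three_dup_acks(self):
        # Weaker congestion signal: halve the threshold, resume congestion avoidance.
        self.ssthresh = max(self.cwnd // 2, 1)
        self.cwnd = self.ssthresh

state = CongestionState(cwnd=16, ssthresh=8)
state.on_three_dup_acks()
print(state.cwnd, state.ssthresh)   # 8 8 -> congestion avoidance resumes
state.on_timeout()
print(state.cwnd, state.ssthresh)   # 1 4 -> slow start resumes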
Problem 1
⚫ Consider the effect of using slow start on a
line with a 10 msec RTT and no
congestion. The receiver window is 24 KB
and the maximum segment size is 2 KB.
How long does it take before the first full
window can be sent?
Solution
Given-
⚫ Receiver window size = 24 KB
⚫ Maximum Segment Size = 2 KB
⚫ RTT = 10 msec
⚫ Receiver Window Size-
⚫ Receiver window size in terms of MSS
= Receiver window size / Size of 1 MSS
=24 KB / 2 KB
= 12 MSS
Solution [contd..]
⚫ Slow Start Phase-
⚫ Window size at the start of 1st transmission = 1 MSS
⚫ Window size at the start of 2nd transmission = 2 MSS
⚫ Window size at the start of 3rd transmission = 4 MSS
⚫ Window size at the start of 4th transmission = 6 MSS
⚫ Since the threshold is reached, so it marks the end
of slow start phase.
⚫ Now, congestion avoidance phase begins.
Solution [contd..]
Congestion Avoidance Phase
⚫ Window size at the start of 5th transmission = 7 MSS
⚫ Window size at the start of 6th transmission = 8 MSS
⚫ Window size at the start of 7th transmission = 9 MSS
⚫ Window size at the start of 8th transmission = 10 MSS
⚫ Window size at the start of 9th transmission = 11 MSS
⚫ Window size at the start of 10th transmission = 12 MSS
Solution [contd..]
Congestion Avoidance Phase
⚫ From here,
⚫ Window size at the end of 9th transmission or at
the start of 10th transmission is 12 MSS.
⚫ Thus, 9 RTTs are taken before the first full
window can be sent.
⚫ So, time taken before the first full window is sent
= 9 RTTs
= 9 x 10 msec
= 90 msec
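A quick sketch that reproduces this calculation (using the slides' simplified per-RTT rules: double in slow start, capped at the threshold, then add 1 MSS per RTT):

# Sketch: verify the worked example (rwnd = 24 KB, MSS = 2 KB, RTT = 10 msec).
rwnd_mss = 24 // 2          # receiver window = 12 MSS
ssthresh = rwnd_mss // 2    # threshold = 6 MSS
rtt_ms = 10

cwnd, rtts = 1, 0
while cwnd < rwnd_mss:
    # Slow start doubles per RTT (capped at the threshold), then grows by 1 MSS per RTT.
    cwnd = min(cwnd * 2, ssthresh) if cwnd < ssthresh else cwnd + 1
    rtts += 1

print(rtts, "RTTs =", rtts * rtt_ms, "msec")   # expected: 9 RTTs = 90 msec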
Thank You