Module 4

The document provides a comprehensive overview of transport layer services in networking, detailing its role in process-to-process communication and the core services it offers, such as addressing, encapsulation, multiplexing, flow control, and error control. It distinguishes between connectionless and connection-oriented services, explaining their mechanisms and characteristics, and introduces the Simple Protocol and Stop-and-Wait Protocol as examples of transport layer protocols. Overall, it emphasizes the transport layer's critical function in ensuring reliable and efficient communication across networks.

Uploaded by Ajit s Adin

Transport layer services

This excerpt explains transport-layer services in networking, emphasizing the role of the transport layer in providing process-to-process communication. Here's a summary of the key points covered:

Transport Layer Overview

1. Role: Located between the network and application layers, the transport layer
provides services to application processes and relies on the network layer for host-to-
host communication.
2. Domains of Operation:
o Network Layer: Host-to-host communication.
o Transport Layer: Process-to-process communication, enabling data to reach
the correct application process on a host.

Core Services of the Transport Layer

1. Addressing: Port Numbers


o Purpose: Identifies processes on a host.
o Port Number Ranges:
 Well-known (0–1023): Reserved by ICANN.
 Registered (1024–49151): Registered to prevent duplication.
 Dynamic/Private (49152–65535): Temporary or private use.
o Socket Address: A combination of IP address and port number uniquely
identifies a process.
2. Encapsulation and Decapsulation
o Encapsulation: Sender's transport layer adds a header to the message.
o Decapsulation: Receiver's transport layer removes the header and delivers the
message to the application.
3. Multiplexing and Demultiplexing
o Multiplexing: Combining multiple messages at the sender side.
o Demultiplexing: Delivering messages to the correct process at the receiver
side.
4. Flow Control
o Purpose: Balances the rate of data production and consumption to prevent
data loss or inefficiency.
o Mechanisms:
 Uses buffers at the sender and receiver transport layers.
 Supports pushing (data sent without request) and pulling (data sent on
demand).
5. Error Control
o Purpose: Ensures reliable data delivery in an unreliable network layer
environment.
o Features:
 Detecting and discarding corrupted packets.
 Resending lost or discarded packets.
 Identifying and discarding duplicate packets.
 Buffering out-of-order packets until missing ones arrive.
o Tools:
 Sequence Numbers: Help track packets and manage errors.
 Acknowledgments (ACKs): Confirm safe receipt of packets.

6. Sliding Window Mechanism
o Concept: Manages flow and error control by maintaining a "window" of
packets that can be sent and acknowledged.
o Properties:
 Modulo numbering for sequence numbers.
 Buffers at both ends to track sent and received packets.
 Dynamic window adjustment based on acknowledgments.
 “

Implementation Highlights

 Buffers: Memory locations for temporary storage of data to handle flow control.
 Timers: Used to detect packet loss when acknowledgments are delayed.
 Sliding Window Examples:
o Circular and linear representations illustrate how the sender manages
outstanding packets.

This discussion underscores the transport layer's critical role in reliable communication,
ensuring data is accurately delivered between application processes across networks.


The transport layer in a network is responsible for reliable communication between processes on different devices. It provides essential services such as process-to-process communication, flow control, error control, and multiplexing/demultiplexing. Below is a detailed explanation of each concept and how it works in the context of the transport layer:

1. Process-to-Process Communication
 What it is:
o Communication between application-layer processes (programs) running on
different hosts.
o Example: A web browser (client) requesting data from a web server.
 How it works:
o The network layer delivers data from one host to another (host-to-host
communication).
o The transport layer ensures the data reaches the correct process on the
destination host using port numbers.
o Each process is identified by a unique combination of an IP address and a port
number, forming a socket address.

2. Addressing and Port Numbers

 IP Address: Identifies the host in the network.


 Port Number: Identifies a specific process on the host.
 Port Number Ranges:
o Well-Known Ports (0-1023): Reserved for system services (e.g., HTTP on
port 80, FTP on port 21).
o Registered Ports (1024-49151): For user applications that can register these
ports to avoid conflicts.
o Dynamic/Private Ports (49152-65535): Temporary ports used for client
applications.
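The socket-address idea above can be seen directly in code. Below is a minimal Python sketch; the two addresses at the top are illustrative placeholders, and binding to port 0 asks the operating system to assign an ephemeral (dynamic/private) port.

```python
import socket

# A socket address is the pair (IP address, port number).
# These two addresses are hypothetical, for illustration only.
server_addr = ("192.0.2.10", 80)      # well-known port: HTTP
client_addr = ("192.0.2.99", 52044)   # dynamic/private port

# A UDP socket bound to an address identifies one process on the host.
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.bind(("127.0.0.1", 0))           # port 0: let the OS pick an ephemeral port
ip, port = sock.getsockname()
print(ip, port)
sock.close()
```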

3. Encapsulation and Decapsulation

 Encapsulation:
o The sender process gives a message to the transport layer.
o The transport layer adds a transport header (containing source and
destination port numbers) to the message.
o This combined unit is sent to the network layer for delivery.
 Decapsulation:
o At the receiver, the transport layer extracts the transport header.
o It identifies the destination process using the port number and passes the data
to the application layer.
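The encapsulation and decapsulation steps can be sketched with a toy header. This is an illustrative Python sketch, not a real protocol implementation; the layout borrows the four 16-bit fields of a UDP-style header, and the `encapsulate`/`decapsulate` helpers are hypothetical names.

```python
import struct

# Toy transport header: source port, destination port, length, checksum.
def encapsulate(src_port, dst_port, payload):
    length = 8 + len(payload)                  # header (8 bytes) + data
    header = struct.pack("!HHHH", src_port, dst_port, length, 0)
    return header + payload                    # header is prepended to the message

def decapsulate(packet):
    src_port, dst_port, length, checksum = struct.unpack("!HHHH", packet[:8])
    return dst_port, packet[8:]                # deliver payload to process at dst_port

pkt = encapsulate(52044, 80, b"GET /")
port, msg = decapsulate(pkt)
print(port, msg)                               # 80 b'GET /'
```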

4. Multiplexing and Demultiplexing

 Multiplexing:
o Combining multiple processes' data into one channel at the sender.
o The transport layer adds headers to each packet to identify which process it
belongs to.
 Demultiplexing:
o At the receiver, the transport layer examines headers to direct packets to the
correct processes.
 Example:
o A client has three processes sending data to two servers. The transport layer
multiplexes the data into packets.
o At the servers, the transport layer demultiplexes packets to deliver them to the
appropriate processes.
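Demultiplexing is, at its core, a lookup from destination port to the process bound to it. A minimal sketch (the handler names and port table are hypothetical):

```python
# Demultiplexing sketch: the transport layer inspects the destination port
# in each arriving packet and hands the payload to the matching process.
def web_server(data):
    return f"web got {data!r}"

def dns_server(data):
    return f"dns got {data!r}"

port_table = {80: web_server, 53: dns_server}   # port -> bound process

def demultiplex(dst_port, data):
    handler = port_table.get(dst_port)
    if handler is None:
        return "no process bound: packet dropped"   # real stacks may send ICMP port unreachable
    return handler(data)

print(demultiplex(80, b"GET /"))
print(demultiplex(53, b"query"))
print(demultiplex(9999, b"junk"))
```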

5. Flow Control

 Ensures the sender does not overwhelm the receiver with data.
 Why it’s needed:
o If the sender sends too quickly, the receiver might run out of buffer space,
leading to data loss.
o If the sender sends too slowly, the system becomes inefficient.
 How it works:
o The transport layer uses buffers to temporarily store packets.
o The receiver informs the sender when to pause or resume sending based on
buffer availability.
 Mechanisms:
o Pushing: The sender pushes data without waiting for a request.
o Pulling: The sender waits for the receiver to request data.
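The buffer-based pausing described above can be modeled with a bounded receiver buffer: the sender stops pushing when no buffer space remains and resumes once the receiver consumes data. This is a simplified single-threaded sketch; real transport layers advertise the free space in packet headers rather than sharing memory.

```python
from collections import deque

# Flow-control sketch: the receiver "advertises" its free buffer space,
# and the sender never pushes more than that.
BUF_SIZE = 4
recv_buffer = deque()

def advertised_window():
    return BUF_SIZE - len(recv_buffer)

def sender_push(items):
    sent = []
    for item in items:
        if advertised_window() == 0:    # receiver buffer full: sender must pause
            break
        recv_buffer.append(item)
        sent.append(item)
    return sent

def receiver_consume():
    # Consuming frees buffer space, letting the sender resume.
    return recv_buffer.popleft() if recv_buffer else None

print(sender_push([1, 2, 3, 4, 5, 6]))  # only 4 fit before the sender pauses
receiver_consume()                      # one slot freed
print(sender_push([5, 6]))              # sender may push exactly one more item
```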

6. Error Control

 Ensures reliable data transfer despite errors in the network.


 Functions:
1. Detect and discard corrupted packets.
2. Keep track of lost packets and request retransmission.
3. Identify duplicate packets and discard them.
4. Handle out-of-order packets.
 How it works:

o Sequence Numbers:
 Each packet is assigned a unique sequence number.
 Helps detect missing, duplicate, or out-of-order packets.
o Acknowledgments (ACK):
 The receiver sends an ACK for each successfully received packet.
 If the sender doesn’t receive an ACK within a timeout period, it
retransmits the packet.
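Detecting corrupted packets is typically done with a checksum. The sketch below implements the 16-bit one's-complement Internet checksum used by UDP and TCP; the demo message has even length so the appended checksum stays aligned on a 16-bit boundary.

```python
def internet_checksum(data: bytes) -> int:
    # 16-bit one's-complement sum (the checksum style used by UDP and TCP).
    if len(data) % 2:
        data += b"\x00"                      # pad to whole 16-bit words
    total = 0
    for i in range(0, len(data), 2):
        total += (data[i] << 8) | data[i + 1]
        total = (total & 0xFFFF) + (total >> 16)   # fold any carry back in
    return ~total & 0xFFFF

msg = b"data"                                # even length keeps the checksum aligned
cks = internet_checksum(msg)
# Receiver recomputes over data + checksum; a result of 0 means "no error detected".
check = internet_checksum(msg + cks.to_bytes(2, "big"))
print(check)                                 # 0
```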

7. Sliding Window Protocol


 Combines flow control and error control.
 What it does:
o Manages the number of packets that can be sent without receiving an
acknowledgment.
 How it works:

1. The sender has a buffer (sliding window) of a fixed size.


2. Packets within the window are sent and wait for ACKs.
3. When an ACK is received:
 The window slides forward, freeing up space for new packets.
4. If a packet is lost, the sender retransmits it.

 Window Size:
o Determines the maximum number of unacknowledged packets the sender can
have at any time.
o Example: If the window size is 4, the sender can send 4 packets before waiting
for ACKs.

8. Combining Flow and Error Control

 Both flow and error control require buffers and sequence numbers.
 The sender and receiver use numbered buffers:
o The sender buffers sent packets until ACKs are received.
o The receiver buffers out-of-order packets until missing packets arrive.
 Example:
o A sender sends packets 1, 2, 3, and 4.
o If the receiver ACKs 1 and 2, the sender can slide its window and send packets
5 and 6.
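The sender-side bookkeeping in this example can be sketched as follows. This is a simplified model with cumulative ACKs; the variable names are illustrative.

```python
# Sender-side sliding-window bookkeeping: packets stay buffered until
# acknowledged, and each cumulative ACK slides the window forward.
WINDOW = 4
next_seq = 1
acked_up_to = 0          # highest cumulatively acknowledged sequence number
outstanding = {}         # seq -> buffered copy, kept for possible retransmission

def can_send():
    # outstanding count = next_seq - acked_up_to - 1; allow sending while it
    # is below WINDOW, i.e. next_seq - acked_up_to <= WINDOW.
    return next_seq - acked_up_to <= WINDOW

def send(data):
    global next_seq
    outstanding[next_seq] = data
    next_seq += 1

def on_ack(ackno):
    # Cumulative ACK: everything up to ackno is delivered, so drop those copies.
    global acked_up_to
    for seq in range(acked_up_to + 1, ackno + 1):
        outstanding.pop(seq, None)
    acked_up_to = max(acked_up_to, ackno)

for pkt in ["p1", "p2", "p3", "p4"]:
    send(pkt)
print(can_send())                        # False: window full
on_ack(2)                                # packets 1 and 2 acknowledged
print(can_send(), sorted(outstanding))   # True [3, 4]: room for two more
```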

Visualization of Sliding Window

 Circular Representation:
o Sequence numbers are arranged in a circle (modulo arithmetic).
o The window slides forward as packets are acknowledged.
 Linear Representation:
o Sequence numbers are shown linearly.
o The sliding window moves forward, freeing space for new packets.

Summary of Key Concepts:

Feature               | Purpose                                  | Key Mechanism
Process Communication | Connects processes on different hosts.   | IP addresses + port numbers.
Multiplexing/Demux    | Handles data from/to multiple processes. | Transport headers.
Flow Control          | Prevents overwhelming the receiver.      | Buffers, flow control signals.
Error Control         | Ensures reliable communication.          | Sequence numbers, ACKs.
Sliding Window        | Combines flow and error control.         | Modulo sequence numbers.

This approach ensures reliable, efficient communication between processes across a network.

Explanation of Connectionless and Connection-Oriented Protocols

At the transport layer of the OSI model, there are two main types of protocols:
connectionless and connection-oriented. These types of services define how data is
transmitted between devices in a network. Here's a detailed explanation of both services and
their mechanisms:

Connectionless Service

In a connectionless service:

 Independence of packets: Each chunk of data (called a segment or packet) is treated as an independent unit. There is no logical connection or dependency between packets.
 No setup required: No prior connection is established between the sender and
receiver.
 No sequencing: Packets may arrive out of order at the destination because the service
does not ensure they are received in the order sent.
 No acknowledgment or error checking: There is no feedback from the receiver
about successful delivery or errors.

Key Characteristics:

1. Packet Handling:
o The source process (application) divides the data into chunks.
o The transport layer encapsulates each chunk into a packet and sends it
independently.
2. Challenges:
o Out-of-order delivery: Packets might not arrive in the order they were sent
due to varying network delays.
o Loss of packets: If a packet is lost, the receiver has no way to detect the loss.
o No flow or error control: Since the sender and receiver are not synchronized,
mechanisms like flow control (ensuring the sender doesn’t overwhelm the
receiver) and error control (detecting and correcting errors) are not
implemented.
3. Real-World Example: UDP (User Datagram Protocol) is a widely-used
connectionless transport-layer protocol. It is suitable for applications like video
streaming and online gaming, where speed is more important than reliability.

Illustration (Figure 23.14):

 Three packets (0, 1, and 2) are sent by the client.


 Due to varying delays in the network, the packets arrive at the server out of order (0,
2, 1).
 The receiving application might receive an incorrect or incomplete message if the
packets are not reordered or one is missing.

Connection-Oriented Service

In a connection-oriented service:

 Establishment of a logical connection: Before data is exchanged, the client and server establish a logical connection.
 Sequencing: Packets are numbered, and their delivery order is maintained.
 Acknowledgment and error control: The receiver acknowledges received packets,
and mechanisms are in place to handle errors and lost packets.

Phases of Connection-Oriented Service:

1. Connection Establishment:
o A logical connection is set up between the client and server.
o The transport layers at both ends coordinate to agree on communication
parameters.
2. Data Transfer:
o Packets are sent in sequence, and the receiver ensures they are reassembled in
the correct order.
o Flow control ensures the sender doesn’t overwhelm the receiver.
o Error control detects and corrects errors in the transmission.
3. Connection Teardown:
o After the data transfer is complete, the logical connection is closed.
o Both sides acknowledge the termination of the connection.

Key Features:

 Reliable communication: Ensures data is delivered accurately and in order.


 Suitable for applications requiring high reliability, such as email, file transfer, and
web browsing.
 Example: TCP (Transmission Control Protocol) is the most widely-used connection-
oriented protocol.

Illustration (Figure 23.15):

 The client and server go through connection establishment before data transfer.
 After all data packets (0, 1, 2) are successfully sent and acknowledged, the connection
is closed.

Finite State Machine (FSM) Representation

An FSM represents the behavior of the transport layer as a series of states, events, and
transitions.

Connectionless FSM:

 Single state: The transport layer always remains in the established state, ready to
send and receive packets.

Connection-Oriented FSM:

 Multiple states:
1. Closed State: Initial state where no connection exists.
2. Connection Establishment:
 Open-Wait-I: The local machine sends a connection request.
 Open-Wait-II: Waits for acknowledgment and a connection request
from the remote machine.
 Established State: Connection is fully established, and data transfer
can occur.
3. Connection Teardown:
 Close-Wait-I: Sends a request to close the connection.
 Close-Wait-II: Waits for the remote machine's closure request.
 Closed State: Connection is terminated.

Benefits of FSM:

 Provides a clear framework for designing and understanding transport-layer protocols.


 Allows for systematic handling of events like packet arrival, acknowledgment, and
errors.

Summary Table

Feature          | Connectionless Service | Connection-Oriented Service
Setup Required   | No                     | Yes
Delivery Order   | Not guaranteed         | Guaranteed
Error Control    | None                   | Implemented
Flow Control     | None                   | Implemented
Reliability      | Low                    | High
Example Protocol | UDP                    | TCP
This detailed explanation covers the key aspects of connectionless and connection-oriented
services, their implementation, and their behavior in the transport layer.
TRANSPORT LAYER PROTOCOL STRATEGIES OR MECHANISMS
Explanation of Simple Protocol in Transport Layer

The Simple Protocol is the most basic form of a transport-layer protocol. It is a connectionless protocol that lacks any advanced mechanisms such as flow control, error control, or acknowledgment. Here's a detailed explanation of its components and behavior:

Overview of the Simple Protocol

1. Purpose:
o Provides the simplest communication service between two application
processes using the transport layer.
o Transmits data in a straightforward manner without checking for errors or
ensuring delivery.
2. Key Characteristics:
o Unidirectional communication: Data flows only in one direction (from
sender to receiver).
o No flow control: Assumes the receiver is always ready and can handle
incoming packets without being overwhelmed.
o No error control: Does not detect or correct errors in transmitted packets.
o No retransmission: Lost or corrupted packets are not retransmitted.
3. Assumptions:
o The sender can always send packets when it has data.
o The receiver can process every packet immediately upon arrival.
o No congestion occurs in the network.

How It Works

The protocol's operation is straightforward:

1. Sender:
o The transport layer at the sender takes a message from its application layer.
o It encapsulates the message into a packet and sends it to the receiver through
the network layer.
2. Receiver:
o The transport layer at the receiver receives the packet from its network layer.
o It decapsulates the packet to extract the message.
o The message is delivered to the application layer of the receiver.
Stop-and-Wait Protocol
Note that frames are used at the Data Link Layer, whereas the Transport Layer deals with segments (in TCP) or datagrams (in UDP). With that terminology in mind, the Stop-and-Wait Protocol can be defined as follows:

"The Stop-and-Wait Protocol is a simple flow control mechanism used to ensure reliable data transmission. In this protocol, the sender transmits one data unit (e.g., a segment in the Transport Layer) at a time and waits for an acknowledgment (ACK) from the receiver before sending the next data unit. If the acknowledgment is not received within a specific timeout period, the sender retransmits the data unit. This mechanism ensures reliability but can be inefficient for networks with high latency."

The Stop-and-Wait Protocol is a simple connection-oriented protocol used for reliable communication with flow control and error control. Below is a brief explanation of its components and operation, including an example for better understanding:

How It Works:

1. Flow Control:
The sender sends one packet at a time and waits for an acknowledgment (ACK)
before sending the next one.
2. Error Control:
Each packet includes a checksum to detect errors.
o If the packet is corrupted, the receiver discards it silently (doesn't respond).
o If no acknowledgment is received, the sender assumes the packet was lost or
corrupted and retransmits it after a timeout.
3. Sequence Numbers:
To distinguish new packets from duplicates, packets are numbered with sequence
numbers using modulo-2 arithmetic: 0, 1, 0, 1....
4. Acknowledgment Numbers:
The receiver sends an acknowledgment containing the sequence number of the next
expected packet.
5. Timers:
The sender starts a timer whenever a packet is sent. If the timer expires before an
acknowledgment arrives, the sender retransmits the packet.

Finite State Machines (FSMs):

Sender States:
 Ready State:
o Creates a packet when a request comes from the application.
o Sends the packet and moves to the Blocking State.
 Blocking State:
o Three possible events:
1. Correct ACK:
 Stops the timer and moves back to the ready state to send the
next packet.
2. Corrupted ACK or Incorrect ACK:
 Discards the ACK and remains in the blocking state.
3. Timeout:
 Retransmits the packet and restarts the timer.

Receiver State:

 Always in a Ready State:


o Three possible events:
1. Correct Packet (seqNo = R):
 Delivers the message to the application layer, updates its
expected sequence number (using modulo-2), and sends an
ACK for the next expected packet.
2. Unexpected Packet (seqNo ≠ R):
 Discards the packet but sends an ACK for the last correctly
received packet.
3. Corrupted Packet:
 Discards the packet without delivering it and does not update
its state.
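The sender and receiver FSMs above can be combined into a small simulation. This Python sketch models loss as a random drop and treats a lost transmission as an immediate timeout-and-retransmit (a real implementation would run a timer); the duplicate-detection logic with modulo-2 sequence numbers guarantees each message is delivered exactly once despite losses.

```python
import random

# Stop-and-Wait sketch: one packet outstanding, modulo-2 sequence numbers.
random.seed(7)

def lossy(item, loss=0.3):
    # The channel loses packets and ACKs with some probability.
    return None if random.random() < loss else item

R = 0                       # receiver: next expected sequence number
delivered = []

def receiver(pkt):
    global R
    seq, data = pkt
    if seq == R:            # correct packet: deliver and flip expected seqNo
        delivered.append(data)
        R = 1 - R
    return R                # ACK always carries the next expected seqNo

def send(data, S):
    while True:             # keep (re)transmitting until the correct ACK arrives
        pkt = lossy((S, data))
        if pkt is None:
            continue        # packet lost -> "timeout" -> retransmit
        ack = lossy(receiver(pkt))
        if ack == 1 - S:    # correct ACK: sender moves to the next packet
            return 1 - S    # new value of S

S = 0
for msg in ["m1", "m2", "m3"]:
    S = send(msg, S)
print(delivered)            # ['m1', 'm2', 'm3'] despite losses
```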

Example Walkthrough:

Setup:

 Sender (S) starts at seqNo = 0.


 Receiver (R) expects seqNo = 0.

Scenario 1: Successful Packet Delivery

1. Sender sends Packet 0 with seqNo = 0.


o Timer starts.
2. Receiver receives Packet 0, verifies checksum (error-free), delivers it to the
application, and sends ACK 1 (indicating it expects seqNo = 1).
3. Sender receives ACK 1, stops the timer, and updates S to seqNo = 1.
o Ready to send Packet 1.
Scenario 2: Packet Loss or Corruption

1. Sender sends Packet 1 with seqNo = 1.


o Timer starts.
2. Receiver does not receive the packet (loss) or receives a corrupted packet (checksum
fails).
o No ACK is sent.
3. Sender's timer expires.
o Resends Packet 1.

Scenario 3: Lost Acknowledgment

1. Sender sends Packet 0 with seqNo = 0.


o Timer starts.
2. Receiver receives Packet 0, delivers it, and sends ACK 1.
3. ACK 1 is lost.
4. Sender's timer expires and it resends Packet 0.
5. Receiver recognizes the duplicate packet because it still expects seqNo = 1.
o Resends ACK 1.
6. Sender receives ACK 1, stops the timer, and proceeds.

Summary of Sequence Numbers:

 Sender alternates sequence numbers: 0 → 1 → 0 → 1...


 Receiver expects the same sequence: 0 → 1 → 0 → 1...
 This ensures unambiguous detection of new packets and duplicates.

Visual Flow:

1. Sender → Packet 0 → Receiver


2. Receiver → ACK 1 → Sender
3. Sender → Packet 1 → Receiver
4. Receiver → ACK 0 → Sender
...and so on.

The Stop-and-Wait Protocol is simple but has limitations, such as inefficiency over long or
high-latency links due to waiting for each ACK before proceeding.
GO-BACK-N PROTOCOL

"The Go-Back-N Protocol is a reliable data transmission mechanism used in the Transport Layer. In this protocol, the sender can transmit multiple data segments (up to a specified window size) without waiting for an acknowledgment (ACK) for each one. However, if an error occurs or a segment is lost, the sender goes back and retransmits that segment and all subsequent segments, even if they were correctly received. This protocol ensures reliability while improving efficiency over Stop-and-Wait by allowing continuous data transmission within the window."

The Go-Back-N (GBN) Protocol is a transport-layer mechanism used to ensure reliable data
transmission over potentially unreliable networks. It is designed to allow multiple packets to
be sent before waiting for acknowledgments, improving efficiency compared to protocols that
require acknowledgment after each packet. Below is a detailed breakdown of the concepts
and working of the GBN protocol.

1. Sequence Numbers

 Packets are labeled with sequence numbers to ensure proper ordering and to distinguish between different packets.
 Sequence numbers are modulo 2^m, where m is the number of bits used for the sequence number.
o Example: If m = 3, the sequence numbers range from 0 to 2^3 − 1 = 7.

2. Acknowledgments (ACK)

 The acknowledgment number (ackNo) sent by the receiver is cumulative.


o Example: If ackNo = 7, it means packets with sequence numbers 0 through 6
have been received successfully, and the receiver is expecting packet 7.
 This approach reduces overhead by not requiring individual acknowledgments for
each packet.

3. Sliding Window Concept

The protocol uses a sliding window mechanism to control the flow of packets.

3.1 Send Window (Sender)

 The sender maintains a send window, which is an abstract representation of the sequence numbers that can be sent or are in transit.
 The window divides the sequence numbers into four regions:
 The window divides the sequence numbers into four regions:
1. Sent and acknowledged: These packets are outside the window and no longer
need to be tracked.
2. Outstanding (sent but not yet acknowledged): The sender retains copies of
these packets in case they need to be retransmitted.
3. Ready to send: Packets that can be sent as soon as data is available from the
application layer.
4. Unusable: Sequence numbers outside the window that cannot yet be used
until the window slides.

Variables for Send Window:

 Sf: Sequence number of the first outstanding packet.
 Sn: Sequence number of the next packet to be sent.
 Ssize: Size of the window (fixed at 2^m − 1).

Sliding the Send Window:

 When an acknowledgment is received (ackNo), the send window slides forward.
 Example:
o If Sf = 3 and ackNo = 6, the window slides forward to start at 6 (packets 3, 4, and 5 are now acknowledged).
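The window slide can be written out with modulo arithmetic. A short sketch; `slide` is an illustrative helper, not a standard API.

```python
# Cumulative-ACK window slide with modulo-2^m sequence numbers.
m = 3
SEQ_SPACE = 2 ** m          # sequence numbers 0..7

def slide(Sf, ackNo):
    # ackNo is the next packet the receiver expects, so the window now
    # starts at ackNo; everything from Sf up to ackNo - 1 is acknowledged.
    acked = (ackNo - Sf) % SEQ_SPACE    # how many packets leave the window
    return ackNo, acked

print(slide(3, 6))          # (6, 3): packets 3, 4, 5 leave the window
print(slide(6, 1))          # (1, 3): wraps around; packets 6, 7, 0 acknowledged
```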

3.2 Receive Window (Receiver)

 The receiver has a receive window of size 1 (a key feature of GBN).
 It only accepts packets in order (based on the sequence number Rn, the next expected packet).
o If seqNo = Rn, the packet is delivered, and Rn increments.
o If seqNo ≠ Rn, the packet is discarded.
 Any packet received out of order or with errors is discarded, and an acknowledgment is sent for the last correctly received packet.

4. Timers and Retransmission

 The sender uses one timer for the first outstanding packet (rather than a timer for
each packet).
 If the timer expires (indicating the loss of packets or their acknowledgments), the sender:
o Resends all outstanding packets (from Sf to Sn − 1).
o Restarts the timer.

Example of Timeout and Resending:

 Sf = 3, Sn = 7: Packets 3, 4, 5, and 6 have been sent but not acknowledged.
 Timer expires: Sender resends packets 3, 4, 5, and 6.

5. Why is it Called Go-Back-N?

 On a timeout, the sender goes back to the sequence number Sf (the first outstanding packet) and resends all packets from that point, even if some were successfully received.

6. FSMs (Finite State Machines)

Sender:

1. Ready State:
o Responds to:
 Requests to send packets.
 Receiving acknowledgments.
 Timeout events.
o Sends packets and manages the send window.
2. Blocking State:
o Activated when the window is full (no more packets can be sent).
o Resumes sending upon receiving acknowledgments.

Receiver:

 Always in the ready state.
 Accepts only in-order packets (seqNo = Rn).
 Sends acknowledgments for the last correctly received packet.
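The sender and receiver behavior can be combined into a compact simulation. This sketch is illustrative only: it hard-codes a single packet loss, uses plain (unwrapped) sequence numbers for readability, and models the timeout as firing once the window drains with packets still unacknowledged.

```python
# Go-Back-N sketch: a window of packets in flight; on a "timeout" the sender
# resends everything from Sf onward, even copies the receiver already got.
WINDOW = 7                           # e.g. 2^3 - 1 for m = 3
data = [f"p{i}" for i in range(10)]
lost_once = {4}                      # packet 4 is lost on its first transmission

Rn = 0                               # receiver: next expected (receive window = 1)
delivered = []
transmissions = []                   # log of every (re)transmission

def receiver(seq, payload):
    global Rn
    if seq == Rn:                    # in order: deliver and advance
        delivered.append(payload)
        Rn += 1
    return Rn                        # cumulative ACK: next expected packet

Sf = Sn = 0
while Sf < len(data):
    # Fill the pipe up to the window limit.
    while Sn < len(data) and Sn - Sf < WINDOW:
        transmissions.append(Sn)
        if Sn in lost_once:
            lost_once.discard(Sn)    # this copy vanishes in the network
        else:
            Sf = max(Sf, receiver(Sn, data[Sn]))
        Sn += 1
    if Sf < Sn:                      # unacked packets remain: timeout, go back
        Sn = Sf                      # resend all outstanding packets

print(delivered == data, len(transmissions))   # True 16: 10 sends + 6 resends
```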

7. Send Window Size Limitation

 The send window size must be less than 2^m.
 If the window size equals 2^m, ambiguities arise due to sequence number wrapping.

Example:

 m = 2, so 2^m = 4 and the sequence numbers are 0, 1, 2, 3.
 If the window size is 4, and acknowledgments for packets 0, 1, 2, and 3 are lost:
o The sender resends packets 0, 1, 2, and 3.
o The receiver (in the next cycle) expects packet 0. It accepts the duplicate as a new packet, causing errors.
 To avoid this, the send window size is limited to 2^m − 1.

Summary:

The Go-Back-N protocol is an efficient way to manage reliable transmission by allowing multiple packets to be sent before acknowledgment while ensuring order and data integrity.
Its key features, such as cumulative acknowledgments, a sliding window mechanism, and
timed retransmissions, enable high performance while maintaining simplicity in
implementation.

SELECTIVE REPEAT PROTOCOL


Selective-Repeat (SR) Protocol Explanation

The Selective-Repeat (SR) protocol is a sliding window protocol used in reliable transport
layer communication, which improves upon Go-Back-N by selectively retransmitting only
lost packets. Unlike GBN, which retransmits all packets after a packet loss, SR resends only
the specific packets that are lost.

Key Concepts in Selective-Repeat Protocol:

1. Sliding Windows:
o Send Window: It controls the sending of packets. The size of the send window is limited to 2^(m−1). For example, if m = 4, the window can handle 8 packets.
o Receive Window: It controls which packets the receiver can accept. The receive window also has a size of 2^(m−1) and helps store packets that arrive out of order.
2. Handling Packet Acknowledgments:
o In SR, an ACK acknowledges receipt of a specific packet (not all cumulative
packets as in GBN).
o The ACK number refers to the sequence number of the packet that was
correctly received.
3. Timer Mechanism:
o The protocol uses timers for each outstanding packet. This prevents the
sender from mistakenly resending packets already received.
o When a timer expires, only the packet corresponding to that timer is
retransmitted.

Sender Operation in Selective-Repeat:

The sender maintains two main states: Ready and Blocking.

1. Ready State:

 If the sender has data to send and the send window isn't full, it sends a new packet with sequence number Sn.
 The sender starts a timer for this packet. The sequence number Sn is then incremented.
 If an ACK arrives with ackNo, the corresponding packet is acknowledged. If the
ACK points to Sf (the first outstanding packet), the window slides and the
acknowledged packets are removed from the send window.
 If an ACK doesn’t refer to an outstanding packet or is corrupted, it is discarded.
 If the window is full, the sender transitions to the Blocking State.

2. Blocking State:
 In this state, the sender waits for space to open in the send window.
 ACKs can help slide the window and release packets for sending. If an ACK points to
Sf, the window slides, and if it’s not full, the sender goes back to Ready.
 If a corrupted ACK or an ACK for an invalid packet arrives, it is discarded.
 Timer Expiry: If a timer expires, the sender retransmits the unacknowledged packet
and restarts the timer.

Receiver Operation in Selective-Repeat:

The receiver, in the Ready State, performs the following actions:

 Correct Packet:
o If a packet with sequence number seqNo falls within the receive window, it is accepted.
o An ACK is sent with ackNo = seqNo. If seqNo equals Rn (the next expected
sequence number), the packet and all consecutive packets are delivered to the
application layer. The receive window slides forward to point to the first
empty slot.
 Out-of-Order Packet:
o If the received packet’s seqNo is outside the receive window, the packet is
discarded but an ACK is sent with ackNo = Rn to indicate which packets
should be acknowledged. This helps sender slide its window.
 Corrupted Packet:
o The packet is discarded and no ACK is sent.
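The receiver behavior above can be sketched as follows: out-of-order packets inside the receive window are buffered, and delivery to the application happens only in order. The helper names are illustrative, and the sketch uses the receive window size of 2^(m−1).

```python
# Selective-Repeat receiver sketch: buffer out-of-order arrivals, deliver in order.
m = 2
SEQ_SPACE = 2 ** m
RWS = 2 ** (m - 1)         # receive window size = 2^(m-1) = 2
Rn = 0                     # start of the receive window (next expected seqNo)
buffer = {}                # seqNo -> payload, for out-of-order arrivals
delivered = []

def in_window(seq):
    return (seq - Rn) % SEQ_SPACE < RWS

def on_packet(seq, payload):
    global Rn
    if not in_window(seq):
        return Rn          # outside window: discard, re-ACK so sender can slide
    buffer[seq] = payload
    # Deliver any consecutive run starting at Rn, sliding the window forward.
    while Rn in buffer:
        delivered.append(buffer.pop(Rn))
        Rn = (Rn + 1) % SEQ_SPACE
    return seq             # SR ACK names the specific packet received

on_packet(1, "p1")         # out of order: buffered, not yet delivered
print(delivered)           # []
on_packet(0, "p0")         # fills the gap: both delivered in order
print(delivered)           # ['p0', 'p1']
```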

Diagram Explanation:

The protocol uses two windows:

 Send Window: Displays packets sent but not yet acknowledged.


 Receive Window: Shows the next expected packet, with shaded boxes indicating
packets that arrived out of order and are waiting for the missing packet.

Timers:

 Selective-Repeat protocol uses one timer for each outstanding packet. This means
if a packet hasn’t been acknowledged within a certain time, only that packet is
retransmitted.
 If multiple packets are lost, only the corresponding packets are retransmitted,
making it more efficient compared to Go-Back-N, which resends all unacknowledged
packets.
Acknowledgments (ACK):

 In GBN (Go-Back-N), an ACK is cumulative, meaning it acknowledges all previous packets and tells the sender what the next expected packet is.
 In SR, the ACK is specific to a packet. It tells the sender the exact packet that was
correctly received.

24.2 USER DATAGRAM PROTOCOL (UDP)

The User Datagram Protocol (UDP) is a connectionless, unreliable transport-layer protocol used primarily in applications that do not require guaranteed delivery, such as
streaming or real-time communications. UDP is simple, lightweight, and efficient, making it
suitable for applications where quick transmission of data is more important than reliability.
Unlike TCP, UDP doesn’t establish a connection before sending data, and it doesn’t perform
error checking or retransmissions. Instead, it simply sends datagrams directly to the
destination.

Key Characteristics of UDP:

 Connectionless: No need to establish a connection before communication begins. Each message is sent independently.
 Unreliable: UDP does not guarantee the delivery of data, in-order delivery, or
duplicate elimination. It’s up to the application to handle such issues.
 Simple and Efficient: Low overhead compared to TCP, with minimal interactions
between sender and receiver.
 Low Latency: Since there is no connection setup or verification, data packets can be
sent quickly.

24.2.1 UDP Header Format:

The UDP packet, also called a user datagram, has a fixed-size header of 8 bytes. The UDP
header includes four fields, each 2 bytes (16 bits):

UDP Header Format:

Field | Description | Size
Source Port | The port number from which the data is sent. | 2 bytes (16 bits)
Destination Port | The port number to which the data is sent. | 2 bytes (16 bits)
Length | Total length of the user datagram (header + data). | 2 bytes (16 bits)
Checksum | Optional field used for error checking. | 2 bytes (16 bits)

Fields of the UDP Header:


1. Source Port:
o This is a 16-bit field where the port number from which the data is sent is
defined.
2. Destination Port:
o This 16-bit field defines the port number to which the data is destined.
3. Length:
o This field indicates the total length of the UDP datagram, including both
the header and the data. As a 16-bit field it can represent values up to
65,535 bytes, but in practice the value is smaller because the entire UDP
datagram must fit inside an IP datagram, which is itself limited to 65,535
bytes including the IP header.
4. Checksum:
o This is an optional 2-byte field used to verify the integrity of the data. It helps
detect errors during transmission. However, not all implementations use this
field.
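As a sketch, the four 16-bit fields above can be packed and parsed with Python’s standard struct module; the port numbers and payload here are made-up examples.

```python
import struct

def build_udp_header(src_port, dst_port, payload, checksum=0):
    length = 8 + len(payload)          # fixed 8-byte header + data
    # "!HHHH": four unsigned 16-bit fields in network (big-endian) order
    return struct.pack("!HHHH", src_port, dst_port, length, checksum) + payload

def parse_udp_header(datagram):
    src, dst, length, checksum = struct.unpack("!HHHH", datagram[:8])
    return {"src_port": src, "dst_port": dst,
            "length": length, "checksum": checksum,
            "data": datagram[8:]}

dgram = build_udp_header(49152, 53, b"query")
fields = parse_udp_header(dgram)
print(fields["length"])    # 13: 8-byte header + 5 bytes of data
```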

24.2.2 UDP Services: Detailed Explanation

In this section, we explore the various services provided by the User Datagram Protocol
(UDP). Even though UDP is a simple and connectionless protocol, it still offers several
essential services required for reliable data transfer, albeit with limitations compared to TCP.
Let’s break down these services:

1. Process-to-Process Communication:

UDP provides process-to-process communication using socket addresses. A socket
address is a combination of an IP address (host address) and a port number (which
identifies the application or service). Each communication pair involves a source port and a
destination port.

 Socket Address:
A socket address uniquely identifies the endpoint (a combination of IP and port). For
example, the process at IP_ADDRESS:PORT_NUMBER.

2. Connectionless Services:

UDP is connectionless, meaning each datagram (UDP packet) is independent. There is no


established connection between sender and receiver, and each packet can be routed
independently across the network.

 No Relationship Between Packets: Unlike TCP, UDP doesn’t establish a
connection or perform connection termination. Each datagram can follow a different
path, and there is no reordering of packets.
 Independent Datagrams:
o Message Boundaries: Processes using UDP cannot send a stream of data
and expect the protocol to break it into smaller datagrams. Each request
must be small enough to be sent as one individual datagram (at most
65,507 bytes of data).
o Datagram Size Limit: The total size of a UDP datagram is bounded by the
maximum IP datagram size of 65,535 bytes. Subtracting the 20-byte IP
header and the 8-byte UDP header leaves at most 65,507 bytes for data.

3. Flow Control:

UDP does not include flow control mechanisms. Flow control is the process of managing the
pace of data transfer between sender and receiver, preventing one from overwhelming the
other.

 No Window Mechanism: UDP doesn’t regulate the rate at which data is sent, and the
receiver may overflow with incoming packets. The sender has no mechanism to
know when the receiver’s buffer is full.
 Application-Level Flow Control: The responsibility of handling flow control falls
on the application itself, if required.

4. Error Control:

UDP does not provide error control for reliable transmission. It simply sends data without
ensuring its arrival or accuracy. The only form of error detection in UDP is through the
checksum field.

 No Acknowledgments: UDP does not perform retransmissions. The sender doesn’t
know whether a packet was lost, corrupted, or delivered successfully.
 Checksum:
o The checksum in UDP helps detect corruption (errors in transmission). It
doesn’t guarantee delivery but helps the receiver discard corrupted packets.
o Checksum Calculation:
The checksum is calculated by combining parts of the IP header, UDP header,
and the data from the application layer. It’s designed to detect packet
corruption. The checksum value is placed in the packet header.
 Protocol Field: In the checksum calculation, the protocol field
indicates whether the packet belongs to UDP (value 17). If this value
changes during transmission, the checksum will detect it, and the
packet will be dropped.

5. Checksum:
The checksum in UDP provides a simple method for detecting errors in the transmission. If
the checksum matches, the packet is accepted; if it doesn't, the packet is silently discarded.

 Pseudoheader: For checksum calculation, the UDP protocol combines the UDP
header with a pseudoheader. The pseudoheader includes the IP addresses, protocol
field, and UDP length.
o Pseudoheader for Checksum:
 Source IP + Destination IP + Protocol (17 for UDP) + UDP length
 The checksum is the one’s complement of the one’s complement sum of
these values together with the UDP header and the data.
 Optional Checksum Inclusion:
o The checksum may be optional in UDP. A sender that does not compute it
fills the field with all 0s to indicate that checksum verification isn’t
performed.
o If the computed checksum happens to be all 0s (i.e., the sum was all 1s), it
is sent as all 1s instead, so it cannot be mistaken for “no checksum.”
o Examples:
 a) If the checksum isn’t calculated, the field is sent as all 0s.
 b) If the checksum calculation results in all 0s, it is complemented to
all 1s before sending.
 c) A calculated checksum is therefore never transmitted as all 0s; a
field of all 0s always means the checksum was not used.
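A sketch of the pseudoheader checksum just described, in Python. The IP addresses and segment contents are made-up, and real stacks compute this in the kernel.

```python
import socket
import struct

def ones_complement_sum(data):
    if len(data) % 2:                      # pad odd-length data with a zero byte
        data += b"\x00"
    total = sum(struct.unpack("!%dH" % (len(data) // 2), data))
    while total >> 16:                     # fold carries back into 16 bits
        total = (total & 0xFFFF) + (total >> 16)
    return total

def udp_checksum(src_ip, dst_ip, udp_header_and_data):
    # Pseudoheader: source IP + destination IP + zero byte + protocol (17) + UDP length
    pseudo = (socket.inet_aton(src_ip) + socket.inet_aton(dst_ip) +
              struct.pack("!BBH", 0, 17, len(udp_header_and_data)))
    csum = 0xFFFF ^ ones_complement_sum(pseudo + udp_header_and_data)
    return csum or 0xFFFF                  # an all-0s result is sent as all 1s

# UDP header with checksum field zeroed, plus 5 bytes of data
seg = struct.pack("!HHHH", 49152, 53, 13, 0) + b"query"
print(hex(udp_checksum("192.0.2.1", "192.0.2.2", seg)))
```

A receiver verifies by summing the pseudoheader and the segment (checksum included); the folded result must come out as all 1s.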

6. Congestion Control:

UDP does not provide congestion control. Congestion control regulates the rate at which
data is sent to prevent network congestion.

 Assumption: UDP packets are small and sporadic, so they don’t typically cause
congestion. However, this may no longer be true today with UDP being used for
real-time audio/video streaming.
 Current Reality: In modern applications, especially multimedia streaming, UDP
can contribute to network congestion, requiring application-level congestion control.

7. Encapsulation and Decapsulation:

UDP encapsulates application data into a UDP datagram and decapsulates it when it reaches
the destination.

 The UDP header adds minimal overhead (8 bytes) to the data before sending it across
the network.

8. Queues and Ports:


In UDP, queues are associated with ports. When a process starts, it requests a port from the
operating system.

 Client Port: At the client side, the operating system can create both an incoming and
outgoing queue associated with each process.
 Inbound Queue: Other implementations may create only an incoming queue.
 Port Multiplexing: In UDP, multiple processes may use the same protocol, so
multiplexing and demultiplexing happens at the port level.

9. Multiplexing and Demultiplexing:

In a host running TCP/IP, UDP multiplexes multiple processes by assigning unique port
numbers.

 Multiplexing: A single UDP instance can handle multiple application-level
processes through port numbers.
 Demultiplexing: When data arrives, port numbers help distinguish which process
the data is meant for.

10. Comparison Between UDP and a Generic Simple Protocol:

UDP differs from other simple connectionless protocols because it may optionally
include a checksum to detect packet corruption.

 Simple Protocol: Some connectionless protocols only send packets without error
checking, unlike UDP, which provides optional checksum detection.

Summary:

 UDP provides simple, connectionless communication with minimal overhead.
provides basic error detection, but it doesn’t guarantee delivery.
 UDP assumes the transmitted data is small enough to fit into individual packets
(usually under 65,507 bytes).
 Multiplexing at UDP ports allows multiple applications to communicate using the
same protocol.

This makes UDP ideal for applications that prioritize low latency and efficiency, like real-
time applications, audio/video streaming, and online gaming, but it shouldn’t be used for
applications where reliable data transfer is crucial.

APPLICATIONS
Here's a short table summarizing UDP applications:

Feature Explanation
Connectionless Ideal for simple request-response communication with quick interactions.
Use Cases - DNS, TFTP, SNMP, RIP.
Error Control No error control; suitable for real-time applications like Skype.
Flow Control No flow control; applications must handle it themselves.
Multicasting Supported by UDP, useful for broadcasting to multiple recipients.
Speed Faster due to minimal connection setup/termination (less overhead).
Real-Time Suitable for interactive and low-latency applications (audio/video).

This table provides a brief summary of typical UDP applications and features.
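The request-response pattern in the table can be seen with the standard socket module. This sketch runs both ends on the loopback address and lets the OS pick the port; no handshake takes place at any point.

```python
import socket

# "Server": bind to loopback and let the OS pick a free port.
server = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
server.bind(("127.0.0.1", 0))
server_addr = server.getsockname()

# "Client": no connection setup, just send a datagram and wait for the reply.
client = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
client.settimeout(2.0)
client.sendto(b"ping", server_addr)

data, client_addr = server.recvfrom(1024)   # each datagram stands alone
server.sendto(b"pong", client_addr)

reply, _ = client.recvfrom(1024)
print(reply.decode())                        # pong
client.close()
server.close()
```

If either datagram were lost, neither side would retransmit: that responsibility belongs to the application, exactly as the table says.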

TCP
SERVICES

Here’s a short table summarizing the Transmission Control Protocol (TCP) with its key
features, explanations, and examples:

Feature | Explanation | Example
Connection-Oriented | TCP establishes a logical connection between sender and receiver before data transfer. | File transfer, streaming videos.
Reliable Service | TCP ensures error detection and retransmission of lost packets using ACKs. | File downloads, web browsing.
Stream Delivery | Data is sent as a stream of bytes rather than individual messages. | Web applications, email.
Full-Duplex Communication | Data flows simultaneously in both directions. | VoIP calls, live chats.
Multiplexing | Multiple communication channels using port numbers for differentiation. | Web servers, FTP servers.
Segments | TCP breaks data into segments and encapsulates them inside IP datagrams for transmission. | File transfers over TCP.
Buffers | TCP uses sending and receiving buffers to handle disparities in data flow rates. | Buffering for large file transfers, streaming media.
ACKs & Retransmission | TCP acknowledges received data and retransmits lost/corrupted packets. | Email transmission, file sharing.
Error Control | TCP includes error detection (checksum) and retransmission to ensure data integrity. | Online shopping carts, secure file transfers.
Flow Control | TCP manages data flow to prevent overwhelming the receiving side. | Streaming services, real-time data communication.
Out-of-Order Delivery | TCP can handle out-of-order delivery of segments and reorders them at the receiver. | Online gaming, live video streaming.

Example Explanations:

 File Transfer: TCP is used for file downloads because it ensures data integrity by
providing reliable data transfer and retransmitting lost packets.
 Streaming Videos: TCP is useful for streaming videos where reliable delivery of
large amounts of data is essential, and out-of-order data can be reassembled.
 Email Communication: TCP handles email transmission securely by ensuring that
messages are delivered reliably, with error detection and retransmission if needed.
 Online Gaming: TCP can handle out-of-order delivery to some extent, making it
suitable for multiplayer online gaming where real-time data must be handled
efficiently.

This table highlights TCP's connection-oriented, reliable service and error control
mechanisms with examples for clarity.


Detailed Explanation of TCP Segment

A TCP segment is a structured data unit used in the Transmission Control Protocol (TCP)
to ensure reliable communication between two processes over a network. Let’s break down
the explanation step by step.

TCP Segment Format

A TCP segment consists of two main parts:

1. Header (20–60 bytes): Contains control information required for communication.
2. Data: The payload from the application layer.

1. Header Structure

The TCP header contains several fields, each serving a specific purpose. Let’s look at these
fields in detail:

a. Source Port Address (16 bits)


 This field identifies the port number of the sending application (process) on the
source host.
 Example: If the sending application is a web browser, the source port might be
dynamically assigned (e.g., 49152).

b. Destination Port Address (16 bits)

 This field identifies the port number of the receiving application (process) on the
destination host.
 Example: A web server typically uses port 80 for HTTP or port 443 for HTTPS.

c. Sequence Number (32 bits)

 The sequence number uniquely identifies the first byte of data in this segment.
 TCP assigns a sequence number to each byte of data for reliability and ordering.
 During connection establishment, each side generates an Initial Sequence Number
(ISN), which starts the numbering process.

d. Acknowledgment Number (32 bits)

 This field indicates the next byte the receiver expects to receive.
 Example: If the sender transmitted bytes numbered 1–100 successfully, the
acknowledgment number will be 101.
 It ensures reliable communication by confirming receipt of data.

e. Header Length (HLEN) (4 bits)

 Indicates the size of the TCP header in 4-byte words.


 Minimum: 5 (5 × 4 = 20 bytes, no options).
 Maximum: 15 (15 × 4 = 60 bytes, includes options).

f. Reserved (6 bits)

 Reserved for future use; usually set to 0.

g. Control Flags (6 bits)

These are individual bits used to control TCP behavior:

Flag Description
URG Urgent pointer field is valid.
ACK Acknowledgment field is valid.
PSH Push data immediately to the application.
RST Reset the connection.
SYN Synchronize sequence numbers during connection.
FIN Finish the connection (terminate gracefully).

h. Window Size (16 bits)


 Indicates the number of bytes the receiver is willing to accept (i.e., the size of the
receiving window or rwnd).
 Maximum value: 65,535 bytes.

i. Checksum (16 bits)

 Used for error detection during transmission.


 The checksum ensures data integrity by verifying that the received data matches the
original data.

j. Urgent Pointer (16 bits)

 Valid only if the URG flag is set.


 Indicates the position of urgent data within the segment.
 Urgent data is typically used for high-priority or control messages.

k. Options and Padding (0–40 bytes)

 Optional field used for additional features like Maximum Segment Size (MSS),
timestamps, or selective acknowledgments.
 Padding ensures that the header length is a multiple of 4 bytes.
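The fixed 20-byte layout above can be decoded with Python’s struct module. The sample below builds a SYN segment with made-up port numbers and an arbitrary sequence number, then parses it back.

```python
import struct

FLAG_NAMES = ["URG", "ACK", "PSH", "RST", "SYN", "FIN"]

def parse_tcp_header(segment):
    (src, dst, seq, ack, off_flags, window,
     checksum, urgent) = struct.unpack("!HHIIHHHH", segment[:20])
    hlen = (off_flags >> 12) * 4                    # HLEN counts 4-byte words
    flags = [name for i, name in enumerate(FLAG_NAMES)
             if off_flags & (1 << (5 - i))]         # URG is bit 5, FIN is bit 0
    return {"src_port": src, "dst_port": dst, "seq": seq, "ack": ack,
            "header_len": hlen, "flags": flags, "window": window}

# Build a SYN segment: HLEN = 5 (no options), SYN flag set (bit 1)
raw = struct.pack("!HHIIHHHH", 49152, 80, 15000, 0,
                  (5 << 12) | 0b000010, 65535, 0, 0)
print(parse_tcp_header(raw))
```

The checksum and urgent pointer are left at zero here; a real stack would fill the checksum over a pseudoheader just as UDP does.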

2. Data Section

 The application layer payload (e.g., HTTP request or response).


 Encapsulated within the TCP segment after the header.

Encapsulation

TCP segments are encapsulated in an IP datagram for transmission across the network. The
process works as follows:

1. TCP Segment:
o Created at the transport layer with a header and data.
2. IP Datagram:
o Encapsulates the TCP segment.
o Adds an IP header with source and destination IP addresses.
3. Data-Link Frame:
o Encapsulates the IP datagram.
o Adds a frame header and trailer for delivery over the physical network.

Detailed Field Workflow Example

Let’s consider a scenario where a client sends a request to a web server:


1. Source Port and Destination Port:
o Source port: 49152 (assigned by the operating system).
o Destination port: 80 (for HTTP).
2. Sequence Number:
o Suppose the client sends data starting at byte number 1,000. The sequence
number is set to 1,000.
3. Acknowledgment Number:
o The server receives bytes 1,000–1,999 and sends back an acknowledgment
with the number 2,000.
4. Flags:
o During connection setup:
 Client sends a segment with the SYN flag to initiate the connection.
 Server responds with SYN and ACK flags set.
5. Checksum:
o Ensures the data is transmitted without corruption.

Key Features of TCP Segment

1. Reliability:
o Sequence numbers and acknowledgment numbers ensure data is delivered
without errors or loss.
2. Flow Control:
o The window size allows the sender to regulate the amount of data sent, based
on the receiver’s capacity.
3. Error Detection:
o The checksum verifies data integrity.
4. Connection Control:
o Control flags (SYN, FIN, RST) manage connection setup, termination, and
reset.


TCP THREE-WAY CONNECTION ESTABLISHMENT
1. TCP Connection Establishment

When two devices (like a client and a server) want to communicate over the internet using
TCP, they need to establish a reliable connection. To ensure that both devices are ready to
send and receive data, they go through a special process called the Three-Way Handshake.
Here’s the basic idea:

Step 1: Client Sends SYN (Synchronize)

 Client's Action: The client wants to start a communication with the server. It sends a
SYN (synchronize) message to the server. The SYN is a signal telling the server, "I
want to connect and I’m ready to communicate."
 The client also sends a random number called a Sequence Number (ISN), which
acts as the starting point for counting the data it will send.
 Why SYN is used: The SYN flag is used for synchronization, to make sure both
devices are on the same page in terms of data transfer.

Step 2: Server Sends SYN + ACK (Acknowledge)

 Server's Action: The server receives the SYN message and replies with a SYN +
ACK (acknowledge) message.
o SYN part: This is the server’s own way of saying "I am ready to
communicate."
o ACK part: The server acknowledges (or says "yes") to the client's request and
confirms the client's sequence number by adding 1 to it and sending that as an
Acknowledgment Number.
 The server also picks its own random Sequence Number (ISN), which will be used
for communication in the other direction (from server to client).
 Why SYN + ACK is used: This is a way for the server to acknowledge the client’s
request while also starting its own communication with the client.

Step 3: Client Sends ACK (Acknowledge)

 Client’s Final Action: The client sends an ACK message back to the server. This
message acknowledges the server's SYN + ACK message.
 The client sends the Acknowledgment Number (which is the server’s Sequence
Number + 1) to confirm that it received the server's SYN + ACK.
 Now, the connection is officially established, and both the client and server can start
sending data.

2. Why Three-Way Handshake is Needed:

This three-step process is necessary to:

 Make sure both sides are ready to communicate.


 Agree on the sequence numbers (starting points) for sending and receiving data.
These numbers ensure that both sides can properly assemble the data later.
 Ensure data is sent in the correct order, and both sides know where to start
counting.

3. Example of the Three-Way Handshake:

Imagine the client and server as two people who want to talk to each other. Here’s how they
do it:
 Client: "Hey, I want to talk! Here’s my starting point number 15000." (This is like
sending a SYN message with a random number, say 15000).
 Server: "Okay! I’m ready. Here’s my starting point number 8000, and I acknowledge
your number 15000." (This is the SYN + ACK message with two sequence numbers:
one from the client and one from the server).
 Client: "Got it! I acknowledge your number 8000, and I’m ready to talk." (This is the
final ACK message).

After this process, they both agree on the starting points and can start sending actual data.
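The exchange can be written out as a toy simulation. It reuses the ISNs 15000 and 8000 from the example above; real stacks choose them randomly.

```python
def three_way_handshake(client_isn, server_isn):
    log = []
    # Step 1: client -> server, SYN carries the client's ISN
    log.append(("SYN", {"seq": client_isn}))
    # Step 2: server -> client, SYN + ACK acknowledges client_isn + 1
    log.append(("SYN+ACK", {"seq": server_isn, "ack": client_isn + 1}))
    # Step 3: client -> server, ACK acknowledges server_isn + 1
    log.append(("ACK", {"seq": client_isn + 1, "ack": server_isn + 1}))
    return log

for msg, fields in three_way_handshake(15000, 8000):
    print(msg, fields)
```

The "+1" in each acknowledgment reflects that a SYN consumes one sequence number even though it carries no data.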

4. SYN Flooding Attack (Why It’s a Problem)

Now, here's where things get tricky. A SYN Flooding Attack is a type of attack that takes
advantage of this connection process (the Three-Way Handshake).

What Happens in a SYN Flooding Attack?

 Malicious Attacker's Action: The attacker sends many SYN messages (like step 1 of
the handshake) to the server, pretending to be many different clients. The attacker
fakes the source IP addresses, so the server thinks it’s receiving connection requests
from many different machines.
 Server's Response: The server sees these requests and replies to each of them with
SYN + ACK messages, assuming they are legitimate clients. However, the attacker
never sends the final ACK to complete the handshake.
 Result: The server is left waiting for the ACK that never comes. The server has to
allocate resources (memory, time, etc.) to handle each of these half-open
connections. If the attacker sends enough fake SYN requests, the server runs out of
resources and can no longer handle real client requests.
 Denial of Service: This is called a Denial of Service (DoS) attack, where legitimate
users cannot connect to the server because it is overwhelmed with fake connection
requests.

Why Is SYN Flooding Effective?

 The attacker doesn’t need to complete the handshake. The server just keeps waiting,
tying up resources.
 The server is tricked into thinking it is accepting many legitimate connection requests,
but since the ACK part is never sent, these requests are never completed.

5. Mitigation of SYN Flooding Attacks

To prevent such attacks, servers can use several techniques:

 Limit the number of SYN requests the server accepts within a certain time frame.
 Verify the source IP address to ensure it’s a legitimate source.
 Use SYN Cookies: This is a technique where the server does not reserve resources
until it is sure the handshake is complete. It uses a "cookie" to verify the client’s
identity before allocating resources.
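A minimal sketch of the first mitigation listed above: capping the number of half-open connections tracked per source IP. The limit of 3 is an arbitrary choice, and a real defense would also age entries out and combine this with SYN cookies.

```python
class SynTracker:
    def __init__(self, per_ip_limit=3):
        self.limit = per_ip_limit
        self.half_open = {}               # source IP -> count of pending SYNs

    def on_syn(self, src_ip):
        """Return True if the server should reply with SYN + ACK."""
        count = self.half_open.get(src_ip, 0)
        if count >= self.limit:
            return False                  # too many half-open connections: drop
        self.half_open[src_ip] = count + 1
        return True

    def on_ack(self, src_ip):
        """The final ACK completes the handshake and frees the slot."""
        if self.half_open.get(src_ip, 0) > 0:
            self.half_open[src_ip] -= 1

t = SynTracker()
results = [t.on_syn("203.0.113.9") for _ in range(5)]
print(results)                            # [True, True, True, False, False]
```

An attacker who never sends the final ACK simply stops getting SYN + ACK replies once the per-IP budget is spent, so legitimate sources are unaffected.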

Conclusion:
 The Three-Way Handshake is how a reliable TCP connection is established,
ensuring both devices are ready for communication and agree on how to track the
data.
 SYN Flooding Attacks exploit this process by sending fake SYN requests to
overload the server with half-open connections, making it unable to accept real
requests.

There are measures like SYN Cookies and rate-limiting to prevent such attacks

Termination
Connection Termination

Connection termination ensures both parties know the connection is closing. TCP offers two
mechanisms:

1. Three-Way Handshake Termination:


o FIN (Finish): The initiating party (usually the client) sends a FIN segment,
consuming one sequence number.
o FIN + ACK: The receiver (server) acknowledges the FIN and simultaneously
sends its own FIN.
o ACK: The initiator acknowledges the FIN from the server.
2. Four-Way Handshake with Half-Close:
o Allows one party to stop sending data while still receiving it.
o Useful for scenarios like sending data to a server for processing, where the
client stops transmitting but waits to receive results.


Diagram Representation:

Client  ---- FIN ---------->  Server
Client  <--- FIN + ACK -----  Server
Client  ---- ACK ---------->  Server
TCP CONGESTION CONTROL
TCP Congestion refers to the condition where data transfer over a network using the
Transmission Control Protocol (TCP) encounters delays due to network overload, leading to
packet loss, retransmissions, and degraded performance. It occurs when the available network
resources are insufficient to handle the current data transfer rates.

Key Concepts of TCP Congestion:

1. Congestion Control Mechanisms: TCP uses congestion control to avoid network


congestion. The primary mechanism includes:
o Slow Start: TCP starts by sending a small number of packets and gradually
increases as it gathers feedback from the network.
o Congestion Avoidance: Once the network is deemed congested, TCP reduces
the sending rate to avoid packet loss.
o Fast Retransmit: This mechanism retransmits a packet when packets are lost,
indicated by multiple duplicate acknowledgments.
o Fast Recovery: This enhances TCP’s response to detected losses by quickly
recovering to avoid unnecessary slow starts.
2. Causes of TCP Congestion:
o Buffer Overflow: When network routers or switches have limited buffer
space, and the incoming traffic exceeds the capacity, leading to packet loss.
o High Latency: Delays in network transmission cause TCP to wait, which can
exacerbate congestion.
o Network Congestion: When the aggregate data traffic on the network exceeds
its bandwidth capacity, leading to packet queuing, drop-offs, and
retransmissions.
3. Effects of Congestion:
o Packet Loss: When the network buffer overflows, packets are discarded,
leading to retransmissions.
o High Latency: Delayed packet delivery due to queue buildup and waiting
times.
o Reduced Throughput: The effective data transfer rate decreases because TCP
slows down to recover from congestion.
o Retransmissions: Repeated retransmissions can cause increased latency and
bandwidth waste.
4. Strategies to Mitigate Congestion:
o Queue Management: Techniques like RED (Random Early Detection) and
ECN (Explicit Congestion Notification) help signal congestion early.
o Bandwidth Management: Network resources can be allocated dynamically to
reduce congestion using protocols like Resource Reservation Protocol
(RSVP).
o Scalability Enhancements: Deploying newer congestion-control variants
such as TCP Vegas and TCP BBR (Bottleneck Bandwidth and Round-trip
propagation time), which provide more sophisticated congestion control.
o Network Monitoring: Continuous monitoring helps detect congestion early
and make adjustments accordingly.
5. TCP Congestion Control Algorithms:
o TCP Tahoe: Simple congestion control that relies on retransmitting lost
packets.
o TCP Reno: Improved version of TCP Tahoe, adding fast retransmit and fast
recovery.
o TCP NewReno: Enhances TCP Reno by handling multiple losses in a
window.
o TCP BBR: Developed by Google, it focuses on identifying bottleneck
bandwidth and round-trip propagation time, providing more efficient
congestion control.

Conclusion:

TCP Congestion can lead to poor network performance if not managed properly. TCP's
congestion control mechanisms aim to prevent network saturation, maintain fairness, and
optimize throughput. Understanding and fine-tuning these mechanisms are essential to
ensuring efficient data transfer over a network.

Certainly! Let's break down TCP Congestion Control into its main components and explain
them in detail.

1. Congestion Window (cwnd)

TCP controls congestion in the network by using a congestion window (cwnd) variable. The
cwnd represents the maximum number of segments that TCP is allowed to send into the
network before waiting for acknowledgment (ACKs). The actual amount of data transmitted
depends on the minimum of the cwnd and the receiver’s window size (rwnd).

 cwnd: Controlled by the network's congestion state. This window adjusts based on
how much congestion is detected.
 rwnd: Controlled by the receiver. It advertises the amount of buffer space available to
TCP to prevent overflow at the receiver side.

Key Points:

 The cwnd size changes dynamically depending on network conditions:


o It increases when the network shows no congestion (ACK received).
o It decreases if congestion is detected (timeout or duplicate ACKs).
 The actual send window size is min(cwnd, rwnd).
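The rule reduces to a one-liner; the numbers below are illustrative.

```python
def send_window(cwnd, rwnd):
    # The sender may have at most min(cwnd, rwnd) bytes outstanding:
    # cwnd reflects the network's state, rwnd the receiver's buffer.
    return min(cwnd, rwnd)

print(send_window(cwnd=8, rwnd=16))   # 8  -> the network is the bottleneck
print(send_window(cwnd=32, rwnd=16))  # 16 -> the receiver is the bottleneck
```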

2. Congestion Detection

TCP relies on feedback to detect congestion, which comes from ACKs (acknowledgments):
 Time-Out: If the sender doesn’t receive an ACK for a segment (or a group of
segments) before the timeout, it assumes congestion has occurred due to lost
segments.
 Three Duplicate ACKs: When a TCP receiver sends multiple duplicate ACKs (i.e.,
three duplicate ACKs), it indicates packet loss. The loss could be due to network
congestion, but this situation is less severe compared to a timeout.

Congestion Detection Summary:

 Timeout: Strong congestion (more severe), leading to a segment loss.


 Three Duplicate ACKs: Light congestion (less severe); one segment was lost,
but many later segments still arrived.

3. Congestion Control Algorithms

TCP uses three key algorithms to handle congestion: Slow Start, Congestion Avoidance,
and Fast Recovery.

3.1 Slow Start: Exponential Increase

 In Slow Start, TCP begins by sending just one segment and then increases
exponentially based on received ACKs.
 Growth: For each ACK, the congestion window (cwnd) is incremented by 1 segment.
 Formula:
o If an ACK arrives: cwnd = cwnd + 1
 The growth rate of cwnd is exponential in terms of round-trip times (RTT).

Example Flow:

 Start with cwnd = 1 (1 MSS - Maximum Segment Size).


 After each round trip’s worth of ACKs, cwnd doubles (cwnd = 1, 2, 4, 8, 16, etc.).
 This phase is aggressive in expanding bandwidth usage, but it must eventually stop to
prevent overwhelming the network.

3.2 Congestion Avoidance: Additive Increase

 When the congestion window (cwnd) grows beyond the slow-start threshold
(ssthresh), TCP enters Congestion Avoidance.
 Growth: Here, cwnd increases additively (i.e., by 1 segment for each window of
ACKs).
 Formula:
o If an ACK arrives: cwnd = cwnd + 1 / cwnd
 The growth rate of cwnd is linear in terms of RTT. This approach is more
conservative compared to slow start.

Example Flow:
 Start with cwnd = 4 (for instance).
 After each window of ACKs (one round-trip time), the window size increases by
one segment: cwnd = 4, 5, 6, 7, 8, etc.
 This method helps avoid sending too much data into the network, thus reducing the
chance of congestion.

3.3 Fast Recovery

 The Fast Recovery algorithm is entered after three duplicate ACKs arrive.
 Instead of retransmitting immediately (which leads to time-outs), TCP recovers using
duplicate ACKs.
 Growth: In Fast Recovery, the congestion window increases additively after
receiving the third duplicate ACK.
 Formula:
o If a duplicate ACK arrives: cwnd = cwnd + 1 / cwnd

Example Flow:

 When three duplicate ACKs arrive, TCP assumes the loss of one segment but has
received many others.
 Instead of slow start, it goes into fast recovery and keeps sending packets until the
fourth ACK arrives, indicating loss recovery.
 Once that fourth ACK is received, TCP resumes Congestion Avoidance.

4. Policy Transition

 Tahoe TCP: Only uses Slow Start and Congestion Avoidance.


 Reno TCP: Introduces Fast Recovery after receiving three duplicate ACKs.
 New Reno TCP: Improves upon Reno TCP, better handling of retransmissions after
packet loss.

FSM (Finite State Machine):

 The state machine describes how TCP transitions from Slow Start to Congestion
Avoidance to Fast Recovery based on congestion events.

Transition Flow:

 Slow Start phase starts with a small cwnd and grows exponentially.
 ssthresh is set when TCP detects potential congestion.
 Upon crossing ssthresh, it switches to Congestion Avoidance.
 In case of packet loss (three duplicate ACKs), TCP enters Fast Recovery.
 If recovery is successful, TCP moves back to Congestion Avoidance.
5. Summary of TCP Congestion Control Mechanisms:

 Slow Start: Exponential growth; aggressive initially.


 Congestion Avoidance: Linear growth; more conservative approach.
 Fast Recovery: Efficient response to packet loss with additive increases after
detecting congestion.
 TCP transitions between these states based on network feedback (ACKs, time-outs,
duplicate ACKs).
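The transitions summarized above can be traced with a toy Reno-style model. Units are in MSS and the event sequence is made up; it is a deliberate simplification (real TCP updates cwnd per ACK, not per coarse event).

```python
def reno_trace(events, ssthresh=8):
    cwnd, state, trace = 1, "slow_start", []
    for ev in events:
        if ev == "ack":
            if state == "slow_start":
                cwnd *= 2                      # exponential: doubles each RTT
                if cwnd >= ssthresh:
                    state = "cong_avoid"       # crossed the threshold
            else:
                cwnd += 1                      # additive increase per RTT
        elif ev == "3dupack":                  # mild congestion: fast recovery
            ssthresh = max(cwnd // 2, 2)
            cwnd, state = ssthresh, "cong_avoid"
        elif ev == "timeout":                  # severe congestion: start over
            ssthresh = max(cwnd // 2, 2)
            cwnd, state = 1, "slow_start"
        trace.append((ev, cwnd, state))
    return trace

for step in reno_trace(["ack", "ack", "ack", "ack", "3dupack", "ack"]):
    print(step)
```

The printed trace shows cwnd climbing 2, 4, 8, entering congestion avoidance, being halved on three duplicate ACKs, then resuming linear growth.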

This combination of algorithms ensures TCP dynamically adapts to changing network


conditions, optimizing performance while avoiding overloading and congestion.

Key Differences Between Reno TCP and NewReno TCP:

1. Fast-Recovery Mechanism:
o Reno TCP: after receiving three duplicate ACKs, the sender moves into the fast-recovery state. It stays there, growing the congestion window (cwnd) as further duplicate ACKs arrive. When a new (non-duplicate) ACK arrives, it transitions back to congestion avoidance, deflating cwnd to the ssthresh value.
o NewReno TCP: the sender goes a step further when more than one segment was lost in the same window. It does not stop after retransmitting the first lost segment; it keeps retransmitting lost segments until an ACK arrives that covers all data outstanding when fast recovery began.
2. Retransmission in NewReno TCP:
o In Reno TCP, only the first lost segment is retransmitted when three duplicate ACKs arrive.
o In NewReno TCP, a partial ACK after the first retransmission signals that more than one segment was lost; NewReno then retransmits the next lost segment, continuing until a full (non-partial) ACK is received.
3. Behavior on ACK Arrival:
o In Reno TCP, when a new (non-duplicate) ACK arrives, the sender deflates cwnd to ssthresh and exits fast recovery.
o In NewReno TCP, a new ACK is first checked: if it is only a partial ACK, the sender stays in fast recovery and retransmits the next missing segment, avoiding further rounds of duplicate ACKs.
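The contrast can be made concrete with a toy scenario in which two segments from the same window are lost. The segment numbers and function names below are hypothetical, chosen only to illustrate the difference in recovery behavior:

```python
# Hypothetical sketch: two segments (#5 and #8) lost in one window.

lost = {5, 8}

def reno_recovery(lost):
    """Reno retransmits only the first lost segment; the partial ACK that
    follows (acking up to segment 7) ends fast recovery immediately, so
    segment 8 must wait for another loss-detection event."""
    return [min(lost)]

def newreno_recovery(lost):
    """NewReno treats a partial ACK as proof of another loss: it stays in
    fast recovery and retransmits each lost segment in turn, until an ACK
    covers all data that was outstanding when recovery began."""
    return sorted(lost)

print(reno_recovery(lost))     # [5]
print(newreno_recovery(lost))  # [5, 8]
```

The design point: Reno interprets any new ACK as "recovery complete", while NewReno distinguishes partial from full ACKs and finishes the job within a single fast-recovery episode.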

Example Behavior in Reno TCP and NewReno TCP:

• Example in Reno TCP:
o After three duplicate ACKs, ssthresh is reduced to 6 MSS and cwnd is set to 9 MSS (ssthresh + 3 MSS). During fast recovery, when a new ACK arrives, the sender transitions to congestion avoidance and deflates cwnd to the ssthresh value (6 MSS).
o Because Reno exits fast recovery on the first new ACK, it implicitly assumes only one segment was lost.
• Example in NewReno TCP:
o Upon receiving three duplicate ACKs, the sender retransmits and remains in fast recovery until a new ACK covering all outstanding data arrives. If multiple segments were lost, NewReno retransmits each of them.
o Each new ACK tells the sender whether one or several segments were lost, and it retransmits accordingly.
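The arithmetic in the Reno example works out as follows. The pre-loss window of 12 MSS is an assumption (any value that halves to 6 MSS fits the quoted numbers):

```python
# Working through the Reno example's numbers (units: MSS).

cwnd = 12                      # assumed window just before the loss
ssthresh = cwnd // 2           # 6 MSS after three duplicate ACKs
cwnd = ssthresh + 3            # 9 MSS: ssthresh + 3 MSS for the dup ACKs
# ... fast recovery runs while duplicate ACKs keep arriving ...
cwnd_after_new_ack = ssthresh  # new ACK: deflate back to 6 MSS
print(ssthresh, cwnd, cwnd_after_new_ack)  # 6 9 6
```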

Diagram Explanation:

• Figure 24.33: illustrates the FSM for Reno TCP, showing the transitions from slow start to congestion avoidance based on congestion events.
• Figure 24.34: shows a Reno TCP scenario, demonstrating how cwnd behaves after three duplicate ACKs and how the arrival of a new ACK moves TCP back to congestion avoidance.

Conclusion:

• Reno TCP: retransmits only the first lost segment after three duplicate ACKs and moves back to congestion avoidance upon a new ACK.
• NewReno TCP: optimizes recovery by retransmitting all lost segments after three duplicate ACKs, continuing until a new ACK covers all outstanding data, which prevents further duplicate ACKs from piling up.
