Module 4
1. Role: Located between the network and application layers, the transport layer
provides services to application processes and relies on the network layer for host-to-
host communication.
2. Domains of Operation:
o Network Layer: Host-to-host communication.
o Transport Layer: Process-to-process communication, enabling data to reach
the correct application process on a host.
Implementation Highlights
Buffers: Memory locations for temporary storage of data to handle flow control.
Timers: Used to detect packet loss when acknowledgments are delayed.
Sliding Window Examples:
o Circular and linear representations illustrate how the sender manages
outstanding packets.
This discussion underscores the transport layer's critical role in reliable communication,
ensuring data is accurately delivered between application processes across networks.
1. Process-to-Process Communication
What it is:
o Communication between application-layer processes (programs) running on
different hosts.
o Example: A web browser (client) requesting data from a web server.
How it works:
o The network layer delivers data from one host to another (host-to-host
communication).
o The transport layer ensures the data reaches the correct process on the
destination host using port numbers.
o Each process is identified by a unique combination of an IP address and a port
number, forming a socket address.
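The port-number idea can be sketched with Python's standard socket module: a hypothetical sender and receiver on the same host are told apart purely by port number (the loopback address and message below are illustrative, not from the source):

```python
import socket

# A socket address is (IP address, port number); the port picks the process.
recv_sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
recv_sock.bind(("127.0.0.1", 0))           # let the OS assign a free port
port = recv_sock.getsockname()[1]          # this port identifies the receiver

send_sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
send_sock.sendto(b"hello", ("127.0.0.1", port))

data, sender_addr = recv_sock.recvfrom(1024)
print(data)                                # b'hello'
```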
Encapsulation:
o The sender process gives a message to the transport layer.
o The transport layer adds a transport header (containing source and
destination port numbers) to the message.
o This combined unit is sent to the network layer for delivery.
Decapsulation:
o At the receiver, the transport layer extracts the transport header.
o It identifies the destination process using the port number and passes the data
to the application layer.
Multiplexing:
o Combining multiple processes' data into one channel at the sender.
o The transport layer adds headers to each packet to identify which process it
belongs to.
Demultiplexing:
o At the receiver, the transport layer examines headers to direct packets to the
correct processes.
Example:
o A client has three processes sending data to two servers. The transport layer
multiplexes the data into packets.
o At the servers, the transport layer demultiplexes packets to deliver them to the
appropriate processes.
5. Flow Control
Ensures the sender does not overwhelm the receiver with data.
Why it’s needed:
o If the sender sends too quickly, the receiver might run out of buffer space,
leading to data loss.
o If the sender sends too slowly, the system becomes inefficient.
How it works:
o The transport layer uses buffers to temporarily store packets.
o The receiver informs the sender when to pause or resume sending based on
buffer availability.
Mechanisms:
o Pushing: The sender pushes data without waiting for a request.
o Pulling: The sender waits for the receiver to request data.
6. Error Control
o Sequence Numbers:
Each packet is assigned a unique sequence number.
Helps detect missing, duplicate, or out-of-order packets.
o Acknowledgments (ACK):
The receiver sends an ACK for each successfully received packet.
If the sender doesn’t receive an ACK within a timeout period, it
retransmits the packet.
Window Size:
o Determines the maximum number of unacknowledged packets the sender can
have at any time.
o Example: If the window size is 4, the sender can send 4 packets before waiting
for ACKs.
Both flow and error control require buffers and sequence numbers.
The sender and receiver use numbered buffers:
o The sender buffers sent packets until ACKs are received.
o The receiver buffers out-of-order packets until missing packets arrive.
Example:
o A sender sends packets 1, 2, 3, and 4.
o If the receiver ACKs 1 and 2, the sender can slide its window and send packets
5 and 6.
Circular Representation:
o Sequence numbers are arranged in a circle (modulo arithmetic).
o The window slides forward as packets are acknowledged.
Linear Representation:
o Sequence numbers are shown linearly.
o The sliding window moves forward, freeing space for new packets.
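The window bookkeeping in the example above (ACK for packet 2 lets the sender transmit 5 and 6) can be sketched as follows; the function name and list representation are illustrative, not part of any real API:

```python
# Sketch of sliding-window bookkeeping with a window of size 4.
WINDOW = 4

def on_ack(acked, outstanding, next_seq):
    """Cumulative ACK: drop acknowledged packets, slide, send new ones."""
    outstanding = [p for p in outstanding if p > acked]
    newly_sent = []
    while len(outstanding) < WINDOW:        # window slid open: send new packets
        outstanding.append(next_seq)
        newly_sent.append(next_seq)
        next_seq += 1
    return outstanding, next_seq, newly_sent

outstanding, next_seq, new = on_ack(2, [1, 2, 3, 4], 5)
print(new)   # [5, 6] -- the ACK for 2 slides the window past packets 1 and 2
```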
This approach ensures reliable, efficient communication between processes across a network.
At the transport layer of the OSI model, there are two main types of protocols:
connectionless and connection-oriented. These types of services define how data is
transmitted between devices in a network. Here's a detailed explanation of both services and
their mechanisms:
Connectionless Service
In a connectionless service:
Key Characteristics:
1. Packet Handling:
o The source process (application) divides the data into chunks.
o The transport layer encapsulates each chunk into a packet and sends it
independently.
2. Challenges:
o Out-of-order delivery: Packets might not arrive in the order they were sent
due to varying network delays.
o Loss of packets: If a packet is lost, the receiver has no way to detect the loss.
o No flow or error control: Since the sender and receiver are not synchronized,
mechanisms like flow control (ensuring the sender doesn’t overwhelm the
receiver) and error control (detecting and correcting errors) are not
implemented.
3. Real-World Example: UDP (User Datagram Protocol) is a widely-used
connectionless transport-layer protocol. It is suitable for applications like video
streaming and online gaming, where speed is more important than reliability.
Connection-Oriented Service
In a connection-oriented service:
1. Connection Establishment:
o A logical connection is set up between the client and server.
o The transport layers at both ends coordinate to agree on communication
parameters.
2. Data Transfer:
o Packets are sent in sequence, and the receiver ensures they are reassembled in
the correct order.
o Flow control ensures the sender doesn’t overwhelm the receiver.
o Error control detects and corrects errors in the transmission.
3. Connection Teardown:
o After the data transfer is complete, the logical connection is closed.
o Both sides acknowledge the termination of the connection.
Key Features:
The client and server go through connection establishment before data transfer.
After all data packets (0, 1, 2) are successfully sent and acknowledged, the connection
is closed.
An FSM represents the behavior of the transport layer as a series of states, events, and
transitions.
Connectionless FSM:
Single state: The transport layer always remains in the established state, ready to
send and receive packets.
Connection-Oriented FSM:
Multiple states:
1. Closed State: Initial state where no connection exists.
2. Connection Establishment:
Open-Wait-I: The local machine sends a connection request.
Open-Wait-II: Waits for acknowledgment and a connection request
from the remote machine.
Established State: Connection is fully established, and data transfer
can occur.
3. Connection Teardown:
Close-Wait-I: Sends a request to close the connection.
Close-Wait-II: Waits for the remote machine's closure request.
Closed State: Connection is terminated.
Benefits of FSM:
Summary Table
1. Purpose:
o Provides the simplest communication service between two application
processes using the transport layer.
o Transmits data in a straightforward manner without checking for errors or
ensuring delivery.
2. Key Characteristics:
o Unidirectional communication: Data flows only in one direction (from
sender to receiver).
o No flow control: Assumes the receiver is always ready and can handle
incoming packets without being overwhelmed.
o No error control: Does not detect or correct errors in transmitted packets.
o No retransmission: Lost or corrupted packets are not retransmitted.
3. Assumptions:
o The sender can always send packets when it has data.
o The receiver can process every packet immediately upon arrival.
o No congestion occurs in the network.
How It Works
1. Sender:
o The transport layer at the sender takes a message from its application layer.
o It encapsulates the message into a packet and sends it to the receiver through
the network layer.
2. Receiver:
o The transport layer at the receiver receives the packet from its network layer.
o It decapsulates the packet to extract the message.
o The message is delivered to the application layer of the receiver.
Stop-and-Wait Protocol
Note on terminology: frames are used at the Data Link Layer, whereas the Transport Layer deals with segments (in TCP) or datagrams (in UDP). The description below uses "packet" as a generic term to avoid confusion.
How It Works:
1. Flow Control:
The sender sends one packet at a time and waits for an acknowledgment (ACK)
before sending the next one.
2. Error Control:
Each packet includes a checksum to detect errors.
o If the packet is corrupted, the receiver discards it silently (doesn't respond).
o If no acknowledgment is received, the sender assumes the packet was lost or
corrupted and retransmits it after a timeout.
3. Sequence Numbers:
To distinguish new packets from duplicates, packets are numbered with sequence
numbers using modulo-2 arithmetic: 0, 1, 0, 1....
4. Acknowledgment Numbers:
The receiver sends an acknowledgment containing the sequence number of the next
expected packet.
5. Timers:
The sender starts a timer whenever a packet is sent. If the timer expires before an
acknowledgment arrives, the sender retransmits the packet.
Sender States:
Ready State:
o Creates a packet when a request comes from the application.
o Sends the packet and moves to the Blocking State.
Blocking State:
o Three possible events:
1. Correct ACK:
Stops the timer and moves back to the ready state to send the
next packet.
2. Corrupted ACK or Incorrect ACK:
Discards the ACK and remains in the blocking state.
3. Timeout:
Retransmits the packet and restarts the timer.
Receiver State:
Example Walkthrough:
Setup:
Visual Flow:
The Stop-and-Wait Protocol is simple but has limitations, such as inefficiency over long or
high-latency links due to waiting for each ACK before proceeding.
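The sender loop described above can be written as a minimal sketch; the `channel` callback is a hypothetical stand-in for the network plus timer (returning None models a timeout), not a real API:

```python
# Stop-and-Wait sender: one packet in flight, modulo-2 sequence numbers.
def stop_and_wait_send(messages, channel):
    """channel(packet) returns the ackNo received, or None on timeout."""
    seq, log = 0, []
    for msg in messages:
        while True:
            log.append((seq, msg))              # (re)transmit and start timer
            ack = channel((seq, msg))
            if ack == (seq + 1) % 2:            # ACK names the next expected seqNo
                break                           # correct ACK: stop timer, move on
            # corrupted or missing ACK: timeout fires and we retransmit
        seq = (seq + 1) % 2
    return log

attempts = {"n": 0}
def lossy_channel(pkt):                         # first ACK is lost, rest succeed
    attempts["n"] += 1
    return None if attempts["n"] == 1 else (pkt[0] + 1) % 2

print(stop_and_wait_send(["a", "b"], lossy_channel))
# [(0, 'a'), (0, 'a'), (1, 'b')] -- packet 0 had to be sent twice
```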
GO-BACK-N PROTOCOL
The Go-Back-N (GBN) Protocol is a transport-layer mechanism used to ensure reliable data
transmission over potentially unreliable networks. It is designed to allow multiple packets to
be sent before waiting for acknowledgments, improving efficiency compared to protocols that
require acknowledgment after each packet. Below is a detailed breakdown of the concepts
and working of the GBN protocol.
1. Sequence Numbers
Packets are labeled with sequence numbers to ensure proper ordering and to
distinguish between different packets.
Sequence numbers are computed modulo 2^m, where m is the number of bits used for
the sequence number.
o Example: If m = 3, the sequence numbers range from 0 to 2^3 − 1 = 7.
2. Acknowledgments (ACK)
The protocol uses a sliding window mechanism to control the flow of packets.
The sender uses one timer for the first outstanding packet (rather than a timer for
each packet).
If the timer expires (indicating the loss of packets or their acknowledgments), the
sender:
o Resends all outstanding packets (from Sf to Sn − 1).
o Restarts the timer.
On a timeout, the sender goes back to sequence number Sf (the first
outstanding packet) and resends all packets from that point, even if some were
successfully received.
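The timeout rule can be sketched in a few lines; Sf and Sn are kept as raw counters here, with the modulo applied only when listing the on-the-wire sequence numbers (a simplification for illustration):

```python
# Sketch of the Go-Back-N timeout rule: resend everything from Sf up to Sn - 1.
m = 3
SEQ_MOD = 2 ** m                       # sequence numbers 0..7 on the wire

def on_timeout(Sf, Sn):
    """Sequence numbers retransmitted when the single timer expires."""
    return [s % SEQ_MOD for s in range(Sf, Sn)]

print(on_timeout(2, 6))    # [2, 3, 4, 5] -- all outstanding packets go again
print(on_timeout(6, 9))    # [6, 7, 0]   -- the window wraps around modulo 8
```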
Sender:
1. Ready State:
o Responds to:
Requests to send packets.
Receiving acknowledgments.
Timeout events.
o Sends packets and manages the send window.
2. Blocking State:
o Activated when the window is full (no more packets can be sent).
o Resumes sending upon receiving acknowledgments.
Receiver:
Example:
Summary:
The Selective-Repeat (SR) protocol is a sliding window protocol used in reliable transport
layer communication, which improves upon Go-Back-N by selectively retransmitting only
lost packets. Unlike GBN, which retransmits all packets after a packet loss, SR resends only
the specific packets that are lost.
1. Sliding Windows:
o Send Window: It controls the sending of packets. The size of the send
window is limited to 2^(m−1). For example, if m = 4, the window can handle
8 packets.
o Receive Window: It controls which packets the receiver can accept. The
receive window also has a size of 2^(m−1) and helps store packets that
arrive out of order.
2. Handling Packet Acknowledgments:
o In SR, an ACK acknowledges receipt of a specific packet (not all cumulative
packets as in GBN).
o The ACK number refers to the sequence number of the packet that was
correctly received.
3. Timer Mechanism:
o The protocol uses timers for each outstanding packet. This prevents the
sender from mistakenly resending packets already received.
o When a timer expires, only the packet corresponding to that timer is
retransmitted.
1. Ready State:
If the sender has data to send and the send window isn't full, it sends a new packet
with sequence number Sn.
The sender starts a timer for this packet. The sequence number Sn is then
incremented.
If an ACK arrives with ackNo, the corresponding packet is acknowledged. If the
ACK points to Sf (the first outstanding packet), the window slides and the
acknowledged packets are removed from the send window.
If an ACK doesn’t refer to an outstanding packet or is corrupted, it is discarded.
If the window is full, the sender transitions to the Blocking State.
2. Blocking State:
In this state, the sender waits for space to open in the send window.
ACKs can help slide the window and release packets for sending. If an ACK points to
Sf, the window slides, and if it’s not full, the sender goes back to Ready.
If a corrupted ACK or an ACK for an invalid packet arrives, it is discarded.
Timer Expiry: If a timer expires, the sender retransmits the unacknowledged packet
and restarts the timer.
Correct Packet:
o If a packet with sequence number seqNo falls within the receive
window, it is accepted.
o An ACK is sent with ackNo = seqNo. If seqNo equals Rn (the next expected
sequence number), the packet and all consecutive packets are delivered to the
application layer. The receive window slides forward to point to the first
empty slot.
Out-of-Order Packet:
o If the received packet’s seqNo is outside the receive window, the packet is
discarded, but an ACK is sent with ackNo = Rn to indicate the next expected
packet. This helps the sender slide its window.
Corrupted Packet:
o The packet is discarded and no ACK is sent.
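The receiver rules above can be sketched as a small function (m = 4 is assumed, and the dict-of-flags buffer is an illustrative simplification, not a real implementation):

```python
# Sketch of the Selective-Repeat receiver rules (m = 4, so seqNo is mod 16).
SEQ_MOD = 16
WINDOW = 8                                   # 2^(m-1)

def sr_receive(seqNo, Rn, buffer, deliver):
    """Returns (new Rn, ackNo sent back to the sender)."""
    if (seqNo - Rn) % SEQ_MOD < WINDOW:      # inside the receive window: accept
        buffer[seqNo] = True
        while buffer.pop(Rn, False):         # deliver consecutive packets
            deliver(Rn)
            Rn = (Rn + 1) % SEQ_MOD
        return Rn, seqNo                     # ACK the packet just received
    return Rn, Rn                            # outside the window: re-ACK Rn

delivered, buf, Rn = [], {}, 0
Rn, ack = sr_receive(1, Rn, buf, delivered.append)   # out of order: buffered
Rn, ack = sr_receive(0, Rn, buf, delivered.append)   # fills the gap
print(delivered, Rn)   # [0, 1] 2 -- both packets delivered in order
```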
Diagram Explanation:
Timers:
Selective-Repeat protocol uses one timer for each outstanding packet. This means
if a packet hasn’t been acknowledged within a certain time, only that packet is
retransmitted.
If multiple packets are lost, only the corresponding packets are retransmitted,
making it more efficient compared to Go-Back-N, which resends all unacknowledged
packets.
Acknowledgments (ACK):
The UDP packet, also called a user datagram, has a fixed-size header of 8 bytes. The UDP
header includes four fields, each 2 bytes (16 bits):
In this section, we explore the various services provided by the User Datagram Protocol
(UDP). Even though UDP is a simple and connectionless protocol, it still offers several
essential services required for reliable data transfer, albeit with limitations compared to TCP.
Let’s break down these services:
1. Process-to-Process Communication:
Socket Address:
A socket address uniquely identifies the endpoint (a combination of IP and port). For
example, the process at IP_ADDRESS:PORT_NUMBER.
2. Connectionless Services:
3. Flow Control:
UDP does not include flow control mechanisms. Flow control is the process of managing the
pace of data transfer between sender and receiver, preventing one from overwhelming the
other.
No Window Mechanism: UDP doesn’t regulate the rate at which data is sent, and the
receiver may overflow with incoming packets. The sender has no mechanism to
know when the receiver’s buffer is full.
Application-Level Flow Control: The responsibility of handling flow control falls
on the application itself, if required.
4. Error Control:
UDP does not provide error control for reliable transmission. It simply sends data without
ensuring its arrival or accuracy. The only form of error detection in UDP is through the
checksum field.
5. Checksum:
The checksum in UDP provides a simple method for detecting errors in the transmission. If
the checksum matches, the packet is accepted; if it doesn't, the packet is silently discarded.
Pseudoheader: For checksum calculation, the UDP protocol combines the UDP
header with a pseudoheader. The pseudoheader includes the IP addresses, protocol
field, and UDP length.
o Pseudoheader for Checksum:
Source IP + Destination IP + Protocol (17 for UDP) + UDP length
The checksum is calculated by summing these 16-bit values in
one's-complement arithmetic and complementing the result before it is sent.
Optional Checksum Inclusion:
o The checksum is optional in UDP. The sender can fill the checksum field
with all 0s to indicate that no checksum was calculated.
o If the calculated checksum happens to be all 0s, it is complemented to
all 1s before sending.
o Examples:
a) If the checksum isn’t calculated, the checksum field is sent as all
0s.
b) If the checksum calculation results in all 0s, it is complemented to
all 1s before sending.
c) A calculated checksum can never be all 1s, because that would mean
the sum was all 0s, which is impossible since some fields in the
pseudoheader (like the IP addresses) always have nonzero values.
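The checksum procedure can be sketched end to end; the ports, addresses, and payload below are hypothetical. The verification step relies on the one's-complement property: recomputing over the segment with the checksum filled in yields 0.

```python
import struct

def checksum16(data: bytes) -> int:
    """One's-complement sum of 16-bit words, complemented at the end."""
    if len(data) % 2:
        data += b"\x00"                              # pad to an even length
    total = 0
    for (word,) in struct.iter_unpack("!H", data):
        total += word
        total = (total & 0xFFFF) + (total >> 16)     # fold carries back in
    return ~total & 0xFFFF

def udp_checksum(src_ip: bytes, dst_ip: bytes, segment: bytes) -> int:
    # Pseudoheader: src IP + dst IP + zero byte + protocol (17) + UDP length
    pseudo = src_ip + dst_ip + struct.pack("!BBH", 0, 17, len(segment))
    return checksum16(pseudo + segment)

# Hypothetical segment: src port 1234, dst port 53, length 12, checksum 0.
src, dst = bytes([192, 0, 2, 1]), bytes([192, 0, 2, 2])
seg = struct.pack("!HHHH", 1234, 53, 12, 0) + b"helo"
c = udp_checksum(src, dst, seg)

# Receiver-side check: recomputing with the checksum in place gives 0.
seg_with_c = struct.pack("!HHHH", 1234, 53, 12, c) + b"helo"
print(udp_checksum(src, dst, seg_with_c))   # 0
```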
6. Congestion Control:
UDP does not provide congestion control. Congestion control regulates the rate at which
data is sent to prevent network congestion.
Assumption: UDP packets are small and sporadic, so they don’t typically cause
congestion. However, this may no longer be true today with UDP being used for
real-time audio/video streaming.
Current Reality: In modern applications, especially multimedia streaming, UDP
can contribute to network congestion, requiring application-level congestion control.
UDP encapsulates application data into a UDP datagram and decapsulates it when it reaches
the destination.
The UDP header adds minimal overhead (8 bytes) to the data before sending it across
the network.
Client Port: At the client side, the operating system can create both an incoming and
outgoing queue associated with each process.
Inbound Queue: Other implementations may create only an incoming queue.
Port Multiplexing: In UDP, multiple processes may use the same protocol, so
multiplexing and demultiplexing happens at the port level.
In a host running TCP/IP, UDP multiplexes multiple processes by assigning unique port
numbers.
UDP differs from other simple connectionless protocols because it may optionally
include a checksum to detect packet corruption.
Simple Protocol: Some connectionless protocols only send packets without error
checking, unlike UDP, which provides optional checksum detection.
Summary:
This makes UDP ideal for applications that prioritize low latency and efficiency, like real-
time applications, audio/video streaming, and online gaming, but it shouldn’t be used for
applications where reliable data transfer is crucial.
APPLICATIONS
Here's a short table summarizing UDP applications:
Feature | Explanation
Connectionless | Ideal for simple request-response communication with quick interactions.
Use Cases | DNS, TFTP, SNMP, RIP.
Error Control | No error control; suitable for real-time applications like Skype.
Flow Control | No flow control; applications must handle it themselves.
Multicasting | Supported by UDP, useful for broadcasting to multiple recipients.
Speed | Faster due to minimal connection setup/termination (less overhead).
Real-Time | Suitable for interactive and low-latency applications (audio/video).
This table provides a brief summary of typical UDP applications and features.
TCP
SERVICES
Here’s a short table summarizing the Transmission Control Protocol (TCP) with its key
features, explanations, and examples:
Example Explanations:
File Transfer: TCP is used for file downloads because it ensures data integrity by
providing reliable data transfer and retransmitting lost packets.
Streaming Videos: TCP is useful for streaming videos where reliable delivery of
large amounts of data is essential, and out-of-order data can be reassembled.
Email Communication: TCP handles email transmission securely by ensuring that
messages are delivered reliably, with error detection and retransmission if needed.
Online Gaming: TCP can handle out-of-order delivery to some extent, making it
suitable for multiplayer online gaming where real-time data must be handled
efficiently.
This table highlights TCP's connection-oriented, reliable service and error control
mechanisms with examples for clarity.
A TCP segment is a structured data unit used in the Transmission Control Protocol (TCP)
to ensure reliable communication between two processes over a network. Let’s break down
the explanation step by step.
1. Header Structure
The TCP header contains several fields, each serving a specific purpose. Let’s look at these
fields in detail:
Destination Port Number (16 bits):
This field identifies the port number of the receiving application (process) on the
destination host.
Example: A web server typically uses port 80 for HTTP or port 443 for HTTPS.
Sequence Number (32 bits):
The sequence number uniquely identifies the first byte of data in this segment.
TCP assigns a sequence number to each byte of data for reliability and ordering.
During connection establishment, each side generates an Initial Sequence Number
(ISN), which starts the numbering process.
Acknowledgment Number (32 bits):
This field indicates the next byte the receiver expects to receive.
Example: If the sender transmitted bytes numbered 1–100 successfully, the
acknowledgment number will be 101.
It ensures reliable communication by confirming receipt of data.
f. Reserved (6 bits)
Flag Description
URG Urgent pointer field is valid.
ACK Acknowledgment field is valid.
PSH Push data immediately to the application.
RST Reset the connection.
SYN Synchronize sequence numbers during connection.
FIN Finish the connection (terminate gracefully).
Options and Padding:
Optional field used for additional features like Maximum Segment Size (MSS),
timestamps, or selective acknowledgments.
Padding ensures that the header length is a multiple of 4 bytes.
2. Data Section
Encapsulation
TCP segments are encapsulated in an IP datagram for transmission across the network. The
process works as follows:
1. TCP Segment:
o Created at the transport layer with a header and data.
2. IP Datagram:
o Encapsulates the TCP segment.
o Adds an IP header with source and destination IP addresses.
3. Data-Link Frame:
o Encapsulates the IP datagram.
o Adds a frame header and trailer for delivery over the physical network.
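The fixed 20-byte TCP header described above can be sketched with `struct`; the port and sequence values are hypothetical, and the checksum and urgent-pointer fields are left at 0 for simplicity:

```python
import struct

# Sketch of packing the fixed 20-byte TCP header fields.
def tcp_header(src_port, dst_port, seq, ack, flags, window):
    offset_flags = (5 << 12) | flags        # header length 5 words + flag bits
    return struct.pack("!HHIIHHHH", src_port, dst_port, seq, ack,
                       offset_flags, window, 0, 0)   # checksum/urgent left 0

hdr = tcp_header(49152, 80, 15000, 0, 0x002, 65535)  # SYN flag bit = 0x002
print(len(hdr))   # 20 -- the minimum TCP header size
```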
1. Reliability:
o Sequence numbers and acknowledgment numbers ensure data is delivered
without errors or loss.
2. Flow Control:
o The window size allows the sender to regulate the amount of data sent, based
on the receiver’s capacity.
3. Error Detection:
o The checksum verifies data integrity.
4. Connection Control:
o Control flags (SYN, FIN, RST) manage connection setup, termination, and
reset.
When two devices (like a client and a server) want to communicate over the internet using
TCP, they need to establish a reliable connection. To ensure that both devices are ready to
send and receive data, they go through a special process called the Three-Way Handshake.
Here’s the basic idea:
Client's Action: The client wants to start a communication with the server. It sends a
SYN (synchronize) message to the server. The SYN is a signal telling the server, "I
want to connect and I’m ready to communicate."
The client also sends a random number called a Sequence Number (ISN), which
acts as the starting point for counting the data it will send.
Why SYN is used: The SYN flag is used for synchronization, to make sure both
devices are on the same page in terms of data transfer.
Server's Action: The server receives the SYN message and replies with a SYN +
ACK (acknowledge) message.
o SYN part: This is the server’s own way of saying "I am ready to
communicate."
o ACK part: The server acknowledges (or says "yes") to the client's request and
confirms the client's sequence number by adding 1 to it and sending that as an
Acknowledgment Number.
The server also picks its own random Sequence Number (ISN), which will be used
for communication in the other direction (from server to client).
Why SYN + ACK is used: This is a way for the server to acknowledge the client’s
request while also starting its own communication with the client.
Client’s Final Action: The client sends an ACK message back to the server. This
message acknowledges the server's SYN + ACK message.
The client sends the Acknowledgment Number (which is the server’s Sequence
Number + 1) to confirm that it received the server's SYN + ACK.
Now, the connection is officially established, and both the client and server can start
sending data.
Imagine the client and server as two people who want to talk to each other. Here’s how they
do it:
Client: "Hey, I want to talk! Here’s my starting point number 15000." (This is like
sending a SYN message with a random number, say 15000).
Server: "Okay! I’m ready. Here’s my starting point number 8000, and I acknowledge
your number 15000." (This is the SYN + ACK message with two sequence numbers:
one from the client and one from the server).
Client: "Got it! I acknowledge your number 8000, and I’m ready to talk." (This is the
final ACK message).
After this process, they both agree on the starting points and can start sending actual data.
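The exchange above can be written out as a tiny sketch; the ISNs 15000 and 8000 are the illustrative values from the analogy, and only the sequence/acknowledgment arithmetic matters here:

```python
# Three-way handshake bookkeeping: each side acknowledges the other's ISN + 1.
client_isn = 15000
server_isn = 8000

syn     = {"flags": {"SYN"}, "seq": client_isn}
syn_ack = {"flags": {"SYN", "ACK"}, "seq": server_isn, "ack": syn["seq"] + 1}
ack     = {"flags": {"ACK"}, "seq": client_isn + 1, "ack": syn_ack["seq"] + 1}

print(syn_ack["ack"], ack["ack"])   # 15001 8001
```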
Now, here's where things get tricky. A SYN Flooding Attack is a type of attack that takes
advantage of this connection process (the Three-Way Handshake).
Malicious Attacker's Action: The attacker sends many SYN messages (like step 1 of
the handshake) to the server, pretending to be many different clients. The attacker
fakes the source IP addresses, so the server thinks it’s receiving connection requests
from many different machines.
Server's Response: The server sees these requests and replies to each of them with
SYN + ACK messages, assuming they are legitimate clients. However, the attacker
never sends the final ACK to complete the handshake.
Result: The server is left waiting for the ACK that never comes. The server has to
allocate resources (memory, time, etc.) to handle each of these half-open
connections. If the attacker sends enough fake SYN requests, the server runs out of
resources and can no longer handle real client requests.
Denial of Service: This is called a Denial of Service (DoS) attack, where legitimate
users cannot connect to the server because it is overwhelmed with fake connection
requests.
The attacker doesn’t need to complete the handshake. The server just keeps waiting,
tying up resources.
The server is tricked into thinking it is accepting many legitimate connection requests,
but since the ACK part is never sent, these requests are never completed.
How to prevent SYN flooding:
Limit the number of SYN requests the server accepts within a certain time frame.
Verify the source IP address to ensure it’s a legitimate source.
Use SYN Cookies: This is a technique where the server does not reserve resources
until it is sure the handshake is complete. It uses a "cookie" to verify the client’s
identity before allocating resources.
Conclusion:
The Three-Way Handshake is how a reliable TCP connection is established,
ensuring both devices are ready for communication and agree on how to track the
data.
SYN Flooding Attacks exploit this process by sending fake SYN requests to
overload the server with half-open connections, making it unable to accept real
requests.
There are measures like SYN Cookies and rate-limiting to prevent such attacks.
Termination
Connection Termination
Connection termination ensures both parties know the connection is closing. TCP offers two
mechanisms:
Diagram Representation (three-way termination):
Client → Server: FIN
Server → Client: FIN + ACK
Client → Server: ACK
TCP Congestion Control
TCP Congestion refers to the condition where data transfer over a network using the
Transmission Control Protocol (TCP) encounters delays due to network overload, leading to
packet loss, retransmissions, and degraded performance. It occurs when the available network
resources are insufficient to handle the current data transfer rates.
Conclusion:
TCP Congestion can lead to poor network performance if not managed properly. TCP's
congestion control mechanisms aim to prevent network saturation, maintain fairness, and
optimize throughput. Understanding and fine-tuning these mechanisms are essential to
ensuring efficient data transfer over a network.
TCP Congestion Control can be broken down into its main components as follows.
TCP controls congestion in the network by using a congestion window (cwnd) variable. The
cwnd represents the maximum number of segments that TCP is allowed to send into the
network before waiting for acknowledgment (ACKs). The actual amount of data transmitted
depends on the minimum of the cwnd and the receiver’s window size (rwnd).
cwnd: Controlled by the network's congestion state. This window adjusts based on
how much congestion is detected.
rwnd: Controlled by the receiver. It advertises the amount of buffer space available to
TCP to prevent overflow at the receiver side.
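The interaction of the two windows reduces to taking a minimum; the segment counts below are hypothetical:

```python
# The effective send window is the smaller of the two limits.
cwnd = 10   # segments allowed by the network's congestion state
rwnd = 6    # segments allowed by the receiver's advertised buffer
window = min(cwnd, rwnd)
print(window)   # 6 -- the receiver is the bottleneck here
```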
Key Points:
2. Congestion Detection
TCP relies on feedback to detect congestion, which comes from ACKs (acknowledgments):
Time-Out: If the sender doesn’t receive an ACK for a segment (or a group of
segments) before the timeout, it assumes congestion has occurred due to lost
segments.
Three Duplicate ACKs: When a TCP receiver sends multiple duplicate ACKs (i.e.,
three duplicate ACKs), it indicates packet loss. The loss could be due to network
congestion, but this situation is less severe compared to a timeout.
TCP uses three key algorithms to handle congestion: Slow Start, Congestion Avoidance,
and Fast Recovery.
In Slow Start, TCP begins by sending just one segment and then increases
exponentially based on received ACKs.
Growth: For each ACK, the congestion window (cwnd) is incremented by 1 segment.
Formula:
o If an ACK arrives: cwnd = cwnd + 1
The growth rate of cwnd is exponential in terms of round-trip times (RTT).
Example Flow:
When the congestion window (cwnd) grows beyond the slow-start threshold
(ssthresh), TCP enters Congestion Avoidance.
Growth: Here, cwnd increases additively (i.e., by 1 segment for each window of
ACKs).
Formula:
o If an ACK arrives: cwnd = cwnd + (1 / cwnd)
The growth rate of cwnd is linear in terms of RTT. This approach is more
conservative compared to slow start.
Example Flow:
Start with cwnd = 4 (for instance).
After each full window of ACKs (roughly one RTT), the window size increases by
one segment: cwnd = 4, 5, 6, 7, 8, etc.
This method helps avoid sending too much data into the network, thus reducing the
chance of congestion.
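The two growth phases can be sketched together; this is a simplified per-RTT model (real TCP grows per ACK), with hypothetical starting values:

```python
# Per-RTT cwnd growth, simplified: exponential below ssthresh, linear above.
def next_cwnd(cwnd, ssthresh):
    return cwnd * 2 if cwnd < ssthresh else cwnd + 1

cwnd, ssthresh = 1, 8
trace = [cwnd]
for _ in range(6):
    cwnd = next_cwnd(cwnd, ssthresh)
    trace.append(cwnd)
print(trace)   # [1, 2, 4, 8, 9, 10, 11] -- slow start, then congestion avoidance
```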
Example Flow:
When three duplicate ACKs arrive, TCP assumes the loss of one segment but that
many later segments have been received.
Instead of returning to slow start, TCP enters fast recovery and keeps sending
packets until a new (non-duplicate) ACK arrives, indicating the loss has been
recovered.
Once that new ACK is received, TCP resumes Congestion Avoidance.
4. Policy Transition
The state machine describes how TCP transitions from Slow Start to Congestion
Avoidance to Fast Recovery based on congestion events.
Transition Flow:
Slow Start phase starts with a small cwnd and grows exponentially.
ssthresh is set when TCP detects potential congestion.
Upon crossing ssthresh, it switches to Congestion Avoidance.
In case of packet loss (three duplicate ACKs), TCP enters Fast Recovery.
If recovery is successful, TCP moves back to Congestion Avoidance.
5. Summary of TCP Congestion Control Mechanisms:
1. Fast-Recovery Mechanism:
o Reno TCP: In Reno TCP, after receiving three duplicate ACKs, the sender
moves into the fast-recovery state. It stays there and grows the congestion
window (cwnd) exponentially. When a new (non-duplicate) ACK arrives, it
transitions back to congestion-avoidance but resets the cwnd to the ssthresh
value.
o NewReno TCP: In NewReno TCP, the sender goes a step further by
retransmitting all lost segments detected during the three duplicate ACKs. It
doesn't just stop at retransmitting the first segment; it keeps retransmitting
until a new (non-duplicate) ACK arrives.
2. Retransmission in NewReno TCP:
o In Reno TCP, only the first lost segment is retransmitted once three duplicate
ACKs arrive.
o In NewReno TCP, when three duplicate ACKs arrive, it checks if more than
one segment is lost. If ACKs specify that more than one segment is lost,
NewReno retransmits all lost segments until a new (non-duplicate) ACK is
received.
3. Behavior on ACK Arrival:
o In Reno TCP, after a new ACK arrives (not duplicate), it deflates the cwnd to
ssthresh.
o In NewReno TCP, after receiving a new ACK (not duplicate), it adjusts the
window appropriately, aiming to retransmit additional segments if necessary
to avoid more duplicate ACKs.
Diagram Explanation:
Figure 24.33: Illustrates the FSM for Reno TCP, showing transitions from slow start
to congestion avoidance based on events.
Figure 24.34: Shows the situation in Reno TCP, demonstrating how cwnd behaves
after three duplicate ACKs and the arrival of a new ACK, which transitions the TCP
to congestion-avoidance.
Conclusion:
Reno TCP: Focuses on retransmitting the first lost segment after three duplicate
ACKs and moves back to congestion-avoidance upon a new ACK.
NewReno TCP: Optimizes by retransmitting all lost segments after three duplicate
ACKs and continues to retransmit until a new ACK arrives, preventing duplicate
ACKs from piling up.