Computer Network Unit 4 Transport Layer
TRANSPORT LAYER
Introduction:
The network layer provides end-to-end packet delivery using datagrams or virtual circuits. The
transport layer builds on the network layer to provide data transport from a process on a source machine to a
process on a destination machine with a desired level of reliability that is independent of the physical networks
currently in use. It provides the abstractions that applications need to use the network.
Transport Entity: The hardware and/or software within the transport layer that makes use of the services
provided by the network layer is called the transport entity.
Transport Service User: The upper layers, i.e., layers 5 to 7, are called the transport service user.
Transport Service Primitives: The operations that allow transport users (application programs) to access the
transport service.
TPDU (Transport Protocol Data Unit): Messages between two transport entities are carried in TPDUs. The
transport entity carries out the transport service primitives by blocking the caller and sending a packet into the
network. Encapsulated in the payload of this packet is a transport layer message for the server’s
transport entity. The task of the transport layer is to provide reliable, cost-effective data transport from the
source machine to the destination machine, independent of the physical network or networks currently in use.
TRANSPORT SERVICE
1. Services Provided to the Upper Layers
The ultimate goal of the transport layer is to provide efficient, reliable, and cost-effective data
transmission service to its users, normally processes in the application layer. To achieve this, the transport
layer makes use of the services provided by the network layer. The software and/or hardware within the
transport layer that does the work is called the transport entity. The transport entity can be located in the
operating system kernel, in a library package bound into network applications, in a separate user process, or
even on the network interface card.
To allow users to access the transport service, the transport layer must provide some operations to
application programs, that is, a transport service interface. Each transport service has its own interface.
The transport service is similar to the network service, but there are also some important differences.
The main difference is that the network service is intended to model the service offered by real
networks. Real networks can lose packets, so the network service is generally unreliable.
The (connection-oriented) transport service, in contrast, is reliable.
As an example, consider two processes connected by pipes in UNIX. They assume the connection
between them is perfect. They do not want to know about acknowledgements, lost packets, congestion, or
anything like that. What they want is a 100 percent reliable connection. Process A puts data into one end of the
pipe, and process B takes it out of the other.
A second difference between the network service and the transport service is whom the services are
intended for. The network service is used only by the transport entities; few users or programs ever see the bare
network service. In contrast, many programs see the transport primitives, so the transport service
must be convenient and easy to use.
1. The server executes a “LISTEN” primitive by calling a library procedure that makes a
system call to block the server until a client turns up.
2. When a client wants to talk to the server, it executes a “CONNECT” primitive, causing a “CONNECTION
REQUEST” TPDU to be sent to the server.
3. When it arrives, the transport entity unblocks the server and sends a “CONNECTION ACCEPTED” TPDU back to the
client.
4. When it arrives, the client is unblocked and the connection is established. Data can now be exchanged using
“SEND” and “RECEIVE” primitives.
5. When a connection is no longer needed, it must be released to free up table space within the two transport
entities. This is done with the “DISCONNECT” primitive, which sends a “DISCONNECTION REQUEST”
TPDU. Disconnection can be done in either an asymmetric variant (when one side disconnects, the whole
connection is released) or a symmetric variant (each direction is released independently of the other).
The term segment is used for messages sent from transport entity to transport entity.
TCP, UDP and other Internet protocols use this term. Segments (exchanged by the transport layer) are
contained in packets (exchanged by the network layer).
These packets are in turn contained in frames (exchanged by the data link layer). When a frame arrives, the data
link layer processes the frame header and, if the destination address matches for local delivery, passes
the contents of the frame payload field up to the network entity.
The network entity similarly processes the packet header and then passes the contents of the packet
payload up to the transport entity. This nesting is illustrated in Fig. 4.2.
Figure 4.3 - A state diagram for a simple connection management scheme. Transitions labelled in italics are
caused by packet arrivals. The solid lines show the client's state sequence. The dashed lines show the
server's state sequence.
In fig. 4.3 each transition is triggered by some event, either a primitive executed by the local transport
user or an incoming packet. For simplicity, we assume here that each TPDU is separately acknowledged. We
also assume that a symmetric disconnection model is used, with the client going first. Please note that this
model is quite unsophisticated. We will look at more realistic models later on.
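The simple connection-management scheme of Fig. 4.3 can be sketched as a small transition table. The state and event names below are simplified, illustrative labels, not the figure's exact wording:

```python
# Illustrative sketch of a simple connection-management state machine in the
# spirit of Fig. 4.3 (state and event names are invented simplifications).
TRANSITIONS = {
    # (current state, event) -> next state
    ("IDLE", "LISTEN"): "PASSIVE_PENDING",            # server waits for a call
    ("IDLE", "CONNECT"): "ACTIVE_PENDING",            # client initiates
    ("PASSIVE_PENDING", "CR_RECEIVED"): "ESTABLISHED",
    ("ACTIVE_PENDING", "CA_RECEIVED"): "ESTABLISHED",
    ("ESTABLISHED", "DISCONNECT"): "DISCONNECTING",   # this side releases first
    ("DISCONNECTING", "DR_RECEIVED"): "IDLE",         # symmetric release done
    ("ESTABLISHED", "DR_RECEIVED"): "PASSIVE_DISC",   # other side went first
    ("PASSIVE_DISC", "DISCONNECT"): "IDLE",
}

def step(state, event):
    """Apply one event; reject events that are invalid in the current state."""
    try:
        return TRANSITIONS[(state, event)]
    except KeyError:
        raise ValueError(f"event {event!r} not allowed in state {state!r}")

# Client side of a normal session: connect, then symmetric release going first.
state = "IDLE"
for event in ["CONNECT", "CA_RECEIVED", "DISCONNECT", "DR_RECEIVED"]:
    state = step(state, event)
print(state)  # IDLE: back where we started after a full open/close cycle
```

Encoding the transitions as a table makes the property discussed above concrete: an event that has no entry for the current state (such as a stray packet) is simply rejected rather than silently changing state.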
BERKELEY SOCKETS
These primitives are the socket primitives used in Berkeley UNIX for TCP.
The socket primitives are mainly used for TCP. These sockets were first released as part of the Berkeley
UNIX 4.2BSD software distribution in 1983. They quickly became popular. The primitives are now widely
used for Internet programming on many operating systems, especially UNIX-based systems, and there is a
socket-style API for Windows called ‘‘winsock.’’
The first four primitives in the list are executed in that order by servers.
The SOCKET primitive creates a new endpoint and allocates table space for it within the transport
entity. The parameters include the addressing format to be used, the type of service desired, and the protocol.
Newly created sockets do not have network addresses.
The BIND primitive is used to assign an address to the newly created socket. Once a server has bound
an address to a socket, remote clients can connect to it.
The LISTEN call allocates space to queue incoming calls in case several clients try to
connect at the same time.
The server executes an ACCEPT primitive to block waiting for an incoming connection.
On the client side, too, a socket must first be created, using the SOCKET primitive.
The CONNECT primitive blocks the caller and actively starts the connection process. When it
completes, the client process is unblocked and the connection is established.
Both sides can now use SEND and RECEIVE to transmit and receive data over the full-duplex
connection.
Connection release with sockets is symmetric. When both sides have executed a CLOSE primitive, the
connection is released.
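The whole sequence — SOCKET, BIND, LISTEN, ACCEPT on the server and SOCKET, CONNECT on the client — maps directly onto Python's socket module. A minimal sketch (the loopback address and echo behavior are illustrative, not from the text):

```python
import socket
import threading

srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)  # SOCKET
srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
srv.bind(("127.0.0.1", 0))   # BIND (port 0 lets the OS pick a free port)
srv.listen(1)                # LISTEN: queue space for incoming calls
port = srv.getsockname()[1]

def server():
    conn, addr = srv.accept()        # ACCEPT: block until a client turns up
    with conn:
        data = conn.recv(1024)       # RECEIVE
        conn.sendall(data.upper())   # SEND
    # leaving the with-block performs the server's half of the CLOSE

t = threading.Thread(target=server)
t.start()

with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as c:
    c.connect(("127.0.0.1", port))   # CONNECT: blocks until established
    c.sendall(b"hello")              # SEND over the full-duplex connection
    reply = c.recv(1024)             # RECEIVE
# the client's half of the CLOSE happens here; release is symmetric

t.join()
srv.close()
print(reply)  # b'HELLO'
```

Note that the server's bind and listen happen before the client thread of control calls connect, mirroring the requirement that the server be passively waiting when the client's CONNECT arrives.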
ELEMENTS OF TRANSPORT PROTOCOLS
The transport service is implemented by a transport protocol used between the two transport entities. The
transport protocols resemble the data link protocols. Both have to deal with error control, sequencing, and flow
control, among other issues. The differences between a transport protocol and a data link protocol arise from the
environment in which they operate.
These differences are due to major dissimilarities between the environments in which the two protocols
operate, as shown in Fig.
At the data link layer, two routers communicate directly via a physical channel, whether wired or
wireless, whereas at the transport layer, this physical channel is replaced by the entire network. This difference
has many important implications for the protocols.
Figure (a) Environment of the data link layer. (b) Environment of the transport layer.
In the data link layer, it is not necessary for a router to specify which router it wants to talk to. In the
transport layer, explicit addressing of destinations is required.
In the transport layer, initial connection establishment is more complicated, as we will see. Another difference
between the data link layer and the transport layer is the potential existence of storage capacity in the subnet.
Buffering and flow control are needed in both layers, but the presence of a large and dynamically
varying number of connections in the transport layer may require a different approach than we used in the data
link layer.
Figure 4.5 illustrates the relationship between the NSAP, TSAP and transport connection. Application
processes, both clients and servers, can attach themselves to a TSAP to establish a connection to a remote
TSAP.
These connections run through NSAPs on each host, as shown. The purpose of having TSAPs is that in
some networks, each computer has a single NSAP, so some way is needed to distinguish multiple transport end
points that share that NSAP.
1. ADDRESSING
When an application (e.g., a user) process wishes to set up a connection to a remote application process, it
must specify which one to connect to. The method normally used is to define transport addresses to which
processes can listen for connection requests. In the Internet, these endpoints are called ports.
There are two types of access points.
TSAP (Transport Service Access Point) to mean a specific endpoint in the transport layer.
The analogous endpoints in the network layer (i.e., network layer addresses) are not surprisingly called
NSAPs (Network Service Access Points). IP addresses are examples of NSAPs.
A possible scenario for a transport connection is as follows:
1. A mail server process attaches itself to TSAP 1522 on host 2 to wait for an incoming call. How a
process attaches itself to a TSAP is outside the networking model and depends entirely on the local operating
system. A call such as our LISTEN might be used, for example.
2. An application process on host 1 wants to send an email message, so it attaches itself to TSAP 1208 and
issues a CONNECT request. The request specifies TSAP 1208 on host 1 as the source and TSAP 1522 on
host 2 as the destination. This action ultimately results in a transport connection being established between
the application process and the server.
3. The application process sends over the mail message.
4. The mail server responds to say that it will deliver the message.
5. The transport connection is released.
2. CONNECTION ESTABLISHMENT:
With packet lifetimes bounded, it is possible to devise a foolproof way to establish connections safely.
Packet lifetime can be bounded to a known maximum using one of the following techniques:
Restricted subnet design
Putting a hop counter in each packet
Timestamping each packet
Using a three-way handshake, a connection can be established. This establishment protocol does not require both
sides to begin sending with the same sequence number.
Fig 4.6: Three protocol scenarios for establishing a connection using a three-way handshake. CR denotes
CONNECTION REQUEST. (a) Normal operation. (b) Old duplicate CONNECTION REQUEST
appearing out of nowhere. (c) Duplicate CONNECTION REQUEST and duplicate ACK.
The first technique includes any method that prevents packets from looping, combined with some way
of bounding delay, including congestion delay, over the longest possible path. This is difficult, given that
internets may range from a single city to international in scope.
The second method consists of having the hop count initialized to some appropriate value and
decremented each time the packet is forwarded. The network protocol simply discards any packet
whose hop counter becomes zero.
The third method requires each packet to bear the time it was created, with the routers agreeing to
discard any packet older than some agreed-upon time.
This establishment protocol involves one peer checking with the other that the connection request is
indeed current. Host 1 chooses a sequence number, x, and sends a CONNECTION REQUEST
segment containing it to host 2. Host 2 replies with an ACK segment acknowledging x and announcing
its own initial sequence number, y.
Finally, host 1 acknowledges host 2’s choice of an initial sequence number in the first data segment that
it sends.
In fig (b) the first segment is a delayed duplicate CONNECTION REQUEST from an old connection.
This segment arrives at host 2 without host 1’s knowledge. Host 2 reacts to this segment by sending
host 1 an ACK segment, in effect asking for verification that host 1 was indeed trying to set up a new
connection.
When host 1 rejects host 2’s attempt to establish a connection, host 2 realizes that it was tricked by a
delayed duplicate and abandons the connection. In this way, a delayed duplicate does no damage.
The worst case is when both a delayed CONNECTION REQUEST and an ACK are floating around in
the subnet.
In fig (c), as in the previous example, host 2 gets a delayed CONNECTION REQUEST and replies to it.
At this point, it is crucial to realize that host 2 has proposed using y as the initial sequence number for
host 2 to host 1 traffic, knowing full well that no segments containing sequence number y or
acknowledgements to y are still in existence.
When the second delayed segment arrives at host 2, the fact that z has been acknowledged rather than y
tells host 2 that this, too, is an old duplicate.
The important thing to realize here is that there is no combination of old segments that can cause
the protocol to fail and have a connection set up by accident when no one wants it.
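The core of the handshake — each side proceeding only if its own chosen number is what comes back acknowledged — can be traced in a few lines. The (kind, seq, ack) message tuples are an invented format for illustration:

```python
import random

def three_way_handshake():
    """Sketch of the normal three-way handshake of Fig. 4.6(a).
    The (kind, seq, ack) tuple format is invented for illustration."""
    x = random.randrange(1 << 16)   # host 1 picks an initial sequence number
    cr = ("CR", x, None)            # CONNECTION REQUEST carrying x

    y = random.randrange(1 << 16)   # host 2 picks its own initial number
    ack = ("ACK", y, cr[1])         # ACK acknowledges x and announces y

    # Host 1 proceeds only if its own x was acknowledged; a delayed duplicate
    # CR from an old connection would carry a stale number and be rejected.
    if ack[2] != x:
        return "REJECT"

    data = ("DATA", x, ack[1])      # first data segment acknowledges y
    # Host 2 likewise checks that y, not some stale value, is acknowledged.
    return "ESTABLISHED" if data[2] == y else "REJECT"

print(three_way_handshake())  # ESTABLISHED in the normal case
```

The delayed-duplicate scenarios of figs (b) and (c) correspond to the two REJECT branches: a stale x fails the first check, and a stale acknowledgement (z instead of y) fails the second.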
3. CONNECTION RELEASE:
A connection is released using either the asymmetric or the symmetric variant. The improved protocol for
releasing a connection is a three-way handshake protocol.
There are two styles of terminating a connection:
1) Asymmetric release and
2) Symmetric release.
Asymmetric release is the way the telephone system works: when one party hangs up, the
connection is broken. Symmetric release treats the connection as two separate unidirectional
connections and requires each one to be released separately.
4. FLOW CONTROL AND BUFFERING:
Flow control is done by having a sliding window on each connection to keep a fast transmitter from
overrunning a slow receiver. Buffering must be done by the sender if the network service is unreliable: the sender
buffers all the TPDUs sent to the receiver. The buffer organization varies with the sizes of the TPDUs.
They are:
a) Chained Fixed-size Buffers
b) Chained Variable-size Buffers
c) One large Circular Buffer per Connection
(a). Chained Fixed-size Buffers:
If most TPDUs are nearly the same size, the buffers are organized as a pool of identically sized buffers,
with one TPDU per buffer.
(b). Chained Variable-size Buffers:
This is an approach to the buffer-size problem. i.e., if there is wide variation in TPDU size, from a few
characters typed at a terminal to thousands of characters from file transfers, some problems may occur:
If the buffer size is chosen equal to the largest possible TPDU, space will be wasted whenever a short
TPDU arrives.
If the buffer size is chosen less than the maximum TPDU size, multiple buffers will be needed for long
TPDUs.
To overcome these problems, we employ variable-size buffers.
Figure 4.7. (a) Chained fixed-size buffers. (b) Chained variable-sized buffers. (c) One large circular
buffer per connection.
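The third organization, one large circular buffer per connection, can be sketched as follows. The 8-byte capacity is purely illustrative:

```python
class CircularBuffer:
    """Sketch of one large circular buffer per connection (Fig. 4.7c)."""

    def __init__(self, capacity):
        self.buf = bytearray(capacity)
        self.capacity = capacity
        self.head = 0   # index of the next byte to hand to the application
        self.size = 0   # bytes currently buffered

    def put(self, data):
        # An arriving TPDU's payload is appended, wrapping around the end.
        if self.size + len(data) > self.capacity:
            raise BufferError("overflow: sender must respect the window")
        for b in data:
            self.buf[(self.head + self.size) % self.capacity] = b
            self.size += 1

    def get(self, n):
        # The application consumes up to n bytes from the front.
        n = min(n, self.size)
        out = bytes(self.buf[(self.head + i) % self.capacity] for i in range(n))
        self.head = (self.head + n) % self.capacity
        self.size -= n
        return out

cb = CircularBuffer(8)
cb.put(b"abcdef")
first = cb.get(4)
cb.put(b"ghij")        # wraps around the end of the underlying array
second = cb.get(6)
print(first, second)   # b'abcd' b'efghij'
```

This scheme makes good use of memory when connections are heavily loaded, since one contiguous region serves TPDUs of any size, at the cost of dedicating the whole region to a single connection.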
5. MULTIPLEXING:
In networks that use virtual circuits within the subnet, each open connection consumes some table space
in the routers for the entire duration of the connection. If buffers are dedicated to the virtual circuit in each
router as well, a user who leaves a terminal logged into a remote machine ties up these resources even while
idle. To avoid this waste, multiplexing is needed. There are two kinds of multiplexing:
Figure 4.8. (a) Upward multiplexing. (b) Downward multiplexing
In the figure, all four distinct transport connections use the same network connection to the
remote host. When connect time forms the major component of the carrier’s bill, it is up to the transport layer to
group transport connections according to their destination and map each group onto the minimum number of
network connections.
The Internet has two main protocols in the transport layer, a connectionless protocol and a connection-
oriented one. The protocols complement each other. The connectionless protocol is UDP. It does almost
nothing beyond sending packets between applications, letting applications build their own protocols on top as
needed.
The connection-oriented protocol is TCP. It does almost everything. It makes connections and adds
reliability with retransmissions, along with flow control and congestion control, all on behalf of the
applications that use it. Since UDP is a transport layer protocol that typically runs in the operating system, and
protocols that use UDP typically run in user space, these uses might be considered applications.
INTRODUCTION TO UDP
The Internet protocol suite supports a connectionless transport protocol called UDP (User Datagram
Protocol). UDP provides a way for applications to send encapsulated IP datagrams without having to
establish a connection.
UDP transmits segments consisting of an 8-byte header followed by the pay-load. The two ports serve
to identify the end-points within the source and destination machines.
When a UDP packet arrives, its payload is handed to the process attached to the destination port. This
attachment occurs when the BIND primitive is used. Without the port fields, the transport layer would not
know what to do with each incoming packet. With them, it delivers the embedded segment to the
correct application.
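A complete UDP exchange needs nothing beyond creating sockets, binding the receiver to a port, and sending a datagram. A minimal sketch, with loopback standing in for a real network:

```python
import socket

# Receiver: create a socket and BIND it to a port, so the transport layer
# knows which process gets datagrams addressed to that port.
rx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
rx.bind(("127.0.0.1", 0))      # port 0: let the OS pick a free port
port = rx.getsockname()[1]

# Sender: no connection establishment at all -- just send a datagram.
tx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
tx.sendto(b"ping", ("127.0.0.1", port))

data, addr = rx.recvfrom(1024)  # payload handed to the bound process
print(data)  # b'ping'

tx.close()
rx.close()
```

The contrast with the earlier TCP sketch is the point: there is no LISTEN, ACCEPT, or CONNECT, and nothing guarantees that the datagram arrives at all on a real network.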
INTRODUCTION TO TCP
TCP was specifically designed to provide a reliable end-to-end byte stream over an unreliable network. It
was designed to adapt dynamically to the properties of the internetwork and to be robust in the face of many kinds
of failures.
Each machine supporting TCP has a TCP transport entity, which accepts user data streams from local
processes, breaks them up into pieces not exceeding 64 KB, and sends each piece as a separate IP datagram.
When these datagrams arrive at a machine, they are given to the TCP entity, which reconstructs the original byte
streams. It is up to TCP to time out and retransmit segments as needed, and to reassemble datagrams into messages
in the proper sequence.
The different issues to be considered are:
Ports: Port numbers below 1024 are called well-known ports and are reserved for standard services.
Eg: port 21 for FTP, port 25 for SMTP, and port 80 for HTTP.
To establish a connection, one side, say, the server, passively waits for an incoming connection by
executing the LISTEN and ACCEPT primitives, either specifying a specific source or nobody in particular.
The other side, say, the client, executes a CONNECT primitive, specifying the IP address and port to
which it wants to connect, the maximum TCP segment size it is willing to accept, and optionally some user data
(e.g., a password).
The CONNECT primitive sends a TCP segment with the SYN bit on and ACK bit off and waits for a
response.
Fig 4.12: a) TCP Connection establishment in the normal case b) Call Collision
Figure 4.13. The states used in the TCP connection management finite state machine.
Figure 4.14 - TCP connection management finite state machine.
1. The server does a LISTEN and settles down to see who turns up.
2. When a SYN comes in, the server acknowledges it and goes to the SYN RCVD state.
3. When the server’s SYN is itself acknowledged, the 3-way handshake is complete and the server goes to the
ESTABLISHED state. Data transfer can now occur.
4. When the client has had enough, it does a close, which causes a FIN to arrive at the server [dashed
box marked passive close].
5. The server is then signaled.
6. When it too, does a CLOSE, a FIN is sent to the client.
7. When the client’s acknowledgement shows up, the server releases the connection and deletes the
connection record.
TCP Transmission Policy
This is one of the problems that ruin TCP performance. It occurs when data are passed to the
sending TCP entity in large blocks, but an interactive application on the receiving side reads 1 byte at a time.
Initially the TCP buffer on the receiving side is full and the sender knows this (win = 0).
Then the interactive application reads 1 character from the TCP stream.
Now, the receiving TCP sends a window update to the sender saying that it is all right to send 1 byte.
The sender obliges and sends 1 byte.
The buffer is now full again, and so the receiver acknowledges the 1-byte segment but sets the window to zero.
This behavior can go on forever.
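This pathological exchange is known as the silly window syndrome. It can be traced in a few lines; the 4 KB buffer size and five-round trace are illustrative values:

```python
# Trace of the silly window syndrome: a 4 KB receive buffer (illustrative),
# a bulk sender, and an application reading 1 byte at a time.
BUFSIZE = 4096
buffered = BUFSIZE       # receiver's buffer starts full, so window = 0
segments_sent = 0

for _ in range(5):       # five rounds of the pathological cycle
    buffered -= 1        # the application reads one byte...
    window = BUFSIZE - buffered
    # ...the receiver advertises 1 byte of space, the sender obliges with a
    # 1-byte segment, and the buffer is immediately full again.
    buffered += window
    segments_sent += 1

print(segments_sent)  # 5 one-byte segments, each paying full header overhead
```

Clark's solution is to forbid such tiny window advertisements: the receiver waits until it can advertise a decent amount of space (for example, a full segment) before sending the update.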
TCP tries to prevent congestion from occurring in the first place in the following way:
When a connection is established, a suitable window size is chosen and the receiver specifies a window
based on its buffer size. If the sender sticks to this window size, problems will not occur due to buffer overflow
at the receiving end, but they may still occur due to internal congestion within the network. Let us see how this
problem occurs.
Figure 4.16. (a) A fast network feeding a low-capacity receiver. (b) A slow network feeding a high-
capacity receiver.
In fig (a): We see a thick pipe leading to a small-capacity receiver. As long as the sender does not send more
water than the bucket can contain, no water will be lost.
In fig (b): The limiting factor is not the bucket capacity, but the internal carrying capacity of the network. If too
much water comes in too fast, it will back up and some will be lost.
When a connection is established, the sender initializes the congestion window to the size of the maximum
segment in use on the connection.
It then sends one maximum segment. If this segment is acknowledged before the timer goes off, it adds
one segment’s worth of bytes to the congestion window to make it two maximum-size segments and
sends two segments.
As each of these segments is acknowledged, the congestion window is increased by one maximum segment
size.
When the congestion window is n segments and all n are acknowledged on time, the congestion
window is increased by the byte count corresponding to n segments.
The congestion window keeps growing exponentially until either a time out occurs or the receiver’s
window is reached.
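This doubling-per-round-trip growth (slow start) can be sketched as follows; the 1024-byte maximum segment size and 64 KB receiver window are illustrative values:

```python
MSS = 1024           # maximum segment size (illustrative)
RWND = 64 * 1024     # receiver's advertised window (illustrative)

cwnd = MSS           # congestion window starts at one maximum segment
history = [cwnd]
# Each round trip in which the whole window is acknowledged on time doubles
# the congestion window, until the receiver's window caps the growth.
while cwnd < RWND:
    cwnd = min(cwnd * 2, RWND)
    history.append(cwnd)

print(history)  # 1024, 2048, 4096, ... doubling each round trip up to 65536
```

Only the growth phase is shown; in the full algorithm a timeout would also set the threshold and shrink the window, as the next paragraph describes.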
The internet congestion control algorithm uses a third parameter, the “threshold” in addition to receiver
and congestion windows.
1. Retransmission timer: When a segment is sent, a timer is started. If the segment is acknowledged before the
timer expires, the timer is stopped. If on the other hand, the timer goes off before the acknowledgement comes
in, the segment is retransmitted and the timer is started again. The algorithm that constantly adjusts the time-out
interval, based on continuous measurements of network performance, was proposed by Jacobson and works as
follows:
For each connection, TCP maintains a variable RTT, which is the best current estimate of the round-trip
time to the destination in question.
When a segment is sent, a timer is started, both to see how long the acknowledgement takes and to
trigger a retransmission if it takes too long.
If the acknowledgement gets back before the timer expires, TCP measures how long the acknowledgement
took, say M.
It then updates RTT according to the formula RTT = αRTT + (1 − α)M, where α is a smoothing
factor (typically 7/8) that determines how much weight is given to the old value.
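The smoothing update can be seen in a few lines. The initial estimate and the sequence of measurements are invented for illustration; α = 7/8 is the commonly cited value for this estimator:

```python
ALPHA = 7 / 8   # smoothing factor: how much weight the old estimate keeps

def update_rtt(rtt, m):
    """One round of the smoothed estimator: RTT = alpha*RTT + (1 - alpha)*M."""
    return ALPHA * rtt + (1 - ALPHA) * m

rtt = 100.0                          # initial estimate in ms (illustrative)
for m in [120, 120, 80, 200, 100]:   # invented acknowledgement measurements
    rtt = update_rtt(rtt, m)
print(round(rtt, 1))  # 112.2: the estimate drifts toward recent samples
```

Note how the single spike of 200 ms nudges the estimate up without dominating it; that damping is the point of the exponentially weighted average.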
2. Persistence timer:
It is designed to prevent the following deadlock:
The receiver sends an acknowledgement with a window size of 0, telling the sender to wait. Later, the
receiver updates the window, but the packet with the update is lost. Now both the sender and the receiver are
waiting for each other to do something.
When the persistence timer goes off, the sender transmits a probe to the receiver; the response to the
probe gives the window size.
If it is still zero, the persistence timer is set again and the cycle repeats.
If it is nonzero, data can now be sent.
3. Keep-alive timer: When a connection has been idle for a long time, this timer may go off to cause one side
to check whether the other side is still there. If it fails to respond, the connection is terminated.