CS3591 Notes

UNIT I INTRODUCTION AND APPLICATION LAYER

Data Communication - Networks – Network Types – Protocol Layering – TCP/IP Protocol suite – OSI Model – Introduction to Sockets - Application Layer protocols: HTTP – FTP – Email protocols (SMTP - POP3 - IMAP - MIME) – DNS – SNMP

1.1 Data Communication


When we communicate, we are sharing information. This sharing can be local or
remote.
The term telecommunication, which includes telephony, telegraphy, and television,
means communication at a distance (tele is Greek for "far").
The word data refers to information presented in whatever form is agreed upon by
the parties creating and using the data.
“Data communications are the exchange of data between two devices via some
form of transmission medium such as a wire cable”.
For data communications to occur, the communicating devices must be part of a
communication system made up of a combination of hardware (physical equipment) and
software (programs).

The effectiveness of a data communications system depends on four fundamental characteristics:

1. Delivery: The system must deliver data to the correct destination. Data must be received by the intended device or user.

2. Accuracy: The system must deliver the data accurately. Data that have been altered in
transmission and left uncorrected are unusable.

3. Timeliness: The system must deliver data in a timely manner. Data delivered late are useless.

4. Jitter: Jitter refers to the variation in the packet arrival time. It is the uneven delay in
the delivery of audio or video packets.

1.1.1 Components

A data communications system has five components

1.Message: The message is the information (data) to be communicated. Popular forms of


information include text, numbers, pictures, audio, and video.
2.Sender: The sender is the device that sends the data message. It can be a computer,
workstation, telephone handset, video camera, and so on.

3. Receiver: The receiver is the device that receives the message. It can be a computer,
workstation, telephone handset, television, and so on.

4. Transmission medium: The transmission medium is the physical path by which a message
travels from sender to receiver. Some examples of transmission media include twisted-pair
wire, coaxial cable, fiber optic cable, and radio waves.

5. Protocol: A protocol is a set of rules that govern data communications. It represents an


agreement between the communicating devices. Without a protocol, two devices may be
connected but not communicating.

1.1.2 Data Representation

Information today comes in different forms such as text, numbers, images, audio, and
video.
Text
In data communications, text is represented as a bit pattern, a sequence of bits (0s or 1s).
Different sets of bit patterns have been designed to represent text symbols. Each set is called a
code, and the process of representing symbols is called coding.
Unicode uses 32 bits to represent a symbol or character used in any language in the world. The American Standard Code for Information Interchange (ASCII) now constitutes the first 128 characters (codes 0 to 127) in Unicode and is also referred to as Basic Latin.

Numbers
Numbers are also represented by bit patterns. However, a code such as ASCII is not used
to represent numbers; the number is directly converted to a binary number to simplify
mathematical operations.

Images
Images are also represented by bit patterns. In its simplest form, an image is composed
of a matrix of pixels (picture elements), where each pixel is a small dot. The size of the
pixel depends on the resolution. The size and the value of the pattern depend on the image.

For an image made of only black-and-white dots (e.g., a chessboard), a 1-bit pattern is enough
to represent a pixel. If an image is not made of pure white and pure black pixels, you can
increase the size of the bit pattern to include gray scale.
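The number of bits per pixel follows directly from how many distinct levels a pixel must represent. The short Python sketch below (the level counts are illustrative values only) computes the minimum bit-pattern size.

```python
import math

def bits_per_pixel(levels: int) -> int:
    """Smallest bit-pattern size able to represent `levels` distinct pixel values."""
    return math.ceil(math.log2(levels))

print(bits_per_pixel(2))    # pure black-and-white image -> 1 bit per pixel
print(bits_per_pixel(256))  # 256 gray levels -> 8 bits per pixel
```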

There are several methods to represent color images. One method is called RGB, so called
because each color is made of a combination of three primary colors: red, green, and blue.
The intensity of each color is measured, and a bit pattern is assigned to it. Another method is
called YCM, in which a color is made of a combination of three other primary colors: yellow,
cyan, and magenta.

Audio
Audio refers to the recording or broadcasting of sound or music. Audio is by nature different
from text, numbers, or images.
Video
Video refers to the recording or broadcasting of a picture or movie. Video can either be
produced as a continuous entity (e.g., by a TV camera), or it can be a combination of
images, each a discrete entity, arranged to convey the idea of motion.

1.1.3 Data Flow / transmission mode

Communication between two devices can be simplex, half-duplex, or full-duplex as shown in Figure 1.2.

Simplex
In simplex mode, the communication is unidirectional, as on a one-way street. Only one
of the two devices on a link can transmit; the other can only receive (see Figure 1.2a).
Keyboards and traditional monitors are examples of simplex devices.

Advantage of Simplex mode:


o In simplex mode, the station can utilize the entire bandwidth of the communication
channel, so that more data can be transmitted at a time.

Disadvantage of Simplex mode:


o Communication is unidirectional, so it has no inter-communication between
devices.

Half-Duplex
In half-duplex mode, each station can both transmit and receive, but not at the same time.
When one device is sending, the other can only receive, and vice versa (see Figure 1.2b)
Walkie-talkies and CB (citizens band) radios are both half-duplex systems.
The half-duplex mode is used in cases where there is no need for communication
in both directions at the same time.
Advantage of Half-duplex mode:
o In half-duplex mode, both the devices can send and receive the data and also can
utilize the entire bandwidth of the communication channel during the transmission
of data.

Disadvantage of Half-Duplex mode:


o In half-duplex mode, when one device is sending data, the other has to wait; this causes a delay in sending the data at the right time.

Full-Duplex
In full-duplex mode (also called duplex), both stations can transmit and receive simultaneously
(see Figure 1.2c).The full-duplex mode is like a two-way street with traffic flowing in both
directions at the same time. One common example of full-duplex communication is the
telephone network. When two people are communicating by a telephone line, both can talk and
listen at the same time. The full-duplex mode is used when communication in both directions
is required all the time.

Advantage of Full-duplex mode:


o Both the stations can send and receive the data at the same time.

Disadvantage of Full-duplex mode:


o If no dedicated path exists between the devices, the capacity of the communication channel is divided between the two directions.

1.2 NETWORKS
A network is a set of devices (often referred to as nodes) connected by communication links.
A node can be a computer, printer, or any other device capable of sending and/or receiving data
generated by other nodes on the network.

Distributed Processing
Most networks use distributed processing, in which a task is divided among multiple
computers. Instead of one single large machine being responsible for all aspects of a
process, separate computers (usually a personal computer or workstation) handle a
subset.

1.2.1 Network Criteria


A network must be able to meet a certain number of criteria. The most important of
these are performance, reliability, and security.

Performance
Performance can be measured in many ways, including transit time and response time.
Transit time is the amount of time required for a message to travel from one device to another.
Response time is the elapsed time between an inquiry and a response. The performance of a
network depends on a number of factors, including the number of users,
the type of transmission medium, the capabilities of the connected hardware, and the
efficiency of the software. Performance is often evaluated by two networking metrics:
throughput and delay. Throughput is an actual measurement of how fast data can be
transmitted. Latency/delay is time required for a message to completely arrive at the destination
from source. We often need more throughput and less delay. However, these two criteria are
often contradictory. If we try to send more data to the network, we may increase throughput
but we increase the delay because of traffic congestion in the network.

Reliability
In addition to accuracy of delivery, network reliability is measured by the frequency of
failure, the time it takes a link to recover from a failure, and the network's robustness in
a catastrophe.

Security
Network security issues include protecting data from unauthorized access, protecting
data from damage and development, and implementing policies and procedures for
recovery from breaches and data losses.

1.2.2 Physical Structures

Type of Connection / Line configuration

A network is two or more devices connected through links. A link is a communications pathway that transfers data from one device to another. There are two possible types of connections: point-to-point and multipoint.
Point-to-Point
A point-to-point connection provides a dedicated link between two devices. The entire capacity
of the link is reserved for transmission between those two devices. Most point-to-point
connections use an actual length of wire or cable to connect the two ends (see Figure 1.3a).
When you change television channels by infrared remote control, you are establishing a point-
to-point connection between the remote control and the television's control system.
Multipoint
A multipoint (also called multidrop) connection is one in which more than two specific devices
share a single link (see Figure 1.3b). In a multipoint environment, the capacity of the channel
is shared, either spatially or temporally. If several devices can use the link simultaneously, it is
a spatially shared connection. If users must take turns, it is a timeshared connection.
Physical Topology

The term physical topology refers to the way in which a network is laid out physically. Two or
more devices connect to a link; two or more links form a topology. The topology of a network
is the geometric representation of the relationship of all the links and linking devices (usually
called nodes) to one another. There are four basic topologies possible: mesh, star, bus, and ring
(see Figure 1.4).

Mesh Topology

• In a mesh topology, every device has a dedicated point-to-point link to every other device.
• The term dedicated means that the link carries traffic only between the two devices it connects.
• The number of physical links in a fully connected mesh network with n nodes is given by n(n – 1)/2 (see the short example below).
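As a quick check of the formula, here is a minimal Python sketch (the node count is an arbitrary example) that counts the dedicated links needed for a fully connected mesh.

```python
def mesh_links(n: int) -> int:
    """Number of point-to-point links in a fully connected mesh of n nodes: n(n-1)/2."""
    return n * (n - 1) // 2

print(mesh_links(5))   # 5 devices need 10 dedicated links
```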

Advantages of Mesh Topology:
1. Each connection can carry its own data load.
2. It is robust.
3. Fault is diagnosed easily.
4. Provides security and privacy.

Disadvantages of Mesh Topology:
1. Installation and configuration is difficult.
2. Cabling cost is more.
3. Bulk wiring is required.

Star Topology
• In a star topology, each device has a dedicated point-to-point link only to a central controller, usually called a hub.
• The devices are not directly linked to one another.
• The controller/hub acts as an exchange.
• If one device wants to send data to another, it sends the data to the controller/hub, which then relays the data to the other connected device.

Advantages of Star Topology:
1. Fast performance with few nodes and low network traffic.
2. The hub can be upgraded easily.
3. Easy to troubleshoot.
4. Easy to set up and modify.
5. Only the failed node is affected; the rest of the nodes can work smoothly.

Disadvantages of Star Topology:
1. Cost of installation is high.
2. Expensive to use.
3. If the hub fails, the whole network is stopped.
4. Performance is based on the hub, that is, it depends on the hub's capacity.

Bus Topology

• Bus topology is a network type in which every computer and network device is connected to a single cable.
• The long single cable acts as a backbone to link all the devices in a network.
• When it has exactly two endpoints, it is called a Linear Bus topology.
• It transmits data only in one direction.

Advantages of Bus Topology:
1. It is cost effective.
2. The cable required is the least compared to other network topologies.
3. Used in small networks.
4. It is easy to understand.
5. Easy to expand by joining two cables together.

Disadvantages of Bus Topology:
1. If the cable fails, the whole network fails.
2. If network traffic is heavy or nodes are more, the performance of the network decreases.
3. The cable has a limited length.
4. It is slower than the ring topology.
Ring Topology

• In a ring topology, each device has a dedicated point-to-point connection with only the two devices on either side of it.
• A signal is passed along the ring in one direction, from device to device, until it reaches its destination.
• Each device in the ring incorporates a repeater.
• When a device receives a signal intended for another device, its repeater regenerates the bits and passes them along.

Advantages of Ring Topology:
1. Transmission is not affected by high traffic or by adding more nodes, as only the node holding the token can transmit data.
2. Cheap to install and expand.

Disadvantages of Ring Topology:
1. Troubleshooting is difficult in a ring topology.
2. Adding or deleting computers disturbs the network activity.
3. Failure of one computer disturbs the whole network.

Hybrid Topology
• Hybrid Topology is a combination of two or more basic topologies.
• For example, if one department in an office uses a ring topology and the other departments use star and bus topologies, then connecting these topologies will result in a Hybrid Topology.
• Hybrid Topology inherits the advantages and disadvantages of the topologies included.

Advantages of Hybrid Topology:
1. Reliable, as error detection and troubleshooting is easy.
2. Effective.
3. Scalable, as size can be increased easily.
4. Flexible.

Disadvantages of Hybrid Topology:
1. Complex in design.
2. Costly.

1.3 NETWORK TYPES


Different types of networks: LANs, MANs, and WANs.
1.3.1 Local Area Network

A Local Area Network is a group of computers connected to each other in a small area such as a building or an office.
• LAN is used for connecting two or more personal computers through a communication medium such as twisted pair, coaxial cable, etc.
• It is less costly as it is built with inexpensive hardware such as hubs, network adapters, and Ethernet cables.
• Data is transferred at a very high rate in a Local Area Network.
• A LAN can be connected using a common cable or a switch.
Advantages of LAN:
• Resource sharing
• Software application sharing
• Easy and cheap communication
• Centralized data
• Data security
• Internet sharing

Disadvantages of LAN:
• High setup cost
• Privacy violations
• Data security threat
• LAN maintenance job
• Covers limited area
1.3.2 Metropolitan Area Network (MAN)

o A metropolitan area network is a network that covers a larger geographic area by interconnecting different LANs to form a larger network.
o It generally covers towns and cities (50 km)
o In MAN, various LANs are connected to each other through a telephone
exchange line.
o Communication medium used for MAN are optical fibers, cables etc.
o It has a higher range than a Local Area Network (LAN). It is adequate for distributed computing applications.

1.3.3 Wide Area Network (WAN)


o A Wide Area Network is a network that extends over a large geographical area such as states or countries.
o A Wide Area Network is a much larger network than a LAN.
o A Wide Area Network is not limited to a single location; it spans a large geographical area through a telephone line, fibre optic cable, or satellite links.
o The Internet is one of the biggest WANs in the world.
o A Wide Area Network is widely used in the fields of business, government, and education.
o A WAN can be either a point-to-point WAN or a switched WAN.
Point-to-Point WAN
A point-to-point WAN is a network that connects two communicating devices through a
transmission medium (cable or air). Figure 1.9 shows an example of a point-to-point WAN.

Switched WAN
A switched WAN is a network with more than two ends. It is used in the backbone of a
global communications network today. Figure 1.10 shows an example of a switched WAN

Advantages of Wide Area Network:
o Large geographical area
o Centralized data
o Exchange messages
o Sharing of software and resources
o High bandwidth

Disadvantages of Wide Area Network:
o Security issues
o Needs firewall and antivirus software
o High setup cost
o Troubleshooting problems
Internetwork
Today, it is very rare to see a LAN or a WAN in isolation; they are connected to one
another. When two or more networks are connected, they make an internetwork, or internet.
As an example, assume that an organization has two offices, one on the east coast and the other
on the west coast.
Each office has a LAN that allows all employees in the office to communicate with
each other. To make the communication between employees at different offices possible, the
management leases a point-to-point dedicated WAN from a service provider, such as a
telephone company, and connects the two LANs.
Now the company has an internetwork, or a private internet (with lowercase i).
Communication between offices is now possible. Figure 1.11 shows this internet.

Types of Internetwork

Extranet: An extranet is used for information sharing. Access to the extranet is restricted to only those users who have login credentials. An extranet is the lowest level of internetworking. It can be categorized as a MAN, WAN, or other computer network. An extranet cannot consist of a single LAN; it must have at least one connection to an external network.

Intranet: An intranet belongs to an organization and is accessible only by the organization's employees or members. The main aim of the intranet is to share information and resources among the organization's employees. An intranet provides the facility to work in groups and to hold teleconferences.

1.3.4 The Internet


An internet (note the lowercase i) is two or more networks that can communicate with each
other. The most notable internet is called the Internet (uppercase I) and is composed of
thousands of interconnected networks. Figure 1.13 shows a conceptual (not geographical)
view of the Internet.
The figure shows the Internet as several backbones, provider networks, and customer networks.
At the top level, the backbones are large networks owned by some communication companies.
The backbone networks are connected through some complex switching systems, called
peering points.

At the second level, there are smaller networks, called provider networks, that use the services
of the backbones for a fee. The provider networks are connected to backbones and sometimes
to other provider networks. The customer networks are networks at the edge of the Internet that
actually use the services provided by the Internet. They pay fees to provider networks for
receiving services.

Backbones and provider networks are also called Internet Service Providers (ISPs). The
backbones are often referred to as international ISPs; the provider networks are often referred
to as national or regional ISPs.

1.3.5 Accessing the Internet


The Internet today is an internetwork that allows any user to become part of it. The user,
however, needs to be physically connected to an ISP. The physical connection is normally
done through a point-to-point WAN (such as a telephone network, a cable network, a wireless
network, or other types of networks).

Using Telephone Networks

Today most residences and small businesses have telephone service, which means they are
connected to a telephone network. Because most telephone networks have already connected
themselves to the Internet, one option for residences and small businesses to connect to the
Internet is to change the voice line between the residence or business and the telephone center
to a point-to-point WAN. This can be done in two ways.

❏ Dial-up service. The first solution is to add a modem that converts data to voice to the
telephone line. The software installed on the computer dials the ISP and imitates making a
telephone connection. Unfortunately, the dial-up service is very slow, and when the line is used
for an Internet connection, it cannot be used for a telephone (voice) connection. It is only useful
for small residences and businesses with occasional connection to the Internet.

❏ DSL Service. Since the advent of the Internet, some telephone companies have upgraded
their telephone lines to provide higher-speed Internet services to residences or small businesses.
The digital subscriber line (DSL) service also allows the line to be used simultaneously for
voice and data communications.

Using Cable Networks

More and more residents over the last two decades have begun using cable TV services instead
of antennas to receive TV broadcasting. The cable companies have been upgrading their cable
networks and connecting to the Internet. A residence or a small business can be connected to
the Internet by using this service. It provides a higher-speed connection, but the speed varies
depending on the number of neighbors that use the same cable.

Using Wireless Networks

Wireless connectivity has recently become increasingly popular. A household or a small


business can use a combination of wireless and wired connections to access the Internet. With
the growing wireless WAN access, a household or a small business can be connected to the
Internet through a wireless WAN.

Direct Connection to the Internet

A large organization or a large corporation can itself become a local ISP and be connected to
the Internet. This can be done if the organization or the corporation leases a high-speed WAN
from a carrier provider and connects itself to a regional ISP. For example, a large university
with several campuses can create an internetwork and then connect the internetwork to the
Internet.

1.4 PROTOCOL LAYERING


➢ In networking, a protocol defines the rules that both the sender and receiver and all intermediate devices need to follow to be able to communicate effectively.
➢ A protocol provides a communication service that the processes use to exchange messages.
➢ When communication is simple, we may need only one simple protocol.
➢ When the communication is complex, we may need to divide the task between different
layers, in which case we need a protocol at each layer, or protocol layering.
➢ One advantage of protocol layering is that it allows us to separate the services from the implementation.
➢ A layer needs to be able to receive a set of services from the lower layer and to give the
services to the upper layer.
➢ Any modification in one layer will not affect the other layers.
Basic Elements of Layered Architecture
➢ Service: It is a set of actions that a layer provides to the higher layer.
➢ Protocol: It defines a set of rules that a layer uses to exchange the information with
peer entity. These rules mainly concern about both the contents and order of the
messages used.
➢ Interface: It is a way through which the message is transferred from one layer to
another layer.
Features of Protocol Layering
1. It decomposes the problem of building a network into more manageable
components.
2. It provides a more modular design.

1.4.2 Principles of Protocol Layering

1. The first principle dictates that if we want bidirectional communication, we need to make each layer so that it is able to perform two opposite tasks, one in each direction.
2. The second principle that we need to follow in protocol layering is that the two
objects under each layer at both sites should be identical.

1.5 TCP/IP PROTOCOL SUITE (INTERNET ARCHITECTURE)

The TCP/IP architecture is also called the Internet architecture.
It was developed by the US Defense Advanced Research Projects Agency (DARPA) for its packet-switched network (ARPANET).
TCP/IP is a protocol suite used in the Internet today.
It is a 5-layer model. The layers of TCP/IP are

1. Application layer
2. Transport Layer (TCP/UDP)
3. Network Layer
4. Datalink Layer
5. Physical Layer
1.5.1 Layered Architecture

To show how the layers in the TCP/IP protocol suite are involved in communication
between two hosts, we assume that we want to use the suite in a small internet made up of three
LANs (links), each with a link-layer switch. We also assume that the links are connected by
one router, as shown in Figure 1.18 (on next page). Let us assume that computer A
communicates with computer B.
As Figure 1.18 shows, we have five communicating devices in this communication:
source host (computer A), the link-layer switch in link 1, the router, the link-layer switch in
link 2, and the destination host (computer B). Each device is involved with a set of layers
depending on the role of the device in the internet. The two hosts are involved in all five layers.

1.5.2 Description of Each Layer

After understanding the concept of logical communication, we are ready to briefly discuss the
duty of each layer.

Application Layer
The application layer incorporates the functions of the top three OSI layers.
The application layer is the topmost layer in the TCP/IP model.
It is responsible for handling high-level protocols and issues of representation.
This layer allows the user to interact with the application.
When one application layer protocol wants to communicate with another application layer, it forwards its data to the transport layer.
Protocols such as FTP, HTTP, SMTP, and POP3 run in the application layer and provide services to the programs built on top of the application layer.

Transport Layer
The transport layer is responsible for the reliability, flow control, and error correction of the data being sent over the network.
The two protocols used in the transport layer are the User Datagram Protocol and the Transmission Control Protocol.
o UDP – UDP provides connectionless service and end-to-end delivery of transmission. It is an unreliable protocol: it can discover errors but does not specify or correct them.
o TCP – TCP provides full transport-layer services to applications. TCP is a reliable protocol: it detects errors and retransmits the damaged segments.
Network Layer
The network layer is the third layer of the TCP/IP model.
The main responsibility of the network layer is to send packets from any network and ensure that they arrive at the destination irrespective of the route they take.
The network layer handles the transfer of information across multiple networks through routers and gateways.
The IP protocol is used in this layer, and it is the most significant part of the entire TCP/IP suite.
Data Link Layer
We have seen that an internet is made up of several links (LANs and WANs)
connected by routers. When the next link to travel is determined by the router, the
data-link layer is responsible for taking the datagram and moving it across the link.
Physical Layer
The physical layer is responsible for carrying individual bits in a frame across the
link.
The physical layer is the lowest level in the TCP/IP protocol suite.
The communication between two devices at the physical layer is still a logical
communication because there is another hidden layer, the transmission media, under
the physical layer.

1.6 THE OSI MODEL


An ISO standard that covers all aspects of network communications is the Open
Systems Interconnection (OSI) model. It was first introduced in the late 1970s.
An open system is a set of protocols that allows any two different systems to
communicate regardless of their underlying architecture. The purpose of the OSI model is to
show how to facilitate communication between different systems without requiring changes to
the logic of the underlying hardware and software.
The OSI model is not a protocol; it is a model for understanding and designing a
network architecture that is flexible, robust, and interoperable. The OSI model is a layered
framework for the design of network systems that allows communication between all types of
computer systems. It consists of seven separate but related layers, each of which defines a part
of the process of moving information across a network (see Figure 1.21).
1.6.1 Application Layer

This is the only layer that directly interacts with data from the user. Software
applications like web browsers and email clients rely on the application layer to initiate
communications. But it should be made clear that client software applications are not part of
the application layer; rather the application layer is responsible for the protocols and data
manipulation that the software relies on to present meaningful data to the user. Application
layer protocols include HTTP as well as SMTP (Simple Mail Transfer Protocol is one of the
protocols that enables email communications).

1.6.2 Presentation Layer

This layer is primarily responsible for preparing data so that it can be used by the
application layer; in other words, layer 6 makes the data presentable for applications to
consume. The presentation layer is responsible for translation, encryption, and compression of
data.

Two communicating devices may be using different encoding methods, so layer 6 is responsible for translating incoming data into a syntax that the application layer of the receiving device can understand.

If the devices are communicating over an encrypted connection, layer 6 is responsible


for adding the encryption on the sender’s end as well as decoding the encryption on the
receiver's end so that it can present the application layer with unencrypted, readable data.

Finally the presentation layer is also responsible for compressing data it receives from
the application layer before delivering it to layer 5. This helps improve the speed and efficiency
of communication by minimizing the amount of data that will be transferred.
1.6.3 Session Layer

This is the layer responsible for opening and closing communication between the two
devices. The time between when the communication is opened and closed is known as the
session. The session layer ensures that the session stays open long enough to transfer all the
data being exchanged, and then promptly closes the session in order to avoid wasting resources.

This layer is responsible for the establishment of connections, the maintenance of sessions, and authentication, and it also ensures security.

1.6.4 Transport Layer

Layer 4 is responsible for end-to-end communication between the two devices. This
includes taking data from the session layer and breaking it up into chunks called segments
before sending it to layer 3. The transport layer on the receiving device is responsible for
reassembling the segments into data the session layer can consume.

The transport layer is also responsible for flow control and error control. Flow control
determines an optimal speed of transmission to ensure that a sender with a fast connection does
not overwhelm a receiver with a slow connection. The transport layer performs error control
on the receiving end by ensuring that the data received is complete, and requesting a
retransmission if it isn’t.
1.6.5 Network Layer

The network layer works for the transmission of data from one host to the other
located in different networks. It also takes care of packet routing i.e. selection of the shortest
path to transmit the packet, from the number of routes available. The sender & receiver’s IP
addresses are placed in the header by the network layer.

The functions of the Network layer are :

1. Routing: The network layer protocols determine which route is suitable from source
to destination. This function of the network layer is known as routing.
2. Logical Addressing: In order to identify each device on internetwork uniquely, the
network layer defines an addressing scheme. The sender & receiver’s IP addresses are
placed in the header by the network layer. Such an address distinguishes each device
uniquely and universally.

1.6.6 Data Link Layer

The data link layer is responsible for the node-to-node delivery of the message. The main
function of this layer is to make sure data transfer is error-free from one node to another, over
the physical layer. When a packet arrives in a network, it is the responsibility of DLL to
transmit it to the Host using its MAC address.
Data Link Layer is divided into two sublayers:

1. Logical Link Control (LLC)


2. Media Access Control (MAC)

The packet received from the Network layer is further divided into frames depending on the frame size of the NIC (Network Interface Card). The DLL also encapsulates the Sender's and Receiver's MAC addresses in the header.

The Receiver’s MAC address is obtained by placing an ARP (Address Resolution Protocol) request onto the wire asking “Who has that IP address?”, and the destination host will reply with its MAC address.
1.6.7 Physical Layer

This layer includes the physical equipment involved in the data transfer, such as the
cables and switches. This is also the layer where the data gets converted into a bit stream,
which is a string of 1s and 0s. The physical layer of both devices must also agree on a signal
convention so that the 1s can be distinguished from the 0s on both devices.

Summary of Layers
COMPARISON - OSI MODEL AND TCP/IP MODEL

S.No | OSI MODEL | TCP/IP MODEL
1 | Defined before the advent of the Internet | Defined after the advent of the Internet
2 | Service interface and protocols are clearly distinguished | Service interface and protocols are not clearly distinguished
3 | Internetworking not supported | Supports internetworking
4 | Strict layering | Loosely layered
5 | Protocol-independent standard | Protocol-dependent standard
6 | Less credible | More credible
7 | All packets are reliably delivered | TCP reliably delivers packets; IP does not reliably deliver packets
1.7 Introduction to Sockets

A socket is one endpoint of a two way communication link between two programs
running on the network. The socket mechanism provides a means of inter-process
communication (IPC) by establishing named contact points between which the communication
take place.

Just as the ‘pipe’ system call is used to create pipes, a socket is created using the ‘socket’ system call.
A socket provides a bidirectional FIFO communication facility over the network. A socket connecting to the network is created at each end of the communication. Each socket has a specific address. This address is composed of an IP address and a port number.

Sockets are generally employed in client-server applications. The server creates a socket, attaches it to a network port address, and then waits for the client to contact it. The client creates a socket and then attempts to connect to the server socket. When the connection is established, transfer of data takes place.
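As an illustration of this client-server use of sockets, here is a minimal Python echo sketch (the loopback address and port 5000 are arbitrary choices for the example): run it once with the argument server, then run it again without arguments as the client.

```python
import socket
import sys

HOST, PORT = "127.0.0.1", 5000          # example socket address (IP address + port)

def run_server():
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.bind((HOST, PORT))              # attach the socket to a local port address
    srv.listen(1)                       # wait for a client to contact it
    conn, remote = srv.accept()         # remote = the client's socket address
    conn.sendall(conn.recv(1024))       # echo the received data back
    conn.close()
    srv.close()

def run_client():
    cli = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    cli.connect((HOST, PORT))           # connect to the server's socket address
    cli.sendall(b"hello")
    print("reply:", cli.recv(1024))
    cli.close()

if __name__ == "__main__":
    run_server() if sys.argv[1:] == ["server"] else run_client()
```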

1.7.1 Socket Addresses

The interaction between a client and a server is two-way communication. In a two-way communication, we need a pair of addresses: local (sender) and remote (receiver). The local address in one direction is the remote address in the other direction, and vice versa. Because communication in the client/server paradigm is between two sockets, we need a pair of socket addresses for communication: a local socket address and a remote socket address.
A socket address should first define the computer on which a client or a server is running. A
computer in the Internet is uniquely defined by its IP address, a 32-bit integer in the current
Internet version. An application program can be defined by a port number, a 16-bit integer.
This means that a socket address should be a combination of an IP address and a port number
as shown in Figure 10.7.
Because a socket defines the end-point of the communication, we can say that a socket is
identified by a pair of socket addresses, a local and a remote.

1.7.2 Finding Socket Addresses

How can a client or a server find a pair of socket addresses for communication? The situation
is different for each site.

Server Site

The server needs a local (server) and a remote (client) socket address for communication.

Local Socket Address The local (server) socket address is provided by the operating system.
The operating system knows the IP address of the computer on which the server process is
running. The port number of a server process, however, needs to be assigned. If the server
process is a standard one defined by the Internet authority, a port number is already assigned
to it. When a server starts running, it knows the local socket address.

Remote Socket Address The remote socket address for a server is the socket address of the
client that makes the connection. Because the server can serve many clients, it does not know
beforehand the remote socket address for communication. The server can find this socket
address when a client tries to connect to the server. The client socket address, which is
contained in the request packet sent to the server, becomes the remote socket address that is
used for responding to the client.

Client Site

The client also needs a local (client) and a remote (server) socket address for communication.

Local Socket Address The local (client) socket address is also provided by the operating
system. The operating system knows the IP address of the computer on which the client is
running. The port number, however, is a 16-bit temporary integer that is assigned to a client
process each time the process needs to start the communication. The port number, however,
needs to be assigned from a set of integers defined by the Internet authority and called the
ephemeral (temporary) port numbers. The operating system, however, needs to guarantee that
the new port number is not used by any other running client process.

Remote Socket Address Finding the remote (server) socket address for a client, however, needs
more work. When a client process starts, it should know the socket address of the server it
wants to connect to. We will have two situations in this case.
Sometimes, the user who starts the client process knows both the server port number and the IP address of the computer on which the server is running. This usually occurs in situations where we have written the client and server applications ourselves and want to test them.
Although each standard application has a well-known port number, most of the time,
we do not know the IP address. This happens in situations such as when we need to contact a
web page, send an e-mail to a friend, or copy a file from a remote site. In these situations, the
server has a name, an identifier that uniquely defines the server process. Examples of these
identifiers are URLs, such as www.xxx.yyy, or e-mail addresses, such as [email protected].
The client process should now change this identifier (name) to the corresponding server socket
address.
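In practice, this name-to-address translation is a lookup step; a hedged Python sketch of it is shown below (www.example.com and port 80 are placeholders used only for illustration).

```python
import socket

# Resolve a server name and well-known port to candidate socket addresses.
for family, socktype, proto, canonname, sockaddr in socket.getaddrinfo(
        "www.example.com", 80, type=socket.SOCK_STREAM):
    print(sockaddr)     # an (IP address, port) tuple, i.e. a remote socket address
```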

1.8 Application Layer

• The application layer is the highest layer in the protocol suite.


• The application layer provides services to the user.
• Communication is provided using a logical connection, which means that the two
application layers assume that there is an imaginary direct connection through which
they can send and receive messages.
• The application layer is the only layer that provides services to the Internet user.
• The application layer exchanges messages with its peers on other machines.
• Applications need their own protocols; these application-level protocols are part of the network protocol suite.

Types of Application Protocols:


Standard and Nonstandard Protocols

Standard Application-Layer Protocols

o There are several application-layer protocols that have been standardized and documented by the Internet authority.

o Each standard protocol is a pair of computer programs that interact with the
user and the transport layer to provide a specific service to the user.

o Two very widely-used standardized application protocols:

SMTP: Simple Mail Transfer Protocol is used to exchange electronic mail.

HTTP: HyperText Transfer Protocol is used to communicate between Web browsers and Web servers.

Nonstandard Application-Layer Protocols


o A programmer can create a nonstandard application-layer program if they can write two
programs that provide service to the user by interacting with the transport layer.
Application-Layer Paradigms

Two paradigms have been developed for Application Layer


1. Traditional Paradigm : Client-Server
2. New Paradigm : Peer-to-Peer

Client-Server Paradigm
o The traditional paradigm is called the client-server paradigm.
o It was the most popular Paradigm.
o In this paradigm, the service provider is an application program, called the server process; it
runs continuously, waiting for another application program, called the client process, to make
a connection through the Internet and ask for service.
o The server process must be running all the time; the client process is started when the client
needs to receive service.
o There are normally some server processes that can provide a specific type of service, but
there are many clients that request service from any of these server processes.

Peer-to-Peer(P2P) Paradigm
o A new paradigm, called the peer-to-peer paradigm has emerged to respond to the needs of
some new applications.
o In this paradigm, there is no need for a server process to be running all the time and waiting
for the client processes to connect.
o The responsibility is shared between peers.
o A computer connected to the Internet can provide service at one time and receive service at
another time.
o A computer can even provide and receive services at the same time.
Mixed Paradigm
o An application may choose to use a mixture of the two paradigms by combining the
advantages of both.
o For example, a light-load client-server communication can be used to find the address of
the peer that can offer a service.
o When the address of the peer is found, the actual service can be received from the peer by
using the peer-to-peer paradigm.

1.8.1 The HyperText Transfer Protocol (HTTP)

• The HyperText Transfer Protocol (HTTP) is used to define how client-server programs can be written to retrieve web pages from the Web.
• It is a protocol used to access the data on the World Wide Web (WWW).
• The HTTP protocol can be used to transfer the data in the form of plain text, hypertext,
audio, video, and so on.
• HTTP is a stateless request/response protocol that governs client/server communication.
• An HTTP client sends a request; an HTTP server returns a response.
• The server uses the port number 80; the client uses a temporary port number.
• HTTP uses the services of TCP, a connection-oriented and reliable protocol.
• HTTP is a text-oriented protocol. It contains embedded URLs known as links.
• When a hypertext link is clicked, the browser opens a new connection, retrieves the file from the server, and displays it.
• Each HTTP message has the general form

START_LINE <CRLF>
MESSAGE_HEADER <CRLF>
<CRLF> MESSAGE_BODY <CRLF>
where <CRLF> stands for carriage-return-line-feed.
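To make the general form concrete, the sketch below hand-builds a minimal request over a plain TCP socket (www.example.com is a placeholder host); the START_LINE, the header lines, and the empty line that ends the headers appear as literal \r\n sequences.

```python
import socket

host = "www.example.com"                         # placeholder host for illustration
sock = socket.create_connection((host, 80))      # HTTP server listens on port 80
request = (
    "GET / HTTP/1.1\r\n"                         # START_LINE: method, URL, version
    f"Host: {host}\r\n"                          # MESSAGE_HEADER lines
    "Connection: close\r\n"
    "\r\n"                                       # empty line: end of headers, start of (empty) body
)
sock.sendall(request.encode("ascii"))
print(sock.recv(4096).decode("ascii", errors="replace"))   # status line, headers, body
sock.close()
```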
Features of HTTP

o Connectionless protocol:
HTTP is a connectionless protocol. The HTTP client initiates a request and waits for a response from the server. When the server receives the request, it processes the request and sends back the response to the HTTP client, after which the connection is closed. The connection between client and server exists only for the duration of the current request and response.

o Media independent:
HTTP is media independent: data of any type can be sent as long as both the client and the server know how to handle the data content. Both the client and the server are required to specify the content type in the MIME-type header.

o Stateless:
HTTP is a stateless protocol as both the client and server know each other only during the
current request. Due to this nature of the protocol, both the client and server do not retain the
information between various requests of the web pages.
HTTP Request And Response Messages
• The HTTP protocol defines the format of the request and response messages.

• Request Message: The request message is sent by the client that consists of a request line,
headers, and sometimes a body.
• Response Message: The response message is sent by the server to the client that consists of
a status line, headers, and sometimes a body.

HTTP Request Message

• The first line in a request message is called a request line.


• After the request line, we can have zero or more request header lines.
• The body is an optional one. It contains the comment to be sent or the file to be published
on the website when the method is PUT or POST.

Request Line
• There are three fields in this request line - Method, URL and Version.
• The Method field defines the request types.
• The URL field defines the address and name of the corresponding web page.
• The Version field gives the version of the protocol; the most current version of
HTTP is 1.1.
• Some of the Method types are:
Request Header
• Each request header line sends additional information from the client to the server.
• Each header line has a header name, a colon, a space, and a header value.
• The value field defines the values associated with each header name.
• Headers defined for request message include:

Body
• The body can be present in a request message. It is optional.
• Usually, it contains the comment to be sent or the file to be published on the website when
the method is PUT or POST.

Conditional Request
• A client can add a condition in its request.
• In this case, the server will send the requested web page if the condition is met or inform
the client otherwise.
• One of the most common conditions imposed by the client is the time and date the web
page is modified.
• The client can send the header line If-Modified-Since with the request to tell the server that
it needs the page only if it is modified after a certain point in time.
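A hedged sketch of such a conditional request using Python's standard http.client module is shown below; the host, path, and date are placeholders chosen only for illustration.

```python
import http.client

conn = http.client.HTTPConnection("www.example.com", 80)    # placeholder host
conn.request("GET", "/", headers={
    "If-Modified-Since": "Sat, 01 Jan 2022 00:00:00 GMT"     # example date
})
resp = conn.getresponse()
# 200 OK: the page changed after the given date and the full body is returned.
# 304 Not Modified: the cached copy is still valid and no body is sent.
print(resp.status, resp.reason)
conn.close()
```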

HTTP Response Message

• The first line in a response message is called the status line.
• After the status line, we can have zero or more response header lines.
• The body is optional. The body is present unless the response is an error message.
Status Line
• The Status line contains three fields - HTTP version , Status code, Status
phrase
• The first field defines the version of HTTP protocol, currently 1.1.
• The status code field defines the status of the request. It classifies the HTTP
result. It consists of three digits.
1xx – Informational, 2xx – Success, 3xx – Redirection, 4xx – Client error, 5xx – Server error
• The Status phrase field gives brief description about status code in text form.
• Some of the Status codes are

Response Header
• Each header provides additional information to the client.
• Each header line has a header name, a colon, a space, and a header value.
• Some of the response headers are:

Body
• The body contains the document to be sent from the server to the client.
• The body is present unless the response is an error message.

HTTP CONNECTIONS

• HTTP Clients and Servers exchange multiple messages over the same TCP connection.
• If some of the objects are located on the same server, we have two choices: to retrieve each
object using a new TCP connection or to make a TCP connection and retrieve them all.
• The first method is referred to as a non-persistent connection, the second as a persistent
connection.
• HTTP 1.0 uses non-persistent connections and HTTP 1.1 uses persistent connections.

Non-Persistent Connections
• In a non-persistent connection, one TCP connection is made for each request/response.
• Only one object can be sent over a single TCP connection
• The client opens a TCP connection and sends a request.
• The server sends the response and closes the connection.
• The client reads the data until it encounters an end-of-file marker.
• It then closes the connection.

Persistent Connections
• HTTP version 1.1 specifies a persistent connection by default.
• Multiple objects can be sent over a single TCP connection.
• In a persistent connection, the server leaves the connection open for more requests after
sending a response.
• The server can close the connection at the request of a client or if a time-out has been
reached.
• Time and resources are saved using persistent connections. Only one set of buffers and
variables needs to be set for the connection at each site.
• The round trip time for connection establishment and connection termination is saved.
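The sketch below illustrates a persistent connection with Python's http.client: several requests reuse one TCP connection, assuming the (placeholder) server keeps the connection open as HTTP 1.1 servers normally do.

```python
import http.client

conn = http.client.HTTPConnection("www.example.com")   # placeholder host, one TCP connection
for path in ("/", "/index.html"):                       # example paths
    conn.request("GET", path)
    resp = conn.getresponse()
    body = resp.read()          # the response must be read before the connection is reused
    print(path, resp.status, len(body), "bytes")
conn.close()
```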

Http Cookies

• An HTTP cookie (also called web cookie, Internet cookie, browser cookie, or simply cookie)
is a small piece of data sent from a website and stored on the user's computer by the user's web
browser while the user is browsing.

• HTTP is stateless; cookies are used to add state.

• Cookies were designed to be a reliable mechanism for websites to remember stateful


information (such as items added in the shopping cart in an online store) or to record the user's
browsing activity (including clicking particular buttons, logging in, or recording which pages
were visited in the past).

• They can also be used to remember arbitrary pieces of information that the user previously
entered into form fields such as names, addresses, passwords, and credit card numbers.
Components of Cookie
A cookie consists of the following components:

1. Name
2. Value
3. Zero or more attributes (name/value pairs). Attributes store information such as
the cookie's expiration, domain, and flags.

Creating and Storing Cookies


The creation and storing of cookies depend on the implementation; however, the principle is
the same.
1. When a server receives a request from a client, it stores information about the client in a file
or a string. The information may include the domain name of the client, the contents of the
cookie (information the server has gathered about the client such as name, registration
number, and so on), a timestamp, and other information depending on the implementation.
2. The server includes the cookie in the response that it sends to the client.
3. When the client receives the response, the browser stores the cookie in the cookie directory,
which is sorted by the server domain name.

Using Cookies
• When a client sends a request to a server, the browser looks in the cookie directory to see if
it can find a cookie sent by that server.
• If found, the cookie is included in the request.
• When the server receives the request, it knows that this is an old client, not a new one.
• The contents of the cookie are never read by the browser or disclosed to the user. It is a
cookie made by the server and eaten by the server.
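The exchange can be sketched with Python's http.client as below; the host is a placeholder, and real browsers manage the cookie store automatically rather than by hand as shown here.

```python
import http.client

host = "www.example.com"                        # placeholder host for illustration
conn = http.client.HTTPConnection(host)

# First request: the server may answer with a Set-Cookie header.
conn.request("GET", "/")
resp = conn.getresponse()
resp.read()
cookie = resp.getheader("Set-Cookie")           # e.g. "sessionid=...; Path=/" if the server sets one

# Later request: the client returns the stored cookie unchanged in a Cookie header.
if cookie:
    conn.request("GET", "/", headers={"Cookie": cookie.split(";")[0]})
    print(conn.getresponse().status)
conn.close()
```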

Types of Cookies

1.Authentication cookies
These are the most common method used by web servers to know whether the user is logged
in or not, and which account they are logged in with. Without such a mechanism, the site
would not know whether to send a page containing sensitive information, or require the user
to authenticate themselves by logging in.

2.Tracking cookies
These are commonly used as a way to compile individuals' browsing histories.
3.Session cookie
A session cookie exists only in temporary memory while the user navigates the website. Web
browsers normally delete session cookies when the user closes the browser.

4.Persistent cookie
Instead of expiring when the web browser is closed as session cookies do, a persistent cookie expires at a specific date or after a specific length of time. This means that, for the cookie's entire lifespan, its information will be transmitted to the server every time the user visits the website that it belongs to, or every time the user views a resource belonging to that website from another website.

Http Caching
 HTTP Caching enables the client to retrieve document faster and reduces load on the
server.
 HTTP Caching is implemented at Proxy server, ISP router and Browser.
 Server sets expiration date (Expires header) for each page, beyond which it is not cached.
 A cached document is returned to the client only if it is an up-to-date copy, which is checked against the If-Modified-Since header.
 If cache document is out-of-date, then request is forwarded to the server and response is
cached along the way.
 A web page will not be cached if no-cache directive is specified.

HTTP SECURITY
 HTTP does not provide security.
 However HTTP can be run over the Secure Socket Layer (SSL).
 In this case, HTTP is referred to as HTTPS.
 HTTPS provides confidentiality, client and server authentication, and data
integrity.
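With Python's standard library, switching to HTTPS is just a matter of using the TLS-wrapped connection class, as in this minimal sketch (www.example.com is a placeholder host).

```python
import http.client

# Same request/response exchange as plain HTTP, but carried over SSL/TLS on port 443.
conn = http.client.HTTPSConnection("www.example.com", 443)   # placeholder host
conn.request("GET", "/")
resp = conn.getresponse()
print(resp.status, resp.reason)     # the whole exchange is encrypted on the wire
conn.close()
```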

1.8.2 FTP (FILE TRANSFER PROTOCOL)

 FTP stands for File transfer protocol.


 FTP is a standard internet protocol provided by TCP/IP used for transmitting the files from
one host to another.
 It is mainly used for transferring the web page files from their creator to the computer that
acts as a server for other computers on the internet.
 It is also used for downloading the files to computer from other servers.
 Although we can transfer files using HTTP, FTP is a better choice to transfer large files or
to transfer files using different formats.

FTP OBJECTIVES
 It provides the sharing of files.
 It is used to encourage the use of remote computers.
 It transfers the data more reliably and efficiently.
FTP MECHANISM

 The above figure shows the basic model of the FTP.


 The FTP client has three components:
o user interface, control process, and data transfer process.
 The server has two components:
o server control process and server data transfer process.

FTP CONNECTIONS
 There are two types of connections in FTP - Control Connection and Data Connection.

 The two connections in FTP have different lifetimes.

 The control connection remains connected during the entire interactive FTP session.

 The data connection is opened and then closed for each file transfer activity. When a user
starts an FTP session, the control connection opens.

 While the control connection is open, the data connection can be opened and closed
multiple times if several files are transferred.

 FTP uses two well-known TCP ports:

o Port 21 is used for the control connection


o Port 20 is used for the data connection.
Control Connection:
o The control connection uses very simple rules for communication.
o Through control connection, we can transfer a line of command or line of response at a
time.
o The control connection is made between the control processes.
o The control connection remains connected during the entire interactive FTP session.
 Data Connection:
o The Data Connection uses very complex rules as data types may vary.
o The data connection is made between data transfer processes.
o The data connection opens when a command comes for transferring the files and closes
when the file is transferred.

FTP COMMUNICATION
 FTP Communication is achieved through commands and responses.
 FTP Commands are sent from the client to the server
 FTP responses are sent from the server to the client.
 FTP Commands are in the form of ASCII uppercase, which may or may not be followed by
an argument.
 Some of the most common commands are:
 Every FTP command generates at least one response.
 A response has two parts: a three-digit number followed by text.
 The numeric part defines the code; the text part defines the needed parameters.

FTP FILE TYPE


 FTP can transfer one of the following file types across the data connection:
ASCII file, EBCDIC file, or image file

FTP DATA STRUCTURE


 FTP can transfer a file across the data connection using one of the following data structure :
file structure, record structure, or page structure.
 The file structure format is the default one and has no structure. It is a continuous stream of
bytes.
 In the record structure, the file is divided into records. This can be used only with text files.
 In the page structure, the file is divided into pages, with each page having a page number
and a page header. The pages can be stored and accessed randomly or sequentially.

FTP TRANSMISSION MODE


 FTP can transfer a file across the data connection using one of the following three
transmission modes: stream mode, block mode, or compressed mode.
 The stream mode is the default mode; data are delivered from FTP to TCP as a continuous
stream of bytes.
 In the block mode, data can be delivered from FTP to TCP in blocks.
 In the compressed mode, data can be compressed and delivered from FTP to TCP.

FTP FILE TRANSFER


 File transfer occurs over the data connection under the control of the commands sent over
the control connection.
 File transfer in FTP means one of three things:
o retrieving a file (server to client)
o storing a file (client to server)
o directory listing (server to client).
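The control-connection/data-connection behaviour described above can be observed with Python's standard ftplib module. This is only a sketch; the host, credentials, directory, and file name are placeholders.

    from ftplib import FTP

    ftp = FTP("ftp.example.com")          # control connection to port 21
    ftp.login("user", "password")         # USER / PASS over the control connection
    ftp.cwd("/pub")                       # CWD command

    # RETR: the server opens a separate data connection for the file transfer
    with open("readme.txt", "wb") as f:
        ftp.retrbinary("RETR readme.txt", f.write)

    print(ftp.nlst())                     # directory listing, again over a data connection
    ftp.quit()                            # QUIT over the control connection

Each transfer (retrieving, storing, or listing) uses its own short-lived data connection, while the single control connection stays open for the whole session.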

FTP SECURITY
 FTP requires a password, but the password is sent in plaintext (unencrypted), which means it can be intercepted and used by an attacker.
 The data transfer connection also transfers data in plaintext, which is insecure.
 To be secure, one can add a Secure Socket Layer between the FTP application layer and the
TCP layer.
 In this case FTP is called SSL-FTP.

1.8.3 EMAIL (SMTP, MIME, IMAP, POP)

 One of the most popular Internet services is electronic mail (E-mail).


 Email is one of the oldest network applications.
 The three main components of an Email are
1. User Agent (UA)
2. Message Transfer Agent (MTA) – SMTP
3. Message Access Agent (MAA) - IMAP, POP

 When the sender and the receiver of an e-mail are on the same system, we need only two
User Agents and no Message Transfer Agent
 When the sender and the receiver of an e-mail are on different system, we need two UA,
two pairs of MTA (client and server), and two MAA (client and server).

WORKING OF EMAIL
 When Alice needs to send a message to Bob, she runs a UA program to prepare the
message and send it to her mail server.
 The mail server at her site uses a queue (spool) to store messages waiting to be sent. The
message, however, needs to be sent through the Internet from Alice’s
site to Bob’s site using an MTA.
 Here two message transfer agents are needed: one client and one server.
 The server needs to run all the time because it does not know when a client will ask for a
connection.
 The client can be triggered by the system when there is a message in the queue to be sent.
 The user agent at the Bob site allows Bob to read the received message.
 Bob later uses an MAA client to retrieve the message from an MAA server running on the
second server.

USER AGENT (UA)


 The first component of an electronic mail system is the user agent (UA).
 It provides service to the user to make the process of sending and receiving a message
easier.
 A user agent is a software package that composes, reads, replies to, and forwards messages.
It also handles local mailboxes on the user computers.

 There are two types of user agents: Command-driven and GUI-based.


Command driven
o Command driven user agents belong to the early days of electronic mail.
o A command-driven user agent normally accepts a one character command from the
keyboard to perform its task.
o Some examples of command driven user agents are mail, pine, and elm.

GUI-based
o Modern user agents are GUI-based.
o They allow the user to interact with the software by using both the keyboard and the mouse.
o They have graphical components such as icons, menu bars, and windows that make the
services easy to access.
o Some examples of GUI-based user agents are Eudora and Outlook.

MESSAGE TRANSFER AGENT (MTA)


 The actual mail transfer is done through message transfer agents (MTA).
 To send mail, a system must have the client MTA, and to receive mail, a system must have
a server MTA.
 The formal protocol that defines the MTA client and server in the Internet is called Simple
Mail Transfer Protocol (SMTP).
MESSAGE ACCESS AGENT (MAA)
 MAA is a software that pulls messages out of a mailbox.
 POP3 and IMAP4 are examples of MAA.
ADDRESS FORMAT OF EMAIL
 E-mail address is userid @ domain where domain is hostname of the mail server.

MESSAGE FORMAT OF EMAIL


 Email message consists of two parts namely header and body.
 Each header line contains type and value separated by a colon (:).
 Some header contents are:

o From: identifies the sender of the message.


o To: mail address of the recipient(s).
o Subject: says about purpose of the message.
o Date: timestamp of when the message was transmitted.

 Header is separated from the body by a blank line.


 Body contains the actual message.

 Email was extended in 1993 to carry many different types of data: audio, video, images,
Word documents, and so on.
 This extended version is known as MIME (Multipurpose Internet Mail Extensions).

1.8.4.1 SIMPLE MAIL TRANSFER PROTOCOL (SMTP)


 SMTP is the standard protocol for transferring mail between hosts in the TCP/IP protocol
suite.
 SMTP is not concerned with the format or content of messages themselves.
 SMTP uses information written on the envelope of the mail (message header), but does not
look at the contents (message body) of the envelope.

 SMTP clients and servers have two main components

o User Agents(UA) – Prepares the message, encloses it in an envelope.


o Mail Transfer Agent (MTA) – Transfers the mail across the internet

 SMTP also allows the use of Relays allowing other MTAs to relay the mail.
SMTP MAIL FLOW

 To begin, mail is created by a user-agent program in response to user input.


 Each created message consists of a header that includes the recipient's email address and
other information, and a message body containing the message to be sent.
 These messages are then queued in some fashion and provided as input to an SMTP Sender
program.

SMTP COMMANDS AND RESPONSES


 The operation of SMTP consists of a series of commands and responses exchanged between
the SMTP sender and SMTP receiver.
 The initiative is with the SMTP sender, who establishes the TCP connection.
 Once the connection is established, the SMTP sender sends commands over the connection
to the receiver.
 The command is from an MTA client to an MTA server; the response is from an MTA
server to the MTA client.

SMTP Commands
 Commands are sent from the client to the server. It consists of a keyword followed by zero
or more arguments. SMTP defines 14 commands.
SMTP Responses
 Responses are sent from the server to the client.
 A response is a three digit code that may be followed by additional textual information.
SMTP OPERATIONS

Basic SMTP operation occurs in three phases:


1. Connection Setup
2. Mail Transfer
3. Connection Termination

Connection Setup
 An SMTP sender will attempt to set up a TCP connection with a target host
when it has one or more mail messages to deliver to that host.
 The sequence is quite simple:
1. The sender opens a TCP connection with the receiver.
2. Once the connection is established, the receiver identifies itself with "Service Ready”.
3. The sender identifies itself with the HELO command.
4. The receiver accepts the sender's identification with "OK".
5. If the mail service on the destination is unavailable, the destination host returns a "Service
Not Available" reply in step 2, and the process is terminated.

Mail Transfer
 Once a connection has been established, the SMTP sender may send one or more messages
to the SMTP receiver.
 There are three logical phases to the transfer of a message:
1. A MAIL command identifies the originator of the message.
2. One or more RCPT commands identify the recipients for this message.
3. A DATA command transfers the message text.

Connection Termination
 The SMTP sender closes the connection in two steps.
 First, the sender sends a QUIT command and waits for a reply.
 The second step is to initiate a TCP close operation for the TCP connection.
 The receiver initiates its TCP close after sending its reply to the QUIT command.
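A minimal sketch of the three SMTP phases using Python's standard smtplib module (the mail server name and the addresses are placeholders). smtplib performs the connection setup, issues MAIL FROM, RCPT TO, and DATA on the caller's behalf, and quit() sends QUIT before the TCP close.

    import smtplib

    message = ("From: alice@example.com\r\n"
               "To: bob@example.org\r\n"
               "Subject: Test\r\n"
               "\r\n"
               "Hello Bob!\r\n")

    server = smtplib.SMTP("mail.example.com", 25)   # connection setup (HELO/EHLO exchange)
    server.sendmail("alice@example.com",            # MAIL FROM: originator
                    ["bob@example.org"],            # RCPT TO: recipient(s)
                    message)                        # DATA: the message text
    server.quit()                                   # QUIT, then TCP close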
LIMITATIONS OF SMTP
 SMTP cannot transmit executable files or other binary objects.
 SMTP cannot transmit text data that includes national language characters, as these are
represented by 8-bit codes with values of 128 decimal or higher, and SMTP is limited to 7-bit
ASCII.
 SMTP servers may reject mail message over a certain size.
 SMTP gateways that translate between ASCII and the character code EBCDIC do not use a
consistent set of mappings, resulting in translation problems.
 Some SMTP implementations do not adhere completely to the SMTP standards
defined.
 Common problems include the following:
1. Deletion, addition, or reordering of carriage return and linefeed.
2. Truncating or wrapping lines longer than 76 characters.
3. Removal of trailing white space (tab and space characters).
4. Padding of lines in a message to the same length.
5. Conversion of tab characters into multiple-space characters.

1.8.4.2 MULTIPURPOSE INTERNET MAIL EXTENSION (MIME)

 SMTP provides a basic email service, while MIME adds multimedia capability to
SMTP.
 MIME is an extension to SMTP and is used to overcome the problems and limitations
of SMTP.
 Email system was designed to send messages only in ASCII format.

• Languages such as French, Chinese, etc., are not supported.


• Image, audio and video files cannot be sent.

 MIME adds the following features to email service:

• Be able to send multiple attachments with a single message;


• Unlimited message length;
• Use of character sets other than ASCII code;
• Use of rich text (layouts, fonts, colors, etc)
• Binary attachments (executables, images, audio or video files, etc.), which
may be divided if needed.
 MIME is a protocol that converts non-ASCII data to 7-bit NVT(Network
Virtual Terminal) ASCII and vice-versa.

MIME HEADERS
 Using headers, MIME describes the type of message content and the encoding
used.
 Headers defined in MIME are:
• MIME-Version - the MIME version being used, i.e., 1.0
• Content-Type - message type (text/html, image/jpeg, application/pdf)
• Content-Transfer-Encoding - message encoding scheme (eg base64).
• Content-Id - unique identifier for the message.
• Content-Description - describes type of the message body.

MIME CONTENT TYPES


 There are seven different major types of content and a total of 14 subtypes.
 In general, a content type declares the general type of data, and the subtype specifies a
particular format for that type of data.
 MIME also defines a multipart type that says how a message carrying more than one data
type is structured.
 This is like a programming language that defines both base types (e.g., integers and floats)
and compound types (e.g., structures and arrays).
 One possible multipart subtype is mixed, which says that the message contains a set of
independent data pieces in a specified order.
 Each piece then has its own header line that describes the type of that piece.
 The seven major content types are text, multipart, message, image, video, audio, and application.

ENCODING FORMATS OF MIME


 MIME uses various encoding formats to convert binary data into the ASCII character set.
 To transfer binary data, MIME offers five encoding formats which can be used in the Content-Transfer-Encoding header:
• 7-bit : 7-bit text format (for messages without accented characters);
• 8-bit : 8-bit text format;
• quoted-printable : Quoted-Printable format, recommended for messages that are mostly 7-bit text but contain a few special characters (such as accent marks);
• base64 : Base64 encoding, for sending binary files as attachments;
• binary : binary format; not recommended.
 Since MIME is very open, it can use third-party encoding formats such as:
• BinHex : A proprietary format belonging to Apple
• Uuencode : for UNIX-to-UNIX encoding
• Xencode : for binary-to-text encoding
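The headers and encodings listed above can be produced with Python's standard email package. The following sketch builds a multipart/mixed message with a plain-text part and a base64-encoded binary attachment; the addresses and the file name are placeholders.

    from email.mime.multipart import MIMEMultipart
    from email.mime.text import MIMEText
    from email.mime.application import MIMEApplication

    msg = MIMEMultipart("mixed")                  # Content-Type: multipart/mixed
    msg["From"] = "alice@example.com"
    msg["To"] = "bob@example.org"
    msg["Subject"] = "Report attached"

    msg.attach(MIMEText("Please find the report attached.", "plain"))

    with open("report.pdf", "rb") as f:           # binary attachment, base64-encoded
        part = MIMEApplication(f.read(), _subtype="pdf")
    part.add_header("Content-Disposition", "attachment", filename="report.pdf")
    msg.attach(part)

    # The generated text carries MIME-Version, Content-Type and
    # Content-Transfer-Encoding headers for each part.
    print(msg.as_string()[:500])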
MESSAGE TRANSFER IN MIME

 MTA is a mail daemon (e.g., sendmail) active on hosts having a mailbox, used to send an email.
 Mail passes through a sequence of gateways before it reaches the recipient mail server.
 Each gateway stores and forwards the mail using Simple mail transfer protocol (SMTP).
 SMTP defines communication between MTAs over TCP on port 25.
 In an SMTP session, sending MTA is client and receiver is server. In each exchange:
 Client posts a command (HELO, MAIL, RCPT, DATA, QUIT, VRFY, etc.)
 Server responds with a code (250, 550, 354, 221, 251 etc) and an explanation.
 Client is identified using HELO command and verified by the server
 Client forwards message to server, if server is willing to accept.
 Message is terminated by a line with only single period (.) in it.
 Eventually client terminates the connection.

1.8.4.3 IMAP (INTERNET MAIL ACCESS PROTOCOL)

 IMAP is an Application Layer Internet protocol that allows an e-mail client to access e-mail
on a remote mail server.
 It is a method of accessing electronic mail messages that are kept on a possibly shared mail
server.
 IMAP is a more capable (and more complex) mail access protocol than POP3.
 IMAP is similar to SMTP in many ways.
 IMAP is a client/server protocol running over TCP on port 143.
 IMAP allows multiple clients simultaneously connected to the same mailbox, and through
flags stored on the server, different clients accessing the same mailbox at the same or different
times can detect state changes made by other clients.
 In other words, it permits a "client" email program to access remote message stores as if
they were local.
 For example, email stored on an IMAP server can be manipulated from a desktop computer
at home, a workstation at the office, and a notebook computer while travelling, without the
need to transfer messages or files back and forth between these computers.
 IMAP can support email serving in three modes:
 Offline
A mail client periodically connects to the server, downloads the messages, and then works on them locally (this is the POP-style model of access).
 Online
Users may connect to the server, look at what email is available, and access it online. This looks to the user very much like having local spool files, but they’re on the mail server.
 Disconnected operation
A mail client connects to the server, can make a “cache” copy of selected messages, and disconnects from the server. The user can then work on the messages offline, and connect to the server later and resynchronize the server status with the cache.

OPERATION OF IMAP
 The mail transfer begins with the client authenticating the user and identifying the mailbox
they want to access.
 Client Commands
LOGIN, AUTHENTICATE, SELECT, EXAMINE, CLOSE, and LOGOUT
 Server Responses
OK, NO (no permission), BAD (incorrect command),
 When user wishes to FETCH a message, server responds in MIME format.
 Message attributes such as size are also exchanged.
 Flags are used by client to report user actions.
SEEN, ANSWERED, DELETED, RECENT
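A minimal sketch of an IMAP session using Python's standard imaplib module (server name and credentials are placeholders). It issues LOGIN, SELECT, a server-side SEARCH, and then FETCHes only the headers, illustrating that the messages themselves stay on the server.

    import imaplib

    imap = imaplib.IMAP4_SSL("imap.example.com")   # the SSL variant uses port 993
    imap.login("bob", "password")                  # LOGIN command
    imap.select("INBOX")                           # SELECT a mailbox on the server

    status, data = imap.search(None, "UNSEEN")     # server-side search
    for num in data[0].split():
        # fetch only the header part; the full message stays on the server
        status, msg_data = imap.fetch(num, "(RFC822.HEADER)")
        print(msg_data[0][1][:200])

    imap.close()                                   # CLOSE the mailbox
    imap.logout()                                  # LOGOUT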

IMAP4
 The latest version is IMAP4. IMAP4 is more powerful and more complex.
 IMAP4 provides the following extra functions:

• A user can check the e-mail header prior to downloading.


• A user can search the contents of the e-mail for a specific string of characters
prior to downloading.
• A user can partially download e-mail. This is especially useful if bandwidth is limited and
the e-mail contains multimedia with high bandwidth requirements.
• A user can create, delete, or rename mailboxes on the mail server.
• A user can create a hierarchy of mailboxes in a folder for e-mail storage

Advantages Of IMAP
 With IMAP, the primary storage is on the server, not on the local machine.
 Mail being put away for storage can be filed into folders either on the local disk or on the IMAP server.
 The protocol allows full use of remote folders, including a remote folder hierarchy and multiple inboxes.
 It keeps track of explicit status of messages, and allows for user-defined status.
 Supports new mail notification explicitly.
 Extensible for non-email data, like netnews, document storage, etc.
 Selective fetching of individual MIME body parts.
 Server-based search to minimize data transfer.
 Servers may have extensions that can be negotiated.
1.8.4.4 POST OFFICE PROTOCOL (POP3)

 Post Office Protocol (POP3) is an application-layer Internet standard protocol used by local
e-mail clients to retrieve e-mail from a remote server over a TCP/IP connection.
 There are two versions of POP.
• The first, called POP2, became a standard in the mid-80's and requires SMTP to send
messages.
• The current version, POP3, can be used with or without SMTP. POP3 uses TCP/IP port 110.
 POP is a much simpler protocol, making implementation easier.
 POP supports offline access to the messages, and thus requires less Internet connection time.
 POP does not provide a search facility.
 In order to access the messages, it is necessary to download them.
 It allows only one mailbox to be created on server.
 It is not suitable for accessing non mail data.
 POP mail moves the message from the email server onto the local computer, although there
is usually an option to leave the messages on the email server as well.
 POP treats the mailbox as one store, and has no concept of folders.
 POP works in two modes namely, delete and keep mode.
• In delete mode, mail is deleted from the mailbox after retrieval. The delete mode is normally
used when the user is working at their permanent computer and can save and organize the
received mail after reading or replying.
• In keep mode, mail is kept in the mailbox after reading, for later retrieval. The keep mode is normally used when the user accesses the mail away from their primary computer.

 POP3 client is installed on the recipient computer and POP server on the mail server.
 Client opens a connection to the server using TCP on port 110.
 Client sends username and password to access mailbox and to retrieve messages.
POP3 Commands

POP commands are generally abbreviated into codes of three or four letters
The following describes some of the POP commands:
1. USER / PASS - These commands identify the user and open the mailbox
2. STAT - It is used to display the number of messages currently in the mailbox
3. LIST - It is used to get the summary of messages
4. RETR - It is used to retrieve (download) a message
5. DELE - It is used to mark a message for deletion
6. RSET - It is used to reset the session to its initial state
7. QUIT - It is used to log off the session
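A minimal sketch of a POP3 session using Python's standard poplib module (server name and credentials are placeholders); it walks through the USER/PASS, STAT, RETR, and QUIT commands listed above.

    import poplib

    pop = poplib.POP3("pop.example.com", 110)   # connection on well-known port 110
    pop.user("bob")                             # USER command
    pop.pass_("password")                       # PASS command

    count, size = pop.stat()                    # STAT: number of messages and total size
    print(count, "messages,", size, "bytes")

    if count:
        resp, lines, octets = pop.retr(1)       # RETR: download message number 1
        print(b"\r\n".join(lines)[:200])
        # pop.dele(1)                           # DELE would mark it for deletion

    pop.quit()                                  # QUIT: end of session (deletions committed)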

Advantages of IMAP over POP


 IMAP is more powerful and more complex than POP.
 User can check the e-mail header prior to downloading.
 User can search e-mail for a specific string of characters prior to downloading.
 User can download partially, very useful in case of limited bandwidth.
 User can create, delete, or rename mailboxes on the mail server.

1.9 DNS (DOMAIN NAME SYSTEM)

 Domain Name System was designed in 1984.


 DNS is used for name-to-address mapping.
 The DNS provides the protocol which allows clients and servers to communicate with each
other.
 Eg: Host name like www.yahoo.com is translated into numerical IP addresses like
207.174.77.131
 Domain Name System (DNS) is a distributed database used by TCP/IP applications to map
between hostnames and IP addresses and to provide electronic mail routing information.
 Each site maintains its own database of information and runs a server program that other
systems across the Internet can query.

WORKING OF DNS

The following six steps show the working of DNS when it maps a host name to an IP address:
1. The user passes the host name to the file transfer client.
2. The file transfer client passes the host name to the DNS client.
3. Each computer, after being booted, knows the address of one DNS server. The DNS client
sends a message to a DNS server with a query that gives the file transfer server name using
the known IP address of the DNS server.
4. The DNS server responds with the IP address of the desired file transfer server.
5. The DNS client passes the IP address to the file transfer client.
6. The file transfer client now uses the received IP address to access the file transfer server.
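From an application's point of view, steps 2 to 5 are hidden behind a single resolver call. A minimal sketch using Python's standard socket module (the host name is a placeholder): getaddrinfo asks the local resolver, which in turn queries a DNS server, and gethostbyaddr performs the inverse (address-to-name) mapping.

    import socket

    # getaddrinfo asks the local resolver, which in turn queries a DNS server
    for family, _, _, _, sockaddr in socket.getaddrinfo("www.example.com", 80,
                                                        type=socket.SOCK_STREAM):
        print(family, sockaddr)                  # address family and (IP address, port)

    # Inverse (address-to-name) mapping; raises socket.herror if no PTR record exists
    print(socket.gethostbyaddr("8.8.8.8")[0])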

NAME SPACE
 To be unambiguous, the names assigned to machines must be carefully selected from a
name space with complete control over the binding between the names and IP address.
 The names must be unique because the addresses are unique.
 A name space that maps each address to a unique name can be organized in two ways: flat
(or) hierarchical.

Flat Name Space


• In a flat name space, a name is assigned to an address.
• A name in this space is a sequence of characters without structure.
• The main disadvantage of a flat name space is that it cannot be used in a large system such
as Internet because it must be centrally controlled to avoid ambiguity and duplication.
Hierarchical Name Space
• In a hierarchical name space, each name is made of several parts.
• The first part can define the organization, the second part can define the name, the third part
can define departments, and so on.
• In this case, the authority to assign and control the name spaces can be decentralized.
• A central authority can assign the part of the name that defines the nature of the organization
and the name.
• The responsibility for the rest of the name can be given to the organization itself. Suffixes
can be added to the name to define host or resources.
• The management of the organization need not worry that the prefix chosen for a host is
taken by another organization because even if part of an address is the same, the whole
address is different.
• The names are unique without the need to be assigned by a central authority.
• The central authority controls only part of the name, not the whole name.

DOMAIN NAME SPACE


 To have a hierarchical name space, a domain name space was designed. In this design, the
names are defined in an inverted-tree structure with the root at the top.
 Each node in the tree has a label, which is a string with a maximum of 63 characters.
 The root label is a null string.
 DNS requires that children of a node have different labels, which guarantees the uniqueness
of the domain names.


Domain Name
• Each node in the tree has a label called as domain name.
• A full domain name is a sequence of labels separated by dots (.)
• The domain names are always read from the node up to the root.
• The last label is the label of the root (null).
• This means that a full domain name always ends in a null label, which means the last
character is a dot because the null string is nothing.
• If a label is terminated by a null string, it is called a fully qualified domain name (FQDN).
• If a label is not terminated by a null string, it is called a partially qualified domain name
(PQDN).
Domain
• A domain is a subtree of the domain name space.
• The name of the domain is the domain name of the node at the top of the sub- tree.
• A domain may itself be divided into domains.

DISTRIBUTION OF NAME SPACE


 The information contained in the domain name space must be stored.
 But it is very inefficient and also not reliable to have just one computer store such a huge
amount of information.
 It is inefficient because responding to requests from all over the world, places a heavy load
on the system.
 It is not reliable because any failure makes the data inaccessible.
 The solution to these problems is to distribute the information among many computers
called DNS servers.
HIERARCHY OF NAME SERVERS
 The way to distribute information among DNS servers is to divide the whole space into
many domains based on the first level.
 Let the root stand-alone and create as many domains as there are first level nodes.
 Because a domain created this way could be very large,
 DNS allows domains to be divided further into smaller domains.
 Thus we have a hierarchy of servers in the same way that we have a hierarchy of names.

ZONE
 What a server is responsible for, or has authority over, is called a zone.
 The server makes a database called a zone file and keeps all the information for every node
under that domain.
 If a server accepts responsibility for a domain and does not divide the domains into smaller
domains, the domain and zone refer to the same thing.
 But if a server divides its domain into sub domains and delegates parts of its authority to
other servers, domain and zone refer to different things.
 The information about the nodes in the sub domains is stored in the servers at the lower
levels, with the original server keeping some sort of references to these lower level servers.
 But still, the original server does not free itself from responsibility totally.
 It still has a zone, but the detailed information is kept by the lower level servers.

ROOT SERVER
 A root sever is a server whose zone consists of the whole tree.
 A root server usually does not store any information about domains but delegates its
authority to other servers, keeping references to those servers.
 Currently there are 13 root server identities (labeled a through m), each covering the whole domain name space.
 Each identity is replicated at many sites, so the servers are distributed all around the world.

PRIMARY AND SECONDARY SERVERS


 DNS defines two types of servers: primary and secondary.
 A Primary Server is a server that stores a file about the zone for which it is an authority.
• Primary Servers are responsible for creating, maintaining, and updating the zone file.
• Primary Server stores the zone file on a local disc.
 A secondary server is a server that transfers the complete information about a zone from
another server (Primary or Secondary) and stores the file on its local disc.
 If updating is required, it must be done by the primary server, which sends the
updated version to the secondary.
 A primary server loads all information from the disk file; the secondary server
loads all information from the primary server.

DNS IN THE INTERNET


 DNS is a protocol that can be used in different platforms.
 In the Internet, the domain name space (tree) is divided into three different
sections - Generic domains, Country domains, and Inverse domain.
Generic Domains
 The generic domains define registered hosts according to their generic behavior.
 Each node in the tree defines a domain, which is an index to the domain name space
database.
 The first level in the generic domains section allows seven possible three-character labels: com, edu, gov, int, mil, net, and org.
 These labels describe the organization types.

Country Domains
 The country domains section follows the same format as the generic domains but uses two-character country abbreviations (e.g., in for India, us for the United States) in place of the three-character organizational abbreviation at the first level.
 Second level labels can be organizational, or they can be more specific, national
designation.
 The United States, for example, uses state abbreviations as a subdivision of the country domain us (e.g., ca.us).
Inverse Domains
 Mapping an address to a name is called Inverse domain.
 The client can send an IP address to a server to be mapped to a domain name; this is called a PTR (pointer) query.
 To answer queries of this kind, DNS uses the inverse domain.

DNS RESOLUTION
 Mapping a name to an address or an address to a name is called name address resolution.
 DNS is designed as a client server application.
 A host that needs to map an address to a name or a name to an address calls a DNS client
named a Resolver.
 The Resolver accesses the closest DNS server with a mapping request.
 If the server has the information, it satisfies the resolver; otherwise, it either refers the
resolver to other servers or asks other servers to provide the information.
 After the resolver receives the mapping, it interprets the response to see if it is a real
resolution or an error and finally delivers the result to the process that requested it.
 A resolution can be either recursive or iterative.
Recursive Resolution

• The application program on the source host calls the DNS resolver (client) to find the IP
address of the destination host. The resolver, which does not know this address, sends the
query to the local DNS server of the source (Event 1)
• The local server sends the query to a root DNS server (Event 2)
• The Root server sends the query to the top-level-DNS server(Event 3)
• The top-level DNS server knows only the IP address of the local DNS server at the
destination. So it forwards the query to the local server, which knows the IP address of the
destination host (Event 4)
• The IP address of the destination host is now sent back to the top-level DNS server(Event 5)
then back to the root server (Event 6), then back to the source DNS server, which may cache it
for the future queries (Event 7), and finally back to the source host (Event 8)

Iterative Resolution
• In iterative resolution, each server that does not know the mapping, sends the IP address of
the next server back to the one that requested it.
• The iterative resolution takes place between two local servers.
• The original resolver gets the final answer from the destination local server.
• The messages shown by Events 2, 4, and 6 contain the same query.
• However, the message shown by Event 3 contains the IP address of the top- level domain
server.
• The message shown by Event 5 contains the IP address of the destination local DNS server
• The message shown by Event 7 contains the IP address of the destination.
• When the Source local DNS server receives the IP address of the destination, it sends it to
the resolver (Event 8).

DNS CACHING
 Each time a server receives a query for a name that is not in its domain, it needs to search
its database for a server IP address.
 DNS handles this with a mechanism called caching.
 When a server asks for a mapping from another server and receives the response, it stores
this information in its cache memory before sending it to the client.
 If the same or another client asks for the same mapping, it can check its cache
memory and resolve the problem.
 However, to inform the client that the response is coming from the cache memory and not
from an authoritative source, the server marks the response as unauthoritative.
 Caching speeds up resolution. Reduction of this search time would increase efficiency, but
it can also be problematic.
 If a server caches a mapping for a long time, it may send an outdated mapping to the client.
 To counter this, two techniques are used.
 First, the authoritative server always adds information to the mapping called time to live
(TTL). It defines the time in seconds that the receiving server can cache the information. After
that time, the mapping is invalid and any query must be sent again to the authoritative server.
 Second, DNS requires that each server keep a TTL counter for each mapping it caches. The
cache memory must be searched periodically and those mappings with an expired TTL must
be purged.

DNS RESOURCE RECORDS (RR)


• The zone information associated with a server is implemented as a set of resource records.
• In other words, a name server stores a database of resource records.
• A resource record is a 5-tuple structure : (Domain Name, Type, Class, TTL, Value)
• The domain name identifies the resource record.
• The type defines how the value should be interpreted.
• The value defines the information kept about the domain name.
• The TTL defines the number of seconds for which the information is valid.
• The class defines the type of network.
Types of Resource Records

DNS MESSAGES
 DNS has two types of messages: query and response.
 Both types have the same format.
 The query message consists of a header and question section.
 The response message consists of a header, question section, answer section,
authoritative section, and additional section .

 Header
• Both query and response messages have the same header format with
some fields set to zero for the query messages.
• The header fields are as follows:

• The identification field is used by the client to match the response with the query.
• The flag field defines whether the message is a query or response. It also includes status of
error.
• The next four fields in the header define the number of each record type in the message.
 Question Section
• The question section consists of one or more question records. It is present in both query and
response messages.
 Answer Section
• The answer section consists of one or more resource records. It is present only in response
messages.
 Authoritative Section
• The authoritative section gives information (domain name) about one or more authoritative
servers for the query.
 Additional Information Section
• The additional information section provides additional information that may help the
resolver.
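A sketch of how the header and question section described above are laid out on the wire, built with Python's struct module (the identification value and the query name are placeholders).

    import struct

    ident = 0x1234                    # chosen by the client to match the response
    flags = 0x0100                    # standard query with recursion desired
    header = struct.pack("!HHHHHH", ident, flags,
                         1,           # QDCOUNT: one question record
                         0, 0, 0)     # ANCOUNT, NSCOUNT, ARCOUNT are zero in a query

    def encode_qname(name):
        # each label is preceded by its length; the name ends with a zero byte
        labels = [bytes([len(label)]) + label.encode() for label in name.split(".")]
        return b"".join(labels) + b"\x00"

    question = encode_qname("www.example.com") + struct.pack("!HH", 1, 1)  # QTYPE=A, QCLASS=IN
    query = header + question
    print(len(query), "bytes - small enough for a single UDP datagram")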

DNS CONNECTIONS

 DNS can use either UDP or TCP.


 In both cases the well-known port used by the server is port 53.
 UDP is used when the size of the response message is less than 512 bytes, because many UDP implementations limit the packet size to 512 bytes.
 If the size of the response message is more than 512 bytes, a TCP connection is used.

DNS REGISTRARS

 New domains are added to DNS through a registrar. A fee is charged.


 A registrar first verifies that the requested domain name is unique and then enters it into the
DNS database.
 Today, there are many registrars; their names and addresses can be found at http://www.internic.net
 To register, the organization needs to give the name of its server and the IP address of the
server.
 For example, a new commercial organization named wonderful with a server named ws and
IP address 200.200.200.5, needs to give the following information to one of the registrars:
Domain name: ws.wonderful.com IP address: 200.200.200.5.

DDNS (DYNAMIC DOMAIN NAME SYSTEM)

 In DNS, when there is a change, such as adding a new host, removing a host, or changing
an IP address, the change must be made to the DNS master file.
 The DNS master file must be updated dynamically.
 The Dynamic Domain Name System (DDNS) is used for this purpose.
 In DDNS, when a binding between a name and an address is determined, the information is
sent to a primary DNS server.
 The primary server updates the zone.
 The secondary servers are notified either actively or passively.
 In active notification, the primary server sends a message to the secondary servers about the
change in the zone, whereas in passive notification, the secondary servers periodically check
for any changes.
 In either case, after being notified about the change, the secondary server requests
information about the entire zone (called the zone transfer).
 To provide security and prevent unauthorized changes in the DNS records, DDNS can use
an authentication mechanism.
DNS SECURITY
 DNS is one of the most important systems in the Internet infrastructure; it provides crucial
services to Internet users.
 Applications such as Web access or e-mail are heavily dependent on the proper operation of
DNS.
 DNS can be attacked in several ways including:

• Attack on Confidentiality - The attacker may read the response of a DNS


server to find the nature or names of sites the user mostly accesses. This type
of information can be used to find the user’s profile. To prevent this attack,
DNS messages need to be confidential.
• Attack on authentication and integrity - The attacker may intercept the
response of a DNS server and change it or create a totally new bogus response
to direct the user to the site or domain the attacker wishes the user to access.
This type of attack can be prevented using message origin authentication and
message integrity.
• Attack on denial-of-service - The attacker may flood the DNS server to
overwhelm it or eventually crash it. This type of attack can be prevented using
the provision against denial-of-service attack.

 To protect DNS, IETF has devised a technology named DNS Security (DNSSEC) that
provides message origin authentication and message integrity using a security service called
digital signature.
 DNSSEC, however, does not provide confidentiality for the DNS messages.
 There is no specific protection against the denial-of-service attack in the specification of
DNSSEC. However, the caching system protects the upper- level servers against this attack to
some extent.

1.10 SNMP (SIMPLE NETWORK MANAGEMENT PROTOCOL)

 The Simple Network Management Protocol (SNMP) is a framework for managing devices
in an internet using the TCP/IP protocol suite.
 SNMP is an application layer protocol that monitors and manages routers, distributed over a
network.
 It provides a set of operations for monitoring and managing the internet.
 SNMP uses services of UDP on two well-known ports: 161 (Agent) and 162 (manager).
 SNMP uses the concept of manager and agent.

SNMP MANAGER
• A manager is a host that runs the SNMP client program
• The manager has access to the values in the database kept by the agent.
• A manager checks the agent by requesting the information that reflects the behavior of the
agent.
• A manager also forces the agent to perform a certain function by resetting values in the
agent database.
• For example, a router can store in appropriate variables the number of packets received and
forwarded.
• The manager can fetch and compare the values of these two variables to see if the router is
congested or not.

SNMP AGENT
• The agent is a router or host that runs the SNMP server program.
• The agent is used to keep the information in a database while the manager is used to access
the values in the database.
• For example, a router can store the appropriate variables such as a number of packets
received and forwarded while the manager can compare these variables to determine whether
the router is congested or not.
• Agents can also contribute to the management process.
• A server program on the agent checks the environment, if something goes wrong, the agent
sends a warning message to the manager.

SNMP MANAGEMENT COMPONENTS


• Management of the internet is achieved through simple interaction between a manager and
agent.
• Management is achieved through the use of two protocols:
o Structure of Management Information (SMI)
o Management Information Base (MIB).

Structure of Management Information (SMI)


• To use SNMP, we need rules for naming objects.
• SMI is a protocol that defines these rules.
• SMI is a guideline for SNMP
• It emphasizes three attributes to handle an object: name, data type, and
encoding method.
• Its functions are:
 To name objects.
 To define the type of data that can be stored in an object.
 To show how to encode data for transmission over the network.

Name
 SMI requires that each managed object (such as a router, a variable in a router, a value, etc.) have a unique name.
 To name objects globally, SMI uses an object identifier, which is a hierarchical identifier based on a tree structure.
 The tree structure starts with an unnamed root. Each object can be defined using a sequence
of integers separated by dots.
 The tree structure can also define an object using a sequence of textual names separated by
dots.
Type of data
 The second attribute of an object is the type of data stored in it.
 To define the data type, SMI uses Abstract Syntax Notation One (ASN.1) definitions.
 SMI has two broad categories of data types: simple and structured.
 The simple data types are atomic data types. Some of them are taken directly from ASN.1;
some are added by SMI.
 SMI defines two structured data types: sequence and sequence of.

 Sequence - A sequence data type is a combination of simple data types,


not necessarily of the same type.
 Sequence of - A sequence of data type is a combination of simple data
types all of the same type or a combination of sequence data types all of
the same type.

Encoding data
 SMI uses another standard, Basic Encoding Rules (BER), to encode data to be transmitted
over the network.
 BER specifies that each piece of data be encoded in triplet format (TLV): tag, length, value
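A minimal sketch of BER's tag-length-value encoding for one simple SMI type, an ASN.1 INTEGER, in Python. Only the short-form length is handled here; a real encoder must also deal with long-form lengths and with positive values whose top bit is set.

    def ber_encode_integer(value):
        tag = b"\x02"                        # universal tag 2 = ASN.1 INTEGER
        body = value.to_bytes((value.bit_length() // 8) + 1, "big")
        length = bytes([len(body)])          # short-form length (value shorter than 128 bytes)
        return tag + length + body           # Tag | Length | Value

    print(ber_encode_integer(5).hex())       # prints '020105'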

Management Information Base (MIB)

The Management Information Base (MIB) is the second component used in network
management.

• Each agent has its own MIB, which is a collection of objects to be managed.
• MIB classifies objects under groups.

MIB Variables

MIB variables are of two types namely simple and table.

• Simple variables are accessed using group-id followed by variable-id and 0


• Tables are ordered as column-row rules, i.e., column by column from top to bottom.
Only leaf elements are accessible in a table type.
SNMP MESSAGES/PDU

SNMP is request/reply protocol that supports various operations using PDUs.


SNMP defines eight types of protocol data units (PDUs):
GetRequest, GetNextRequest, GetBulkRequest, SetRequest, Response, Trap, InformRequest, and Report

GetRequest

 The GetRequest PDU is sent from the manager (client) to the agent (server)
to retrieve the value of a variable or a set of variables.

GetNextRequest

 The GetNextRequest PDU is sent from the manager to the agent to retrieve the value of the variable that immediately follows the one named in the PDU; it is mostly used to retrieve the values of the entries in a table whose indexes are not known.

GetBulkRequest

 The GetBulkRequest PDU is sent from the manager to the agent to retrieve a
large amount of data. It can be used instead of multiple GetRequest and
GetNextRequest PDUs.

SetRequest

 The SetRequest PDU is sent from the manager to the agent to set (store) a
value in a variable.
Response

 The Response PDU is sent from an agent to a manager in response to


GetRequest or GetNextRequest. It contains the value(s) of the variable(s)
requested by the manager.

Trap

 The Trap PDU is sent from the agent to the manager to report an event. For
example, if the agent is rebooted, it informs the manager and reports the time
of rebooting.

InformRequest

 The InformRequest PDU is sent from one manager to another remote


manager to get the value of some variables from agents under the control of the
remote manager. The remote manager responds with a Response PDU.

Report

 The Report PDU is designed to report some types of errors between


managers.

UNIT – II : TRANSPORT LAYER


Introduction - Transport-Layer Protocols: UDP – TCP: Connection
Management – Flow control -Congestion Control - Congestion avoidance
(DECbit, RED) – SCTP – Quality of Service.

2.1. INTRODUCTION
 The transport layer is the fourth layer of the OSI model and is the core of the Internet
model.
 It responds to service requests from the session layer and issues service requests to
the network Layer.
 The transport layer provides transparent transfer of data between hosts.
 It provides end-to-end control and information transfer with the quality of service
needed by the application program.
 It is the first true end-to-end layer, implemented in all End Systems (ES).

TRANSPORT LAYER FUNCTIONS / SERVICES


 The transport layer is located between the network layer and the application layer.
 The transport layer is responsible for providing services to the application layer; it
receives services from the network layer.
 The services that can be provided by the transport layer are
1. Process-to-Process Communication
2. Addressing : Port Numbers
3. Encapsulation and Decapsulation
4. Multiplexing and Demultiplexing
5. Flow Control
6. Error Control
7. Congestion Control

Process-to-Process Communication
 The Transport Layer is responsible for delivering data to the appropriate application
process on the host computers.
 This involves multiplexing of data from different application processes, i.e. forming
data packets, and adding source and destination port numbers in the header of each
Transport Layer data packet.
 Together with the source and destination IP address, the port numbers constitutes a
network socket, i.e. an identification address of the process-to-process
communication.

Addressing: Port Numbers


 Ports are the essential ways to address multiple entities in the same location.
 Using port addressing it is possible to use more than one network-based application
at the same time.
 Three types of Port numbers are used :
 Well-known ports - These are permanent port numbers. They range between
0 to 1023.These port numbers are used by Server Process.
 Registered ports - The ports ranging from 1024 to 49,151 are not assigned or
controlled.
 Ephemeral ports (Dynamic Ports) – These are temporary port numbers. They
range between 49152–65535.These port numbers are used by Client Process.

Encapsulation and Decapsulation


 To send a message from one process to another, the transport-layer protocol
encapsulates and decapsulates messages.
 Encapsulation happens at the sender site. The transport layer receives the data and
adds the transport-layer header.
 Decapsulation happens at the receiver site. When the message arrives at the
destination transport layer, the header is dropped and the transport layer delivers the
message to the process running at the application layer.

Multiplexing and Demultiplexing


 Whenever an entity accepts items from more than one source, this is referred to as
multiplexing (many to one).
 Whenever an entity delivers items to more than one destination, this is referred to as demultiplexing (one to many).
 The transport layer at the source performs multiplexing
 The transport layer at the destination performs demultiplexing

Flow Control
 Flow Control is the process of managing the rate of data transmission between two
nodes to prevent a fast sender from overwhelming a slow receiver.
 It provides a mechanism for the receiver to control the transmission speed, so that the
receiving node is not overwhelmed with data from transmitting node.

Error Control
 Error control at the transport layer is responsible for
1. Detecting and discarding corrupted packets.
2. Keeping track of lost and discarded packets and resending them.
3. Recognizing duplicate packets and discarding them.
4. Buffering out-of-order packets until the missing packets arrive.
 Error Control involves Error Detection and Error Correction

Congestion Control
 Congestion in a network may occur if the load on the network (the number of
packets sent to the network) is greater than the capacity of the network (the number
of packets a network can handle).
 Congestion control refers to the mechanisms and techniques that control the
congestion and keep the load below the capacity.
 Congestion Control refers to techniques and mechanisms that can either prevent
congestion, before it happens, or remove congestion, after it has happened
 Congestion control mechanisms are divided into two categories,
1. Open loop - prevent the congestion before it happens.
2. Closed loop - remove the congestion after it happens.

PORT NUMBERS
 A transport-layer protocol usually has several responsibilities.
 One is to create a process-to-process communication.
 Processes are programs that run on hosts. It could be either server or client.
 A process on the local host, called a client, needs services from a process usually
on the remote host, called a server.
 Processes are assigned a unique 16-bit port number on that host.
 Port numbers provide end-to-end addresses at the transport layer
 They also provide multiplexing and demultiplexing at this layer.

 The port numbers are integers between 0 and 65,535 .

ICANN (Internet Corporation for Assigned Names and Numbers) has divided the port
numbers into three ranges:
 Well-known ports
 Registered
 Ephemeral ports (Dynamic Ports)

WELL-KNOWN PORTS
 These are permanent port numbers used by the servers.
 They range between 0 to 1023.
 This port number cannot be chosen randomly.
 These port numbers are universal port numbers for servers.
 Every client process knows the well-known port number of the corresponding server
process.
 For example, while the daytime client process, a well-known client program, can
use an ephemeral (temporary) port number, 52,000, to identify itself, the daytime
server process must use the well-known (permanent) port number 13.


EPHEMERAL PORTS (DYNAMIC PORTS)


 The client program defines itself with a port number, called the ephemeral port
number.
 The word ephemeral means “short-lived” and is used because the life of a client is
normally short.
 An ephemeral port number is recommended to be greater than 1023.
 These port numbers range from 49,152 to 65,535.
 They are neither controlled nor registered. They can be used as temporary or private
port numbers.

REGISTERED PORTS
 The ports ranging from 1024 to 49,151 are not assigned or controlled by ICANN; they can only be registered with ICANN to prevent duplication.
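A small sketch with Python sockets showing the difference in practice: a server binds an agreed, fixed port so that clients can find it, while a client that binds port 0 is handed a free ephemeral port by the operating system (the port number 8080 used here is just a placeholder).

    import socket

    # A server process binds an agreed, fixed port so that clients can find it.
    server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    server.bind(("", 8080))
    print("server port:", server.getsockname()[1])

    # A client that binds port 0 is given a free ephemeral port by the OS.
    client = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    client.bind(("", 0))
    print("ephemeral client port:", client.getsockname()[1])

    server.close()
    client.close()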

2.2 TRANSPORT LAYER PROTOCOLS

 Three protocols are associated with the Transport layer.


 They are
(1) UDP –User Datagram Protocol
(2) TCP – Transmission Control Protocol
(3) SCTP - Stream Control Transmission Protocol

 Each protocol provides a different type of service and should be used appropriately.


UDP - UDP is an unreliable connectionless transport-layer protocol used for its simplicity
and efficiency in applications where error control can be provided by the application-layer
process.

TCP - TCP is a reliable connection-oriented protocol that can be used in any application
where reliability is important.

SCTP - SCTP is a new transport-layer protocol designed to combine some features of UDP
and TCP in an effort to create a better protocol for multimedia communication.

2.3 USER DATAGRAM PROTOCOL (UDP)

 User Datagram Protocol (UDP) is a connectionless, unreliable transport protocol.


 UDP adds process-to-process communication to best-effort service provided by IP.
 UDP is a very simple protocol using a minimum of overhead.
 UDP is a simple demultiplexer, which allows multiple processes on each host to
communicate.
 UDP does not provide flow control, reliable delivery, or ordered delivery.
 UDP can be used to send small message where reliability is not expected.
 Sending a small message using UDP takes much less interaction between the sender
and receiver.
 UDP allows processes to indirectly identify each other using an abstract locator called a port or mailbox.

UDP PORTS
 Processes (server/client) are identified by an abstract locator known as port.
 Server accepts message at well known port.
 Some well-known UDP ports are 7–Echo, 53–DNS, 111–RPC, 161–SNMP, etc.
 < port, host > pair is used as key for demultiplexing.
 Ports are implemented as a message queue.
 When a message arrives, UDP appends it to end of the queue.
 When queue is full, the message is discarded.
 When a message is read, it is removed from the queue.
 When an application process wants to receive a message, one is removed from the
front of the queue.
 If the queue is empty, the process blocks until a message becomes available.

UDP DATAGRAM (PACKET) FORMAT


 UDP packets are known as user datagrams .
 These user datagrams have a fixed-size header of 8 bytes, made of four fields, each of 2 bytes (16 bits).

Source Port Number


 The 16-bit port number used by the process running on the source host.
 If the source host is the client (sending a request), then the port number is a temporary one requested by the process and chosen by UDP.
 If the source is the server (sending a response), then it is a well-known port number.

Destination Port Number


 The 16-bit port number used by the process running on the destination host.
 If the destination host is the server (a client sending a request), then the port number is a well-known port number.
 If the destination host is the client (a server sending a response), then the port number is a temporary one copied by the server from the request packet.


Length
 This field denotes the total length of the UDP packet (header plus data).
 The total length of any UDP datagram can be up to 65,535 bytes; the minimum is 8 bytes, corresponding to a datagram that carries only the header and no data.

Checksum
 UDP computes its checksum over the UDP header, the contents of the message
body, and something called the pseudoheader.
 The pseudoheader consists of three fields from the IP header—protocol number,
source IP address, destination IP address plus the UDP length field.
Data
 Data field defines tha actual payload to be transmitted.
 Its size is variable.

UDP SERVICES
Process-to-Process Communication
 UDP provides process-to-process communication using socket addresses, a
combination of IP addresses and port numbers.

Connectionless Services
 UDP provides a connectionless service.
 There is no connection establishment and no connection termination .
 Each user datagram sent by UDP is an independent datagram.
 There is no relationship between the different user datagrams even if they are coming from the same source process and going to the same destination program.
 The user datagrams are not numbered.
 Each user datagram can travel on a different path.

Flow Control
 UDP is a very simple protocol.
 There is no flow control, and hence no window mechanism.
 The receiver may overflow with incoming messages.
 The lack of flow control means that the process using UDP should provide for this
service, if needed.

Error Control
 There is no error control mechanism in UDP except for the checksum.
 This means that the sender does not know if a message has been lost or duplicated.
 When the receiver detects an error through the checksum, the user datagram is
silently discarded.

 The lack of error control means that the process using UDP should provide for this
service, if needed.

Checksum
 UDP checksum calculation includes three sections: a pseudoheader, the UDP header,
and the data coming from the application layer.
 The pseudoheader is the part of the header in which the user datagram is to be
encapsulated with some fields filled with 0s.

Optional Inclusion of Checksum


 The sender of a UDP packet can choose not to calculate the checksum.
 In this case, the checksum field is filled with all 0s before being sent.
 In the situation where the sender decides to calculate the checksum,
but it happens that the result is all 0s, the checksum is changed to all 1s
before the packet is sent.
 In other words, the sender complements the sum two times.
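A sketch of the 16-bit one's-complement checksum in Python, including the all-zeros-becomes-all-ones rule mentioned above. The caller is assumed to pass the concatenation of pseudoheader, UDP header (with the checksum field set to 0), and data.

    def internet_checksum(data: bytes) -> int:
        if len(data) % 2:                              # pad to an even number of bytes
            data += b"\x00"
        total = 0
        for i in range(0, len(data), 2):
            total += (data[i] << 8) | data[i + 1]      # add 16-bit words
            total = (total & 0xFFFF) + (total >> 16)   # wrap the carry around
        checksum = ~total & 0xFFFF                     # one's complement of the sum
        return checksum or 0xFFFF                      # an all-zero result is sent as all 1s

    print(hex(internet_checksum(b"\x12\x34\x56\x78")))   # prints 0x9753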

Congestion Control
 Since UDP is a connectionless protocol, it does not provide congestion control.
 UDP assumes that the packets sent are small and sporadic(occasionally or at irregular
intervals) and cannot create congestion in the network.
 This assumption may or may not be true, when UDP is used for interactive real-time
transfer of audio and video.

Encapsulation and Decapsulation


 To send a message from one process to another, the UDP protocol encapsulates and
decapsulates messages.

Queuing
 In UDP, queues are associated with ports.
 At the client site, when a process starts, it requests a port number from the operating
system.
 Some implementations create both an incoming and an outgoing queue associated
with each process.
 Other implementations create only an incoming queue associated with each process.

Multiplexing and Demultiplexing


 In a host running a transport protocol suite, there is only one UDP but possibly
several processes that may want to use the services of UDP.
 To handle this situation, UDP multiplexes and demultiplexes.


APPLICATIONS OF UDP
 UDP is used for management processes such as SNMP.
 UDP is used for route updating protocols such as RIP.
 UDP is a suitable transport protocol for multicasting. Multicasting capability is
embedded in the UDP software
 UDP is suitable for a process with internal flow and error control mechanisms such
as Trivial File Transfer Protocol (TFTP).
 UDP is suitable for a process that requires simple request-response communication
with little concern for flow and error control.
 UDP is normally used for interactive real-time applications that cannot tolerate
uneven delay between sections of a received message.
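A minimal sketch of a UDP exchange with Python's socket module (the loopback address and port are placeholders): there is no connection establishment, and each datagram is sent and received as an independent unit.

    import socket

    # receiver: bind a port and wait for datagrams
    receiver = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    receiver.bind(("127.0.0.1", 5005))

    # sender: no connection setup, just send one datagram
    sender = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sender.sendto(b"hello via UDP", ("127.0.0.1", 5005))

    data, addr = receiver.recvfrom(2048)      # one whole datagram is returned
    print(data, "from", addr)

    sender.close()
    receiver.close()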

2.4 TRANSMISSION CONTROL PROTOCOL (TCP)

 TCP is a reliable, connection-oriented, byte-stream protocol.


 TCP guarantees the reliable, in-order delivery of a stream of bytes. It is a full-duplex
protocol, meaning that each TCP connection supports a pair of byte streams, one
flowing in each direction.
 TCP includes a flow-control mechanism for each of these byte streams that allows the
receiver to limit how much data the sender can transmit at a given time.
 TCP supports a demultiplexing mechanism that allows multiple application programs
on any given host to simultaneously carry on a conversation with their peers.
 TCP also implements a congestion-control mechanism. The idea of this mechanism is
to prevent the sender from overloading the network.
 Flow control is an end-to-end issue, whereas congestion control is concerned with
how hosts and the network interact.

TCP SERVICES
Process-to-Process Communication
 TCP provides process-to-process communication using port numbers.
Stream Delivery Service
 TCP is a stream-oriented protocol.
 TCP allows the sending process to deliver data as a stream of bytes and allows the
receiving process to obtain data as a stream of bytes.
 TCP creates an environment in which the two processes seem to be connected by an
imaginary “tube” that carries their bytes across the Internet.
 The sending process produces (writes to) the stream and the receiving process
consumes (reads from) it.

Full-Duplex Communication
 TCP offers full-duplex service, where data can flow in both directions at the same
time.
 Each TCP endpoint then has its own sending and receiving buffer, and segments
move in both directions.

Multiplexing and Demultiplexing


TCP performs multiplexing at the sender and demultiplexing at the receiver.

Connection-Oriented Service
 TCP is a connection-oriented protocol.
 A connection needs to be established for each pair of processes.
 When a process at site A wants to send to and receive data from another
process at site B, the following three phases occur:
1. The two TCPs establish a logical connection between them.
2. Data are exchanged in both directions.
3. The connection is terminated.

Reliable Service
 TCP is a reliable transport protocol.
 It uses an acknowledgment mechanism to check the safe and sound arrival of data.

TCP SEGMENT
 A packet in TCP is called a segment.
 The data units exchanged between TCP peers are called segments.
 A TCP segment encapsulates the data received from the application layer.
 The TCP segment is encapsulated in an IP datagram, which in turn is encapsulated in
a frame at the data-link layer.

 TCP is a byte-oriented protocol, which means that the sender writes bytes into a TCP
connection and the receiver reads bytes out of the TCP connection.
 TCP does not, itself, transmit individual bytes over the Internet.
 TCP on the source host buffers enough bytes from the sending process to fill a
reasonably sized packet and then sends this packet to its peer on the destination host.
 TCP on the destination host then empties the contents of the packet into a receive
buffer, and the receiving process reads from this buffer at its leisure.
 TCP connection supports byte streams flowing in both directions.
 The packets exchanged between TCP peers are called segments, since each one
carries a segment of the byte stream.

TCP PACKET FORMAT


 Each TCP segment contains the header plus the data.
 The segment consists of a header of 20 to 60 bytes, followed by data from the
application program.
 The header is 20 bytes if there are no options and up to 60 bytes if it contains
options.

SrcPort and DstPort―port numbers of the source and destination processes.
SequenceNum―sequence number of the first byte of data carried in the segment.
Acknowledgment―the byte number the receiver expects to receive next.
HdrLen―length of the TCP header in 32-bit (4-byte) words.
Flags―contains six control bits known as flags.
o URG — segment contains urgent data.
o ACK — value of acknowledgment field is valid.
o PUSH — sender has invoked the push operation.
o RESET — receiver wants to abort the connection.
o SYN — synchronize sequence numbers during connection establishment.
o FIN — terminates the TCP connection.

AdvertisedWindow―defines the receiver's window size and is used for flow control.
Checksum―computed over the TCP header, the data, and a pseudoheader containing IP fields
(Length, SourceAddr and DestinationAddr).
UrgPtr―used when the segment contains urgent data. It defines a value that is added to the
sequence number to indicate where the urgent data ends.
Options - There can be up to 40 bytes of optional information in the TCP header.

2.5 TCP CONNECTION MANAGEMENT


 TCP is connection-oriented.
 A connection-oriented transport protocol establishes a logical path between the
source and destination.
 All of the segments belonging to a message are then sent over this logical path.
 In TCP, connection-oriented transmission requires three phases:
Connection Establishment, Data Transfer and Connection Termination.

Connection Establishment
 While opening a TCP connection the two nodes(client and server) want to agree on a
set of parameters.
 The parameters are the starting sequence numbers that is to be used for their
respective byte streams.
 Connection establishment in TCP is a three-way handshaking.

1. Client sends a SYN segment to the server containing its initial sequence number (Flags
= SYN, SequenceNum = x)
2. Server responds with a segment that acknowledges the client’s segment and specifies its
initial sequence number (Flags = SYN + ACK, ACK = x + 1, SequenceNum = y).
3. Finally, client responds with a segment that acknowledges server’s sequence number
(Flags = ACK, ACK = y + 1).
 The reason that each side acknowledges a sequence number that is one larger
than the one sent is that the Acknowledgment field actually identifies the “next
sequence number expected.”
 A timer is scheduled for each of the first two segments, and if the expected
response is not received, the segment is retransmitted.
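In practice the application never builds these segments itself; the operating system's TCP performs the three-way handshake when the client calls connect() and the server calls accept(). A minimal sketch (the loopback address and port 6000 are arbitrary choices):

import socket, threading

ready = threading.Event()

def server():
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.bind(('127.0.0.1', 6000))          # hypothetical port
    srv.listen(1)                          # passive open: server TCP enters LISTEN
    ready.set()
    conn, _ = srv.accept()                 # returns once the handshake is complete
    conn.close(); srv.close()

t = threading.Thread(target=server); t.start()
ready.wait()

cli = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
cli.connect(('127.0.0.1', 6000))           # active open: SYN, SYN+ACK, ACK exchanged by the OS
cli.close(); t.join()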

Data Transfer
 After connection is established, bidirectional data transfer can take place.
 The client and server can send data and acknowledgments in both directions.
 The data traveling in the same direction as an acknowledgment are carried on the
same segment.
 The acknowledgment is piggybacked with the data.

Connection Termination
 Connection termination or teardown can be done in two ways:
Three-way Close and Half-Close.

Three-way Close—Both client and server close simultaneously.

 Client sends a FIN segment.
 The FIN segment can include the last chunk of data.
 Server responds with a FIN + ACK segment to announce its own closing.
 Finally, the client sends an ACK segment.

Half-Close—Client stops sending but receives data.
 Client half-closes the connection by sending a FIN segment.
 Server sends an ACK segment.
 Data transfer from the client to the server stops.
 After sending all of its data, the server sends a FIN segment to the client, which is
acknowledged by the client.

STATE TRANSITION DIAGRAM


 To keep track of all the different events happening during connection establishment,
connection termination, and data transfer, TCP is specified as a finite state machine
(FSM).
 The transition from one state to another is shown using directed lines.
 States involved in opening and closing a connection are shown above and below the
ESTABLISHED state, respectively.
 States involved in TCP:

Opening a TCP Connection


1. Server invokes a passive open on TCP, which causes TCP to move to LISTEN state
2. Client does an active open, which causes its TCP to send a SYN segment to the server
and move to SYN_SENT state.
3. When SYN segment arrives at the server, it moves to SYN_RCVD state and responds
with a SYN + ACK segment.
4. Arrival of SYN + ACK segment causes the client to move to ESTABLISHED state
and sends an ACK to the server.
5. When ACK arrives, the server finally moves to ESTABLISHED state.

Closing a TCP Connection


 Client / Server can independently close its half of the connection or simultaneously.
Transitions from ESTABLISHED to CLOSED state are:
One side closes:
ESTABLISHED → FIN_WAIT_1 → FIN_WAIT_2 → TIME_WAIT → CLOSED
Other side closes:
ESTABLISHED → CLOSE_WAIT → LAST_ACK → CLOSED
Simultaneous close:
ESTABLISHED → FIN_WAIT_1 → CLOSING → TIME_WAIT → CLOSED

2.6 TCP FLOW CONTROL


 TCP uses a variant of sliding window known as adaptive flow control that:
o guarantees reliable delivery of data
o ensures ordered delivery of data
o enforces flow control at the sender
 Receiver advertises its window size to the sender using AdvertisedWindow field.
 Sender thus cannot have unacknowledged data greater than AdvertisedWindow.

Send Buffer
 Sending TCP maintains a send buffer which holds three categories of data:
(1) acknowledged data
(2) unacknowledged data
(3) data to be transmitted.
 Send buffer maintains three pointers
(1) LastByteAcked, (2) LastByteSent, and (3) LastByteWritten
such that:
LastByteAcked ≤ LastByteSent ≤ LastByteWritten
 A byte can be sent only after being written and only a sent byte can be
acknowledged.
 Bytes to the left of LastByteAcked are not kept, as they have already been acknowledged.

Receive Buffer
 Receiving TCP maintains receive buffer to hold data even if it arrives out-of-order.
 Receive buffer maintains three pointers namely
(1) LastByteRead, (2) NextByteExpected, and (3) LastByteRcvd
such that:
LastByteRead ≤ NextByteExpected ≤ LastByteRcvd + 1
 A byte cannot be read until that byte and all preceding bytes have been received.
 If data is received in order, then NextByteExpected = LastByteRcvd + 1
 Bytes to the left of LastByteRead are not buffered, since they have already been read by the application.

Flow Control in TCP


 Size of send and receive buffer is MaxSendBuffer and MaxRcvBuffer respectively.
 Sending TCP prevents overflowing of send buffer by maintaining
LastByteWritten − LastByteAcked ≤ MaxSendBuffer
 Receiving TCP avoids overflowing its receive buffer by maintaining
LastByteRcvd − LastByteRead ≤ MaxRcvBuffer
 Receiver throttles the sender by having AdvertisedWindow based on free space
available for buffering.
AdvertisedWindow = MaxRcvBuffer − ((NextByteExpected − 1) − LastByteRead)
 Sending TCP adheres to AdvertisedWindow by computing EffectiveWindow that
limits how much data it should send.
EffectiveWindow = AdvertisedWindow − (LastByteSent − LastByteAcked)
 When data arrives, LastByteRcvd moves to its right and AdvertisedWindow shrinks.
 The receiver acknowledges data only if all preceding bytes have arrived.
 AdvertisedWindow expands when data is read by the application.
o If data is read as fast as it arrives then
AdvertisedWindow = MaxRcvBuffer
o If data is read slowly, it eventually leads to an AdvertisedWindow of size 0.
 AdvertisedWindow field is designed to allow sender to keep the pipe full.
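The window arithmetic above can be written out directly. A small sketch using the pointer names from these notes (all byte counts here are hypothetical examples):

def advertised_window(max_rcv_buffer, next_byte_expected, last_byte_read):
    # Free space left in the receive buffer
    return max_rcv_buffer - ((next_byte_expected - 1) - last_byte_read)

def effective_window(advertised, last_byte_sent, last_byte_acked):
    # How much new data the sender may still transmit
    return advertised - (last_byte_sent - last_byte_acked)

# Example: 4 KB receive buffer, receiver has buffered bytes 1001..3000 unread
adv = advertised_window(4096, next_byte_expected=3001, last_byte_read=1000)
eff = effective_window(adv, last_byte_sent=3000, last_byte_acked=2500)
print(adv, eff)    # 2096 bytes advertised, 1596 bytes may still be sent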

TCP TRANSMISSION
 TCP has three mechanisms to trigger the transmission of a segment: reaching the
maximum segment size (MSS), an explicit push by the sending process, and the firing
of a timer.
 Two related issues are the Silly Window Syndrome and Nagle’s Algorithm, discussed below.

Silly Window Syndrome


 When either the sending application program creates data slowly or the receiving
application program consumes data slowly, or both, problems arise.
 Any of these situations results in the sending of data in very small segments, which
reduces the efficiency of the operation.
 This problem is called the silly window syndrome.
 The sending TCP may create a silly window syndrome if it is serving an application
program that creates data slowly, for example, 1 byte at a time.
 The application program writes 1 byte at a time into the buffer of the sending TCP.
 The result is a lot of 1-byte segments that are traveling through an internet.
 The solution is to prevent the sending TCP from sending the data byte by byte.
 The sending TCP must be forced to wait and collect data to send in a larger block.

Nagle’s Algorithm
 If there is data to send but it is less than MSS, the sender may want to wait some amount
of time before sending the available data.
 If it waits too long, the process may be delayed.
 If it does not wait long enough, it may end up sending small segments, resulting in the
Silly Window Syndrome.
 The solution is to introduce a timer and to transmit when the timer expires.
 Nagle introduced an algorithm for solving this problem, sketched below.
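A sketch of Nagle's decision rule (the function and variable names are illustrative, not taken from any particular TCP implementation): send a full segment immediately; otherwise send small data only when nothing is unacknowledged, and keep buffering if data is in flight.

def nagle_can_send_now(available_bytes, mss, effective_window, unacked_in_flight):
    """Return True if the sender should transmit now, False to keep buffering."""
    if available_bytes >= mss and effective_window >= mss:
        return True                      # a full-sized segment can go out immediately
    if unacked_in_flight == 0:
        return True                      # the wire is idle, so send even a small segment
    return False                         # wait: the next ACK will "clock" out the buffered data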

2.7 TCP CONGESTION CONTROL


 Congestion occurs if load (number of packets sent) is greater than capacity of the
network (number of packets a network can handle).
 When load is less than network capacity, throughput increases proportionally.
 When load exceeds capacity, queues become full and the routers discard some
packets and throughput declines sharply.
 When too many packets are contending for the same link
o The queue overflows
o Packets get dropped
o Network is congested
 Network should provide a congestion control mechanism to deal with such a
situation.
 TCP maintains a variable called CongestionWindow for each connection.
 TCP Congestion Control mechanisms are:
1. Additive Increase / Multiplicative Decrease (AIMD)


2. Slow Start
3. Fast Retransmit and Fast Recovery

Additive Increase / Multiplicative Decrease (AIMD)


 TCP source initializes CongestionWindow based on congestion level in the network.
 Source increases CongestionWindow when level of congestion goes down and
decreases the same when level of congestion goes up.
 TCP interprets timeouts as a sign of congestion and reduces the rate of transmission.
 On timeout, source reduces its CongestionWindow by half, i.e., multiplicative
decrease. For example, if CongestionWindow = 16 packets, after timeout it is 8.
 Value of CongestionWindow is never less than maximum segment size (MSS).
 When ACK arrives CongestionWindow is incremented marginally, i.e., additive
increase.
Increment = MSS × (MSS/CongestionWindow)
CongestionWindow += Increment
 For example, when the ACK arrives for 1 packet, 2 packets are sent. When ACKs for both
packets arrive, 3 packets are sent, and so on.
 CongestionWindow increases and decreases throughout lifetime of the connection.

 When CongestionWindow is plotted as a function of time, a saw-tooth pattern results.
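A sketch of the two AIMD rules with the window kept in bytes (the MSS value and the initial window below are illustrative, not from the notes):

MSS = 1460                                  # illustrative maximum segment size in bytes

def on_ack(cwnd):
    """Additive increase: roughly one MSS per round-trip time."""
    increment = MSS * (MSS / cwnd)
    return cwnd + increment

def on_timeout(cwnd):
    """Multiplicative decrease: halve the window, but never below one MSS."""
    return max(cwnd / 2, MSS)

cwnd = 16 * MSS
cwnd = on_timeout(cwnd)                     # drops to 8 * MSS after a timeout
for _ in range(8):                          # one ACK per outstanding segment ...
    cwnd = on_ack(cwnd)                     # ... grows the window by about one MSS in total
print(round(cwnd / MSS, 2))                 # ~8.94 MSS: roughly one MSS of growth per RTT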

Slow Start
 Slow start is used to increase CongestionWindow exponentially from a cold start.
 Source TCP initializes CongestionWindow to one packet.
 TCP doubles the number of packets sent every RTT on successful transmission.
 When ACK arrives for first packet TCP adds 1 packet to CongestionWindow and
sends two packets.
 When two ACKs arrive, TCP increments CongestionWindow by 2 packets and sends
four packets and so on.
 Instead of sending entire permissible packets at once (bursty traffic), packets are sent
in a phased manner, i.e., slow start.
 Initially TCP has no idea about congestion, hence it increases
CongestionWindow rapidly until there is a timeout. On timeout:
CongestionThreshold = CongestionWindow/ 2
CongestionWindow = 1
 Slow start is repeated until CongestionWindow reaches CongestionThreshold; thereafter
the window grows by about 1 packet per RTT (additive increase).

 The congestion window trace will look like

Fast Retransmit And Fast Recovery


 TCP timeouts led to long periods of time during which the connection went dead
while waiting for a timer to expire.
 Fast retransmit is a heuristic approach that triggers retransmission of a dropped
packet sooner than the regular timeout mechanism. It does not replace regular
timeouts.
 When a packet arrives out of order, receiving TCP resends the same
acknowledgment (duplicate ACK) it sent last time.
 When three duplicate ACK arrives at the sender, it infers that corresponding packet
may be lost due to congestion and retransmits that packet. This is called fast
retransmit before regular timeout.
 When packet loss is detected using fast retransmit, the slow start phase is replaced by
additive increase, multiplicative decrease method. This is known as fast recovery.
 Instead of setting CongestionWindow to one packet, this method uses the ACKs that
are still in pipe to clock the sending of packets.
 Slow start is only used at the beginning of a connection and after regular timeout. At
other times, it follows a pure AIMD pattern.

 For example, packets 1 and 2 are received whereas packet 3 gets lost.
o Receiver sends a duplicate ACK for packet 2 when packet 4 arrives.
o Sender receives three duplicate ACKs after sending packet 6 and retransmits packet 3.
o When packet 3 is received, receiver sends cumulative ACK up to packet 6.
 The congestion window trace will look like
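The combined behaviour of slow start, the congestion threshold, fast retransmit and fast recovery can be summarised in a small event-driven sketch (window counted in packets; this is a simplification of real TCP implementations, and the initial threshold is an arbitrary choice):

class TcpCongestionState:
    def __init__(self):
        self.cwnd = 1                     # congestion window, in packets
        self.ssthresh = 64                # illustrative initial threshold
        self.dup_acks = 0

    def on_new_ack(self):
        self.dup_acks = 0
        if self.cwnd < self.ssthresh:
            self.cwnd += 1                # slow start: doubles per RTT (one per ACK)
        else:
            self.cwnd += 1 / self.cwnd    # additive increase: ~one packet per RTT

    def on_duplicate_ack(self):
        self.dup_acks += 1
        if self.dup_acks == 3:            # fast retransmit threshold
            self.ssthresh = max(self.cwnd / 2, 1)
            self.cwnd = self.ssthresh     # fast recovery: skip the slow start phase
            # the missing segment would be retransmitted here

    def on_timeout(self):
        self.ssthresh = max(self.cwnd / 2, 1)
        self.cwnd = 1                     # regular timeout: fall back to slow start
        self.dup_acks = 0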

2.8 TCP CONGESTION AVOIDANCE


 Congestion avoidance mechanisms prevent congestion before it actually occurs.
 These mechanisms predict when congestion is about to happen and then to reduce the
rate at which hosts send data just before packets start being discarded.
 TCP itself creates packet losses in order to determine the available bandwidth of the connection.
 Routers help the end nodes by intimating when congestion is likely to occur.
 Congestion-avoidance mechanisms are:
o DEC bit - Destination Experiencing Congestion Bit
o RED - Random Early Detection

Dec Bit - Destination Experiencing Congestion Bit


 DECbit was the first congestion-avoidance mechanism, developed for use on the Digital Network Architecture (DNA).

 The idea is to evenly split the responsibility for congestion control between the
routers and the end nodes.

 Each router monitors the load it is experiencing and explicitly notifies the end nodes
when congestion is about to occur.

 This notification is implemented by setting a binary congestion bit in the packets that
flow through the router; hence the name DECbit.

 The destination host then copies this congestion bit into the ACK it sends back to the
source.

 The source checks how many ACKs have the DEC bit set for the packets of the previous window.

 If less than 50% of the ACKs have the DEC bit set, the source increases its congestion
window by 1 packet.

 Otherwise, the source decreases its congestion window to 87.5% of its previous value
(i.e., multiplies it by 0.875).

 Finally, the source adjusts its sending rate so as to avoid congestion.

 The “increase by 1, multiply by 0.875” rule follows the additive increase / multiplicative decrease (AIMD) pattern, which keeps the mechanism stable.

 A single congestion bit is added to the packet header.

 Using a queue length of 1 as the trigger for setting the congestion bit.

 A router sets this bit in a packet if its average queue length is greater than or equal to
1 at the time the packet arrives.

Computing average queue length at a router using DEC bit

 The average queue length is measured over a time interval that includes the
last busy cycle, the last idle cycle, and the current busy cycle.
 The average is computed by dividing the area under the queue-length curve by the
length of this time interval.

Red - Random Early Detection


 The second mechanism of congestion avoidance is called as Random Early
Detection (RED).

 Each router is programmed to monitor its own queue length, and when it detects that
there is congestion, it notifies the source to adjust its congestion window.
 RED differs from the DEC bit scheme by two ways:
a. In DECbit, explicit notification about congestion is sent to source, whereas
RED implicitly notifies the source by dropping a few packets.

b. DECbit may lead to a tail-drop policy, whereas RED drops packets in a random
manner based on a drop probability: each arriving packet is dropped with some
probability whenever the average queue length exceeds some drop level. This
idea is called early random drop.

Computation of average queue length using RED

 AvgLen = (1 − Weight) × AvgLen + Weight × SampleLen
where 0 < Weight < 1 and SampleLen is the length of the queue when a sample
measurement is made.
 The queue length is measured every time a new packet arrives at the gateway.

 RED has two queue length thresholds that trigger certain activity: MinThreshold and
MaxThreshold
 When a packet arrives at the gateway, RED compares AvgLen with these two thresholds
according to the following rules:
o If AvgLen ≤ MinThreshold, the packet is queued.
o If MinThreshold < AvgLen < MaxThreshold, a drop probability P is calculated and the
packet is dropped with probability P.
o If AvgLen ≥ MaxThreshold, the arriving packet is dropped.
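A sketch of the RED decision path using the AvgLen formula above (the weight, thresholds and MaxP are illustrative configuration values; the probability calculation is simplified, since real RED also scales it by the count of packets since the last drop):

import random

WEIGHT = 0.002      # EWMA weight, 0 < Weight < 1
MIN_TH = 5          # MinThreshold, in packets (illustrative)
MAX_TH = 15         # MaxThreshold, in packets (illustrative)
MAX_P  = 0.02       # maximum drop probability, reached at MaxThreshold

avg_len = 0.0

def on_packet_arrival(sample_len):
    """Return True to enqueue the arriving packet, False to drop it."""
    global avg_len
    avg_len = (1 - WEIGHT) * avg_len + WEIGHT * sample_len
    if avg_len <= MIN_TH:
        return True                                     # always enqueue
    if avg_len >= MAX_TH:
        return False                                    # always drop
    p = MAX_P * (avg_len - MIN_TH) / (MAX_TH - MIN_TH)  # drop probability grows linearly
    return random.random() >= p                         # drop with probability p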

2.9 STREAM CONTROL TRANSMISSION PROTOCOL (SCTP)

 Stream Control Transmission Protocol (SCTP) is a reliable, message-oriented
transport layer protocol.
 SCTP has mixed features of TCP and UDP.
 SCTP maintains the message boundaries and detects the lost data, duplicate data as
well as out-of-order data.
 SCTP provides the Congestion control as well as Flow control.
 SCTP is especially designed for internet applications as well as multimedia
communication.

SCTP SERVICES

Process-to-Process Communication
 SCTP provides process-to-process communication.

Multiple Streams
 SCTP allows multistream service in each connection, which is called association in
SCTP terminology.
 If one of the streams is blocked, the other streams can still deliver their data.

Multihoming
 An SCTP association supports multihoming service.
 The sending and receiving host can define multiple IP addresses in each end for an
association.
 In this fault-tolerant approach, when one path fails, another interface can be used for
data delivery without interruption.

Full-Duplex Communication
 SCTP offers full-duplex service, where data can flow in both directions at the same
time. Each SCTP then has a sending and receiving buffer and packets are sent in both
directions.

Connection-Oriented Service
 SCTP is a connection-oriented protocol.
 In SCTP, a connection is called an association.
 If a client wants to send and receive message from server , the steps are :
Step1: The two SCTPs establish the connection with each other.
Step2: Once the connection is established, the data gets exchanged in both the
directions.
Step3: Finally, the association is terminated.

Reliable Service
 SCTP is a reliable transport protocol.
 It uses an acknowledgment mechanism to check the safe and sound arrival of data.

SCTP PACKET FORMAT

An SCTP packet has a mandatory general header and a set of blocks called chunks.

General Header
 The general header (packet header) defines the end points of each association to
which the packet belongs
 It guarantees that the packet belongs to a particular association
 It also preserves the integrity of the contents of the packet including the header itself.
 There are four fields in the general header.
Source port
This field identifies the sending port.

Destination port
This field identifies the receiving port that hosts use to route the packet to the
appropriate endpoint/application.

Verification tag
A 32-bit random value created during initialization to distinguish stale packets
from a previous connection.
Checksum
The next field is a checksum. The size of the checksum is 32 bits. SCTP uses
CRC-32 Checksum.
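These four fields occupy 12 bytes. A sketch of parsing them with Python's struct module (network byte order; the function name is ours):

import struct

def parse_sctp_common_header(packet: bytes):
    """Split off the 12-byte SCTP general header; the remaining bytes are the chunks."""
    src_port, dst_port, verification_tag, checksum = struct.unpack('!HHII', packet[:12])
    return {
        'source_port': src_port,
        'destination_port': dst_port,
        'verification_tag': verification_tag,
        'checksum': checksum,
        'chunks': packet[12:],
    }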

Chunks
 Control information or user data are carried in chunks.
 Chunks have a common layout.
 The first three fields are common to all chunks; the information field depends on the
type of chunk.

 The type field can define up to 256 types of chunks. Only a few have been defined so
far; the rest are reserved for future use.
 The flag field defines special flags that a particular chunk may need.
 The length field defines the total size of the chunk, in bytes, including the type, flag,
and length fields.

Types of Chunks
 An SCTP association may send many packets, a packet may contain several chunks,
and chunks may belong to different streams.
 SCTP defines two types of chunks - Control chunks and Data chunks.
 A control chunk controls and maintains the association.
 A data chunk carries user data.

SCTP ASSOCIATION
 SCTP is a connection-oriented protocol.
 A connection in SCTP is called an association to emphasize multihoming.
 SCTP Associations consists of three phases:
 Association Establishment
 Data Transfer
 Association Termination

Association Establishment
 Association establishment in SCTP requires a four-way handshake.
 In this procedure, a client process wants to establish an association with a server
process using SCTP as the transport-layer protocol.
 The SCTP server needs to be prepared to receive any association (passive open).
 Association establishment, however, is initiated by the client (active open).

 The client sends the first packet, which contains an INIT chunk.
 The server sends the second packet, which contains an INIT ACK chunk. The INIT
ACK also sends a cookie that defines the state of the server at this moment.
 The client sends the third packet, which includes a COOKIE ECHO chunk. This is a
very simple chunk that echoes, without change, the cookie sent by the server. SCTP
allows the inclusion of data chunks in this packet.
 The server sends the fourth packet, which includes the COOKIE ACK chunk that
acknowledges the receipt of the COOKIE ECHO chunk. SCTP allows the inclusion
of data chunks with this packet.

Data Transfer
 The whole purpose of an association is to transfer data between two ends.
 After the association is established, bidirectional data transfer can take place.
 The client and the server can both send data.
 SCTP supports piggybacking.

 Types of SCTP data Transfer :


1. Multihoming Data Transfer
 Data transfer, by default, uses the primary address of the destination.
 If the primary is not available, one of the alternative addresses is used.
 This is called Multihoming Data Transfer.

2. Multistream Delivery
 SCTP can support multiple streams, which means that the sender process
can define different streams and a message can belong to one of these
streams.
 Each stream is assigned a stream identifier (SI) which uniquely defines
that stream.
 SCTP supports two types of data delivery in each stream: ordered (default)
and unordered.

Association Termination
 In SCTP, either of the two parties involved in exchanging data (client or server) can
close the connection.
 SCTP does not allow a “half closed” association. If one end closes the association,
the other end must stop sending new data.
 If any data are left over in the queue of the recipient of the termination request, they
are sent and the association is closed.
 Association termination uses three packets.

SCTP FLOW CONTROL


 Flow control in SCTP is similar to that in TCP.
 Current SCTP implementations use a byte-oriented window for flow control.

Receiver Site
 The receiver has one buffer (queue) and three variables.

 The queue holds the received data chunks that have not yet been read by the process.
 The first variable holds the last TSN received, cumTSN.
 The second variable holds the available buffer size, winSize.
 The third variable holds the last accumulative acknowledgment, lastACK.
 The following figure shows the queue and variables at the receiver site.

 When the site receives a data chunk, it stores it at the end of the buffer (queue) and
subtracts the size of the chunk from winSize.
 The TSN number of the chunk is stored in the cumTSN variable.
 When the process reads a chunk, it removes it from the queue and adds the size of the
removed chunk to winSize (recycling).
 When the receiver decides to send a SACK, it checks the value of lastAck; if it is less
than cumTSN, it sends a SACK with a cumulative TSN number equal to the
cumTSN.
 It also includes the value of winSize as the advertised window size.

Sender Site
 The sender has one buffer (queue) and three variables: curTSN, rwnd, and inTransit.
 We assume each chunk is 100 bytes long. The buffer holds the chunks produced by
the process that either have been sent or are ready to be sent.
 The first variable, curTSN, refers to the next chunk to be sent.
 All chunks in the queue with a TSN less than this value have been sent, but not
acknowledged; they are outstanding.
 The second variable, rwnd, holds the last value advertised by the receiver (in bytes).
 The third variable, inTransit, holds the number of bytes in transit, bytes sent but not
yet acknowledged.
 The following figure shows the queue and variables at the sender site.

 A chunk pointed to by curTSN can be sent if the size of the data is less than or equal
to the quantity rwnd - inTransit.
 After sending the chunk, the value of curTSN is incremented by 1 and now points to
the next chunk to be sent.
 The value of inTransit is incremented by the size of the data in the transmitted chunk.
 When a SACK is received, the chunks with a TSN less than or equal to the
cumulative TSN in the SACK are removed from the queue and discarded. The sender
does not have to worry about them anymore.
 The value of inTransit is reduced by the total size of the discarded chunks.
 The value of rwnd is updated with the value of the advertised window in the SACK.
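The sender-side rules above can be sketched as small helper functions (the names and byte counts are illustrative, not from the notes):

def can_send_next_chunk(chunk_size, rwnd, in_transit):
    """A chunk may be sent only if it fits in the receiver's advertised window."""
    return chunk_size <= rwnd - in_transit

def after_sending(cur_tsn, in_transit, chunk_size):
    """Advance curTSN and count the newly sent bytes as in transit."""
    return cur_tsn + 1, in_transit + chunk_size

def on_sack(in_transit, acked_bytes, advertised_rwnd):
    """Acknowledged chunks leave the queue; rwnd is refreshed from the SACK."""
    return in_transit - acked_bytes, advertised_rwnd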

SCTP ERROR CONTROL


 SCTP is a reliable transport layer protocol.
 It uses a SACK chunk to report the state of the receiver buffer to the sender.
 Each implementation uses a different set of entities and timers for the receiver and
sender sites.

Receiver Site
 The receiver stores all chunks that have arrived in its queue including the out-of-
order ones. However, it leaves spaces for any missing chunks.
 It discards duplicate messages, but keeps track of them for reports to the sender.
 The following figure shows a typical design for the receiver site and the state of the
receiving queue at a particular point in time.

 The available window size is 1000 bytes.


 The last acknowledgment sent was for data chunk 20.
 Chunks 21 to 23 have been received in order.
 The first out-of-order block contains chunks 26 to 28.
 The second out-of-order block contains chunks 31 to 34.
 A variable holds the value of cumTSN.

 An array of variables keeps track of the beginning and the end of each block that is
out of order.
 An array of variables holds the duplicate chunks received.
 There is no need for storing duplicate chunks in the queue and they will be discarded.

Sender Site
 At the sender site, it needs two buffers (queues): a sending queue and a
retransmission queue.
 Three variables were used - rwnd, inTransit, and curTSN as described in the previous
section.
 The following figure shows a typical design.

 The sending queue holds chunks 23 to 40.


 The chunks 23 to 36 have already been sent, but not acknowledged; they are
outstanding chunks.
 The curTSN points to the next chunk to be sent (37).
 We assume that each chunk is 100 bytes, which means that 1400 bytes of data
(chunks 23 to 36) is in transit.
 The sender at this moment has a retransmission queue.
 When a packet is sent, a retransmission timer starts for that packet (all data chunks in
that packet).
 Some implementations use one single timer for the entire association, but other
implementations use one timer for each packet.

SCTP CONGESTION CONTROL

 SCTP is a transport-layer protocol with packets subject to congestion in the network.


 The SCTP designers have used the same strategies for congestion control as those
used in TCP.
NOTE : REFER TCP CONGESTION CONTROL

Quality of Service(QoS)
Quality of Service (QoS) is the ability to provide different priorities to different applications,
users, or data flows, or to guarantee a certain level of performance to a flow of data. In other
words, QoS is the level of service that a flow seeks to attain. QoS describes the overall
performance of a computer network, mainly the performance as seen by the users of the network.
Flow Characteristics
Given below are four types of characteristics that are mainly attributed to the flow and these are
as follows:
• Reliability
• Delay
• Jitter
• Bandwidth
Reliability
It is one of the main characteristics that a flow needs. A lack of reliability means losing a
packet or an acknowledgement, which then requires retransmission. Reliability is more
important for electronic mail, file transfer, and Internet access.
Delay
Another characteristic of the flow is the delay in transmission between the source and
destination. During audio conferencing, telephony, video conferencing, and remote conferencing
there should be a minimum delay.
Jitter
It is the variation in delay for packets that belong to the same flow. High jitter means a large
variation in packet delay; low jitter means the variation is small.
Bandwidth
Different applications need different amounts of bandwidth.
How to achieve Quality of Service?
Quality of Service can be achieved by using tools and techniques such as jitter buffers and
traffic shaping.
Jitter buffer
This is a temporary storage buffer used to store incoming data packets. It is used in
packet-based networks to ensure that the continuity of a data stream is not disturbed; it does
this by smoothing out packet arrival times during periods of network congestion.
Traffic shaping
This technique, also known as packet shaping, is a congestion control or management
technique that regulates network data transfer by delaying the flow of the least important or
least necessary data packets.

There are 2 types of Quality of Service Solutions:

Stateless solution: Here, the server is not required to keep or store session information or
details. The routers maintain no fine-grained state about traffic; one positive aspect of this is
that it is scalable and robust. However, the services are weak, since there is no guarantee about
the performance or delay a particular application will experience. In the stateless solution, the
server and client are loosely coupled and can act independently.

Stateful solution: Here, the server is required to maintain current state and session
information, and the routers maintain per-flow state. This makes it possible to provide powerful
services such as guaranteed service, high resource utilization, and protection, but it is much
less scalable and robust. Here, the server and client are tightly coupled.

Quality of Service Parameters:

QoS can be measured quantitatively by using several parameters


Packet loss: It happens when network links become congested and the routers and switches
start dropping packets. When packets are dropped during real-time communication, such as
audio or video, the session can experience jitter and gaps in speech.
Jitter: It occurs as the result of network congestion, timing drift, and route changes. Too
much jitter can degrade the quality of audio communication.
Latency: It is the time delay taken by a packet to travel from its source to its destination.
Latency should be as low as possible, ideally close to zero.
Bandwidth: It is the capacity of a network channel to transmit the maximum possible data
through the channel in a certain amount of time. QoS optimizes a network by managing its
bandwidth and setting priorities for the applications that require more resources than others.
Mean opinion score: It is a metric for rating audio quality on a five-point scale, with five
indicating the highest or best quality.
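Packet loss, latency, and jitter can all be computed from simple send/receive measurements. A sketch with hypothetical values (times in milliseconds):

sent      = 6                                    # packets transmitted
delays_ms = [42.0, 45.5, 41.8, 60.2, 44.1]       # one-way delays of the packets received

packet_loss = 1 - len(delays_ms) / sent          # fraction of packets never delivered
latency     = sum(delays_ms) / len(delays_ms)    # mean one-way delay
jitter      = (sum(abs(b - a) for a, b in zip(delays_ms, delays_ms[1:]))
               / (len(delays_ms) - 1))           # mean variation between consecutive delays

print(f"loss={packet_loss:.0%}  latency={latency:.1f} ms  jitter={jitter:.1f} ms")
# loss=17%  latency=46.7 ms  jitter=10.4 ms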

Implementing Quality of Service:


We can implement Quality of service through three of the following existing models:
Best Effort: If we apply this model, all data packets are given equal priority. Because of this,
there is no guarantee that all the data packets will be delivered, but the network puts in its best
effort to deliver all of them. The best-effort model is applied when the network has not been
configured with QoS policies or when the network infrastructure does not support QoS.

Integrated Services: Also called IntServ, this QoS model reserves bandwidth along a specific
path in the network. Applications request resource reservations for themselves, and in parallel
the network devices monitor the flow of packets to make sure network resources can accept the
packets. Implementing the Integrated Services model requires IntServ-capable routers and a
resource reservation protocol (RSVP). This model has limited scalability and high consumption
of network resources.
Differentiated Services: In this QoS model, network elements such as routers and switches are
configured to serve multiple classes of traffic with different priorities. A company can classify
its network traffic based on its requirements, e.g., assigning higher priority to audio traffic.
Difference between Integrated Services and Differentiated Services:

Integrated Services | Differentiated Services
Specifies the elements to guarantee Quality of Service (QoS) on the network. | Specifies a simple and scalable mechanism for classifying and managing network traffic and providing QoS on modern IP networks.
Involves prior reservation of resources before sending in order to achieve QoS. | Marks packets with a priority and sends them into the network; there is no concept of prior reservation.
Also known as IntServ. | Also known as DiffServ.
Not scalable. | Scalable.
Involves per-flow setup. | Involves long-term setup.
End-to-end service scope. | Per-domain service scope.

UNIT III NETWORK LAYER 7

Switching: Packet Switching - Internet protocol - IPV4 – IP Addressing – Subnetting - IPV6,
ARP, RARP, ICMP, DHCP

NETWORK LAYER
• The network layer in the TCP/IP protocol suite is responsible for the host-to host
delivery of datagrams.
• It provides services to the transport layer and receives services from the datalink
layer.
• The network layer translates the logical addresses into physical addresses.
• It determines the route from the source to the destination and also manages the traffic
problems such as switching, routing and controls the congestion of data packets.
• The main role of the network layer is to move the packets from sending host to the
receiving host.
Services provided by network layer
PACKETIZING
• The first duty of the network layer is definitely packetizing.
• This means encapsulating the payload (data received from upper layer) in a network-
layer packet at the source and decapsulating the payload from the network-layer
packet at the destination.
• The network layer is responsible for delivery of packets from a sender to a receiver
without changing or using the contents.
ROUTING AND FORWARDING
Routing
• The network layer is responsible for routing the packet from its source to the
destination.
• The network layer is responsible for finding the best one among these possible routes.
• The network layer needs to have some specific strategies for defining the best
route.
• Routing is the concept of applying strategies and running routing protocols to
create the decision-making tables for each router.
• These tables are called routing tables.
Forwarding
• Forwarding can be defined as the action applied by each router when a packet
arrives at one of its interfaces.
• The decision-making table, a router normally uses for applying this action is
called the forwarding table.
• When a router receives a packet from one of its attached networks, it needs to
forward the packet to another attached network.
ERROR CONTROL
 The network layer in the Internet does not directly provide error control.
 It adds a checksum field to the datagram to control any corruption in the header, but
not in the whole datagram.
 This checksum prevents any changes or corruptions in the header of the datagram.
 The Internet uses an auxiliary protocol called ICMP, that provides some kind of error
control if the datagram is discarded or has some unknown information in the header.
FLOW CONTROL
 Flow control regulates the amount of data a source can send without overwhelming
the receiver.
 The network layer in the Internet, however, does not directly provide any flow
control.
 The datagrams are sent by the sender when they are ready, without any attention to
the readiness of the receiver.
 Flow control is provided for most of the upper-layer protocols that use the services of
the network layer, so another level of flow control makes the network layer more
complicated and the whole system less efficient.
CONGESTION CONTROL
 Another issue in a network-layer protocol is congestion control.
 Congestion in the network layer is a situation in which too many datagrams are
present in an area of the Internet.
 Congestion may occur if the number of datagrams sent by source computers is beyond
the capacity of the network or routers.
 In this situation, some routers may drop some of the datagrams.
SECURITY
 Another issue related to communication at the network layer is security.
 To provide security for a connectionless network layer, we need to have another
virtual level that changes the connectionless service to a connection oriented service.
This virtual layer is called IPSec (IP Security).

2.1 SWITCHING
• The technique of transferring the information from one computer network to another
network is known as switching.
• Switching in a computer network is achieved by using switches.
• A switch is a small hardware device which is used to join multiple computers
together with one local area network (LAN).
• Switches are devices capable of creating temporary connections between two or
more devices linked to the switch.
• Switches are used to forward the packets based on MAC addresses.
• A Switch is used to transfer the data only to the device that has been addressed. It
verifies the destination address to route the packet appropriately.
• It is operated in full duplex mode.
• It does not broadcast the message as it works with limited bandwidth.
Advantages of Switching:
 Switch increases the bandwidth of the network.
 It reduces the workload on individual PCs as it sends the information to only that
device which has been addressed.
 It increases the overall performance of the network by reducing the traffic on the
network.
 There will be less frame collision as switch creates the collision domain for each
connection.
Disadvantages of Switching:
 A Switch is more expensive than network bridges.
 A Switch cannot determine the network connectivity issues easily.
 Proper designing and configuration of the switch are required to handle multicast
packets.

Types of Switching Techniques

PACKET SWITCHING

 The packet switching is a switching technique in which the message is sent in one
go, but it is divided into smaller pieces, and they are sent individually.
 The message is split into smaller pieces known as packets, and each packet is given a
unique number to identify its order at the receiving end.
 Every packet contains some information in its header, such as the source address,
destination address and sequence number.
 Packets travel across the network, taking the shortest path possible.
 All the packets are reassembled at the receiving end in the correct order.
 If any packet is missing or corrupted, a message is sent asking for the data to be resent.
 If the packets arrive in the correct order, an acknowledgment message is sent.

Advantages of Packet Switching:

 Cost-effective: In the packet switching technique, switching devices do not require
massive secondary storage to store the packets, so cost is minimized to some extent.
Therefore, the packet switching technique is a cost-effective technique.
 Reliable: If any node is busy, then the packets can be rerouted. This ensures that the
Packet Switching technique provides reliable communication.
 Efficient: Packet Switching is an efficient technique. It does not require any
established path prior to the transmission, and many users can use the same
communication channel simultaneously, hence makes use of available bandwidth very
efficiently.

Disadvantages of Packet Switching:

 The packet switching technique cannot be used in applications that require
low delay and high-quality services.
 The protocols used in a packet switching technique are very complex and require a
high implementation cost.
 If the network is overloaded or corrupted, lost packets must be retransmitted. This can
also lead to the loss of critical information if errors are not recovered.

APPROACHES OF PACKET SWITCHING


There are two approaches to Packet Switching:
 Datagram Packet switching
 Virtual Circuit Switching
Datagram Packet switching
 It is a packet switching technology in which packet is known as a datagram, is
considered as an independent entity.
 Each packet contains the information about the destination and switch uses this
information to forward the packet to the correct destination.
 The packets are reassembled at the receiving end in correct order.
 In Datagram Packet Switching technique, the path is not fixed.
 Intermediate nodes take the routing decisions to forward the packets.
 Datagram Packet Switching is also known as connectionless switching.
 There are no setup or teardown phases.
 Each packet is treated the same by a switch regardless of its source or destination.

In this example, all four packets (or datagrams) belong to the same message, but may travel
different paths to reach their destination.

Routing Table
In this type of network, each switch (or packet switch) has a routing table which is based on
the destination address. The routing tables are dynamic and are updated periodically. The
destination addresses and the corresponding forwarding output ports are recorded in the
tables.
Delay in a datagram network

The packet travels through two switches.
There are three transmission times (3T), three propagation delays (slopes 3t of the lines), and
two waiting times (w1 + w2).
We ignore the processing time in each switch.
Total delay = 3T + 3t + w1 + w2
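For instance, with hypothetical values T = 1 ms, t = 2 ms, w1 = 1 ms and w2 = 3 ms, the formula gives 13 ms:

T, t = 1.0, 2.0          # transmission time and propagation time per hop, in ms (example values)
w1, w2 = 1.0, 3.0        # waiting times at the two switches, in ms (example values)

total_delay = 3 * T + 3 * t + w1 + w2
print(total_delay)       # 13.0 ms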

Virtual Circuit Switching


 Virtual Circuit Switching is also known as connection-oriented switching.
 In the case of Virtual circuit switching, a virtual connection is established before the
messages are sent.
 Call request and call accept packets are used to establish the connection between
sender and receiver.
 In this case, the path is fixed for the duration of a logical connection.

Virtual Circuit Identifier (VCI)


A virtual circuit identifier (VCI) uniquely identifies the connection at a switch.
A VCI, unlike a global address, is a small number that has only switch scope; it is used by a
frame between two switches. When a frame arrives at a switch, it has a VCI; when it leaves,
it has a different VCI.

Virtual Circuit Table


Every switch maintains a table called the Virtual Circuit (VC) table. One entry in the VC
table on a single switch contains the following:
 An incoming interface on which packets for this VC arrive at the switch
 An incoming VCI that identifies packets arriving on that interface
 An outgoing interface on which packets for this VC leave the switch
 An outgoing VCI that will be used for outgoing packets
Example :
Source A sends a frame to Source B through Switch 1, Switch 2 and Switch 3.

Types of Virtual Circuits


There are two broad classes of Virtual Circuits. They are
1. PVC – Permanent Virtual Circuit
 Network Administrator will configure the state
 The virtual circuit is permanent (PVC)
2. SVC – Switched Virtual Circuit
 A host can send messages into the network to cause the state to be established. This is
referred to as signaling.
 A host may set up and delete such a VC dynamically without the involvement of a
network administrator

Delay in Virtual-Circuit Networks

The packet is traveling through two switches (routers).


 There are three transmission times (3T ), three propagation times (3t), data transfer
depicted by the sloping lines, a setup delay (which includes transmission and
propagation in two directions), and a teardown delay (which includes transmission
and propagation in one direction).
Total delay = 3T + 3t + Setup delay + Teardown delay
Difference between virtual circuit switching and Datagram switching

Virtual Circuit Switching | Datagram Switching
Connection oriented | Connectionless
Ensures in-order delivery | Packets may be delivered out of order
No reordering is required | Reordering is required
A dedicated path exists for data transfer | No dedicated path exists for data transfer
All the packets take the same path | All the packets may not take the same path
Resources are allocated on demand using the 1st packet | No resources are allocated
Packet oriented | Packet oriented
Dynamic bandwidth | Dynamic bandwidth
Reliable | Unreliable
Less overhead | Higher overhead
Implemented at the data link layer | Implemented at the network layer
Provides better efficiency than circuit-switched systems | Provides better efficiency than message-switched systems
Examples: X.25, Frame Relay | Example: the Internet

2.2 Internet Protocol

 The Internet Protocol is the key tool used today to build scalable, heterogeneous internetworks.
 IP runs on all the nodes (both hosts and routers) in a collection of networks
 IP defines the infrastructure that allows these nodes and networks to function
as a single logical internetwork.
IP SERVICE MODEL
 Service Model defines the host-to-host services that we want to provide
 The main concern in defining a service model for an internetwork is that we
can provide a host-to-host service only if this service can somehow be
provided over each of the underlying physical networks.
 The Internet Protocol is the key tool used today to build scalable,
heterogeneous internetworks.
 The IP service model can be thought of as having two parts:
• A GLOBAL ADDRESSING SCHEME - which provides a way to
identify all hosts in the internetwork
• A DATAGRAM DELIVERY MODEL – A connectionless model of
data delivery.
IP PACKET FORMAT / IP DATAGRAM FORMAT
 A key part of the IP service model is the type of packets that can be carried.
 The IP datagram consists of a header followed by a number of bytes of data.

FIELD DESCRIPTION
Version Specifies the version of IP. Two versions exists – IPv4 and IPv6.
HLen Specifies the length of the header
TOS An indication of the parameters of the quality of service desired such as Precedence,
(Type of Service) Delay, Throughput and Reliability.
Length Length of the entire datagram, including the header. The maximum size of an IP
datagram is 65,535 (2^16 − 1) bytes
Ident (Identification) Uniquely identifies the packet sequence number. Used for fragmentation and re-
assembly.
Flags Used to control whether routers are allowed to fragment a packet. If a packet is
fragmented , this flag value is 1.If not, flag value is 0.
Offset (Fragmentation Indicates where in the datagram, this fragment belongs. The fragment offset is
offset) measured in units of 8 octets (64 bits). The first fragment has offset zero.
TTL Indicates the maximum time the datagram is allowed to remain in the network. If this
(Time to Live) field contains the value zero, then the datagram must be destroyed.
Protocol Indicates the next level protocol used in the data portion of the datagram
Checksum Used to detect the processing errors introduced into the packet
Source Address The IP address of the original sender of the packet.
Destination Address The IP address of the final destination of the packet.
Options This is optional field. These options may contain values for options such as Security,
Record Route, Time Stamp, etc
Pad Used to ensure that the internet header ends on a 32 bit boundary. The padding is zero.
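A sketch of unpacking the fixed 20-byte part of this header with Python's struct module (options, if present, follow these 20 bytes; the function name is ours):

import struct, socket

def parse_ipv4_header(packet: bytes):
    (ver_hlen, tos, length, ident, flags_offset,
     ttl, protocol, checksum, src, dst) = struct.unpack('!BBHHHBBH4s4s', packet[:20])
    return {
        'version':  ver_hlen >> 4,
        'hlen':     (ver_hlen & 0x0F) * 4,          # header length, converted to bytes
        'tos':      tos,
        'length':   length,
        'ident':    ident,
        'flags':    flags_offset >> 13,
        'offset':   flags_offset & 0x1FFF,          # fragment offset, in units of 8 bytes
        'ttl':      ttl,
        'protocol': protocol,                       # e.g. 6 = TCP, 17 = UDP
        'checksum': checksum,
        'source':      socket.inet_ntoa(src),
        'destination': socket.inet_ntoa(dst),
    }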
IP DATAGRAM - FRAGMENTATION AND REASSEMBLY
Fragmentation :
 Every network type has a maximum transmission unit (MTU), which is the
largest IP datagram that it can carry in a frame.

 Fragmentation of a datagram will only be necessary if the path to the destination
includes a network with a smaller MTU.
 When a host sends an IP datagram, it can choose any size that it wants.
 Fragmentation typically occurs in a router when it receives a datagram that it
wants to forward over a network that has an MTU that is smaller than the
received datagram.
 Each fragment is itself a self-contained IP datagram that is transmitted over a
sequence of physical networks, independent of the other fragments.
 Each IP datagram is re-encapsulated for each physical network over which it
travels.
 For example , if we consider an Ethernet network to accept packets up to 1500
bytes long.
 This leaves two choices for the IP service model:
• Make sure that all IP datagrams are small enough to fit inside one
packet on any network technology
• Provide a means by which packets can be fragmented and reassembled
when they are too big to go over a given network technology.
 Fragmentation produces smaller, valid IP datagrams that can be readily
reassembled into the original datagram upon receipt, independent of the order
of their arrival.
Example:

 The original packet starts at the client; the fragments are reassembled at the
server.
 The value of the identification field is the same in all fragments, as is the value
of the flags field with the more bit set for all fragments except the last.
 Also, the value of the offset field for each fragment is shown.
 Although the fragments arrived out of order at the destination, they can be
correctly reassembled.

 The value of the offset field is always relative to the original datagram.
 Even if each fragment follows a different path and arrives out of order, the
final destination host can reassemble the original datagram from the fragments
received (if none of them is lost) using the following strategy:
1) The first fragment has an offset field value of zero.
2) Divide the length of the first fragment by 8. The second fragment has
an offset value equal to that result.
3) Divide the total length of the first and second fragment by 8. The third
fragment has an offset value equal to that result.
4) Continue the process. The last fragment has its more (M) bit set to 0.

Reassembly:
 Reassembly is done at the receiving host and not at each router.
 To enable these fragments to be reassembled at the receiving host, they all
carry the same identifier in the Ident field.
 This identifier is chosen by the sending host and is intended to be unique
among all the datagrams that might arrive at the destination from this source
over some reasonable time period.
 Since all fragments of the original datagram contain this identifier, the
reassembling host will be able to recognize those fragments that go together.
 For example, if a single fragment is lost, the receiver will still attempt to
reassemble the datagram, and it will eventually give up and have to garbage-
collect the resources that were used to perform the failed reassembly.
 Hosts are now strongly encouraged to perform “path MTU discovery,” a
process by which fragmentation is avoided by sending packets that are small
enough to traverse the link with the smallest MTU in the path from sender to
receiver.
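The offset arithmetic described above can be sketched as follows (the 20-byte header length and 1500-byte MTU are illustrative; for a 4000-byte payload the resulting offsets are 0, 185 and 370):

def fragment(data: bytes, mtu: int, header_len: int = 20):
    """Split the payload so each fragment (header + piece) fits in the MTU."""
    max_payload = (mtu - header_len) // 8 * 8     # payload per fragment: a multiple of 8 bytes
    fragments = []
    for start in range(0, len(data), max_payload):
        piece = data[start:start + max_payload]
        more = 1 if start + max_payload < len(data) else 0
        fragments.append({'offset': start // 8,   # offset field, in 8-byte units
                          'more_bit': more,
                          'payload': piece})
    return fragments

# A 4000-byte payload over a link with a 1500-byte MTU
for f in fragment(b'x' * 4000, mtu=1500):
    print(f['offset'], f['more_bit'], len(f['payload']))
# 0 1 1480, then 185 1 1480, then 370 0 1040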
IP SECURITY
There are three security issues that are particularly applicable to the IP protocol:
(1) Packet Sniffing (2) Packet Modification and (3) IP Spoofing.
Packet Sniffing
 An intruder may intercept an IP packet and make a copy of it.
 Packet sniffing is a passive attack, in which the attacker does not change the
contents of the packet.
 This type of attack is very difficult to detect because the sender and the
receiver may never know that the packet has been copied.
 Although packet sniffing cannot be stopped, encryption of the packet can
make the attacker’s effort useless.
 The attacker may still sniff the packet, but the content is not detectable.
Packet Modification
 The second type of attack is to modify the packet.
 The attacker intercepts the packet, changes its contents, and sends the new
packet to the receiver.
 The receiver believes that the packet is coming from the original sender.
 This type of attack can be detected using a data integrity mechanism.
 The receiver, before opening and using the contents of the message, can use
this mechanism to make sure that the packet has not been changed during the
transmission.
IP Spoofing
 An attacker can masquerade as somebody else and create an IP packet that
carries the source address of another computer.
 An attacker can send an IP packet to a bank pretending that it is coming from
one of the customers.
 This type of attack can be prevented using an origin authentication mechanism.
IP Sec
 The IP packets today can be protected from the previously mentioned attacks
using a protocol called IPSec (IP Security).
 This protocol is used in conjunction with the IP protocol.
 IPSec protocol creates a connection-oriented service between two entities in
which they can exchange IP packets without worrying about the three attacks
such as Packet Sniffing, Packet Modification and IP Spoofing.
 IP Sec provides the following four services:
1) Defining Algorithms and Keys : The two entities that want to create a
secure channel between themselves can agree on some available
algorithms and keys to be used for security purposes.
2) Packet Encryption : The packets exchanged between two parties can
be encrypted for privacy using one of the encryption algorithms and a
shared key agreed upon in the first step. This makes the packet sniffing
attack useless.
3) Data Integrity : Data integrity guarantees that the packet is not
modified during the transmission. If the received packet does not pass
the data integrity test, it is discarded.This prevents the second attack,
packet modification.
4) Origin Authentication : IPSec can authenticate the origin of the packet to be sure that the packet is not created by an imposter. This can prevent IP spoofing attacks.
2.3 IPV4 ADDRESSES

 The identifier used in the IP layer of the TCP/IP protocol suite to identify the
connection of each device to the Internet is called the Internet address or IP address.
 Internet Protocol version 4 (IPv4) is the fourth version in the development of the
Internet Protocol (IP) and the first version of the protocol to be widely deployed.
 IPv4 is described in IETF publication RFC 791, issued in September 1981.
 The IP address is the address of the connection, not the host or the router. An IPv4
address is a 32-bit address that uniquely and universally defines the connection .
 If the device is moved to another network, the IP address may be changed.
 IPv4 addresses are unique in the sense that each address defines one, and only one,
connection to the Internet.
 If a device has two connections to the Internet, via two networks, it has two IPv4
addresses.
 IPv4 addresses are universal in the sense that the addressing system must be accepted
by any host that wants to be connected to the Internet.

IPV4 ADDRESS SPACE


 IPv4 addresses form an address space.
 An address space is the total number of addresses used by the protocol.
 If a protocol uses b bits to define an address, the address space is 2^b because each bit can have two different values (0 or 1).
 IPv4 uses 32-bit addresses, which means that the address space is 2^32, or 4,294,967,296 (more than four billion); in principle, more than four billion devices could be connected to the Internet.
IPV4 ADDRESS NOTATION
There are three common notations to show an IPv4 address:
(i) binary notation (base 2),
(ii) dotted-decimal notation (base 256),
(iii) hexadecimal notation (base 16).

In binary notation, an IPv4 address is displayed as 32 bits. To make the address more
readable, one or more spaces are usually inserted between bytes (8 bits).
In dotted-decimal notation,IPv4 addresses are usually written in decimal form with a decimal
point (dot) separating the bytes. Each number in the dotted-decimal notation is between 0 and
255.
In hexadecimal notation, each hexadecimal digit is equivalent to four bits. This means that a
32-bit address has 8 hexadecimal digits. This notation is often used in network programming.
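As a quick illustration, the following Python snippet (a hypothetical helper using only standard string formatting) converts a dotted-decimal address into its binary and hexadecimal notations:

# Convert a dotted-decimal IPv4 address into binary and hexadecimal notation.
def notations(dotted):
    octets = [int(x) for x in dotted.split(".")]
    binary = " ".join(f"{o:08b}" for o in octets)   # 32 bits, one space per byte
    hexa = "".join(f"{o:02X}" for o in octets)      # 8 hexadecimal digits
    return binary, hexa

b, h = notations("192.168.100.14")
print(b)   # 11000000 10101000 01100100 00001110
print(h)   # C0A8640E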
HIERARCHY IN IPV4 ADDRESSING
In any communication network that involves delivery, the addressing system is
hierarchical.
 A 32-bit IPv4 address is also hierarchical, but divided only into two parts.
 The first part of the address, called the prefix, defines the network(Net ID); the second
part of the address, called the suffix, defines the node (Host ID).
 The prefix length is n bits and the suffix length is (32-n) bits.

 A prefix can be fixed length or variable length.


 The network identifier in the IPv4 was first designed as a fixed-length prefix.
 This scheme is referred to as classful addressing.
 The new scheme, which is referred to as classless addressing, uses a variable
length network prefix.

CATEGORIES OF IPV4 ADDRESSING

There are two broad categories of IPv4 Addressing techniques. They are
 Classful Addressing
 Classless Addressing

CLASSFUL ADDRESSING
 An IPv4 address is 32-bit long(4 bytes).
 An IPv4 address is divided into sub-classes:
Classful Network Architecture

Class A

• In Class A, an IP address is assigned to those networks that contain a large number of hosts.
• The network ID is 8 bits long.
• The host ID is 24 bits long.
• In Class A, the first bit in higher order bits of the first octet is always set to 0
and the remaining 7 bits determine the network ID.
• The 24 bits determine the host ID in any network.
• The total number of networks in Class A = 2^7 = 128 network addresses
• The total number of hosts in Class A = 2^24 - 2 = 16,777,214 host addresses

Class B
• In Class B, an IP address is assigned to those networks that range from small-
sized to large-sized networks.
• The Network ID is 16 bits long.
• The Host ID is 16 bits long.
• In Class B, the higher order bits of the first octet is always set to 10, and
the remaining14 bits determine the network ID.
• The other 16 bits determine the Host ID.
• The total number of networks in Class B = 2^14 = 16,384 network addresses
• The total number of hosts in Class B = 2^16 - 2 = 65,534 host addresses

Class C
• In Class C, an IP address is assigned to only small-sized networks.
• The Network ID is 24 bits long.
• The host ID is 8 bits long.
• In Class C, the higher order bits of the first octet is always set to 110, and the
remaining 21 bits determine the network ID.
• The 8 bits of the host ID determine the host in a network.
• The total number of networks = 2^21 = 2,097,152 network addresses
• The total number of hosts = 2^8 - 2 = 254 host addresses

Class D
• In Class D, an IP address is reserved for multicast addresses.
• It does not possess subnetting.
• The higher order bits of the first octet is always set to 1110, and the remaining
bits determines the host ID in any network.

Class E
• In Class E, an IP address is used for the future use or for the research
and development purposes.
• It does not possess any subnetting.
• The higher order bits of the first octet is always set to 1111, and the remaining
bits determines the host ID in any network.
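The class of a classful address can be read directly from the leading bits of the first octet, as described above; the small Python sketch below (an illustrative helper, not from any library) does exactly that:

# Determine the class of an IPv4 address from the value of its first octet.
def address_class(dotted):
    first = int(dotted.split(".")[0])
    if first < 128:      # leading bit 0
        return "A"
    elif first < 192:    # leading bits 10
        return "B"
    elif first < 224:    # leading bits 110
        return "C"
    elif first < 240:    # leading bits 1110
        return "D"
    else:                # leading bits 1111
        return "E"

print(address_class("14.24.74.0"))       # A
print(address_class("192.168.100.14"))   # C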
Address Depletion in Classful Addressing

 The reason that classful addressing has become obsolete is address depletion.
 Since the addresses were not distributed properly, the Internet was faced with the
problem of the addresses being rapidly used up.
 This results in no more addresses available for organizations and individuals that
needed to be connected to the Internet.
 To understand the problem, let us think about class A.
 This class can be assigned to only 128 organizations in the world, but each
organization needs to have a single network with 16,777,216 nodes .
 Since there may be only a few organizations that are this large, most of the addresses
in this class were wasted (unused).
 Class B addresses were designed for midsize organizations, but many of the addresses
in this class also remained unused.
 Class C addresses have a completely different flaw in design. The number of
addresses that can be used in each network (256) was so small that most companies
were not comfortable using a block in this address class.
 Class E addresses were almost never used, wasting the whole class.
Advantage of Classful Addressing

 Although classful addressing had several problems and became obsolete, it had one
advantage.
 Given an address, we can easily find the class of the address and, since the prefix
length for each class is fixed, we can find the prefix length immediately.
 In other words, the prefix length in classful addressing is inherent in the address; no
extra information is needed to extract the prefix and the suffix.

Subnetting and Supernetting

To alleviate address depletion, two strategies were proposed and implemented:


(i) Subnetting and (ii) Supernetting.

Subnetting
 In subnetting, a class A or class B block is divided into several subnets.
 Each subnet has a larger prefix length than the original network.
 For example, if a network in class A is divided into four subnets, each subnet has a
prefix of nsub = 10.
 At the same time, if all of the addresses in a network are not used, subnetting allows
the addresses to be divided among several organizations.
CLASSLESS ADDRESSING
 In 1996, the Internet authorities announced a new architecture called classless
addressing.
 In classless addressing, variable-length blocks are used that belong to no classes.
 We can have a block of 1 address, 2 addresses, 4 addresses, 128 addresses, and so on.
 In classless addressing, the whole address space is divided into variable length
blocks.
 The prefix in an address defines the block (network); the suffix defines
the node (device).
 Theoretically, we can have a block of 2^0, 2^1, 2^2, ..., 2^32 addresses.
 The number of addresses in a block needs to be a power of 2. An organization
can be granted one block of addresses.

 The prefix length in classless addressing is variable.


 We can have a prefix length that ranges from 0 to 32.
 The size of the network is inversely proportional to the length of the prefix.
 A small prefix means a larger network; a large prefix means a smaller network.
 The idea of classless addressing can be easily applied to classful addressing.
 An address in class A can be thought of as a classless address in which the prefix
length is 8.
 An address in class B can be thought of as a classless address in which the prefix is
16, and so on. In other words, classful addressing is a special case of classless
addressing.
Notation used in Classless Addressing
 The notation used in classless addressing is informally referred to as slash
notation and formally as classless interdomain routing or CIDR.

For example , 192.168.100.14 /24 represents the IP address 192.168.100.14 and, its subnet
mask 255.255.255.0, which has 24 leading 1-bits.
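Python's standard ipaddress module understands slash notation; the short sketch below (using the example address above) extracts the network, the subnet mask, and the prefix length:

# Interpret an address written in slash (CIDR) notation.
import ipaddress

iface = ipaddress.ip_interface("192.168.100.14/24")
print(iface.network)             # 192.168.100.0/24
print(iface.network.netmask)     # 255.255.255.0
print(iface.network.prefixlen)   # 24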
Address Aggregation
 One of the advantages of the CIDR strategy is address aggregation
(sometimes called address summarization or route summarization).
 When blocks of addresses are combined to create a larger block, routing can be done
based on the prefix of the larger block.
 ICANN assigns a large block of addresses to an ISP.
 Each ISP in turn divides its assigned block into smaller subblocks and grants the
subblocks to its customers.
Special Addresses in IPv4
 There are five special addresses that are used for special purposes: this-host address,
limited-broadcastaddress, loopback address, private addresses, and multicast
addresses.
This-host Address
 The only address in the block 0.0.0.0/32 is called the this-host address.
 It is used whenever a host needs to send an IP datagram but it does
not know its own address to use as the source address.
Limited-broadcast Address
 The only address in the block 255.255.255.255/32 is called
the limited- broadcast address.
 It is used whenever a router or a host needs to send a datagram to all
devices in a network.
 The routers in the network, however, block the packet having this
address as the destination;the packet cannot travel outside the
network.
Loopback Address
 The block 127.0.0.0/8 is called the loopback address.
 A packet with one of the addresses in this block as the
destination address never leaves the host; it will remain in the host.
Private Addresses
 Four blocks are assigned as private addresses: 10.0.0.0/8,
172.16.0.0/12, 192.168.0.0/16, and 169.254.0.0/16.

Multicast Addresses
 The block 224.0.0.0/4 is reserved for multicast addresses.

2.4 Subnetting

More levels of hierarchy can be created using subnetting. An organization (or an


ISP) that is granted a range of addresses may divide the range into several subranges
and assign each subrange to a subnetwork (or subnet). Note that nothing stops the
organization from creating more levels. A subnetwork can be divided into several sub-
subnetworks. A sub-subnetwork can be divided into several sub-sub-subnetworks, and so
on.

Designing Subnets
The subnetworks in a network should be carefully designed to enable the
routing of packets. We assume the total number of addresses granted to the
organization is N, the prefix length is n, the assigned number of addresses to each
subnetwork is Nsub, and the prefix length for each subnetwork is nsub. Then the
following steps need to be carefully followed to guarantee the proper operation of the
subnetworks.
 The number of addresses in each subnetwork should be a power of 2.
 The prefix length for each subnetwork should be found using the
following formula:

nsub = 32 − log2 Nsub

 The starting address in each subnetwork should be divisible by the number


of addresses in that subnetwork. This can be achieved if we first assign addresses to
larger subnetworks.
Finding Information about Each Subnetwork
After designing the subnetworks, the information about each subnetwork, such
as first and last address, can be found using the process we described to find
the information about each network in the Internet.
Example
An organization is granted a block of addresses with the beginning address
14.24.74.0/24. The organization needs to have three subblocks of addresses to use in its
three subnets: one subblock of 10 addresses, one subblock of 60 addresses, and one
subblock of 120 addresses. Design the subblocks.
Solution
There are 2^(32-24) = 256 addresses in this block. The first address is
14.24.74.0/24; the last address is 14.24.74.255/24. To satisfy the third
requirement, we assign addresses to subblocks, starting with the largest and ending with
the smallest one.

a. The number of addresses in the largest subblock, which requires 120 addresses,
is not a power of 2. We allocate 128 addresses. The subnet mask for this subnet
can be found as n1 = 32 − log2 128 = 25. The first address in this block is
14.24.74.0/25; the last address is 14.24.74.127/25.

b. The number of addresses in the second largest subblock, which requires


60 addresses, is not a power of 2 either. We allocate 64 addresses. The
subnet mask for this subnet can be found as n2 = 32 − log2 64 = 26. The first
address in this block is 14.24.74.128/26; the last address is 14.24.74.191/26.

c. The number of addresses in the smallest subblock, which requires 10 addresses, is


not a power of 2 either. We allocate 16 addresses. The subnet mask for this
subnet can be found as n3 = 32 − log2 16 = 28. The first address in this block is
14.24.74.192/28; the last address is 14.24.74.207/28.
If we add all addresses in the previous subblocks, the result is 208 addresses,
which means 48 addresses are left in reserve. The first address in this range is
14.24.74.208. The last address is 14.24.74.255. We don’t know about the prefix length yet.
Figure 7.10 shows the configuration of blocks. It shows the first address in each block.
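One possible sketch of this design procedure in Python is shown below (the function name and structure are illustrative). It rounds each request up to a power of 2, applies n_sub = 32 - log2(N_sub), and allocates the largest blocks first, reproducing the three subblocks of the example:

# Sketch of the subnet design procedure from the example above.
import math
import ipaddress

def design_subnets(block, requests):
    net = ipaddress.ip_network(block)
    next_addr = int(net.network_address)
    subnets = []
    for need in sorted(requests, reverse=True):   # assign largest blocks first
        size = 2 ** math.ceil(math.log2(need))    # round up to a power of 2
        prefix = 32 - int(math.log2(size))        # n_sub = 32 - log2(N_sub)
        subnets.append(ipaddress.ip_network((next_addr, prefix)))
        next_addr += size
    return subnets

for s in design_subnets("14.24.74.0/24", [10, 60, 120]):
    print(s, "first:", s.network_address, "last:", s.broadcast_address)
# 14.24.74.0/25, 14.24.74.128/26 and 14.24.74.192/28, as in the example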
2.5 IPV6 - NEXT GENERATION IP

• IPv6 was evolved to solve address space problem and offers rich set
of services.
• Some hosts and routers will run IPv4 only, some will run IPv4 and IPv6 and
some will run IPv6 only.

DRAWBACKS OF IPV4
• Despite subnetting and CIDR, address depletion is still a long-term problem.
• Internet must accommodate real-time audio and video transmission that
requires minimum delay strategies and reservation of resources.
• Internet must provide encryption and authentication of data for some
applications

FEATURES OF IPV6
1. Better header format - IPv6 uses a new header format in which options are
separated from the base header and inserted, when needed, between the base
header and the data. This simplifies and speeds up the routing process because
most of the options do not need to be checked by routers.
2. New options - IPv6 has new options to allow for additional functionalities.
3. Allowance for extension - IPv6 is designed to allow the extension of the
protocol if required by new technologies or applications.
4. Support for resource allocation - In IPv6, the type-of-service field has been
removed, but two new fields, traffic class and flow label, have been added to
enable the source to request special handling of the packet. This mechanism
can be used to support traffic such as real-time audio and video.

Additional Features :
1. Need to accommodate scalable routing and addressing
2. Support for real-time services
3. Security support
4. Autoconfiguration -
The ability of hosts to automatically configure themselves with
such information as their own IP address and domain name.
5. Enhanced routing functionality, including support for mobile hosts
6. Transition from ipv4 to ipv6

ADDRESS SPACE ALLOCATION OF IPV6


 IPv6 provides a 128-bit address space to handle up to 3.4 × 10^38 nodes.
 IPv6 uses classless addressing, but classification is based on MSBs.
 The address space is subdivided in various ways based on the leading bits.
 The current assignment of prefixes is listed in Table
 A node may be assigned an “IPv4-compatible IPv6 address” by zero-extending a 32-bit IPv4 address to 128 bits.
 A node that is only capable of understanding IPv4 can be assigned an
“IPv4- mapped IPv6 address” by prefixing the 32-bit IPv4 address
with 2 bytes of all 1s and then zero-extending the result to 128 bits.

GLOBAL UNICAST
* Large chunks (87%) of address space are left unassigned for future use.
* IPv6 defines two types of local addresses for private networks.
o Link local - enables a host to construct an address that
need not be globally unique.
o Site local - allows valid local address for use in a
isolated site with several subnets.
* Reserved addresses start with prefix of eight 0's.

o Unspecified address is used when a host does not know its address
o Loopback address is used for testing purposes before connecting
o Compatible address is used when IPv6 hosts uses IPv4 network
o Mapped address is used when a IPv6 host communicates with a IPv4
host
* IPv6 defines anycast address, assigned to a set of interfaces.
* Packet with anycast address is delivered to only one of the nearest interface.

ADDRESS NOTATION OF IPV6


* Standard representation of IPv6 address is x : x : x : x : x : x : x : x
where x is a16-bit hexadecimal address separated by colon (:).
For example,
47CD : 1234 : 4422 : AC02 : 0022 : 1234 : A456 : 0124
* IPv6 address with contiguous 0 bytes can be written compactly.
For example,
47CD : 0000 : 0000 : 0000 : 0000 : 0000 : A456 : 0124 → 47CD : : A456
: 0124
* IPv4 address is mapped to IPv6 address by prefixing the 32-bit IPv4
address with 2 bytes of 1s and then zero-extending the result to 128
bits.
For example,
128.96.33.81 → : : FFFF : 128.96.33.81
Like IPv4 prefixes, IPv6 prefixes are written in slash (CIDR) notation.
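The compact and IPv4-mapped notations can be checked with Python's standard ipaddress module, as in the illustrative sketch below:

# Show the compressed (zero-compressed) and exploded forms of an IPv6
# address, and recover the IPv4 address embedded in an IPv4-mapped address.
import ipaddress

addr = ipaddress.ip_address("47CD:0000:0000:0000:0000:0000:A456:0124")
print(addr.compressed)      # 47cd::a456:124  (zero fields and leading zeros dropped)
print(addr.exploded)        # 47cd:0000:0000:0000:0000:0000:a456:0124

mapped = ipaddress.ip_address("::ffff:128.96.33.81")
print(mapped.ipv4_mapped)   # 128.96.33.81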
ADDRESS AGGREGATION OF IPV6
* IPv6 provides aggregation of routing information to reduce the
burden on routers.
* Aggregation is done by assigning prefixes at continental level.
* For example, if all addresses in Europe have a common prefix, then
routers in other continents would need one routing table entry for all
networks in Europe.

 Prefix - All addresses in the same continent have a common prefix


 RegistryID ― identifies the continent
 ProviderID ― identifies the provider for Internet access, i.e., ISP.
 SubscriberID ― specifies the subscriber identifier
 SubnetID ― contains subnet of the subscriber.
 InterfaceID ―contains link level or physical address.
PACKET FORMAT OF IPV6
* IPv6 base header is 40 bytes long.

 Version — specifies the IP version, i.e., 6.


 Traffic Class — defines priority of the packet with respect
to traffic congestion. It is either congestion-controlled or non-
congestion controlled
 Flow Label — provides special handling for a particular flow of
data. Router handles different flows with the help of a flow table.
 Payload Len — gives length of the packet, excluding IPv6 header.
 Next Header — Options are specified as a header following
IP header. NextHeader contains a pointer to optional headers.
 Hop Limit — Gives the TTL value of a packet.
 Source Address / Destination Address — 16-byte addresses of
source and destination host
Extension Headers
* Extension header provides greater functionality to IPv6.
* Base header may be followed by six extension headers.
* Each extension header contains a NextHeader field to identify the
header following it.

 Hop-by-Hop — source host passes information to all routers visited by the


packet.
 Destination — source host information is passed to the destination only.
 Source Routing — routing information provided by the source host.
 Fragmentation — In IPv6, only the source host can fragment.
Source uses a path MTU discovery technique to find
smallest MTU on the path.
 Authentication — used to validate the sender and ensures data integrity.
 ESP (Encrypted Security Payload) — provides confidentiality against
eavesdropping.
ADVANCED CAPABILITIES OF IPV6
* Auto Configuration — Auto or stateless configuration of IP address
to hosts without the need for a DHCP server, i.e., plug and play.
* Advanced Routing — Enhanced routing support for mobile hosts is provided.
* Additional Functions ― Enhanced routing functionality with
support for mobile hosts.
* Security ― Encryption and authentication options provide
confidentiality and integrity.
* Resource allocation ― Flow label enables the source to
request special handling of real-time audio and video packets

ADVANTAGES OF IPV6
* Address space ― IPv6 uses 128-bit address whereas IPv4 uses 32-bit
address. Hence IPv6 has huge address space whereas IPv4 faces
address shortage problem.
* Header format ― Unlike IPv4, optional headers are separated from
base header in IPv6. Each router thus need not process unwanted
addition information.
* Extensible ― Unassigned IPv6 addresses can accommodate needs of
future technologies.

Dual-Stack Operation and Tunneling


* In dual-stack, nodes run both IPv6 and IPv4, uses Version field to
decide which stack should process an arriving packet.
* IPv6 packet is encapsulated with an IPv4 packet as it travels through
an IPv4 network. This is known as tunneling and packet contains
tunnel endpoint as its destination address.
Network Address Translation
* NAT enables hosts on a network to use Internet with local addresses.
* Addresses reserved for internal (private) use include the range 172.16.0.0 to 172.31.255.255 (along with 10.0.0.0/8 and 192.168.0.0/16).
* Organization must have single connection to the Internet through a
router that runs the NAT software.

2.6 ICMPV4 - INTERNET CONTROL MESSAGE PROTOCOL VERSION 4

 ICMP is a network-layer protocol.


 It is a companion to the IP protocol.
 Internet Control Message Protocol (ICMP) defines a collection of error
messages that are sent back to the source host whenever a router or host
is unable to process an IP datagram successfully.

ICMP MESSAGE TYPES


 ICMP messages are divided into two broad categories: error-reporting
messages and query messages.
 The error-reporting messages report problems that a router or a host
(destination) may encounter when it processes an IP packet.
 The query messages help a host or a network manager get specific
information from a router or another host.

ICMP Error – Reporting Messages


 Destination Unreachable―When a router cannot route a datagram, the
datagram is discarded and sends a destination unreachable message to
source host.
 Source Quench―When a router or host discards a datagram due to
congestion, it sends a source-quench message to the source host. This
message acts as flow control.
 Time Exceeded―Router discards a datagram when TTL field becomes 0
and a time exceeded message is sent to the source host.
 Parameter Problem―If a router discovers ambiguous or missing value in
any field of the datagram, it discards the datagram and sends parameter
problem message to source.
 Redirection―Redirect messages are sent by the default router to inform the
source host to update its forwarding table when the packet is routed on a
wrong path.

ICMP Query Messages

 Echo Request & Reply―Combination of echo request and reply messages


determines whether two systems communicate or not.
 Timestamp Request & Reply―Two machines can use the timestamp request and
reply messages to determine the round-trip time (RTT).
 Address Mask Request & Reply―A host to obtain its subnet mask, sends an
address mask request message to the router, which responds with an address mask
reply message.
 Router Solicitation/Advertisement―A host broadcasts a router solicitation
message to know about the router. Router broadcasts its routing information with
router advertisement message.
ICMP MESSAGE FORMAT
 An ICMP message has an 8-byte header and a variable-size data section.

Type : Defines the type of the message
Code : Specifies the reason for the particular message type
Checksum : Used for error detection
Rest of the header : Specific for each message type
Data : Used to carry information
Identifier : Used to match the request with the reply
Sequence Number : Sequence number of the ICMP packet

ICMP DEBUGGING TOOLS


Two tools are used for debugging purpose. They are (1) Ping (2) Traceroute
Ping
 The ping program is used to find if a host is alive and responding.
 The source host sends ICMP echo-request messages; the destination, if
alive, responds with ICMP echo-reply messages.
 The ping program sets the identifier field in the echo-request and echo-
reply message and starts the sequence number from 0; this number is
incremented by 1 each time a new message is sent.
 The ping program can calculate the round-trip time.
 It inserts the sending time in the data section of the message.
 When the packet arrives, it subtracts the arrival time from the departure
time to get the round-trip time (RTT).
$ ping google.com
Traceroute or Tracert
 The traceroute program in UNIX or tracert in Windows can be used to trace the
path of a packet from a source to the destination.
 It can find the IP addresses of all the routers that are visited along the path.
 The program is usually set to check for the maximum of 30 hops (routers) to be
visited.
 The number of hops in the Internet is normally less than this.
$ traceroute google.com
2.7 DHCP – DYNAMIC HOST CONFIGURATION PROTOCOL
 The dynamic host configuration protocol is used to simplify the installation and maintenance
of networked computers.
 DHCP is derived from an earlier protocol called BOOTP.
 Ethernet addresses are configured into the network adaptor by the manufacturer and they are unique.
 IP addresses must be unique on a given internetwork but also must reflect the structure
of the internetwork
 Most host Operating Systems provide a way to manually configure the IP
information for the host
 Drawbacks of manual configuration :
1. A lot of work to configure all the hosts in a large network
2. Configuration process is error-prone
 It is necessary to ensure that every host gets the correct network number and that no two
hosts receive the same IP address.
 For these reasons, automated configuration methods are required.
 The primary method uses a protocol known as the Dynamic Host Configuration Protocol
(DHCP).
 The main goal of DHCP is to minimize the amount of manual configuration required
for a host.
 If a new computer is connected to a network, DHCP can provide it with all the necessary
information for full system integration into the network.
 DHCP is based on a client/server model.
 DHCP clients send a request to a DHCP server to which the server responds with an IP
address
 DHCP server is responsible for providing configuration information to hosts.
 There is at least one DHCP server for an administrative domain.
 The DHCP server can function just as a centralized repository for host configuration
information.
 The DHCP server maintains a pool of available addresses that it hands out to hosts on
demand

 A newly booted or attached host sends a DHCPDISCOVER message to a special IP address (255.255.255.255), which is an IP broadcast address.
 This means it will be received by all hosts and routers on that network.
 DHCP uses the concept of a relay agent. There is at least one relay agent on each
network.
 DHCP relay agent is configured with the IP address of the DHCP server.
 When a relay agent receives a DHCPDISCOVER message, it unicasts it to the DHCP
server and awaits the response, which it will then send back to the requesting client.

DHCP Message Format


• A DHCP packet is actually sent using a protocol called the User
Datagram Protocol (UDP).
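As a rough illustration of why broadcast delivery is needed, the sketch below (simplified; the payload is a placeholder, not a real DHCPDISCOVER message) shows a client that has no IP address yet broadcasting over UDP to 255.255.255.255 on the well-known DHCP ports (server port 67, client port 68):

# Illustrative only: broadcast a placeholder "discover" message the way a
# DHCP client would. Binding to port 68 normally requires administrator
# privileges, and a real client would build a proper DHCP message.
import socket

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
sock.bind(("0.0.0.0", 68))                      # client side of the exchange
sock.sendto(b"<DHCPDISCOVER payload>", ("255.255.255.255", 67))
sock.close()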
2.8 ADDRESS RESOLUTION PROTOCOL (ARP)

 ARP stands for Address Resolution Protocol.


 ARP operates at the boundary between the data link layer and the network layer.
 ARP is used to convert an IP address (network/logical address) into a MAC address (hardware/physical address).
 The computer programs/applications use logical address (IP address) to
send/receive messages, however the actual communication happens over the

physical address (MAC address).


 To send a datagram over a network, we need both the logical and physical address.
 IP addresses are made up of 32 bits whereas MAC addresses are made up of 48
bits.
 ARP enables each host to build a table of IP address and corresponding
physical address.
 ARP relies on broadcast support from physical networks.
 The Address Resolution Protocol is a request and response protocol.
 The two types of ARP messages are:
1. ARP request
2. ARP reply
ARP Operation
 Each host maintains a cache table in which MAC addresses are mapped to IP addresses.
 When a host wants to send an IP datagram to another host, it first checks for a mapping in the cache table.
 If no mapping is found, it needs to invoke the Address Resolution Protocol over the network.
 It does this by broadcasting an ARP query onto the network.
 The query contains the target IP address.
 Each host receives the query and checks whether it matches its own IP address.
 If it does match, the host sends a response message that contains its link-layer address (MAC address) back to the originator of the query.
 The originator adds the information contained in this response to its ARP table.
Example,
To determine system B’s physical (MAC) address, system A broadcasts an ARP
request containing B’s IP address to all machines on its network.

 All nodes except the destination discard the packet but update their ARP tables.
 The destination host (System B) constructs an ARP response packet.
 The response is unicast and sent back to the source host (System A).
 System A stores the target's logical and physical address pair in its ARP table from the ARP response.
 If the target node does not exist on the same network, the ARP request is sent to the default router.
ARP Packet

1. Hardware type: This is a 16-bit field defining the type of the network on which
ARP is running. Ethernet is given the type 1.
2. Protocol type: This is a 16-bit field defining the protocol. The value of this field
for the IPv4 protocol is 0800H.
3. Hardware length: This is an 8-bit field defining the length of the physical
address in bytes. For Ethernet the value is 6.
4. Protocol length: This is an 8-bit field defining the length of the logical address in
bytes. For the IPv4 protocol the value is 4.
5. Operation: This is a 16-bit field defining the type of packet. Packet types are
ARP request (1) and ARP reply (2).
6. Sender hardware address: This is a variable-length field defining the physical
address of the sender. For example, for Ethernet this field is 6 bytes long.
7. Sender protocol address: This is also a variable-length field defining the logical
address of the sender. For the IP protocol, this field is 4 bytes long.
8. Target hardware address: This is a variable-length field defining the physical
address of the target. For Ethernet this field is 6 bytes long. For an ARP request
message, this field is all 0s because the sender does not know the physical
address of the target.
9. Target protocol address: This is also a variable-length field defining the logical
address of the target. For the IPv4 protocol, this field is 4 bytes long.
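Using the field layout above, an ARP request for Ethernet/IPv4 can be packed into a 28-byte message; the Python sketch below is illustrative (the MAC and IP values are made up):

# Pack an ARP request for Ethernet/IPv4 following the field layout above.
import socket
import struct

def arp_request(sender_mac, sender_ip, target_ip):
    return struct.pack(
        "!HHBBH6s4s6s4s",
        1,                                             # hardware type: Ethernet
        0x0800,                                        # protocol type: IPv4
        6,                                             # hardware address length
        4,                                             # protocol address length
        1,                                             # operation: ARP request
        bytes.fromhex(sender_mac.replace(":", "")),    # sender hardware address
        socket.inet_aton(sender_ip),                   # sender protocol address
        b"\x00" * 6,                                   # target MAC unknown: all 0s
        socket.inet_aton(target_ip),                   # target protocol address
    )

pkt = arp_request("aa:bb:cc:dd:ee:ff", "192.168.1.10", "192.168.1.20")
print(len(pkt), "bytes")   # 28 bytes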

2.9 RARP – Reverse ARP

Reverse Address Resolution protocol (RARP) allows a host to convert its


MAC address to the corresponding IP address. Reverse Address Resolution Protocol (RARP) is
a protocol a physical machine in a local area network (LAN) can use to request its IP address. It
does this by sending the device's physical address to a specialized RARP server that is on the
same LAN and is actively listening for RARP requests.

How does RARP Work


• A network administrator creates a table in a RARP server that maps physical interface or
media access control (MAC) addresses to corresponding IP addresses.
• This table is referenced by devices seeking to dynamically learn their IP address.
• When a new RARP-enabled device connects, its RARP client program sends its MAC
address to the RARP server for receiving an IP address.
• The RARP request is sent as a data link layer broadcast, so it's crucial for the RARP
server to be on the same LAN as the devices requesting IP address information.
The general RARP process flow follows these steps:

 Device 1 connects to the local network and sends an RARP broadcast to all devices on
the subnet. In the RARP broadcast, the device sends its physical MAC address and
requests an IP address it can use.
 Because a broadcast is sent, device 2 receives the broadcast request. However, since it is
not a RARP server, device 2 ignores the request.
 The broadcast message also reaches the RARP server. The server processes the packet
and attempts to find device 1's MAC address in the RARP lookup table. If one is found,
the RARP server returns the IP address assigned to the device. In this case, the IP address
is 51.100.102.

UNIT IV ROUTING

Routing and protocols: Unicast routing - Distance Vector Routing - RIP - Link State
Routing - OSPF - Path-vector routing - BGP - Multicast Routing: DVMRP - PIM.

4.1 Routing and protocols

• A host or a router has a routing table with an entry for each destination, or a
combination of destinations, to route IP packets. Routing table can be either static or
dynamic.
• A static routing table contains information entered manually. The administrator enters
the route for each destination into the table.
• Dynamic routing table is updated periodically by using one of the dynamic routing
protocols such as RIP, OSPF, or BGP.
• The main function of the network layer is to route packets from source to destination.
To accomplish this, a route through the network must be selected; generally more than one
route is possible. The selection of a route is based on some performance
criterion. The simplest criterion is to choose the shortest route through the network.
• The shortest route means a route that passes through the least number of nodes. This
shortest route selection results in least number of hops per packet. A routing algorithm
is designed to perform this task. The routing algorithm is a part of network layer
software.

4.1.1 Properties of routing algorithm


Certain properties which are desirable in a routing algorithm are - Correctness,
simplicity, robustness, stability, fairness, optimality and efficiency.
1. Correctness and simplicity are self-explanatory.
2. Robustness means the ability to cope with changes in the topology and traffic without
requiring all jobs in hosts to be aborted and network to be rebooted every time.
3. Stability refers to equilibrium state of algorithm. It is the technique that reacts to
changing conditions such as congestions. Under any conditions the network must not
react too slow or experience unstable swings from one extreme to another.
4. Some performance criteria may favor the exchange of data packets between nearby
stations and discourage the exchange between distant stations. Some compromise is
needed between fairness and optimality.

4.1.2 Routing algorithm classification


Routing algorithms can be classified in several ways. Based on their responsiveness,
they can be classified into two types -
1. Static (non-adaptive) Routing Algorithms.
2. Dynamic (adaptive) Routing Algorithms.


1. Static (non-adaptive) routing algorithms


In static routing the network topology determines the initial paths. The
precalculated paths are then loaded to the routing table and are fixed for a longer
period. Static routing is suitable for small networks but becomes cumbersome for bigger networks.
The disadvantage of static routing is its inability to respond quickly to network failure.
2. Dynamic (Adaptive) routing algorithms
Dynamic routing algorithms change their routing decision if there is change in
topology, traffic. Each router continuously checks the network status by communicating
with neighbors. Thus a change in network topology is eventually propagated to all the
routers. Based on this information gathered, each router computes the suitable path to
the destination.
The disadvantage of dynamic routing is its complexity in the router.

4.1.3 Routing tables


Once the routing decision is made, this information is stored in the routing table so
that the router knows how to forward a packet. In virtual-circuit packet switching, the
routing table maps each incoming virtual-circuit number to an outgoing virtual-circuit
number and the output port to which the packet is to be forwarded. In datagram networks,
the routing table contains the next hop to which to forward the packet, based on the
destination address.

4.1.4 Advantages and Disadvantages of Static Routing


Advantages
1. Minimal CPU /Memory overhead.
2. Granular control on how traffic is routed.
3. Simple to configure and maintain.
4. Secure as only defined routes can be accessed.
5. Bandwidth is not used for sending routing updates.
Disadvantages
1. Manual update of routes after changes
2. Explicit addition of routes for all networks
3. Impractical on large network.

4.1.5 Advantages and Disadvantages of Dynamic Routing


Advantages
1. Simpler to configure on larger networks.
2. Will dynamically choose a different (or better) route if a link goes down.
3. Ability to load balance between multiple links.
Disadvantages
1. Updates are shared between routers, thus consuming bandwidth.
2. Routing protocols put additional load on router CPU /RAM.
3. The choice of the ''best route" is in the hands of the routing protocol, and not the
network administrator.


4.1.5 Difference between Static and Dynamic Routing:


1. In static routing, routes are user-defined; in dynamic routing, routes are updated according to the topology.
2. Static routing does not use complex routing algorithms; dynamic routing uses complex routing algorithms.
3. Static routing provides more security; dynamic routing provides less security.
4. Static routing is manual; dynamic routing is automated.
5. Static routing is implemented in small networks; dynamic routing is implemented in large networks.
6. In static routing, additional resources are not required; in dynamic routing, additional resources are required.
7. In static routing, failure of a link disrupts the rerouting; in dynamic routing, failure of a link does not interrupt the rerouting.
8. Less bandwidth is required in static routing; more bandwidth is required in dynamic routing.
9. Static routing is difficult to configure; dynamic routing is easy to configure.
10. Another name for static routing is non-adaptive routing; another name for dynamic routing is adaptive routing.
4.1.6 Design Goals
Routing algorithms often have one or more of the following design goals:
1. Optimality
2. Simplicity and low overhead
3. Robustness and stability
4. Rapid convergence
5. Flexibility.

1. Optimality
Optimality refers to the ability of the routing algorithm to select the best route.
The best route depends on the metrics and metric weightings used to make the
calculation. For example, one routing algorithm might use number of hops and delay,
but might weight delay more heavily in the calculation. Naturally, routing protocols
must strictly define their metric calculation algorithms.


2. Simplicity
Routing algorithms are also designed to be as simple as possible. In other words,
the routing algorithm must offer its functionality efficiently, with a minimum of
software and utilization overhead. Efficiency is particularly important when the
software implementing the routing algorithm must run on a computer with limited
physical resources.
3. Robustness
Routing algorithms must be robust. In other words, they should perform
correctly in the face of unusual or unforeseen circumstances such as hardware failures,
high load conditions and incorrect implementations. Because routers are located at
network junction points, they can cause considerable problems when they fail. The best
routing algorithms are often those that have withstood the test of time and proven
stable under a variety of network conditions.
4. Rapid convergence
Routing algorithms must converge rapidly. Convergence is the process of
agreement, by all routers, on optimal routes. When a network event causes routes to
either go down or become available, routers distribute routing update messages.
Routing update messages permeate networks, stimulating recalculation of optimal
routes and eventually causing all routers to agree on these routes. Routing algorithms
that converge slowly can cause routing loops or network outages.
5. Flexibility
Routing algorithms should also be flexible. In other words, routing algorithms
should quickly and accurately adapt to a variety of network circumstances. For example,
assume that a network segment has gone down. Many routing algorithms, on becoming
aware of this problem, will quickly select the next-best path for all routes normally
using that segment. Routing algorithms can be programmed to adapt to changes in
network bandwidth, router queue size, network delay, and other variables.

4.1.7 Optimality Principle


• The optimality principle states that if router J is on the optimal path from router I
to router K, then the optimal path from J to K also falls along the same route.
• To see why, let the route from I to J be r1 and the rest of the route (from J to K)
be r2. If a route better than r2 existed from J to K, it could be concatenated with
r1 to improve the route from I to K, contradicting the optimality of r1r2.

Following Fig. shows the subnet and sink tree with distance metric is measured as
the number of hops.
 Sink tree is not necessarily unique, other trees with the same path lengths
may exist.
 Sink tree does not contain any loops, so each packet will be delivered within a
finite and bounded number of hops.


4.2 UNICAST ROUTING


 Routing is the process of selecting best paths in a network.
 In unicast routing, a packet is routed, hop by hop, from its source to its
destination by the help of forwarding tables.
 Routing a packet from its source to its destination means routing the packet from
a source router (the default router of the source host) to a destination router (the
router connected to the destination network).
 The source host needs no forwarding table because it delivers its packet to the
default router in its local network.
 The destination host needs no forwarding table either because it receives the
packet from its default router in its local network.
 Only the intermediate routers in the networks need forwarding tables.

NETWORK AS A GRAPH
 The Figure below shows a graph representing a network.

 The nodes of the graph, labeled A through G, may be hosts, switches, routers, or
networks.
 The edges of the graph correspond to the network links.
 Each edge has an associated cost.


 The basic problem of routing is to find the lowest-cost path between any two
nodes, where the cost of a path equals the sum of the costs of all the edges that
make up the path.
 This static approach has several problems:
 It does not deal with node or link failures.
 It does not consider the addition of new nodes or links.
 It implies that edge costs cannot change.
 For these reasons, routing is achieved by running routing protocols among the
nodes.
 These protocols provide a distributed, dynamic way to solve the problem of
finding the lowest-cost path in the presence of link and node failures and
changing edge costs.

UNICAST ROUTING ALGORITHMS

 There are three main classes of routing protocols:


1) Distance Vector Routing Algorithm – Routing Information Protocol
2) Link State Routing Algorithm – Open Shortest Path First Protocol
3) Path-Vector Routing Algorithm - Border Gateway Protocol
DISTANCE VECTOR ROUTING
ROUTING INFORMATION PROTOCOL (RIP)
BELLMAN - FORD ALGORITHM

4.3 DISTANCE VECTOR ROUTING (DVR)


 Distance vector routing is distributed, i.e., algorithm is run on all nodes.
 Each node knows the distance (cost) to each of its directly connected neighbors.
 Nodes construct a vector (Destination, Cost, NextHop) and distributes to its
neighbors.
 Nodes compute routing table of minimum distance to every other node via
NextHop using information obtained from its neighbors.

Initial State

 In given network, cost of each link is 1 hop.


 Each node sets a distance of 1 (hop) to its immediate neighbor and
cost to itself as 0.
 Distance for non-neighbors is marked as unreachable with value ∞ (infinity).
 For node A, nodes B, C, E and F are reachable, whereas nodes D and G
are unreachable.


 The initial table for all the nodes are given below

 Each node sends its initial table (distance vector) to neighbors and receives
their estimate.
 Node A sends its table to nodes B, C, E & F and receives tables from nodes B, C, E
& F.
 Each node updates its routing table by comparing with each of its neighbor's
table
 For each destination, Total Cost is computed as:
 Total Cost = Cost (Node to Neighbor) + Cost (Neighbor to Destination)
 If Total Cost < Cost, then Cost = Total Cost and NextHop = Neighbor (see the sketch at the end of this section).
 Node A learns from C's table to reach node D and from F's table to reach
node G.
 Total Cost to reach node D via C = Cost (A to C) + Cost(C to D)
Cost = 1 + 1 = 2.
 Since 2 < ∞, entry for destination D in A's table is changed to (D, 2, C)
 Total Cost to reach node G via F = Cost(A to F) + Cost(F to G) = 1 + 1 = 2
 Since 2 < ∞, entry for destination G in A's table is changed to (G, 2, F)
 Each node builds complete routing table after few exchanges amongst its
neighbors.


 System stabilizes when all nodes have complete routing information, i.e.,
convergence.
 Routing tables are exchanged periodically or in case of triggered update.
 The final distances stored at each node are given below:

Updation of Routing Tables


There are two different circumstances under which a given node decides to send a
routing update to its neighbors.

Periodic Update
 In this case, each node automatically sends an update message every so often,
even if nothing has changed.
 The frequency of these periodic updates varies from protocol to protocol, but it
is typically on the order of several seconds to several minutes.

Triggered Update
 In this case, a node sends an update whenever it notices a link failure or receives an update from
one of its neighbors that causes it to change one of the routes in its routing table.

 Whenever a node’s routing table changes, it sends an update to its neighbors,
which may lead to a change in their tables, causing them to send an update to
their neighbors.
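The distance-vector update rule described in this section can be sketched as follows (a simplified illustration; the table layout and node names are hypothetical, mirroring the "A learns D via C" example above):

# Merge a neighbor's distance vector into this node's table.
# Each table maps Destination -> (Cost, NextHop).
INF = float("inf")

def dv_update(my_table, neighbor, cost_to_neighbor, neighbor_table):
    changed = False
    for dest, (neigh_cost, _) in neighbor_table.items():
        total = cost_to_neighbor + neigh_cost       # Total Cost via this neighbor
        if total < my_table.get(dest, (INF, None))[0]:
            my_table[dest] = (total, neighbor)      # better route found
            changed = True
    return changed                                  # True -> send a triggered update

# Node A initially reaches only its neighbors at cost 1
A = {"A": (0, "A"), "B": (1, "B"), "C": (1, "C"), "E": (1, "E"), "F": (1, "F")}
# Node C advertises that it can reach D at cost 1
C = {"C": (0, "C"), "A": (1, "A"), "B": (1, "B"), "D": (1, "D")}
dv_update(A, "C", 1, C)
print(A["D"])   # (2, 'C')  -- A now reaches D in 2 hops via C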

4.4 ROUTING INFORMATION PROTOCOL (RIP)


 RIP is an intra-domain routing protocol based on distance-vector algorithm.
Example

 Routers advertise the cost of reaching networks. Cost of reaching each link is 1
hop. For example, router C advertises to A that it can reach network 2, 3 at cost 0
(directly connected), networks 5, 6 at cost 1 and network 4 at cost 2.
 Each router updates cost and next hop for each network number.
 Infinity is defined as 16, i.e., any route cannot have more than 15 hops. Therefore
RIP can be implemented on small-sized networks only.
 Advertisements are sent every 30 seconds or in case of triggered update.

 Command - It indicates the packet type.


Value 1 represents a request packet. Value 2 represents a response packet.
 Version - It indicates the RIP version number. For RIPv1, the value is 0x01.
 Address Family Identifier - When the value is 2, it represents the IP protocol.
 IP Address - It indicates the destination IP address of the route. In RIPv1 it can
only be a natural (classful) network address.
 Metric - It indicates the hop count of a route to its destination.

Count-To-Infinity (or) Loop Instability Problem


 Suppose link from node A to E goes down.
 Node A advertises a distance of ∞ to E to its neighbors
 Node B receives periodic update from C before A’s update
reaches B
 Node B updated by C, concludes that E can be reached in 3 hops via C
 Node B advertises to A as 3 hops to reach E
 Node A in turn updates C with a distance of 4 hops to E and so on
 Thus nodes update each other until cost to E reaches infinity, i.e., no
convergence.
 Routing table does not stabilize.
 This problem is called loop instability or count to infinity
Solution to Count-To-Infinity (or) Loop Instability Problem :
 Infinity is redefined to a small number, say 16.
 Distance between any two nodes can be 15 hops maximum. Thus distance
vector routing cannot be used in large networks.
 When a node updates its neighbors, it does not send those routes it learned
from each neighbor back to that neighbor. This is known as split horizon.
 Split horizon with poison reverse allows nodes to advertise routes it learnt
from a node back to that node, but with a warning message.
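The split-horizon and poison-reverse rules can be sketched as follows (illustrative Python; the table contents are made-up examples):

# Build the advertisement a node sends to one particular neighbor.
INFINITY = 16   # RIP treats 16 as "unreachable"

def advertisement(table, to_neighbor, poison_reverse=True):
    adv = {}
    for dest, (cost, next_hop) in table.items():
        if next_hop == to_neighbor:
            if poison_reverse:
                adv[dest] = INFINITY   # poison reverse: advertise back as unreachable
            # plain split horizon: simply omit the route
        else:
            adv[dest] = cost
    return adv

table = {"E": (2, "C"), "D": (2, "C"), "B": (1, "B")}
print(advertisement(table, "C"))                         # {'E': 16, 'D': 16, 'B': 1}
print(advertisement(table, "C", poison_reverse=False))   # {'B': 1}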
LINK STATE ROUTING (LSR)
OPEN SHORTEST PATH PROTOCOL (OSPF)
DIJKSTRA’S ALGORITHM

4.5 LINK STATE ROUTING(LSR)

 Each node knows state of link to its neighbors and cost.


 Nodes create an update packet called link-state packet (LSP) that contains:
 ID of the node
 List of neighbors for that node and associated cost
 64-bit Sequence number
 Time to live
 Link-State routing protocols rely on two mechanisms:
 Reliable flooding of link-state information to all other nodes
 Route calculation from the accumulated link-state knowledge

Reliable Flooding
 Each node sends its LSP out on each of its directly connected links.
 When a node receives LSP of another node, checks if it has an LSP already for
that node.
 If not, it stores and forwards the LSP on all other links except the incoming
one.
 Else if the received LSP has a bigger sequence number, then it is stored and
forwarded. Older LSP for that node is discarded.
 Otherwise discard the received LSP, since it is not latest for that node.
 Thus recent LSP of a node eventually reaches all nodes, i.e., reliable flooding.

[Figure: flooding of a link-state packet in a small network, panels (a)–(d)]

 Flooding of LSP in a small network is as follows:
 When node X receives Y’s LSP (fig a), it floods onto its neighbors A
and C (fig b)
 Nodes A and C forward it to B, but does not sends it back to X (fig c).
 Node B receives two copies of LSP with same sequence number.
 Accepts one LSP and forwards it to D (fig d). Flooding is complete.
 LSP is generated either periodically or when there is a change in the topology.
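The accept-or-discard decision used in reliable flooding can be sketched as follows (a simplified illustration; node and link names are hypothetical):

# Accept and forward an LSP only if it is new or carries a larger sequence
# number than the stored copy; forward on every link except the incoming one.
lsp_db = {}   # node_id -> (seq_no, lsp)

def receive_lsp(node_id, seq_no, lsp, incoming_link, all_links):
    stored = lsp_db.get(node_id)
    if stored is not None and seq_no <= stored[0]:
        return []                        # old or duplicate LSP: discard
    lsp_db[node_id] = (seq_no, lsp)      # store the newer LSP
    return [link for link in all_links if link != incoming_link]

print(receive_lsp("Y", 5, {"X": 1}, "to-X", ["to-X", "to-A", "to-C"]))
# ['to-A', 'to-C']
print(receive_lsp("Y", 5, {"X": 1}, "to-A", ["to-X", "to-A", "to-C"]))
# []  -- a duplicate with the same sequence number is discarded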

Route Calculation
 Each node knows the entire topology, once it has LSP from every other node.
 Forward search algorithm is used to compute routing table from the received
LSPs.
 Each node maintains two lists, namely Tentative and Confirmed with entries of
the form (Destination, Cost, NextHop).

DIJKSTRA’S SHORTEST PATH ALGORITHM (FORWARD SEARCH


ALGORITHM)
1. Each host maintains two lists, known as Tentative and Confirmed
2. Initialize the Confirmed list with an entry for the Node (Cost = 0).
3. Node just added to Confirmed list is called Next. Its LSP is examined.
4. For each neighbor of Next, calculate cost to reach each neighbor as Cost (Node to
Next) + Cost (Next to Neighbor).
a. If Neighbor is neither in Confirmed nor in Tentative list, then add
(Neighbor, Cost, NextHop) to Tentative list.
b. If Neighbor is in Tentative list, and Cost is less than existing cost, then
replace the entry with (Neighbor, Cost, NextHop).
5. If the Tentative list is empty then stop; otherwise move the least-cost entry from the
Tentative list to the Confirmed list and go to Step 3.
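A simplified Python sketch of the forward-search algorithm above is given below (the LSP data is an illustrative topology, not a particular figure from the text):

# Forward-search (Dijkstra) using Confirmed and Tentative lists, each keyed
# by destination -> (cost, next_hop).
def forward_search(lsps, source):
    confirmed = {source: (0, source)}
    tentative = {}
    nxt = source
    while True:
        cost_to_next, hop_to_next = confirmed[nxt]
        for neighbor, link_cost in lsps[nxt].items():
            if neighbor in confirmed:
                continue
            cost = cost_to_next + link_cost
            next_hop = neighbor if nxt == source else hop_to_next
            if neighbor not in tentative or cost < tentative[neighbor][0]:
                tentative[neighbor] = (cost, next_hop)
        if not tentative:
            return confirmed
        # move the least-cost Tentative entry to the Confirmed list
        nxt = min(tentative, key=lambda d: tentative[d][0])
        confirmed[nxt] = tentative.pop(nxt)

lsps = {"A": {"B": 5, "C": 3},
        "B": {"A": 5, "C": 2, "D": 3},
        "C": {"A": 3, "B": 2, "D": 7},
        "D": {"B": 3, "C": 7}}
print(forward_search(lsps, "A"))
# {'A': (0, 'A'), 'C': (3, 'C'), 'B': (5, 'B'), 'D': (8, 'B')}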

Example :

4.6 OPEN SHORTEST PATH FIRST PROTOCOL (OSPF)
 OSPF is a non-proprietary widely used link-state routing protocol.
 OSPF Features are:
 Authentication―Malicious host can collapse a network by advertising to
reach every host with cost 0. Such disasters are averted by authenticating
routing updates.
 Additional hierarchy―Domain is partitioned into areas, i.e., OSPF is
more scalable.
 Load balancing―Multiple routes to the same place are assigned same
cost. Thus traffic is distributed evenly.

Link State Packet Format

=> Version ― represents the current version, i.e., 2.


=> Type ― represents the type (1–5) of OSPF message.
Type 1 is the “hello” message; Types 2–5 are used to request, send, and
acknowledge the receipt of link-state messages.
=> SourceAddr ― identifies the sender
=> AreaId ― 32-bit identifier of the area in which the node is located
=> Checksum ― 16-bit internet checksum
=> Authentication type ― 1 (simple password), 2 (cryptographic authentication).
=> Authentication ― contains password or cryptographic checksum

Difference Between Distance-Vector And Link-State Algorithms

Distance-vector routing: Each node talks only to its directly connected neighbors, but it
tells them everything it has learned (i.e., its distance to all nodes).
Link-state routing: Each node talks to all other nodes, but it tells them only what it
knows for sure (i.e., only the state of its directly connected links).


PATH VECTOR ROUTING (PVR)


BORDER GATEWAY PROTOCOL (BGP)

4.7 PATH VECTOR ROUTING(PVR)

 Path-vector routing is an asynchronous and distributed routing algorithm.


 The Path-vector routing is not based on least-cost routing.
 The best route is determined by the source using the policy it imposes on the
route.
 In other words, the source can control the path.
 Path-vector routing is not actually used in an internet, and is mostly designed to
route a packet between ISPs.

Spanning Trees
 In path-vector routing, the path from a source to all destinations is determined
by the best spanning tree.
 The best spanning tree is not the least-cost tree.
 It is the tree determined by the source when it imposes its own policy.
 If there is more than one route to a destination, the source can choose the route
that meets its policy best.
 A source may apply several policies at the same time.
 One of the common policies uses the minimum number of nodes to be visited.
Another common policy is to avoid some nodes as the middle node in a route.
 The spanning trees are made, gradually and asynchronously, by each node. When
a node is booted, it creates a path vector based on the information it can obtain
about its immediate neighbor.
 A node sends greeting messages to its immediate neighbors to collect these
pieces of information.
 Each node, after the creation of the initial path vector, sends it to all its
immediate neighbors.
 Each node, when it receives a path vector from a neighbor v, updates its own path
vector using the formula
Path(x, y) = best{Path(x, y), (x + Path(v, y))} for every destination y,
where (x + Path(v, y)) denotes the path formed by adding x in front of the path that
neighbor v advertises for y.
 The policy is defined by selecting the best of multiple paths.


 Path-vector routing also imposes one more condition on this equation.
 If Path (v, y) includes x, that path is discarded to avoid a loop in the path.
 In other words, x does not want to visit itself when it selects a path to y.

Example:
 The Figure below shows a small internet with only five nodes.
 Each source has created its own spanning tree that meets its policy.
 The policy imposed by all sources is to use the minimum number of nodes to
reach a destination.
 The spanning tree selected by A and E is such that the communication does not
pass through D as a middle node.
 Similarly, the spanning tree selected by B is such that the communication does
not pass through C as a middle node.


Path Vectors made at booting time


 The Figure below shows all of these path vectors for the example.
 Not all of these tables are created simultaneously.
 They are created when each node is booted.
 The figure also shows how these path vectors are sent to immediate neighbors
after they have been created.

Updating Path Vectors


 The Figure below shows the path vector of node C after two events.
 In the first event, node C receives a copy of B’s vector, which improves its
vector: now it knows how to reach node A.
 In the second event, node C receives a copy of D’s vector, which does not
change its vector.
 After these two events, the vector for node C is stabilized and serves as its
forwarding table.

4.8 BORDER GATEWAY PROTOCOL (BGP)
 The Border Gateway Protocol version 4 (BGP4) is the only interdomain routing
protocol used in the Internet today.
 BGP4 is based on the path-vector algorithm. It provides information about the
reachability of networks in the Internet.
 BGP views the internet as a set of autonomous systems interconnected
arbitrarily.

 Each AS has a border router (gateway) through which packets enter and leave that
AS. In the above figure, R3 and R4 are border routers.
 One of the routers in each autonomous system is designated as the BGP speaker.
 BGP speakers exchange reachability information with other BGP speakers; this
exchange is known as an external BGP session.
 BGP advertises the complete path as an enumerated list of ASs (a path vector) to
reach a particular network.
 Paths must be without any loop, i.e., the AS list is unique.
 For example, the backbone network advertises that networks 128.96 and 192.4.153
can be reached along the path <AS1, AS2, AS4>.

 If there are multiple routes to a destination, BGP speaker chooses one based on
policy.
 Speakers need not advertise any route to a destination, even if one exists.
 Advertised paths can be cancelled if a link/node on the path goes down. This
negative advertisement is known as a withdrawn route.
 Routes are not repeatedly sent. If there is no change, keep-alive messages are
sent.


iBGP - interior BGP


 A variant of BGP.
 Used by a BGP speaker to distribute routing information learned from external
BGP speakers to the routers inside its own autonomous system.
 Each router in the AS is then able to determine the appropriate next hop for all
prefixes.

4.9 UNICAST ROUTING PROTOCOLS

 A protocol is more than an algorithm.


 A protocol needs to define its domain of operation, the messages exchanged,
communication between routers, and interaction with protocols in other
domains.
 A routing protocol specifies how routers communicate with each other,
distributing information that enables them to select routes between any
two nodes on a computer network.
 Routers perform the "traffic directing" functions on the Internet; data packets
are forwarded through the networks of the internet from router to router until
they reach their destination computer.
 Routing algorithms determine the specific choice of route.
 Each router has a prior knowledge only of networks attached to it directly.
 A routing protocol shares this information first among immediate neighbors, and
then throughout the network. This way, routers gain knowledge of the topology
of the network.
 The ability of routing protocols to dynamically adjust to changing conditions
such as disabled data lines and computers and route data around obstructions is
what gives the Internet its survivability and reliability.
 The specific characteristics of routing protocols include the manner in which
they avoid routing loops, the manner in which they select preferred routes, using
information about hop costs, the time they require to reach routing convergence,
their scalability, and other factors.

INTERNET STRUCTURE
 The Internet contains a very large number of networks, so routing table entries per
router should be minimized.
 A link-state routing protocol is used to partition a domain into areas.
 A routing area is a set of routers configured to exchange link-state
information.
 An area introduces an additional level of hierarchy.
 Thus domains can grow without burdening routing protocols.

 There is one special area—the backbone area, also known as area 0.


 Routers R1, R2 and R3 are part of the backbone area.
 These routers are also members of non-backbone areas; a router that belongs to
both the backbone area and a non-backbone area is known as an Area Border
Router (ABR).
 Link-state advertisement is exchanged amongst routers in a non-backbone area.
 They do not see LSAs of other areas. For example, area 1 routers are not aware of
area 3 routers.
 ABR advertises routing information in their area to other ABRs.
 For example,R2 advertises area 2 routing information to R1 and R3, which in
turn pass onto their areas.
 All routers learn how to reach all networks in the domain.
 When a packet is to be sent to a network in another area, it goes through
backbone area via ABR and reaches the destination area.
 Routing Areas improve scalability but packets may not travel on the shortest
path.

INTER DOMAIN ROUTING


 Internet is organized as autonomous systems (AS) each of which is under the
control of a single administrative entity.
 A corporation’s complex internal network might be a single AS, as may the
network of a single Internet Service Provider (ISP).
 Interdomain routing shares reachability information between autonomous
systems.


 The basic idea behind autonomous systems is to provide an additional way to


hierarchically aggregate routing information in a large internet, thus improving
scalability.
 Internet has backbone networks and sites. Providers connect at a peering point.

Traffic on the internet is of two types:


 Local Traffic - Traffic within an autonomous system is called local.
 Transit Traffic - Traffic that passes through an autonomous system is called
transit.

Autonomous Systems (AS) are classified as:


 Stub AS - is connected to only one other autonomous system and carries
local traffic only (e.g. a small corporation).
 Multihomed AS - has connections to multiple autonomous systems but refuses to
carry transit traffic (e.g. Large corporation).
 Transit AS - has connections to multiple autonomous systems and is designed to
carry transit traffic (e.g. Backbone service provider).

Policies Used By Autonomous Systems :
 Provider-Customer―The provider advertises the routes it knows to the customer,
and advertises the routes learned from the customer to everyone.
 Customer-Provider―Customers want traffic destined to them to flow toward them,
so they advertise their own prefixes and the routes learned from their customers to
the provider, and advertise routes learned from the provider to their customers.
 Peer―Two providers give each other access to each other's customers without
having to pay.

CHALLENGES IN INTER-DOMAIN ROUTING PROTOCOL


 Each autonomous system has an intra-domain routing protocol, its own policy
and metric.
 Internet backbone must be able to route packets to the destination that complies
with policies of autonomous system along a loopless path.
 Service providers may not trust advertisements from other ASs, or may refuse to
carry traffic from other ASs.

TYPES OF ROUTING PROTOCOLS

Two types of Routing Protocols are used in the Internet:


1) Intradomain routing
 Routing within a single autonomous system
 Routing Information Protocol (RIP) - based on the distance-vector
algorithm - (REFER distance-vector routing algorithm)
 Open Shortest Path First (OSPF) - based on the link-state algorithm -
(REFER link-state routing algorithm)
2) Interdomain routing
 Routing between autonomous systems.
 Border Gateway Protocol (BGP) - based on the path-vector algorithm -
(REFER Path Vector routing algorithm)


4.10 MULTICASTING

 In multicasting, there is one source and a group of destinations.


 Multicast supports efficient delivery to multiple destinations.
 The relationship is one to many or many-to-many.
 One-to-Many (Source Specific Multicast)
o Radio station broadcast
o Transmitting news, stock-price
o Software updates to multiple hosts
 Many-to-Many (Any Source Multicast)
o Multimedia teleconferencing
o Online multi-player games
o Distributed simulations
 In this type of communication, the source address is a unicast address, but the
destination address is a group address.
 The group address defines the members of the group.

 In multicasting, a multicast router may have to send out copies of the same
datagram through more than one interface.
 Hosts that are members of a group receive copies of any packets sent to that
group’s multicast address
 A host can be in multiple groups
 A host can join and leave groups
 A host signals its desire to join or leave a multicast group by
communicating with its local router using a special protocol.
 In IPv4, the protocol is Internet Group Management Protocol (IGMP)
 In IPv6, the protocol is Multicast Listener Discovery (MLD)

IGMP OR MLD PROTOCOL


 Hosts communicate their desire to join / leave a multicast group to a router
using Internet Group Message Protocol (IGMP) in IPv4 or Multicast Listener
Discovery (MLD) in IPv6.

 Provides multicast routers with information about the membership status of
hosts connected to the network.
 Enables a multicast router to create and update list of loyal members for
each group.

MULTICAST ADDRESSING
 Multicast address is associated with a group, whose members are dynamic.
 Each group has its own IP multicast address.
 IP addresses reserved for multicasting are Class D in IPv4 (224.0.0.0 to
239.255.255.255) and addresses with the 1111 1111 (ff00::/8) prefix in IPv6.
 Hosts that are members of a group receive copy of the packet sent when
destination contains group address.
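
As a quick illustration (the addresses below are arbitrary examples), a small Python sketch
that tests whether an IPv4 address falls in the Class D multicast range:

import ipaddress

def is_ipv4_multicast(addr: str) -> bool:
    """True if addr falls in the Class D range 224.0.0.0 - 239.255.255.255."""
    return ipaddress.IPv4Address(addr) in ipaddress.ip_network("224.0.0.0/4")

print(is_ipv4_multicast("239.1.2.3"))    # True  (multicast group address)
print(is_ipv4_multicast("192.168.1.1"))  # False (unicast address)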

MULTICASTING VERSUS MULTIPLE UNICASTING


 Multicasting starts with a single packet from the source that is duplicated by the
routers. The destination address in each packet is the same for all duplicates.
 Only a single copy of the packet travels between any two routers.

 In multiple unicasting, several packets start from the source.


 If there are three destinations, for example, the source sends three packets, each
with a different unicast destination address.
 There may be multiple copies traveling between two routers

NEED FOR MULTICAST


Without support for multicast
 A source needs to send a separate packet with the identical data to each
member of the group
 Source needs to keep track of the IP address of each member in the group

Using IP multicast
 Sending host does not send multiple copies of the packet
 A host sends a single copy of the packet addressed to the group’s multicast
address
 The sending host does not need to know the individual unicast IP address of
each member

TYPES OF MULTICASTING
 Source-Specific Multicast - In source-specific multicast (one-to-many model),
receiver specifies multicast group and sender from which it is interested to
receive packets. Example: Internet radio broadcasts.
 Any Source Multicast - In any-source multicast (many-to-many model), any host
may send to the group and every group member receives the packets. Example:
multimedia teleconferencing.

MULTICAST APPLICATIONS
 Access to Distributed Databases
 Information Dissemination
 Teleconferencing.
 Distance Learning

MULTICAST ROUTING
 To support multicast, a router must additionally have multicast forwarding
tables that indicate, based on the multicast address, which links to use to forward
the multicast packet.
 Unicast forwarding tables collectively specify a set of paths; multicast
forwarding tables collectively specify a set of trees, known as multicast
distribution trees.
 Multicast routing is the process by which the multicast distribution trees are
determined.
 Internet multicast is implemented on physical networks that support
broadcasting by extending the forwarding functions.

MULTICAST DISTRIBUTION TREES


There are two types of Multicast Distribution Trees used in multicast routing.
They are
 Source-Based Tree: (DVMRP)
 For each combination of (source , group), there is a shortest path spanning
tree.
 Flood and prune
 Send multicast traffic everywhere
 Prune edges that are not actively subscribed to group
 Link-state
 Routers flood groups they would like to receive
 Compute shortest-path trees on demand
 Shared Tree (PIM)
 Single distributed tree shared among all sources
 Does not include its own topology discovery mechanism, but instead
uses routing information supplied by other routing protocols
 Specify rendezvous point (RP) for group
 Senders send packets to RP, receivers join at RP

 RP multicasts to receivers; Fix-up tree for optimization
 Rendezvous-Point Tree: one router is the center of the group and
therefore the root of the tree.

MULTICAST ROUTING PROTOCOLS


 Internet multicast is implemented on physical networks that
support broadcasting by extending forwarding functions.
 Major multicast routing protocols are:
1. Distance-Vector Multicast Routing Protocol (DVMRP)
2. Protocol Independent Multicast (PIM)

4.11 Distance Vector Multicast Routing Protocol


 DVMRP is a routing protocol used to share information between
routers to facilitate the transportation of IP multicast packets among networks.
 It formed the basis of the Internet's historic multicast backbone.
 Distance vector routing for unicast is extended to support multicast routing.
 Each router maintains a routing table for all destination through exchange of
distance vectors.
 DVMRP is also known as flood-and-prune protocol.
 DVMRP consists of two major components:
 A conventional distance-vector routing protocol, like RIP
 A protocol for determining how to forward multicast packets, based on the
routing table
 A DVMRP router forwards a packet if
 the packet arrived from the link used to reach the source of the packet, and
 the downstream links have not pruned the tree.
 DVMRP protocol uses the basic packet types as follows:

 The forwarding table of DVMRP is as follows:

 Multicasting is added to distance-vector routing in four stages.
 Flooding
 Reverse Path Forwarding (RPF)
 Reverse Path Broadcasting (RPB)
 Reverse Path Multicast (RPM)
Flooding
=> A router, on receiving a multicast packet from source S, forwards the packet on all
outgoing links.
=> The packet is flooded and looped back to S.
=> The drawbacks are:
o It floods a network even if it has no members for that group.
o Packets are forwarded by each router connected to a LAN, i.e., duplicate
flooding.

Reverse Path Forwarding (RPF)


=> RPF eliminates the looping problem in the flooding process.
=> Only one copy is forwarded and the other copies are discarded.
=> RPF forces the router to forward a multicast packet from one specific interface: the
one which has come through the shortest path from the source to the router.
=> Packet is flooded but not looped back to S.
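
A minimal sketch of the RPF decision, assuming a made-up unicast routing table and
interface names:

# Reverse Path Forwarding: accept a multicast packet only if it arrived on
# the interface that lies on this router's shortest path back to the source.

# Hypothetical unicast routing table: source -> interface used to reach it
# (i.e., the "reverse path" interface).
unicast_route_to = {"S1": "eth0", "S2": "eth1"}

def rpf_accept(source: str, arrival_interface: str) -> bool:
    """Return True if the packet passes the RPF check and may be forwarded."""
    return unicast_route_to.get(source) == arrival_interface

print(rpf_accept("S1", "eth0"))  # True  -> forward on the other interfaces
print(rpf_accept("S1", "eth1"))  # False -> duplicate copy, discard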

Reverse-Path Broadcasting (RPB)


=> RPB does not multicast the packet, it broadcasts it.
=> RPB creates a shortest path broadcast tree from the source to each destination.
=> It guarantees that each destination receives one and only one copy of the
packet.
=> We need to prevent each network from receiving more than one copy of the
packet.
=> If a network is connected to more than one router, it may receive a copy of the
packet from each router.
=> One router is identified as the parent, called the designated router (DR).
=> Only parent router forwards multicast packets from source S to the attached network.
=> When a router that is not the parent of the attached network receives a multicast
packet, it simply drops the packet.


Reverse-Path Multicasting (RPM)


=> To increase efficiency, the multicast packet must reach only those networks
that have active members for that particular group.
=> RPM adds pruning and grafting to RPB to create a multicast shortest path tree that
supports dynamic membership changes.

Pruning:
 Sent from routers receiving multicast traffic for which they have no active
group members
 “Prunes” the tree created by DVMRP
 Stops needless data from being sent

Grafting:
 Used after a branch has been pruned back
 Sent by a router that has a host that joins a multicast group
 Goes from router to router until a router active on the multicast group is
reached
 Sent for the following cases
 A new host member joins a group
 A new dependent router joins a pruned branch
 A dependent router restarts on a pruned branch

4.12 Protocol Independent Multicast (PIM)
=> PIM divides multicast routing problem into sparse and dense mode.
=> PIM sparse mode (PIM-SM) is widely used.
=> PIM does not rely on any type of unicast routing protocol, hence protocol
independent.
=> Routers explicitly join and leave multicast group using Join and Prune
messages.
=> One of the router is designated as rendezvous point (RP) for each group in a
domain to receive PIM messages.
=> Multicast forwarding tree is built as a result of routers sending Join messages to
RP.
=> Two types of trees to be constructed:
 Shared tree - used by all senders
 Source-specific tree - used only by a specific sending host
=> The normal mode of operation creates the shared tree first, followed by one or
more source-specific trees

Shared Tree
=> When a router sends Join message for group G to RP, it goes through a set of
routers.
=> Join message is wildcarded (*), i.e., it is applicable to all senders.
=> Routers create an entry (*, G) in its forwarding table for the shared tree.
=> Interface on which the Join arrived is marked to forward packets for that
group.
=> Forwards Join towards rendezvous router RP.
=> Eventually, the message arrives at RP. Thus a shared tree with RP as root is
formed.

Example
=> Router R4 sends Join message for group G to rendezvous router RP.
=> Join message is received by router R2. It makes an entry (*, G) in its table and forwards
the message to RP.
=> When R5 sends a Join message for group G, R2 does not forward the Join. It
adds an outgoing interface to the forwarding table created for that group.

=> As routers send Join message for a group, branches are added to the tree, i.e.,
shared.
=> Multicast packets sent from hosts are forwarded to designated router RP.

=> Suppose router R1 receives a multicast packet addressed to group G.
o R1 has no state for group G.
o Encapsulates the multicast packet in a Register message.
o Multicast packet is tunneled along the way to RP.
=> RP decapsulates the packet and sends multicast packet onto the shared tree,
towards R2.
=> R2 forwards the multicast packet to routers R4 and R5 that have members for
group G.

Source-Specific Tree
=> RP can force routers to know about group G, by sending Join message to the
sending host, so that tunneling can be avoided.
=> Intermediary routers create sender-specific entry (S, G) in their tables. Thus a
source-specific route from R1 to RP is formed.
=> If there is high rate of packets sent from a sender to a group G, then shared- tree
is replaced by source-specific tree with sender as root.

Example

=> Rendezvous router RP sends a Join message to the host router R1.
=> Router R3 learns about group G through the message sent by RP.
=> Router R4 send a source-specific Join due to high rate of packets from sender.
=> Router R2 learns about group G through the message sent by R4.
=> Eventually a source-specific tree is formed with R1 as root.

Analysis of PIM
=> Protocol independent because, tree is based on Join messages via shortest path.
=> Shared trees are more scalable than source-specific trees.
=> Source-specific trees enable efficient routing than shared trees.

UNIT V DATA LINK AND PHYSICAL LAYERS 12

Data Link Layer – Framing – Flow control – Error control – Data-Link Layer Protocols – HDLC – PPP -
Media Access Control – Ethernet Basics – CSMA/CD – Virtual LAN – Wireless LAN (802.11) - Physical
Layer: Data and Signals - Performance – Transmission media- Switching – Circuit Switching.

5.1 Data Link Layer

 In the OSI model, the data link layer is the 2nd layer from the bottom.
 It is responsible for transmitting frames from one node to next node.
 The main responsibility of the Data Link Layer is to transfer the datagram across an individual
link.
 An important characteristic of the Data Link Layer is that a datagram can be handled by different
link-layer protocols on different links in a path.
 The other responsibilities of this layer are
• Framing - Divides the stream of bits received into data units called frames.
• Physical addressing – If frames are to be distributed to different systems on the same
network, data link layer adds a header to the frame to define the sender and receiver.
• Flow control- If the rate at which the data are absorbed by the receiver is less than the
rate produced in the sender, the Data link layer imposes a flow control mechanism.
• Error control- Used for detecting and retransmitting damaged or lost frames and to
prevent duplication of frames. This is achieved through a trailer added at the end of the
frame.
• Medium Access control - Used to determine which device has control over the link at
any given time.
Nodes and Links
• Communication at the data-link layer is node-to-node.
• The communication channel that connects the adjacent nodes is known as links, and in order
to move the datagram from source to the destination, the datagram must be moved across an
individual link.
• A data unit from one point in the Internet needs to pass through many networks (LANs and
WANs) to reach another point.
• These LANs and WANs are connected by routers.
• The two end hosts and the routers are nodes and the networks in- between are links.
• The first node is the source host; the last node is the destination host.
• The other four nodes are four routers.
• The first, the third, and the fifth links represent the three LANs; the second and the fourth
links represent the two WANs.
Two Categories of Links
Point- to-Point link and Broadcast link.
• In a point-to-point link, the link is dedicated to the two devices
• In a broadcast link, the link is shared between several pairs of devices.
Data Link Layer Services
• The data-link layer is located between the physical and the network layers.
• The data-link layer provides services to the network layer; it receives services from the
physical layer.
• When a packet is travelling, the data-link layer of a node (host or router) is responsible for
delivering a datagram to the next node in the path.
• For this purpose, the data-link layer of the sending node needs to encapsulate the
datagram and the data-link layer of the receiving node needs to decapsulate the datagram.

• The datagram received by the data-link layer of the source host is encapsulated in a frame.
• The frame is logically transported from the source host to the router.
• The frame is decapsulated at the data-link layer of the router and encapsulated in another
frame.
• The new frame is logically transported from the router to the destination host.
Sublayers in Data Link layer
• We can divide the data-link layer into two sublayers: data link control (DLC) and media
access control (MAC).
• The data link control sublayer deals with all issues common to both point- to-point and
broadcast links
• The media access control sublayer deals only with issues specific to broadcast links.

LINK-LAYER ADDRESSING

• A link-layer address is sometimes called a link address, sometimes a physical address, and
sometimes a MAC address.
• Since a link is controlled at the data-link layer, the addresses need to belong to the data-link
layer.
• When a datagram passes from the network layer to the data-link layer, the datagram will be
encapsulated in a frame and two data-link addresses are added to the frame header.
• These two addresses are changed every time the frame moves from one link to another.
THREE TYPES OF ADDRESSES
The link-layer protocols define three types of addresses: unicast, multicast, and broadcast.

Unicast Address : Each host or each interface of a router is assigned a unicast address. Unicasting
means one-to-one communication. A frame with a unicast address destination is destined only for one
entity in the link.

Multicast Address : Link-layer protocols define multicast addresses. Multicasting means one-to-
many communication, but not to all.

Broadcast Address : Link-layer protocols define a broadcast address. Broadcasting means one- to-
all communication. A frame with a destination broadcast address is sent to all entities in the link.

DLC SERVICES
• The data link control (DLC) deals with procedures for communication between two adjacent
nodes—node-to-node communication—no matter whether the link is dedicated or broadcast.
• Data link control service include
(1) Framing (2) Flow Control (3) Error Control

5.2. FRAMING
• The data-link layer packs the bits of a message into frames, so that each frame is distinguishable
from another.
• Although the whole message could be packed in one frame, that is not normally done.
• One reason is that a frame can be very large, making flow and error control very inefficient.

• When a message is carried in one very large frame, even a single-bit error would require the
retransmission of the whole frame.
• When a message is divided into smaller frames, a single-bit error affects only that small frame.
• Framing in the data-link layer separates a message from one source to a destination by adding a
sender address and a destination address.
• The destination address defines where the packet is to go; the sender address helps the recipient
acknowledge the receipt.
Frame Size
• Frames can be of fixed or variable size.
• Frames of fixed size are called cells. In fixed-size framing, there is no need for defining the
boundaries of the frames; the size itself can be used as a delimiter.
• In variable-size framing, we need a way to define the end of one frame and the beginning of the
next. Two approaches were used for this purpose: a character-oriented approach and a bit-oriented
approach.

Character-Oriented Framing
• In character-oriented (or byte-oriented) framing, data to be carried are 8-bit characters.
• To separate one frame from the next, an 8-bit (1-byte) flag is added at the beginning and the
end of a frame.
• The flag, composed of protocol-dependent special characters, signals the start or end of a
frame.

• Any character used for the flag could also be part of the information.
• If this happens, the receiver, on encountering this pattern in the middle of the data, thinks it has
reached the end of the frame.
• To fix this problem, a byte-stuffing strategy was added to character-oriented framing.
Byte Stuffing (or) Character Stuffing
• Byte stuffing is the process of adding one extra byte whenever there is a flag or escape
character in the text.
• In byte stuffing, a special byte is added to the data section of the frame when there is a
character with the same pattern as the flag.
• The data section is stuffed with an extra byte. This byte is usually called the escape character
(ESC) and has a predefined bit pattern.
• Whenever the receiver encounters the ESC character, it removes it from the data section and
treats the next character as data, not as a delimiting flag.
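
A small Python sketch of byte stuffing and unstuffing (the FLAG and ESC byte values are
example values chosen for illustration; each protocol defines its own):

FLAG = b'\x7e'   # delimiter byte (example value)
ESC  = b'\x7d'   # escape byte    (example value)

def byte_stuff(payload: bytes) -> bytes:
    """Insert ESC before every FLAG or ESC byte that appears in the data."""
    out = bytearray()
    for b in payload:
        if bytes([b]) in (FLAG, ESC):
            out += ESC                  # stuff the escape byte
        out.append(b)
    return FLAG + bytes(out) + FLAG     # frame = flag + stuffed data + flag

def byte_unstuff(frame: bytes) -> bytes:
    """Remove the stuffing from the data between the two flag bytes."""
    data, out, i = frame[1:-1], bytearray(), 0
    while i < len(data):
        if bytes([data[i]]) == ESC:
            i += 1                      # skip ESC, keep the next byte as data
        out.append(data[i])
        i += 1
    return bytes(out)

msg = b'AB\x7eCD\x7dE'
assert byte_unstuff(byte_stuff(msg)) == msg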
Bit-Oriented Framing

• In bit-oriented framing, the data section of a frame is a sequence of bits to be interpreted by the
upper layer as text, graphic, audio, video, and so on.
• In addition to headers and trailers), we still need a delimiter to separate one frame from the
other.
• Most protocols use a special 8-bit pattern flag, 01111110, as the delimiter to define the
beginning and the end of the frame

• If the flag pattern appears in the data, the receiver must be informed that this is not the end of the
frame.
• This is done by stuffing 1 single bit (instead of 1 byte) to prevent the pattern from looking like
a flag. The strategy is called bit stuffing.
Bit Stuffing
• Bit stuffing is the process of adding one extra 0 whenever five consecutive 1s follow a 0 in the
data, so that the receiver does not mistake the pattern 01111110 for a flag.
• In bit stuffing, if a 0 and five consecutive 1 bits are encountered, an extra 0 is added.
• This extra stuffed bit is eventually removed from the data by the receiver.
• The extra bit is added after one 0 followed by five 1s regardless of the
value of the next bit.
• This guarantees that the flag field sequence does not inadvertently appear in the frame.
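
A sketch of bit stuffing and unstuffing over a string of bits (bits are shown as '0'/'1'
characters for readability):

def bit_stuff(bits: str) -> str:
    """Insert a 0 after every run of five consecutive 1s in the data."""
    out, ones = [], 0
    for b in bits:
        out.append(b)
        ones = ones + 1 if b == '1' else 0
        if ones == 5:
            out.append('0')   # stuffed bit
            ones = 0
    return ''.join(out)

def bit_unstuff(bits: str) -> str:
    """Remove the 0 that follows every run of five consecutive 1s."""
    out, ones, skip = [], 0, False
    for b in bits:
        if skip:              # this is the stuffed 0 inserted by the sender -- drop it
            skip, ones = False, 0
            continue
        out.append(b)
        ones = ones + 1 if b == '1' else 0
        if ones == 5:
            skip, ones = True, 0
    return ''.join(out)

data = '0111111111100'                 # contains more than five 1s in a row
print(bit_stuff(data))                 # max run of 1s is now five, so 01111110 cannot appear
assert bit_unstuff(bit_stuff(data)) == data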

5.3 FLOW CONTROL


• Flow control refers to a set of procedures used to restrict the amount of data that the sender
can send before waiting for acknowledgment.
• The receiving device has limited speed and limited memory to store the data.
• Therefore, the receiving device must be able to inform the sending device to stop the
transmission temporarily before the limits are reached.
• It requires a buffer, a block of memory for storing the information until they are processed.
Two methods have been developed to control the flow of data:
• Stop-and-Wait
• Sliding Window
STOP-AND-WAIT
• The simplest scheme is the stop-and-wait algorithm.
• In the Stop-and-wait method, the sender waits for an acknowledgement after every frame it
sends.
• When acknowledgement is received, then only next frame is sent.
• The process of alternately sending and waiting of a frame continues until the sender
transmits the EOT (End of transmission) frame.

• If the acknowledgement is not received within the allotted time, then the sender assumes that
the frame is lost during the transmission, so it will retransmit the frame.
• The acknowledgement may not arrive because of the following three scenarios :
1. Original frame is lost
2. ACK is lost
3. ACK arrives after the timeout

Advantage of Stop-and-wait
The Stop-and-wait method is simple as each frame is checked and acknowledged before the
next frame is sent
Disadvantages of Stop-And-Wait
• In stop-and-wait, at any point in time, there is only one frame that is sent and waiting to be
acknowledged.
• This is not a good use of transmission medium.
• To improve efficiency, multiple frames should be in transition while waiting for ACK.
PIGGYBACKING

• A method to combine a data frame with ACK.


• Piggybacking saves bandwidth
• Station A and B both have data to send.
• Instead of sending separately, station A sends a data frame that includes an ACK.
• Station B does the same thing.

SLIDING WINDOW
• The Sliding Window is a method of flow control in which a sender can transmit the several
frames before getting an acknowledgement.
• In Sliding Window Control, multiple frames can be sent one after another due to which capacity
of the communication channel can be utilized efficiently.
• A single ACK acknowledge multiple frames.
• Sliding Window refers to imaginary boxes at both the sender and receiver end.
• The window can hold the frames at either end, and it provides the upper limit on the number of
frames that can be transmitted before the acknowledgement.
• Frames can be acknowledged even when the window is not completely filled.
• The window has a specific size in which they are numbered as modulo-n means that they are
numbered from 0 to n-1.
• For example, if n = 8, the frames are numbered from
0,1,2,3,4,5,6,7,0,1,2,3,4,5,6,7,0,1........
• The size of the window is represented as n-1. Therefore, maximum n-1 frames can be sent
before acknowledgement.
• When the receiver sends the ACK, it includes the number of the next frame that it wants to
receive.
• For example, to acknowledge the string of frames ending with frame number 4, the receiver
will send the ACK containing the number 5.
• When the sender sees the ACK with the number 5, it got to know that the frames from 0
through 4 have been received.
Sender Window
o At the beginning of a transmission, the sender window contains n-1 frames.
o When a frame is sent, the size of the window shrinks.
o For example, if the size of the window is 'w' and three frames are sent out, then the
number of frames left in the sender window is w-3.
o Once the ACK has arrived, the sender window expands by the number of frames
acknowledged by the ACK.

Receiver Window
o At the beginning of transmission, the receiver window does not contain n frames, but it
contains n-1 spaces for frames.
o When a new frame arrives, the size of the window shrinks.
o For example, if the size of the window is w and three frames are received, then the
number of spaces available in the window is (w-3).
o Once the acknowledgement is sent, the receiver window expands by the number equal to
the number of frames acknowledged.

Example of Sliding Window


5.4. ERROR CONTROL

Data can be corrupted during transmission. For reliable communication, errors


must be detected and corrected. Error Control is a technique of error detection and
retransmission.
TYPES OF ERRORS
SINGLE-BIT ERROR
The term Single-bit error means that only one bit of a given data unit (such as byte,
character, data unit or packet) is changed from 1 to 0 or from 0 to 1.

BURST ERROR
The term Burst Error means that two or more bits in the data unit have changed
from 1 to 0 or from 0 to 1.

ERROR DETECTION TECHNIQUES / METHODS


The basic idea behind any error detection scheme is to add additional information
to a frame that can be used to determine if errors have been introduced.

PARITY CHECK
• One bit, called parity bit is added to every data unit so that the total number
of 1’s in the data unit becomes even (or) odd.
• The source then transmits this data via a link, and bits are checked and
verified at the destination.
• Data is considered accurate if the parity (even or odd count of 1s) matches the parity
transmitted from the source.
• This technique is the most common and least complex method.
1. Even parity – Maintain even number of 1s
E.g., 1011 → 1011 1
2. Odd parity – Maintain odd number of 1s
E.g., 1011 → 1011 0
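
A tiny sketch of even-parity generation and checking, using bit strings for readability:

def add_even_parity(data: str) -> str:
    """Append a parity bit so that the total number of 1s is even."""
    parity = '1' if data.count('1') % 2 else '0'
    return data + parity

def check_even_parity(codeword: str) -> bool:
    """A codeword is accepted if it contains an even number of 1s."""
    return codeword.count('1') % 2 == 0

print(add_even_parity('1011'))      # '10111'  (three 1s -> parity bit 1)
print(check_even_parity('10111'))   # True
print(check_even_parity('10101'))   # False -> single-bit error detected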

CYCLIC REDUNDANCY CHECK


• Cyclic codes refers to encoding messages by adding a fixed-length check
value.
• CRCs are popular because they are simple to implement, easy to analyze
mathematically and particularly good at detecting common errors caused in
transmission channels.

Steps Involved :
• Consider the original message (dataword) as M(x) consisting of ‘k’ bits and
the divisor as C(x) consists of ‘n+1’ bits.
• The original message M(x) is appended by ‘n’ bits of zero’s. Let us call
this zero-extended message as T(x).
• Divide T(x) by C(x) and find the remainder.
• The division operation is performed using XOR operation.
• The resultant remainder is appended to the original message M(x) as CRC
and sent by the sender(codeword).

Example 1:
• Consider the Dataword / Message M(x) = 1001
• Divisor C(x) = 1011 (n+1=4)
• Appending 'n' zeros to the original message M(x).
• The resultant message is called T(x) = 1001 000 (here n = 3).
• Divide T(x) by the divisor C(x) using the XOR operation.
Sender Side :

Receiver Side:
(For both cases – without error and with error)
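
The division figures are not reproduced here; the following sketch performs the mod-2
(XOR) division for the example above (dataword 1001, divisor 1011) and prints the CRC
and the receiver's check:

def mod2_div(bits: str, divisor: str) -> str:
    """Mod-2 (XOR) long division; returns the remainder of len(divisor)-1 bits."""
    n = len(divisor) - 1
    rem = list(bits)
    for i in range(len(bits) - n):
        if rem[i] == '1':                       # XOR the divisor in at this position
            for j, d in enumerate(divisor):
                rem[i + j] = '0' if rem[i + j] == d else '1'
    return ''.join(rem[-n:])

dataword, divisor = '1001', '1011'
crc = mod2_div(dataword + '0' * (len(divisor) - 1), divisor)   # sender divides T(x)
codeword = dataword + crc
print(crc, codeword)                  # -> 110 1001110
print(mod2_div(codeword, divisor))    # receiver: all-zero remainder -> no error detected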
Polynomials
• A pattern of 0s and 1s can be represented as a polynomial with coefficients
of 0 and 1.
• The power of each term shows the position of the bit; the coefficient shows
the value of the bit.

INTERNET CHECKSUM

• Checksum is a calculated value that is used to determine the integrity of


data.
Example : Let the message to be transmitted be 7,11,12,0,6.
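
The worked computation is not shown in the original; assuming the usual one's-complement
method over 4-bit words for this small example (the real Internet checksum uses 16-bit
words), a sketch:

def ones_complement_checksum(words, bits=4):
    """One's-complement sum of the words, then complemented."""
    mask = (1 << bits) - 1
    total = 0
    for w in words:
        total += w
        total = (total & mask) + (total >> bits)   # wrap carries back into the sum
    return (~total) & mask                          # checksum = complement of the sum

msg = [7, 11, 12, 0, 6]
cks = ones_complement_checksum(msg)
print(cks)                                   # -> 9, sent along with the message
# Receiver: the sum of the message words plus the checksum, complemented, must be 0.
print(ones_complement_checksum(msg + [cks])) # -> 0 (no error detected)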

ERROR CONTROL

• Error control includes both error detection and error correction.


• Whenever an error is detected, specified frames are retransmitted
• It allows the receiver to inform the sender if a frame is lost or damaged
during transmission and coordinates the retransmission of those frames by
the sender.
• Includes the following actions:
o Error detection
o Positive Acknowledgement (ACK): if the frame arrived with no
errors
o Negative Acknowledgement (NAK): if the frame arrived with errors
o Retransmissions after Timeout: Frame is retransmitted after certain
amount of time if no acknowledgement was received
• Error control in the data link layer is based on automatic repeat request
(ARQ).

Categories of Error Control


STOP-AND-WAIT ARQ
• Stop-and-wait ARQ is a technique used to retransmit the data in case of
damaged or lost frames.
• This technique works on the principle that the sender will not transmit the
next frame until it receives the acknowledgement of the last transmitted
frame.

Two possibilities of the retransmission in Stop and Wait ARQ:

o Damaged Frame: When the receiver receives a damaged frame (i.e., the
frame contains an error), it returns a NAK frame. For example, when
the frame DATA 1 is sent and the receiver returns ACK 0, it means that
DATA 1 has arrived correctly. The sender transmits the next frame,
DATA 0. It arrives undamaged, and the receiver returns ACK 1.
The sender transmits the third frame, DATA 1. The receiver detects an
error and returns a NAK frame. The sender retransmits the DATA 1 frame.

o Lost Frame: The sender is equipped with a timer that starts when the frame is
transmitted. Sometimes the frame does not arrive at the receiving end, so
it cannot be acknowledged either positively or negatively. The sender
waits for an acknowledgement until the timer goes off. If the timer goes off, it
retransmits the last transmitted frame.
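
A minimal sketch of the sender side of Stop-and-Wait ARQ with alternating 0/1 sequence
numbers; the send_frame/recv_ack callbacks and the simulated link below are placeholders,
not a real protocol implementation:

def stop_and_wait_send(frames, send_frame, recv_ack, timeout=2.0):
    """Send frames one at a time; retransmit a frame if its ACK does not arrive.

    send_frame(seq, data)          -- placeholder: transmit a frame on the link
    recv_ack(timeout) -> int|None  -- placeholder: ACK number received, or None on timeout
    """
    seq = 0
    for data in frames:
        while True:
            send_frame(seq, data)
            ack = recv_ack(timeout)    # wait for the ACK or for the timer to go off
            if ack == 1 - seq:         # the ACK names the next frame expected
                break                  # delivered; move on to the next frame
            # lost frame, lost ACK, or late ACK: retransmit the same frame
        seq = 1 - seq                  # alternate the sequence number

# Tiny simulated link: every frame is delivered and ACKed immediately.
last = {"seq": None}
def fake_send(seq, data):
    last["seq"] = seq
    print(f"send frame {seq}: {data!r}")
def fake_recv(timeout):
    return 1 - last["seq"]             # receiver ACKs with the next expected seq

stop_and_wait_send(["pkt-A", "pkt-B"], fake_send, fake_recv)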
SLIDING WINDOW ARQ

Sliding Window ARQ is a technique used for continuous transmission error


control.

Two protocols used in sliding window ARQ:

1. GO-BACK-N ARQ
o In the Go-Back-N ARQ protocol, if one frame is lost or damaged, the sender
retransmits that frame and all frames sent after it for which no positive ACK
has been received.

o In the above figure, three frames (Data 0, 1, 2) have been transmitted before
an error is discovered in the third frame.
o The receiver discovers the error in the Data 2 frame, so it returns the NAK 2
frame.
o The damaged frame and every frame sent after it (Data 2, 3, 4) are discarded
by the receiver, since they follow the damaged frame.
o Therefore, the sender retransmits the frames (Data 2, 3, 4).
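
A compact sketch of the Go-Back-N retransmission rule for a scenario like the one above
(the window size and the simulated link are illustrative assumptions, not part of any figure):

def go_back_n(frames, window, deliver_ok):
    """Go-Back-N sender: when frame i is damaged or lost, frame i and every frame
    sent after it are retransmitted, because the receiver discards them all."""
    base, log = 0, []
    while base < len(frames):
        end = min(base + window, len(frames))
        first_bad = None
        for i in range(base, end):
            log.append(i)                      # transmit every frame in the window
            ok = deliver_ok(i)
            if first_bad is None and not ok:
                first_bad = i                  # the receiver will return NAK i
        base = end if first_bad is None else first_bad   # slide on, or go back to the NAKed frame
    return log

# Simulated link: frame 2 is damaged on its first transmission only.
failed_once = set()
def deliver_ok(i):
    if i == 2 and i not in failed_once:
        failed_once.add(i)
        return False
    return True

print(go_back_n(["D0", "D1", "D2", "D3", "D4"], window=5, deliver_ok=deliver_ok))
# -> [0, 1, 2, 3, 4, 2, 3, 4]: after NAK 2, frames 2, 3 and 4 are sent again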

2. SELECTIVE-REJECT(REPEAT) ARQ

o Selective-Reject ARQ technique is more efficient than Go-Back-n ARQ.


o In this technique, only those frames are retransmitted for which negative
acknowledgement (NAK) has been received.
o The receiver storage buffer keeps all the damaged frames on hold until the
frame in error is correctly received.
o The receiver must have an appropriate logic for reinserting the frames in a
correct order.
o The sender must consist of a searching mechanism that selects only the
requested frame for retransmission.
o In the above figure, three frames (Data 0, 1, 2) have been transmitted before
an error is discovered in the third frame.
o The receiver discovers the error in Data 2 frame, so it returns the NAK 2
frame.
o The damaged frame only (Data 2) is discarded.
o The other subsequent frames (Data 3,4) are accepted.
o Therefore, the sender retransmits only the damaged frame (Data2).

5.5 DATA-LINK LAYER PROTOCOLS


Four protocols have been defined for the data-link layer controls.
They are
1. Simple Protocol
2. Stop-and-Wait Protocol
3. Go-Back-N Protocol
4. Selective-Repeat Protocol

1.SIMPLE PROTOCOL

o The first protocol is a simple protocol with neither flow nor error control.
o We assume that the receiver can immediately handle any frame it receives.
o In other words, the receiver can never be overwhelmed with incoming
frames.
o The data-link layers of the sender and receiver provide transmission
services for their network layers.
o The data-link layer at the sender gets a packet from its network layer, makes
a frame out of it, and sends the frame.
o The data-link layer at the receiver receives a frame from the link, extracts
the packet from the frame, and delivers the packet to its network layer.

NOTE :

2. STOP-AND-WAIT PROTOCOL
REFER STOP AND WAIT FROM FLOW CONTROL

3. GO-BACK-N PROTOCOL
REFER GO-BACK-N ARQ FROM ERROR CONTROL

4. SELECTIVE-REPEAT PROTOCOL
REFER SELECTIVE-REPEAT ARQ FROM ERROR CONTROL

5.6 HDLC (HIGH-LEVEL DATA LINK CONTROL)

o High-level Data Link Control (HDLC) is a bit-oriented protocol


o HDLC is used for communication over point-to-point and multipoint links.
o HDLC implements the Stop-and-Wait protocol.

HDLC CONFIGURATIONS AND TRANSFER MODES


HDLC provides two common transfer modes that can be used in different
configurations:
1. Normal response mode (NRM)
2. Asynchronous balanced mode (ABM).
Normal response mode (NRM)
o In normal response mode (NRM), the station configuration is unbalanced.
o We have one primary station and multiple secondary stations.
o A primary station can send commands; a secondary station can only
respond.
o The NRM is used for both point-to-point and multipoint links.

Asynchronous balanced mode (ABM)


o In ABM, the configuration is balanced.
o The link is point-to-point, and each station can function as a primary and a
secondary (acting as peers).
o This is the common mode today.

HDLC FRAMES
HDLC defines three types of frames:
1. Information frames (I-frames) - used to carry user data
2. Supervisory frames (S-frames) - used to carry control information
3. Unnumbered frames (U-frames) – reserved for system management

Each type of frame serves as an envelope for the transmission of a different type of
message.
Each frame in HDLC may contain up to six fields:
1. Beginning flag field
2. Address field
3. Control field
4. Information field (User Information/ Management Information)
5. Frame check sequence (FCS) field
6. Ending flag field

In multiple-frame transmissions, the ending flag of one frame can serve as the
beginning flag of the next frame.
o Flag field - This field contains synchronization pattern 01111110, which
identifies both the beginning and the end of a frame.
o Address field - This field contains the address of the secondary station. If a
primary station created the frame, it contains a ‘to’ address. If a secondary
station creates the frame, it contains a ‘from’ address. The address field can
be one byte or several bytes long, depending on the needs of the network.
o Control field. The control field is one or two bytes used for flow and error
control.
o Information field. The information field contains the user’s data from the
network layer or management information. Its length can vary from one
network to another.
o FCS field. The frame check sequence (FCS) is the HDLC error detection
field. It can contain either a 16- bit or 32-bit CRC.

CONTROL FIELD FORMAT FOR THE DIFFERENT FRAME TYPES


Control Field for I-Frames
o I-frames are designed to carry user data from the network layer. In addition,
they can include flow-control and error-control information

o The first bit defines the type. If the first bit of the control field is 0, this
means the frame is an I-frame.
o The next 3 bits, called N(S), define the sequence number of the frame.
o The last 3 bits, called N(R), correspond to the acknowledgment number
when piggybacking is used.
o The single bit between N(S) and N(R) is called the P/F bit. It has meaning only
when it is set to 1, and can mean either poll or final.
o It means poll when the frame is sent by a primary station to a secondary, and
final when the frame is sent by a secondary to a primary.

Control Field for S-Frames


o Supervisory frames are used for flow and error control whenever
piggybacking is either impossible or inappropriate.
o S-frames do not have information fields

o If the first 2 bits of the control field are 10, this means the frame is an S-
frame.
o The last 3 bits, called N(R),correspond to the acknowledgment number
(ACK) or negative acknowledgment number (NAK), depending on the type
of S-frame.
o The 2 bits called code are used to define the type of S-frame itself.
o With 2 bits, we can have four types of S-frames –
Receive ready (RR), Receive not ready (RNR), Reject (REJ) and
Selective reject (SREJ).

Control Field for U-Frames


o Unnumbered frames are used to exchange session management and control
information between connected devices.
o U-frames contain an information field, but used only for system
management information and not user data.

o If the first 2 bits of the control field are 11, this means the frame is an U-
frame.
o U-frame codes are divided into two sections: a 2-bit prefix before the P/F
bit and a 3-bit suffix after the P/F bit.
o Together, these two segments (5 bits) can be used to create up to 32
different types of U-frames.
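
A small sketch that classifies a one-byte HDLC control field and extracts the N(S), N(R),
and P/F bits described above; it assumes the byte is written as drawn in the figures, with
the leftmost (most significant) bit as the "first" bit:

def parse_hdlc_control(ctrl: int) -> dict:
    """Decode a one-byte HDLC control field (leftmost bit distinguishes the frame type)."""
    b = f"{ctrl:08b}"                  # e.g. '00101101'
    if b[0] == '0':                    # 0 .......  -> I-frame
        return {"type": "I", "N(S)": int(b[1:4], 2),
                "P/F": int(b[4]), "N(R)": int(b[5:8], 2)}
    if b[1] == '0':                    # 1 0 ...... -> S-frame
        return {"type": "S", "code": int(b[2:4], 2),
                "P/F": int(b[4]), "N(R)": int(b[5:8], 2)}
    return {"type": "U",               # 1 1 ...... -> U-frame
            "prefix": int(b[2:4], 2), "P/F": int(b[4]), "suffix": int(b[5:8], 2)}

print(parse_hdlc_control(0b0_010_1_101))   # I-frame, N(S)=2, P/F=1, N(R)=5
print(parse_hdlc_control(0b10_00_0_011))   # S-frame, code=0 (RR), N(R)=3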
5.7. POINT-TO-POINT PROTOCOL (PPP)
o Point-to-Point Protocol (PPP) was devised by IETF (Internet Engineering
Task Force) in 1990 as a Serial Line Internet Protocol (SLIP).
o PPP is a data link layer communications protocol used to establish a direct
connection between two nodes.
o It connects two routers directly without any host or any other networking
device in between.
o It is used to connect the Home PC to the server of ISP via a modem.
o It is a byte - oriented protocol that is widely used in broadband
communications having heavy loads and high speeds.
o Since it is a data link layer protocol, data is transmitted in frames. It is also
known as RFC 1661.

Services Provided by PPP


The main services provided by Point - to - Point Protocol are −
1. Defining the frame format of the data to be transmitted.
2. Defining the procedure of establishing link between two points and
exchange of data.
3. Stating the method of encapsulation of network layer data in the frame.
4. Stating authentication rules of the communicating devices.
5. Providing address for network communication.
6. Providing connections over multiple links.
7. Supporting a variety of network layer protocols by providing a range of
services.

PPP Frame
PPP is a byte - oriented protocol where each field of the frame is composed of one
or more bytes.

1. Flag − 1 byte that marks the beginning and the end of the frame. The bit
pattern of the flag is 01111110.
2. Address − 1 byte which is set to 11111111 in case of broadcast.
3. Control − 1 byte set to a constant value of 11000000.
4. Protocol − 1 or 2 bytes that define the type of data contained in the payload
field.
5. Payload − This carries the data from the network layer. The maximum
length of the payload field is 1500 bytes.
6. FCS − It is a 2 byte(16-bit) or 4 bytes(32-bit) frame check sequence for
error detection. The standard code used is CRC.
Byte Stuffing in PPP Frame
Byte stuffing is used in the PPP payload field whenever the flag sequence appears in
the message, so that the receiver does not consider it as the end of the frame. The
escape byte, 01111101, is stuffed before every byte that contains the same byte as
the flag byte or the escape byte. The receiver, on receiving the message, removes
the escape byte before passing it on to the network layer.

Transition Phases in PPP


The PPP connection goes through different states as shown in a transition phase
diagram.

 Dead: In dead phase the link is not used. There is no active carrier and the
line is quiet.
 Establish: Connection goes into this phase when one of the nodes start
communication. In this phase, two parties negotiate the options. If
negotiation is successful, the system goes into authentication phase or
directly to networking phase.
 Authenticate: This phase is optional. The two nodes may decide whether
they need this phase during the establishment phase. If they decide to
proceed with authentication, they send several authentication packets. If the
result is successful, the connection goes to the networking phase; otherwise,
it goes to the termination phase.
 Network: In the network phase, negotiation for the network layer protocols
takes place. PPP specifies that two nodes establish a network layer
agreement before data at the network layer can be exchanged. This is
because PPP supports several protocols at network layer. If a node is
running multiple protocols simultaneously at the network layer, the
receiving node needs to know which protocol will receive the data.
 Open: In this phase, data transfer takes place. The connection remains in
this phase until one of the endpoints wants to end the connection.
 Terminate: In this phase connection is terminated.
Components/Protocols of PPP
Three sets of components/protocols are defined to make PPP powerful:
 Link Control Protocol (LCP)
 Authentication Protocols (AP)
 Network Control Protocols (NCP)

Link Control Protocol (LCP) − It is responsible for establishing, configuring,


testing, maintaining and terminating links for transmission. It also provides
negotiation mechanisms to set options between the two endpoints. Both endpoints
of the link must reach an agreement about the options before the link can be
established.

Authentication Protocols (AP) − Authentication means validating the identity of


a user who needs to access a set of resources. PPP has created two protocols for
authentication -Password Authentication Protocol and Challenge Handshake
Authentication Protocol.

PAP
The Password Authentication Protocol (PAP) is a simple authentication procedure
with a two-step process:
a. The user who wants to access a system sends an authentication
identification (usually the user name) and a password.
b. The system checks the validity of the identification and password and
either accepts or denies connection.

CHAP
The Challenge Handshake Authentication Protocol (CHAP) is a three-way
handshaking authentication protocol that provides greater security than PAP. In
this method, the password is kept secret; it is never sent online.
a. The system sends the user a challenge packet containing a challenge
value.
b. The user applies a predefined function that takes the challenge value and
the user’s own password and creates a result. The user sends the result in
the response packet to the system.
c. The system does the same. It applies the same function to the password of
the user (known to the system) and the challenge value to create a result.
If the result created is the same as the result sent in the response packet,
access is granted; otherwise, it is denied.

CHAP is more secure than PAP, especially if the system continuously changes the
challenge value. Even if the intruder learns the challenge value and the result, the
password is still secret.
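
A sketch of the CHAP exchange logic; using MD5 over the challenge and the shared secret
is an illustrative choice here (real CHAP, RFC 1994, also hashes an identifier and specifies
the exact packet formats):

import hashlib, os

def chap_response(challenge: bytes, password: bytes) -> bytes:
    """Apply the agreed function to the challenge and the (never transmitted) password."""
    return hashlib.md5(challenge + password).digest()

# 1. The system sends a random challenge value.
challenge = os.urandom(16)

# 2. The user computes a result from the challenge and its own password and sends it back.
user_result = chap_response(challenge, b"user-secret")

# 3. The system repeats the computation with its stored copy of the password and compares.
expected = chap_response(challenge, b"user-secret")
print("access granted" if user_result == expected else "access denied")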

Network Control Protocols (NCP) − PPP is a multiple-network-layer protocol. It


can carry a network-layer data packet from protocols defined by the Internet. PPP
has defined a specific Network Control Protocol for each network protocol. These
protocols are used for negotiating the parameters and facilities for the network
layer. For every higher-layer protocol supported by PPP, one NCP is there.

5.8 MEDIA ACCESS CONTROL (MAC)


• When two or more nodes transmit data at the same time, their frames will
collide and the link bandwidth is wasted during collision.
• To coordinate the access of multiple sending/receiving nodes to the shared
link, we need a protocol to coordinate the transmission.
• These protocols are called Medium or Multiple Access Control (MAC)
Protocols. MAC belongs to the data link layer of OSI model
• MAC defines rules for orderly access to the shared medium. It tries to
ensure that no two nodes are interfering with each other’s transmissions,
and deals with the situation when they do.

Issues involved in MAC


The key issues involved are –
• Where the control is exercised - refers to whether the control is exercised
in a centralized or distributed manner
• How the control is exercised - refers to in what manner the control is
exercised

Goals of MAC
1. Fairness in sharing
2. Efficient sharing of bandwidth
3. Need to avoid packet collisions at the receiver due to interference

MAC Management
• Medium allocation (collision avoidance)
• Contention resolution (collision handling)
MAC Types
• Round-Robin : – Each station is given opportunity to transmit in turns.
Either a central controller polls a station to permit to go, or stations can
coordinate among themselves.
• Reservation : - Station wishing to transmit makes reservations for time
slots in advance. (Centralized or distributed).
• Contention (Random Access) : - No control on who tries; if a collision
occurs, retransmission takes place.

MECHANISMS USED
• Wired Networks :
o CSMA / CD – Carrier Sense Multiple Access / Collision Detection
• Wireless Networks :
o CSMA / CA – Carrier Sense Multiple Access / Collision Avoidance

5.9 CARRIER SENSE MULTIPLE ACCESS / COLLISION DETECTION


(CSMA / CD)

 Carrier Sense in CSMA/CD means that all the nodes sense the medium to
check whether it is idle or busy.
• If the carrier sensed is idle, then the node transmits the entire
frame.
• If the carrier sensed is busy, the transmission is postponed.

 Collision Detect means that a node listens as it transmits and can therefore
detect when a frame it is transmitting has collided with a frame transmitted
by another node.

Flowchart of CSMA/CD Operation


Transmitter Algorithm in CSMA/CD
 Transmitter Algorithm defines the procedures for a node that senses a busy
medium.
 Three types of Transmitter Algorithm exist.
 They are
1. Non-Persistent Strategy
2. Persistent Strategy : 1-Persistent & P-Persistent

Non-Persistent Strategy
• In the non-persistent method, a station that has a frame to send senses the
line.
• If the line is idle, it sends immediately.
• If the line is not idle, it waits a random amount of time and then senses the
line again.

• The non-persistent approach reduces the chance of collision because it is


unlikely that two or more stations will wait the same amount of time and
retry to send simultaneously.
• However, this method reduces the efficiency of the network because the
medium remains idle when there may be stations with frames to send.

Persistent Strategy
1- Persistent :
• The 1-persistent method is simple and straightforward.
• In this method, after the station finds the line idle, it sends its frame
immediately (with probability 1).

• This method has the highest chance of collision because two or more
stations may find the line idle and send their frames immediately.
P-Persistent :
• In this method, after the station finds the line idle it follows these steps:
• With probability p, the station sends its frame.
• With probability q = 1 − p, the station waits for the beginning of the next
time slot and checks the line again.

• The p-persistent method is used if the channel has time slots with a slot
duration equal to or greater than the maximum propagation time.
• The p-persistent approach combines the advantages of the other two
strategies. It reduces the chance of collision and improves efficiency.
.
EXPONENTIAL BACK-OFF
• Once an adaptor has detected a collision and stopped its transmission, it waits
a certain amount of time and tries again.
• Each time it tries to transmit but fails, the adaptor doubles the amount of time
it waits before trying again.
• This strategy of doubling the delay interval between each retransmission
attempt is a general technique known as exponential back-off.
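
A sketch of the binary exponential back-off computation after the n-th collision (the slot
time and the cap of 10 doublings follow classic 10-Mbps Ethernet; treat them as
illustrative parameters):

import random

SLOT_TIME = 51.2e-6          # seconds; classic 10-Mbps Ethernet slot time

def backoff_delay(collision_count: int) -> float:
    """Delay before the next retransmission attempt after the n-th collision.

    The adaptor picks a random number of slot times in [0, 2**n - 1]; the
    upper bound doubles with every collision (capped at 2**10 here)."""
    exponent = min(collision_count, 10)
    k = random.randint(0, 2 ** exponent - 1)
    return k * SLOT_TIME

for n in range(1, 5):
    print(f"after collision {n}: wait {backoff_delay(n) * 1e6:.1f} microseconds")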
CARRIER SENSE MULTIPLE ACCESS / COLLISION AVOIDANCE
(CSMA/CA)
• Carrier sense multiple access with collision avoidance (CSMA/CA) was
invented for wireless networks.
• One might expect a wireless protocol to follow exactly the same algorithm as the
Ethernet: wait until the link becomes idle before transmitting and back off
should a collision occur. However, a wireless sender cannot reliably detect
collisions at the receiver, so collisions are avoided instead.
• Collisions are avoided through the use of CSMA/CA's three strategies: the
interframe space, the contention window, and acknowledgments.

Interframe Space (IFS) - First, collisions are avoided by deferring transmission


even if the channel is found idle. When an idle channel is found, the station does
not send immediately. It waits for a period of time called the interframe space or
IFS.

Contention Window - The contention window is an amount of time divided into


slots. A station that is ready to send chooses a random number of slots as its wait
time. The number of slots in the window changes according to the binary
exponential backoff strategy. This means that it is set to one slot the first time and
then doubles each time the station cannot detect an idle channel after the IFS time.

Acknowledgment - In addition, the data may be corrupted during the transmission. The positive acknowledgment and the time-out timer can help guarantee that the receiver has received the frame.
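
The three strategies can be tied together in a short sketch. The Python fragment below is only an outline of a CSMA/CA sender; the channel object and its methods (wait_until_idle, wait, is_idle, transmit, wait_for_ack) are hypothetical placeholders, and the numeric limits are illustrative.

import random

def csma_ca_send(channel, frame, ifs, slot_time, cw_min=1, cw_max=1024, max_tries=7):
    """Illustrative CSMA/CA sender: IFS deferral + contention window + acknowledgment."""
    cw = cw_min
    for attempt in range(max_tries):
        channel.wait_until_idle()
        channel.wait(ifs)                        # defer even though the channel is idle (IFS)
        backoff = random.randint(0, cw - 1)      # random number of slots from the contention window
        while backoff > 0:
            if channel.is_idle():
                channel.wait(slot_time)
                backoff -= 1
            else:                                # channel became busy: wait for idle + IFS again
                channel.wait_until_idle()
                channel.wait(ifs)
        channel.transmit(frame)
        if channel.wait_for_ack(timeout=10 * slot_time):
            return True                          # positive acknowledgment received
        cw = min(2 * cw, cw_max)                 # binary exponential backoff: double the window
    return False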

5.10. WIRED LAN : ETHERNET (IEEE 802.3)


• Ethernet was developed in the mid-1970s at the Xerox Palo Alto Research Center (PARC).
• IEEE controls the Ethernet standards.
• Ethernet is the most successful local area networking technology; classic Ethernet uses a bus topology.
• Ethernet is a multiple-access network, that is, a set of nodes send and receive frames over a shared link.
• Ethernet uses the CSMA / CD ( Carrier Sense Multiple Access with
Collision Detection) mechanism.

EVOLUTION OF ETHERNET
Standard Ethernet (10 Mbps)
The original Ethernet technology with the data rate of 10 Mbps as the Standard
Ethernet.
Standard Ethernet types are
1. 10Base5: Thick Ethernet,
2. 10Base2: Thin Ethernet ,
3. 10Base-T: Twisted-Pair Ethernet
4. 10Base-F: Fiber Ethernet.

10Base5: Thick Ethernet
• The first implementation is called 10Base5, thick Ethernet, or Thicknet.
• 10Base5 was the first Ethernet specification to use a bus topology with an external transceiver (transmitter/receiver) connected via a tap to a thick coaxial cable.

10Base2: Thin Ethernet
• The second implementation is called 10Base2, thin Ethernet, or Cheapernet.
• 10Base2 also uses a bus topology, but the cable is much thinner and more flexible.
• In this case, the transceiver is normally part of the network interface card (NIC), which is installed inside the station.

10Base-T: Twisted-Pair Ethernet
• The third implementation is called 10Base-T or twisted-pair Ethernet.
• 10Base-T uses a physical star topology. The stations are connected to a hub via two pairs of twisted cable.

10Base-F: Fiber Ethernet
• Although there are several types of optical fiber 10-Mbps Ethernet, the most common is called 10Base-F.
• 10Base-F uses a star topology to connect stations to a hub.
• The stations are connected to the hub using two fiber-optic cables.
Fast Ethernet (100 Mbps)

Fast Ethernet or 100BASE-T provides transmission speeds up to 100 megabits per second and is typically used for LAN backbone systems.
The 100BASE-T standard consists of three different component specifications –
1. 100 BASE-TX
2. 100BASE-T4
3. 100BASE-FX

100Base-TX
• 100Base-TX uses two pairs of twisted-pair cable, either UTP or STP.
• A 100Base-TX network can provide a data rate of 100 Mbps.

100Base-T4
• A new standard, called 100Base-T4, was designed to use four pairs of UTP for transmitting 100 Mbps.

100Base-FX
• 100Base-FX uses two pairs of fiber-optic cables.
• Optical fiber can easily handle high bandwidth requirements.

Gigabit Ethernet (1 Gbps)


• The Gigabit Ethernet upgrades the data rate to 1 Gbps(1000 Mbps).
• Gigabit Ethernet can be categorized as either a two-wire or a four-wire
implementation.
• The two-wire implementations use fiber-optic cable (1000Base-SX, short-
wave, or 1000Base-LX, long-wave), or STP (1000Base-CX).
• The four-wire version uses category 5 twisted-pair cable (1000Base-T).

10 Gigabit Ethernet(10 Gbps)


• 10 Gigabit Ethernet is an Ethernet technology that transmits at 10 Gbps.
• 10 Gigabit Ethernet enables a familiar network technology to be used in
LAN, MAN and WAN architectures.
• 10 Gigabit Ethernet uses multimode optical fiber up to 300 meters and
single mode fiber up to 40 kilometers.
• Four implementations are the most common: 10GBase-SR, 10GBase-LR,
10GBase-EW, and 10GBase-X4.
ACCESS METHOD/ PROTOCOL OF ETHERNET
The access method of Ethernet is CSMA/CD.
Note : Refer CSMA/CD from MAC

COLLISION DETECTION IN ETHERNET


 As the Ethernet supports collision detection, senders are able to determine a
collision.
 At the moment an adaptor detects that its frame is colliding with another, it first makes sure to transmit a 32-bit jamming sequence along with the 64-bit preamble (96 bits in total) and then stops the transmission.
 These 96 bits are sometimes called Runt Frame.

FRAME FORMAT OF ETHERNET


The Ethernet frame is defined by the format given in the Fig.

 The 64-bit preamble allows the receiver to synchronize with the signal; it is
a sequence of alternating 0’s and 1’s.
 Both the source and destination hosts are identified with a 48-bit address.
 The packet type field serves as the demultiplexing key.
 Each frame contains up to 1500 bytes of data(Body).
 CRC is used for Error detection

Ethernet Addresses
 Every Ethernet host has a unique Ethernet address (48 bits – 6 bytes).
 An Ethernet address is written as a sequence of six numbers separated by colons.
 Each number corresponds to 1 byte of the 6-byte address and is given by a pair of hexadecimal digits.
 Eg: 8:0:2b:e4:b1:2 is the representation of
00001000 00000000 00101011 11100100 10110001 00000010
 Each frame transmitted on an Ethernet is received by every adaptor
connected to the Ethernet.
 In addition to unicast addresses, an Ethernet address consisting of all 1s is treated as the broadcast address.
 Similarly, an address that has the first bit set to 1 but is not the broadcast address is called a multicast address.
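
A small Python sketch of the address notation above (the helper names are just for illustration):

def mac_to_bytes(addr):
    """Parse a colon-separated Ethernet address such as '8:0:2b:e4:b1:2'."""
    return bytes(int(part, 16) for part in addr.split(":"))

def is_broadcast(mac):
    return mac == b"\xff" * 6            # all 1s: broadcast address

def is_multicast(mac):
    return bool(mac[0] & 0x01)           # group bit of the first byte set: multicast address

mac = mac_to_bytes("8:0:2b:e4:b1:2")
print(mac.hex(), is_broadcast(mac), is_multicast(mac))   # 08002be4b102 False False

Note that the "first bit" in the notes refers to the bit transmitted first on the wire, which corresponds to the low-order bit of the first byte in the written form.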
ADVANTAGES OF ETHERNET
Ethernets are successful because
 It is extremely easy to administer and maintain. There are no switches that
can fail, no routing or configuration tables that have to be kept up-to-date,
and it is easy to add a new host to the network.

 It is inexpensive: Cable is cheap, and the only other cost is the network
adaptor on each host.

5.11. WIRELESS LAN ( IEEE 802.11)


• Wireless communication is one of the fastest-growing technologies.
• The demand for connecting devices without the use of cables is increasing
everywhere.
• Wireless LANs can be found on college campuses, in office buildings, and
in many public areas.

ADVANTAGES OF WLAN / 802.11


1. Flexibility: Within radio coverage, nodes can access each other as radio
waves can penetrate even partition walls.
2. Planning : No prior planning is required for connectivity as long as
devices follow standard convention
3. Design : Allows to design and develop mobile devices.
4. Robustness : Wireless network can survive disaster. If the devices survive,
communication can still be established.

DISADVANTAGES OF WLAN / 802.11


1. Quality of Service : Low bandwidth (1 – 10 Mbps), higher error rates due
to interference, delay due to error correction and detection.
2. Cost : Wireless LAN adapters are costly compared to wired adapters.
3. Proprietary Solution : Due to the slow standardization process, many solutions are proprietary, which limits the homogeneity of operation.
4. Restriction : Individual countries have their own radio spectral policies.
This restricts the development of the technology
5. Safety and Security : Wireless Radio waves may interfere with other
devices. Eg; In a hospital, radio waves may interfere with high-tech
equipment.
TECHNOLOGY USED IN WLAN / 802.11
 WLAN’s uses Spread Spectrum (SS) technology.
 The idea behind Spread spectrum technique is to spread the signal over a
wider frequency band than normal, so as to minimize the impact of
interference from other devices.
 There are two types of Spread Spectrum:
• Frequency Hopping Spread Spectrum (FHSS)
• Direct Sequence Spread Spectrum (DSSS)

Frequency Hopping Spread Spectrum (FHSS)


 Frequency hopping is a spread spectrum technique that involves
transmitting the signal over a random sequence of frequencies.
 That is, first transmitting at one frequency, then a second, then a third, and
so on.
 The random sequence of frequencies is computed by a pseudorandom
number generator.
 The receiver uses the same algorithm as the sender and initializes it with the
same seed and hence is able to hop frequencies in sync with the transmitter
to correctly receive the frame.

Direct Sequence Spread Spectrum (DSSS)


 Each bit of data is represented by multiple bits in the transmitted signal.
 DSSS takes a user data stream and performs an XOR operation with a pseudo-random bit sequence.
 This pseudo-random sequence is called the chipping sequence.
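
A minimal sketch of the spreading idea, assuming an arbitrary 11-chip sequence (the sequence used here is just an example, not the one mandated by any standard):

CHIP_SEQ = [1, 0, 1, 1, 0, 1, 1, 1, 0, 0, 0]          # assumed chipping sequence (11 chips per bit)

def spread(bits):
    """Each data bit is XORed with every chip, producing len(CHIP_SEQ) chips per bit."""
    return [b ^ c for b in bits for c in CHIP_SEQ]

def despread(chips):
    """Recover each bit by XORing the received chips with the same sequence and taking a majority vote."""
    bits = []
    for i in range(0, len(chips), len(CHIP_SEQ)):
        votes = [x ^ c for x, c in zip(chips[i:i + len(CHIP_SEQ)], CHIP_SEQ)]
        bits.append(1 if sum(votes) > len(votes) // 2 else 0)
    return bits

data = [1, 0, 1]
assert despread(spread(data)) == data                 # the receiver recovers the original stream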
TOPOLOGY IN WLAN / 802.11
WLANs can be built with either of the following two topologies /architecture:
• Infra-Structure Network Topology
• Ad Hoc Network Topology

Infra-Structure Topology (AP based Topology)
• An infrastructure network is the network architecture for providing communication between wireless clients and wired network resources.
• The transition of data from the wireless to the wired medium occurs via a Base Station called an AP (Access Point).
• An AP and its associated wireless clients define the coverage area.

Ad-Hoc Topology (Peer-to-Peer Topology)
• An ad-hoc network is the architecture that is used to support mutual communication between wireless clients.
• Typically, an ad-hoc network is created spontaneously and does not support access to wired networks.
• An ad-hoc network does not require an AP.
ARCHITECURE OF WLAN / 802.11
• The standard defines two kinds of services: the Basic Service Set (BSS) and the
Extended Service Set (ESS).

Basic Service Set (BSS)


• IEEE 802.11 defines the basic service set (BSS) as the building blocks of a
wireless LAN.
• A basic service set is made of stationary or mobile wireless stations and an
optional central base station, known as the access point (AP).

Extended Service Set (ESS)


• An extended service set (ESS) is made up of two or more BSSs with APs.
• In this case, the BSSs are connected through a distribution system, which is a
wired or a wireless network.
• The distribution system connects the APs in the BSSs. The extended service set
uses two types of stations: mobile and stationary.
• The mobile stations are normal stations inside a BSS.
• The stationary stations are AP stations that are part of a wired LAN.

Station Types
IEEE 802.11 defines three types of stations based on their mobility in a wireless
LAN:
1. No-transition - A station with no-transition mobility is either stationary
(not moving) or moving only inside a BSS.
2. BSS-transition - A station with BSS-transition mobility can move from one BSS to another, but the movement is confined inside one ESS.
3. ESS-transition - A station with ESS-transition mobility can move from one ESS to another.
COLLISION AVOIDANCE IN WLAN / 802.11

In principle, a wireless protocol could follow the same algorithm as the Ethernet: wait until the link becomes idle before transmitting and back off should a collision occur. However, the hidden node and exposed node problems described below make simple carrier sensing and collision detection unreliable in a wireless network.

Hidden Node Problem

• Consider the situation shown in the Figure.


• Here A and C are both within range of B but not with each other.
• Suppose both A and C want to communicate with B and so they each send a
frame to B.
• A and C are unaware of each other since their signals do not carry that far.
• These two frames collide with each other at B, but neither A nor C is aware
of this collision.
• A and C are said to be hidden nodes with respect to each other.

Exposed Node Problem

• Consider four nodes A, B, C and D placed in a line, where each node is able to send and receive signals that reach just the nodes to its immediate left and right.
• For example, B can exchange frames with A and C but it cannot reach D, while C can reach B and D but not A.
• Suppose B is sending to A. Node C is aware of this communication because it hears B’s transmission.
• Suppose that, at the same time, C wants to transmit to node D.
• It would be a mistake, however, for C to conclude that it cannot transmit to anyone just because it can hear B’s transmission.
• This is not a problem, since C’s transmission to D will not interfere with A’s ability to receive from B.
• This is called the exposed node problem.
• Although B and C are exposed to each other’s signals, there is no
interference if B transmits to A while C transmits to D.

MULTIPLE ACCESS WITH COLLISION AVOIDANCE (MACA)


 MACA is used to avoid collisions caused by the hidden terminal problem
and exposed terminal problem.
 MACA uses short signaling packets called RTS and CTS for collision
avoidance.
 The RTS and CTS signals help a node determine which other nodes are within transmission range or are busy.
 When a sender wants to transmit, it sends a signal called Request-To-Send
(RTS).
 If the receiver allows the transmission, it replies to the sender a signal called
Clear-To-Send (CTS).
 Any node that sees the CTS frame knows that it is close to the receiver, and therefore cannot transmit for the period of time needed to send the data (which it learns from the length field carried in the CTS).
 Any node that sees the RTS frame but not the CTS frame is not close
enough to the receiver to interfere with it, and so is free to transmit.
 The Signaling packets RTS and CTS contains information such as
 sender address
 receiver address
 length of the data to be sent/received

 The receiver sends an ACK to the sender after successfully receiving a


frame.
 All nodes must wait for this ACK before trying to transmit.
 When two or more nodes detect an idle link and try to transmit an RTS frame at
the same time, their RTS frames will collide with each other.
 802.11 does not support collision detection; instead, the senders realize that a collision has happened when they do not receive the CTS frame after a period of time.
 Each node waits for a random amount of time before trying again.
 The amount of time a given node delays is defined by exponential back-off
algorithm.
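
The sender side of this exchange can be sketched as follows. This is only an outline, assuming a hypothetical node object that provides send_rts, wait_for_cts, send_data and wait_for_ack; the timing values are illustrative.

import random, time

def maca_send(node, receiver, data, slot=0.001, max_attempts=8):
    """Illustrative MACA-style sender: RTS/CTS handshake with exponential back-off."""
    for attempt in range(max_attempts):
        node.send_rts(receiver, length=len(data))       # Request-To-Send, carries the data length
        if node.wait_for_cts(timeout=2 * slot):         # Clear-To-Send received: medium reserved
            node.send_data(receiver, data)
            return node.wait_for_ack(timeout=10 * slot) # success only when the ACK arrives
        # No CTS: assume the RTS collided, back off for a random number of slots
        time.sleep(random.randint(0, 2 ** attempt - 1) * slot)
    return False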

DISTRIBUTION SYSTEM IN WLAN / 802.11


In a wireless network, nodes can move freely. Some nodes are allowed to roam while others are connected to a wired network infrastructure through access points (APs), and the access points are connected to each other by a so-called distribution system.

 Two nodes can communicate directly with each other if they are within
reach of each other,
 When the nodes are not within reach of each other, for example when node A wishes to communicate with node E, A first sends a frame to its access point (AP-1), which forwards the frame across the distribution system to AP-3, which finally transmits the frame to E.

Scanning Process in Distribution System


• The technique for selecting an Access Point is called scanning.
• Scanning will take place whenever a node joins the network as well as
when it is not satisfied with the current access point signal.
• It involves the following four steps:
 The node sends a Probe Request frame.
 All AP’s within reach reply with a Probe Response frame.
 The node selects one of the access points and sends that AP an
Association Request frame.
 The AP replies with an Association Response frame.

• There are two types of Scanning. They are


1. Active Scanning
2. Passive Scanning

Active Scanning
When node C moves from the cell serviced by AP-1 to the cell serviced by AP-2, it sends Probe frames as it moves, which eventually result in a Probe Response from AP-2. Since the node is actively searching for an access point, this is called active scanning.

Passive Scanning
APs periodically send a Beacon frame that advertises the capabilities of the access point, including the transmission rates it supports. This is called passive scanning; a node can switch to this AP based on the Beacon frame simply by sending an Association Request frame back to the access point.

FRAME FORMAT OF WLAN / 802.11

 Control field - contains three subfields :


• Type field - indicates whether the frame carries data, or is an RTS or CTS frame
• To DS - set to 1 when the frame is going to the distribution system
• From DS - set to 1 when the frame is coming from the distribution system
When both the DS bits are set to 0, it indicates that one node is sending directly to another. Addr 1 identifies the target node and Addr 2 identifies the source node.

When both the DS bits are set to 1, it indicates that one node is sending the message to another indirectly using the distribution system.

 Duration - contains the duration of time the medium is occupied by the nodes.
 Addr 1 - identifies the ultimate (final) destination of the frame
 Addr 2 - identifies the immediate sender (the one that forwarded the frame
from the distribution system to the ultimate destination)
 Addr 3 - identifies the intermediate destination (the one that accepted the
frame from a wireless node and forwarded it across the distribution
system)
 Addr 4 - identifies the original source
 Sequence Control - to avoid duplication of frames sequence number is
assigned to each frame
 Payload - Data from sender to receiver
 CRC - used for Error detection of the frame.
5.12 Data and Signals in Physical layer

One of the major roles of the Physical layer is to transfer data in the form of signals through a transmission medium. It doesn’t matter whether the data being sent is text, audio, image or video; everything is transferred in the form of signals. This is because data cannot be sent as it is over a transmission medium; it must be converted into a form that the medium can carry, and signals are what a transmission medium carries.

Analog and Digital

Both the data and the signal can be represented in form of analog and digital.

Analog and Digital Data:

Analog data is continuous data that keeps changing over time. For example, in an analog watch the hour, minute and second hands keep moving continuously, and you infer the time from their positions. A digital watch, on the other hand, shows discrete data such as 12:20 AM or 5:30 PM at a particular moment of time.

Analog and Digital Signals:

Similar to data, a signal can be analog or digital. An analog signal can have an infinite number of values in a given range, whereas a digital signal has a limited number of values in a given range. The following diagram shows analog and digital signals.

1. Analog Signals

An analog signal can be categorized as simple or composite.


1.1 Simple Analog signal

A simple analog signal can be represented in the form of a sine wave, as shown in the diagram above.

A simple analog signal is smooth, consistent and continuous: an arc above the time axis is followed by a similar arc below the time axis, and so on.

Parameters of simple Analog signal – Sine wave

There are three parameters that define a sine wave – peak amplitude, frequency and phase.

Peak amplitude: The absolute value of the highest intensity of the sine wave.

Frequency and Period: Period is the amount of time a signal takes to complete one cycle, it is denoted
by T. Frequency refers to the number of cycles in 1 second, it is denoted by f. They are inversely
proportional to each other which means f = 1/T.

Phase: Phase refers to the position of sine wave relative to the time 0. For example if the sine wave is at
its highest intensity at the time zero then the phase value for this sine wave is 90 degrees. Phase is
measured in degrees or radians.
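
A small Python sketch of these three parameters (the numeric values are only examples):

import math

def sine_sample(t, peak_amplitude, frequency, phase_deg=0.0):
    """s(t) = A * sin(2*pi*f*t + phase), with the phase given in degrees."""
    return peak_amplitude * math.sin(2 * math.pi * frequency * t + math.radians(phase_deg))

T = 1 / 50                                                             # period of a 50 Hz wave, since f = 1/T
print(sine_sample(0, peak_amplitude=5, frequency=50, phase_deg=90))    # 5.0: highest intensity at t = 0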

1.2 Composite Analog signal

Unlike a sine wave, which is smooth and consistent, a composite analog signal is not smooth and consistent; an arc above the time axis is not necessarily followed by a similar arc below the time axis. You can imagine a composite signal as a group of sine waves with different frequencies, amplitudes and periods.

Bandwidth: The range of frequencies in a composite signal is called its bandwidth. For example, if a composite signal contains waves with frequencies ranging from 2000 Hz to 4000 Hz, then the bandwidth of this composite signal is 4000 - 2000 = 2000 Hz. Bandwidth is measured in Hz.

2. Digital Signals

Similar to analog signals, data can be transmitted in the form of digital signals. For example, data that has been converted into machine language (a combination of 0s and 1s) such as 1001 can be represented in the form of digital signals, where 1 represents a high voltage and 0 represents a low voltage.

Bit Rate: The bit rate is measured in bits per second; it represents the number of bits sent in 1 second.

Bit Length: A bit length is the distance a bit occupies on the transmission medium.
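
As a quick illustration of the two terms (the propagation speed and bit rate below are assumed values):

propagation_speed = 2e8          # metres per second on the medium (assumed)
bit_rate = 10_000_000            # 10 Mbps
bit_duration = 1 / bit_rate      # how long one bit is placed on the medium, in seconds
bit_length = propagation_speed * bit_duration
print(bit_length, "metres")      # 20.0 metres of medium occupied by a single bit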
5.13 NETWORK PERFORMANCE
Network performance is measured in using: Bandwidth, Throughput, Latency, Jitter,
RoundTrip Time

BANDWIDTH
 The bandwidth of a network is given by the number of bits that can be transmitted
over the network in a certain period of time.
 Bandwidth can be measured in two different values: bandwidth in hertz and
bandwidth in bits per second.

Bandwidth in Hertz
o Bandwidth in hertz refers to the range of frequencies contained in a composite
signal or the range of frequencies a channel can pass.
o For example, we can say the bandwidth of a subscriber telephone line is 4 kHz.

Bandwidth in Bits per Seconds


o Bandwidth in Bits per Seconds refers to the number of bits transmitted per
second.
o For example, the bandwidth of a network may be a maximum of 100 Mbps. This means that this network can send at most 100 Mbps.

Relationship
o There is an explicit relationship between the bandwidth in hertz and bandwidth in
bits per second.
o Basically, an increase in bandwidth in hertz means an increase in bandwidth in
bits per second.

THROUGHPUT
 Throughput is a measure of how fast we can actually send data through a network.
 Bandwidth in bits per second and throughput may seem to be same, but they are
different.
 A link may have a bandwidth of B bps, but we can only send T bps through this link.
(T always less than B).
 In other words, the bandwidth is a potential measurement of a link; the throughput
is an actual measurement of how fast we can send data.
 For example, we may have a link with a bandwidth of 1 Mbps, but the devices
connected to the end of the link may handle only 200 kbps. This means that we cannot
send more than 200 kbps through this link.
Problem :
A network with bandwidth of 10 Mbps can pass only an average of 12,000 frames per
minute with each frame carrying an average of 10,000 bits. What is the throughput
of this network?
Solution
We can calculate the throughput as
Throughput = (12,000 × 10,000) / 60 = 2 Mbps
The throughput is almost one-fifth of the bandwidth (10 Mbps) in this case.
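
The same calculation as a quick Python check:

frames_per_minute = 12_000
bits_per_frame = 10_000
throughput = frames_per_minute * bits_per_frame / 60    # bits per second
print(throughput / 1e6, "Mbps")                         # 2.0 Mbps, one-fifth of the 10 Mbps bandwidth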


LATENCY (DELAY)
 The latency or delay defines how long it takes for an entire message to travel from
one end of a network to the other.
 Latency is made up of four components: Propagation time, Transmission time,
Queuing time and Processing delay.

Propagation Time
o Propagation time measures the time required for a bit to travel from the source to
the destination.
o The propagation time is calculated by dividing the distance by the propagation
speed.
o The propagation speed of electromagnetic signals depends on the medium and on
the frequency of the signal.

Transmission Time
o In data communications we don’t send just 1 bit, we send a message.
o The first bit may take a time equal to the propagation time to reach its destination.
o The last bit also may take the same amount of time.
o However, there is a time between the first bit leaving the sender and the last bit
arriving at the receiver.
o The first bit leaves earlier and arrives earlier.
o The last bit leaves later and arrives later.
o The transmission time of a message depends on the size of the message and the
bandwidth of the channel.

Queuing Time
o Queuing time is the time needed for each intermediate or end device to hold the
message before it can be processed.
o The queuing time is not a fixed factor. It changes with the load imposed on the
network. When there is heavy traffic on the network, the queuing time increases.
o An intermediate device, such as a router, queues the arrived messages and
processes them one by one.
o If there are many messages, each message will have to wait.

Processing Delay
o Processing delay is the time that the nodes take to process the packet header.
o Processing delay is a key component in network delay.
o During processing of a packet, nodes may check for bit-level errors in the packet that
occurred during transmission as well as determining where the packet's next
destination is.
Bandwidth - Delay Product
o Bandwidth and delay are two performance metrics of a link.
o The bandwidth-delay product defines the number of bits that can fill the
link.
o This measurement is important if we need to send data in bursts and wait for the
acknowledgment of each burst before sending the next one.
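
A quick illustration of the bandwidth-delay product (the link values are assumed):

bandwidth = 5_000_000        # 5 Mbps link (assumed)
one_way_delay = 0.020        # 20 ms (assumed)
bits_in_flight = bandwidth * one_way_delay
print(bits_in_flight)        # 100000.0 bits can fill the link at any instant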

JITTER
o Another performance issue that is related to delay is jitter.
o Jitter is a problem that if different packets of data encounter different delays and the
application using the data at the receiver site is time-sensitive (audio and video data, for
example).
o If the delay for the first packet is 20 ms, for the second is 45 ms, and for the third is 40 ms,
then the real-time application that uses the packets endures jitter.

ROUND-TRIP TIME (RTT)


o RTT refers to how long it takes to send a message from one end of a network to the
other and back, rather than the one-way latency. This is called as round-trip time
(RTT) of the network.

SOLVED PROBLEMS – PERFORMANCE


Problem 1:
What is the propagation time if the distance between the two points is 12,000 km? Assume the
propagation speed to be 2.4 × 10^8 m/s.
Solution :
Propagation time = (12,000 × 1000) / (2.4 × 10^8) = 50 ms
Problem 2:
What are the propagation time and the transmission time for a 2.5-KB (kilobyte) message (an
email) if the bandwidth of the network is 1 Gbps? Assume that the distance between the sender
and the receiver is 12,000 km and that light travels at 2.4 × 10^8 m/s.
Solution:
Propagation time = (12,000 × 1000) / (2.4 × 10^8) = 50 ms
Transmission time = (2500 × 8) / 10^9 = 0.02 ms
Problem 3:
What are the propagation time and the transmission time for a 5-MB (megabyte) message (an
image) if the bandwidth of the network is 1 Mbps? Assume that the distance between the
sender and the receiver is 12,000 km and that light travels at 2.4 × 10^8 m/s.
Solution:
Propagation time = (12,000 × 1000) / (2.4 × 10^8) = 50 ms
Transmission time = (5,000,000 × 8) / 10^6 = 40 s
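
The two formulas used in the problems above can be wrapped in a small helper for checking the answers:

def propagation_time(distance_m, speed_mps):
    return distance_m / speed_mps            # seconds for one bit to cross the link

def transmission_time(message_bits, bandwidth_bps):
    return message_bits / bandwidth_bps      # seconds to push the whole message onto the link

# Problem 2: 2.5-KB e-mail, 1 Gbps link, 12,000 km, 2.4 x 10^8 m/s
print(propagation_time(12_000 * 1000, 2.4e8))   # 0.05 s  = 50 ms
print(transmission_time(2500 * 8, 1e9))         # 2e-05 s = 0.02 ms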

5.14 TRANSMISSION MEDIA


o Transmission media is a communication channel that carries the information from
the sender to the receiver.
o Data is transmitted through the electromagnetic signals.
o The main functionality of the transmission media is to carry the information in
the form of bits (Either as Electrical signals or Light pulses).
o It is a physical path between transmitter and receiver in data communication.
o The characteristics and quality of data transmission are determined by the
characteristics of medium and signal.
o Transmission media is of two types : Guided Media (Wired) and UnGuided
Media (wireless).
o In guided (wired) media, medium characteristics are more important whereas, in
unguided (wireless) media, signal characteristics are more important.
o Different transmission media have different properties such as bandwidth, delay,
cost and ease of installation and maintenance.
o The transmission media is available in the lowest layer of the OSI reference
model, i.e., Physical layer.

FACTORS FOR DESIGNING THE TRANSMISSION MEDIA


o Bandwidth: All other factors remaining constant, the greater the bandwidth of a medium, the higher the data transmission rate of a signal.
o Transmission impairment: Transmission impairment occurs when the received signal is not identical to the transmitted one. The quality of the signal is degraded by transmission impairment.
o Interference: An interference is defined as the process of disrupting a signal when
it travels over a communication medium on the addition of some unwanted signal.
CAUSES OF TRANSMISSION IMPAIRMENT

o Attenuation: Attenuation means the loss of energy, i.e., the strength of the signal
decreases with increasing the distance which causes the loss of energy.
o Distortion: Distortion occurs when there is a change in the shape of the signal. This type of distortion is typically seen in composite signals made up of different frequencies. Each frequency component has its own propagation speed, so the components reach the receiver at different times, which leads to delay distortion.
o Noise: When data travels over a transmission medium, some unwanted signal is added to it, which creates noise.

TYPES / CLASSES OF TRANSMISSION MEDIA

GUIDED MEDIA
• It is defined as the physical medium through which the signals are transmitted.
• It is also known as Bounded media.
• Types of Guided media: Twisted Pair Cable, Coaxial Cable , Fibre Optic Cable

TWISTED PAIR CABLE

• Twisted pair is a physical media made up of a pair of cables twisted with each
other.
• A twisted pair cable is cheap as compared to other transmission media.
• Installation of the twisted pair cable is easy, and it is a lightweight cable.
• The frequency range for twisted pair cable is from 0 to 3.5KHz.
• A twisted pair consists of two insulated copper wires arranged in a regular spiral
pattern.
Unshielded Twisted Pair
An unshielded twisted pair is widely used in telecommunication.
Following are the categories of the unshielded twisted pair cable:
o Category 1: Supports low-speed data.
o Category 2: It can support up to 4 Mbps.
o Category 3: It can support up to 16 Mbps.
o Category 4: It can support up to 20 Mbps.
o Category 5: It can support up to 200 Mbps.

Advantages :
o It is cheap.
o Installation of the unshielded twisted pair is easy.
o It can be used for high-speed LAN.

Disadvantage:
o This cable can only be used for shorter distances because of attenuation.

Shielded Twisted Pair

A shielded twisted pair is a cable that contains the mesh surrounding the wire that
allows the higher transmission rate.
Advantages :
o The cost of the shielded twisted pair cable is not very high and not very low.
o Installation of STP is easy.
o It has a higher capacity as compared to unshielded twisted pair cable.
o Its shielding reduces interference and provides a higher data transmission rate.

Disadvantages:
o It is more expensive as compared to UTP and coaxial cable.
o It has a higher attenuation rate.

COAXIAL CABLE

o Coaxial cable(Coax) is a very commonly used transmission media, for example, TV


wire is usually a coaxial cable.
o The name of the cable is coaxial as it contains two conductors parallel to each
other.
o It has a higher frequency as compared to Twisted pair cable.
o The inner conductor of the coaxial cable is made up of copper, and the outer conductor is made up of copper mesh.
o A non-conductive insulating layer separates the inner conductor from the outer conductor.
o The inner core is responsible for the data transfer, whereas the copper mesh shields against EMI (electromagnetic interference).
o Common applications of coaxial cable are Cable TV networks and traditional
Ethernet LANs.

Coaxial Cable Standards


• Coaxial cables are categorized by their Radio Government (RG) ratings.
• Each RG number denotes a unique set of physical specifications, including the wire
gauge of the inner conductor, the thickness and type of the inner insulator, the
construction of the shield, and the size and type of the outer casing.
• Each cable defined by an RG rating is adapted for a specialized function.
Types of Coaxial cable :
1. Baseband transmission: It is defined as the process of transmitting a single
signal at high speed.
2. Broadband transmission: It is defined as the process of transmitting multiple
signals simultaneously.
Advantages :
o The data can be transmitted at high speed.
o It has better shielding as compared to twisted pair cable.
o It provides higher bandwidth.

Disadvantages :
o It is more expensive as compared to twisted pair cable.
o If any fault occurs in the cable causes the failure in the entire network.

FIBRE OPTIC CABLE


o Fibre optic cable is a cable that uses light signals for communication.
o A fibre optic cable holds optical fibres coated in plastic that are used to send data as pulses of light.
o The plastic coating protects the optical fibres from heat, cold, electromagnetic
interference from other types of wiring.
o Fibre optics provide faster data transmission than copper wires.

Basic elements of Fibre optic cable:


o Core: The optical fibre consists of a narrow strand of glass or plastic known as a core.
A core is a light transmission area of the fibre. The more the area of the core, the
more light will be transmitted into the fibre.
o Cladding: The concentric layer of glass is known as cladding. The main functionality
of the cladding is to provide the lower refractive index at the core interface as to
cause the reflection within the core so that the light waves are transmitted through
the fibre.
o Jacket: The protective coating consisting of plastic is known as a jacket. The main
purpose of a jacket is to preserve the fibre strength, absorb shock and extra fibre
protection.
Advantages:
o Greater Bandwidth
o Less signal attenuation
o Immunity to electromagnetic interference
o Resistance to corrosive materials
o Light weight
o Greater immunity to tapping

Disadvantages :
o Requires Expertise for Installation and maintenance
o Unidirectional light propagation.
o Higher Cost.

Propagation Modes of Fibre Optics


• Current technology supports two modes (multimode and single mode) for
propagating light along optical channels, each requiring fiber with different physical
characteristics.
• Multimode can be implemented in two forms: step-index or graded-index.

Multimode Propagation
• Multimode is so named because multiple beams from a light source move through
the core in different paths.
• How these beams move within the cable depends on the structure of the core.
Multimode Step-index fiber
• In multimode step-index fiber, the density of the core remains constant from the center to the edges.
• A beam of light moves through this constant density in a straight line until it reaches the interface of the core and the cladding.
• At the interface, there is an abrupt change due to a lower density; this alters the angle of the beam’s motion.
• The term step-index refers to the suddenness of this change, which contributes to the distortion of the signal as it passes through the fiber.

Multimode Graded-index fiber
• The multimode graded-index fiber decreases this distortion of the signal through the cable.
• The word index here refers to the index of refraction, and the index of refraction is related to density.
• A graded-index fiber, therefore, is one with varying densities.
• Density is highest at the center of the core and decreases gradually to its lowest at the edge.

Single-Mode Propagation

• Single-mode uses step-index fiber and a highly focused source of light that limits
beams to a small range of angles, all close to the horizontal.
• The single-mode fiber itself is manufactured with a much smaller diameter than that
of multimode fiber, and with substantially lower density (index of refraction).
• The decrease in density results in a critical angle that is close enough to 90° to make
the propagation of beams almost horizontal.
• In this case, propagation of different beams is almost identical, and delays are
negligible. All the beams arrive at the destination “together” and can be recombined
with little distortion to the signal.

UNGUIDED MEDIA
o An unguided transmission transmits the electromagnetic waves without using any
physical medium. Therefore it is also known as wireless transmission.
o In unguided media, air is the media through which the electromagnetic energy
can flow easily.
o Unguided transmission is broadly classified into three categories :
Radio Waves, Microwaves , Infrared

RADIO WAVES

o Radio waves are the electromagnetic waves that are transmitted in all the
directions of free space.
o Radio waves are omnidirectional, i.e., the signals are propagated in all the
directions.
o The range of frequencies of radio waves is from 3 kHz to 1 GHz.
o In the case of radio waves, the sending and receiving antenna are not aligned, i.e.,
the wave sent by the sending antenna can be received by any receiving antenna.
o An example of the radio wave is FM radio.

Applications of Radio waves:


o A Radio wave is useful for multicasting when there is one sender and many
receivers.
o An FM radio, television, cordless phones are examples of a radio wave.

Advantages of Radio waves:


o Radio transmission is mainly used for wide area networks and mobile cellular
phones.
o Radio waves cover a large area, and they can penetrate the walls.
o Radio transmission provides a higher transmission rate.

MICROWAVES

Microwaves are of two types - Terrestrial microwave & Satellite microwave

Terrestrial Microwave
o Terrestrial Microwave transmission is a technology that transmits the focused beam
of a radio signal from one ground-based microwave transmission antenna to another.
o Microwaves are the electromagnetic waves having the frequency in the range from
1GHz to 1000 GHz.
o Microwaves are unidirectional as the sending and receiving antenna is to be aligned,
i.e., the waves sent by the sending antenna are narrowly focused.
o In this case, antennas are mounted on towers to send a beam to another antenna that is kilometres away.
o It works on the line of sight transmission, i.e., the antennas mounted on the towers
are at the direct sight of each other.

Characteristics of Terrestrial Microwave:


o Frequency range: The frequency range of terrestrial microwave is from 4-6 GHz to
21-23 GHz.
o Bandwidth: It supports the bandwidth from 1 to 10 Mbps.
o Short distance: It is inexpensive for short distance.
o Long distance: It is expensive as it requires a higher tower for a longer distance.
o Attenuation: Attenuation means loss of signal. It is affected by environmental
conditions and antenna size.

Advantages of Terrestrial Microwave:


o Microwave transmission is cheaper than using cables.
o It is free from land acquisition as it does not require any land for the installation of
cables.
o Microwave transmission provides an easy communication in terrains as the
installation of cable in terrain is quite a difficult task.
o Communication over oceans can be achieved by using microwave transmission.

Disadvantages of Terrestrial Microwave:


o Eavesdropping.
o Out of phase signal
o Susceptible to weather condition
o Bandwidth limited

Satellite Microwave
o A satellite is a physical object that revolves around the earth at a known height.
o Satellite communication is more reliable nowadays as it offers more flexibility
than cable and fibre optic systems.
o We can communicate with any point on the globe by using
satellite communication.
o The satellite accepts the signal that is transmitted from the earth station, and it
amplifies the signal. The amplified signal is retransmitted to another earth station.
Advantages of Satellite Microwave:
o The coverage area of a satellite microwave is more than the terrestrial microwave.
o The transmission cost of the satellite is independent of the distance from the
centre of the coverage area.
o Satellite communication is used in mobile and wireless
communication applications.
o It is easy to install.
o It is used in a wide variety of applications such as weather forecasting, radio/TV
signal broadcasting, mobile communication, etc.

Disadvantages of Satellite Microwave:


o Satellite designing and development requires more time and higher cost.
o The Satellite needs to be monitored and controlled on regular periods so that it
remains in orbit.
o The life of the satellite is about 12-15 years. Due to this reason, another launch of
the satellite has to be planned before it becomes non-functional.

INFRARED WAVES

o An infrared transmission is a wireless technology used for communication over short


ranges.
o The frequency of the infrared in the range from 300 GHz to 400 THz.
o It is used for short-range communication such as data transfer between two cell
phones, TV remote operation, data transfer between a computer and cell phone and
devices that resides in the same closed area.

Characteristics of Infrared:
o It supports high bandwidth, and hence the data rate will be very high.
o Infrared waves cannot penetrate the walls. Therefore, the infrared communication
in one room cannot be interrupted by the nearby rooms.
o An infrared communication provides better security with minimum interference.
o Infrared communication is unreliable outside the building because the sun rays
will interfere with the infrared waves.
5.15 SWITCHING

o The technique of transferring the information from one computer network to


another network is known as switching.
o Switching in a computer network is achieved by using switches.
o A switch is a small hardware device which is used to join multiple computers together within one local area network (LAN).
o Switches are devices capable of creating temporary connections between two or
more devices linked to the switch.
o Switches are used to forward the packets based on MAC addresses.
o A Switch is used to transfer the data only to the device that has been addressed. It
verifies the destination address to route the packet appropriately.
o It is operated in full duplex mode.
o Unlike a hub, it does not broadcast every incoming frame to all ports, which conserves bandwidth.

Advantages of Switching:
o Switch increases the bandwidth of the network.
o It reduces the workload on individual PCs as it sends the information to only that
device which has been addressed.
o It increases the overall performance of the network by reducing the traffic on the
network.
o There will be less frame collision as switch creates the collision domain for each
connection.

Disadvantages of Switching:
o A Switch is more expensive than network bridges.
o A Switch cannot determine the network connectivity issues easily.
o Proper designing and configuration of the switch are required to handle multicast
packets.
Types of Switching Techniques

5.16 CIRCUIT SWITCHING

o Circuit switching is a switching technique that establishes a dedicated path


between sender and receiver.
o In the Circuit Switching Technique, once the connection is established then the
dedicated path will remain to exist until the connection is terminated.
o Circuit switching in a network operates in a similar way as the telephone works.
o A complete end-to-end path must exist before the communication takes place.
o In the circuit switching technique, when a user wants to send data, voice or video, a request signal is sent to the receiver, and the receiver sends back an acknowledgment to confirm the availability of the dedicated path. After the acknowledgment is received, the data is transferred over the dedicated path.
o Circuit switching is used in the public telephone network. It is used for voice transmission.
o Data is transferred at a fixed rate in circuit switching technology.

Phases in Circuit Switching


Communication through circuit switching has 3 phases:

1. Connection Setup / Establishment - In this phase, a dedicated circuit is established


from the source to the destination through a number of intermediate switching
centres. The sender and receiver transmits communication signals to request and
acknowledge establishment of circuits.

2. Data transfer - Once the circuit has been established, data and voice are transferred
from the source to the destination. The dedicated connection remains as long as the
end parties communicate.

3. Connection teardown / Termination - When data transfer is complete, the


connection is relinquished. The disconnection is initiated by any one of the user.
Disconnection involves removal of all intermediate links from the sender to the
receiver.
Advantages
• It is suitable for long continuous transmission, since a continuous transmission
route is established, that remains throughout the conversation.
• The dedicated path ensures a steady data rate of communication.
• No intermediate delays are found once the circuit is established. So, they are
suitable for real time communication of both voice and data transmission.

Disadvantages
• Circuit switching establishes a dedicated connection between the end parties. This
dedicated connection cannot be used for transmitting any other data, even if the data
load is very low.
• Bandwidth requirement is high even in cases of low data volume.
• There is underutilization of system resources. Once resources are allocated to a
particular connection, they cannot be used for other connections.
• Time required to establish connection may be high.
• It is more expensive than other switching techniques as a dedicated path is required
for each connection.
COMPARISON – CIRCUIT SWITCHING AND PACKET SWITCHING

CIRCUIT SWITCHING
• Connection oriented
• Ensures in-order delivery; no reordering is required
• A dedicated path exists for data transfer; all the packets take the same path
• Resources are allocated before data transfer
• Stream oriented; fixed bandwidth
• Reliable; no overheads
• Implemented at the physical layer
• Inefficient in terms of resource utilization
• Example - Telephone systems

PACKET SWITCHING: VIRTUAL CIRCUIT SWITCHING
• Connection oriented
• Ensures in-order delivery; no reordering is required
• A dedicated (virtual) path exists for data transfer; all the packets take the same path
• Resources are allocated on demand using the 1st packet
• Packet oriented; dynamic bandwidth
• Reliable; less overheads
• Implemented at the data link layer
• Provides better efficiency than circuit switched systems
• Examples - X.25, Frame Relay

PACKET SWITCHING: DATAGRAM SWITCHING
• Connectionless
• Packets may be delivered out of order; reordering is required
• No dedicated path exists for data transfer; the packets may not all take the same path
• No resources are allocated
• Packet oriented; dynamic bandwidth
• Unreliable; higher overheads
• Implemented at the network layer
• Provides better efficiency than message switched systems
• Example - Internet
Virtual LAN (VLAN)

• Virtual LAN (VLAN) is a concept in which we can divide the devices logically on layer 2
(data link layer).
• Generally, layer 3 devices divide the broadcast domain but the broadcast domain can be
divided by switches using the concept of VLAN.
• A broadcast domain is a network segment in which if a device broadcast a packet then all
the devices in the same broadcast domain will receive it.
• To forward out the packets to different VLAN (from one VLAN to another) or broadcast
domains, inter Vlan routing is needed.
VLAN ranges:
• VLAN 0, 4095: These are reserved VLANs which cannot be seen or used.
• VLAN 1: It is the default VLAN of switches. By default, all switch ports are in VLAN 1. This VLAN cannot be deleted or edited, but it can be used.
• VLAN 2-1001: This is the normal VLAN range. We can create, edit and delete these VLANs.
• VLAN 1002-1005: These are Cisco defaults for FDDI and Token Ring. These VLANs cannot be deleted.
• VLAN 1006-4094: This is the extended range of VLANs.

Configuration –
We can create a VLAN by assigning a VLAN ID and a VLAN name.
Switch(config)#vlan 2
Switch(config-vlan)#name accounts
Here, 2 is the VLAN ID and accounts is the VLAN name. Now, we assign the VLAN to the switch
ports, e.g. -
Switch(config)#int fa0/0
Switch(config-if)#switchport mode access
Switch(config-if)#switchport access vlan 2
Also, a range of switch ports can be assigned to the required VLANs.
Switch(config)#int range fa0/0-2
Switch(config-if-range)#switchport mode access
Switch(config-if-range)#switchport access vlan 2
By this, switch ports fa0/0, fa0/1 and fa0/2 will be assigned to VLAN 2.

Example –
Assign IP addresses 192.168.1.1/24, 192.168.1.2/24 and 192.168.2.1/24 to the PCs. Now, we will create VLAN 2 and VLAN 3 on the switch.
Switch(config)#vlan 2
Switch(config)#vlan 3
We have made VLANs but the most important part is to assign switch ports to the VLANs.
Switch(config)#int fa0/0
Switch(config-if)#switchport mode access
Switch(config-if) #switchport access Vlan 2

Switch(config)#int fa0/1
Switch(config-if)#switchport mode access
Switch(config-if) #switchport access Vlan 3

Switch(config)#int fa0/2
Switch(config-if)#switchport mode access
Switch(config-if) #switchport access Vlan 2
As seen, we have assigned Vlan 2 to fa0/0, fa0/2, and Vlan 3 to fa0/1.

VLANs features and benefits:

• Improved network security: VLANs can be used to separate network traffic and limit
access to specific network resources. This improves security by preventing unauthorized
access to sensitive data and network resources.
• Better network performance: By segregating network traffic into smaller logical
networks, VLANs can reduce the amount of broadcast traffic and improve network
performance.
• Simplified network management: VLANs allow network administrators to group devices
together logically, rather than physically, which can simplify network management tasks
such as configuration, troubleshooting, and maintenance.
• Flexibility: VLANs can be configured dynamically, allowing network administrators to
quickly and easily adjust network configurations as needed.
• Cost savings: VLANs can help reduce hardware costs by allowing multiple virtual networks
to share a single physical network infrastructure.
• Scalability: VLANs can be used to segment a network into smaller, more manageable groups
as the network grows in size and complexity.

Some of the key features of VLANs include:

• VLAN tagging: VLAN tagging is a way to identify and distinguish VLAN traffic from other network traffic. This is typically done by adding a VLAN tag (IEEE 802.1Q) to the Ethernet frame header (see the sketch after this list).

• VLAN membership: VLAN membership determines which devices are assigned to which
VLANs. Devices can be assigned to VLANs based on port, MAC address, or other criteria.
• VLAN trunking: VLAN trunking allows multiple VLANs to be carried over a single physical
link. This is typically done using a protocol such as IEEE 802.1Q.
• VLAN management: VLAN management involves configuring and managing VLANs,
including assigning devices to VLANs, configuring VLAN tags, and configuring VLAN
trunking.
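
To make the VLAN tagging idea above concrete, here is a minimal Python sketch that builds and parses the 4-byte IEEE 802.1Q tag (TPID 0x8100 followed by the priority, DEI and 12-bit VLAN ID); the field values chosen are only examples.

import struct

TPID = 0x8100                                    # EtherType value that marks an 802.1Q-tagged frame

def build_vlan_tag(vlan_id, priority=0, dei=0):
    """Pack the tag: 3-bit priority (PCP), 1-bit DEI and 12-bit VLAN ID after the TPID."""
    tci = (priority << 13) | (dei << 12) | (vlan_id & 0x0FFF)
    return struct.pack("!HH", TPID, tci)

def parse_vlan_tag(tag):
    tpid, tci = struct.unpack("!HH", tag)
    assert tpid == TPID, "not an 802.1Q tag"
    return {"priority": tci >> 13, "dei": (tci >> 12) & 1, "vlan_id": tci & 0x0FFF}

tag = build_vlan_tag(vlan_id=2, priority=5)
print(parse_vlan_tag(tag))                       # {'priority': 5, 'dei': 0, 'vlan_id': 2}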
Types of connections in VLAN –
There are three ways to connect devices on a VLAN, the type of connections are based on the
connected devices i.e. whether they are VLAN-aware(A device that understands VLAN formats
and VLAN membership) or VLAN-unaware(A device that doesn’t understand VLAN format and
VLAN membership).
1. Trunk Link –
All devices connected to a trunk link must be VLAN-aware. All frames on a trunk link carry a special header and are called tagged frames.
2. Access link –
It connects VLAN-unaware devices to a VLAN-aware bridge. All frames on the access link
must be untagged.
3. Hybrid link –
It is a combination of the Trunk link and Access link. Here both VLAN-unaware and VLAN-
aware devices are attached and it can have both tagged and untagged frames.
Advantages –
• Performance –
Network traffic is full of broadcasts and multicasts. VLANs reduce the need to send such traffic to unnecessary destinations. For example, if the traffic is intended for 2 users but 10 devices are present in the same broadcast domain, all 10 will receive the traffic, wasting bandwidth; if we create VLANs, the broadcast or multicast packet goes to the intended users only.
• Formation of virtual groups –
As there are different departments in every organization namely sales, finance etc., VLANs
can be very useful in order to group the devices logically according to their departments.
• Security –
In a flat network, sensitive data may be broadcast and accessed by outsiders; by creating VLANs, we can control broadcast domains, set up firewalls and restrict access. Also, VLANs can be used to inform the network manager of an intrusion. Hence, VLANs greatly enhance network security.
• Flexibility –
VLANs provide the flexibility to add or remove hosts as needed.
• Cost reduction –
VLANs can be used to create broadcast domains, which eliminates the need for expensive routers. By using VLANs, a large broadcast domain can be divided into several smaller broadcast domains, which are easier to handle than one big broadcast domain.
Disadvantages of VLAN
1. Complexity: VLANs can be complex to configure and manage, particularly in large or
dynamic cloud computing environments.
2. Limited scalability: VLANs are limited by the number of available VLAN IDs, which can be
a constraint in larger cloud computing environments.
3. Limited security: VLANs do not provide complete security and can be compromised by
malicious actors who are able to gain access to the network.
4. Limited interoperability: VLANs may not be fully compatible with all types of network
devices and protocols, which can limit their usefulness in cloud computing environments.
5. Limited mobility: VLANs may not support the movement of devices or users between
different network segments, which can limit their usefulness in mobile or remote cloud
computing environments.
6. Cost: Implementing and maintaining VLANs can be costly, especially if specialized hardware
or software is required.
7. Limited visibility: VLANs can make it more difficult to monitor and troubleshoot network
issues, as traffic is isolated in different segments.

Real-Time Applications of VLAN


Virtual LANs (VLANs) are widely used in cloud computing environments to improve network
performance and security. Here are a few examples of real-time applications of VLANs:
1. Voice over IP (VoIP) : VLANs can be used to isolate voice traffic from data traffic, which
improves the quality of VoIP calls and reduces the risk of network congestion.
2. Video Conferencing : VLANs can be used to prioritize video traffic and ensure that it
receives the bandwidth and resources it needs for high-quality video conferencing.
3. Remote Access : VLANs can be used to provide secure remote access to cloud-based
applications and resources, by isolating remote users from the rest of the network.
4. Cloud Backup and Recovery : VLANs can be used to isolate backup and recovery traffic,
which reduces the risk of network congestion and improves the performance of backup and
recovery operations.
5. Gaming : VLANs can be used to prioritize gaming traffic, which ensures that gamers receive
the bandwidth and resources they need for a smooth gaming experience.
6. IoT : VLANs can be used to isolate Internet of Things (IoT) devices from the rest of the
network, which improves security and reduces the risk of network congestion.
