CS3591 Notes
1. Delivery: The system must deliver data to the correct destination. Data must be received by
the intended device or user.
2. Accuracy: The system must deliver the data accurately. Data that have been altered in
transmission and left uncorrected are unusable.
3. Timeliness: The system must deliver data in a timely manner. Data delivered late are useless.
4. Jitter: Jitter refers to the variation in the packet arrival time. It is the uneven delay in
the delivery of audio or video packets.
1.1.1 Components
3. Receiver: The receiver is the device that receives the message. It can be a computer,
workstation, telephone handset, television, and so on.
4. Transmission medium: The transmission medium is the physical path by which a message
travels from sender to receiver. Some examples of transmission media include twisted-pair
wire, coaxial cable, fiber optic cable, and radio waves.
Information today comes in different forms such as text, numbers, images, audio, and
video.
Text
In data communications, text is represented as a bit pattern, a sequence of bits (0s or 1s).
Different sets of bit patterns have been designed to represent text symbols. Each set is called a
code, and the process of representing symbols is called coding.
Unicode uses 32 bits to represent a symbol or character used in any language in the
world. The American Standard Code for Information Interchange (ASCII) now constitutes the
first 128 characters in Unicode and is also referred to as Basic Latin.
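The relation between ASCII and Unicode can be checked directly. The short Python sketch below (used here only for illustration) prints the bit pattern of a few characters and confirms that each ASCII value equals its Unicode code point:

```python
# ASCII occupies the first 128 code points of Unicode (Basic Latin),
# so a character's ASCII value and its Unicode code point coincide.
for ch in ["A", "a", "0"]:
    code = ord(ch)                        # Unicode code point
    assert ch.encode("ascii")[0] == code  # same value as the ASCII byte
    print(ch, code, format(code, "08b"))  # character, code, 8-bit pattern
```

For example, the first line printed is `A 65 01000001`.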
Numbers
Numbers are also represented by bit patterns. However, a code such as ASCII is not used
to represent numbers; the number is directly converted to a binary number to simplify
mathematical operations.
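The difference between direct binary conversion and character coding can be seen in a small Python sketch (the number 75 is an arbitrary example):

```python
# A number is converted directly to binary; no character code is involved.
n = 75
print(format(n, "b"))   # 1001011
# Contrast with ASCII-coding the two digit characters "7" and "5":
print([format(b, "08b") for b in "75".encode("ascii")])
```

The second line prints `['00110111', '00110101']`: coding the digits as characters takes 16 bits, while the direct binary form takes only 7.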
Images
Images are also represented by bit patterns. In its simplest form, an image is composed
of a matrix of pixels (picture elements), where each pixel is a small dot. The size of the
pixel depends on the resolution. The size and the value of the pattern depend on the image.
For an image made of only black-and-white dots (e.g., a chessboard), a 1-bit pattern is enough
to represent a pixel. If an image is not made of pure white and pure black pixels, you can
increase the size of the bit pattern to include gray scale.
There are several methods to represent color images. One method is called RGB, so called
because each color is made of a combination of three primary colors: red, green, and blue.
The intensity of each color is measured, and a bit pattern is assigned to it. Another method is
called YCM, in which a color is made of a combination of three other primary colors: yellow,
cyan, and magenta.
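One common convention (assumed here, not mandated by the notes) allocates 8 bits to the measured intensity of each primary color, giving a 24-bit pattern per RGB pixel. A minimal Python sketch:

```python
# Assumed convention: 8 bits per primary intensity, 24 bits per RGB pixel.
def rgb_to_bits(r, g, b):
    """Pack three 8-bit red/green/blue intensities into one 24-bit pattern."""
    return (r << 16) | (g << 8) | b

print(format(rgb_to_bits(255, 0, 0), "024b"))  # pure red: 111111110000000000000000
```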
Audio
Audio refers to the recording or broadcasting of sound or music. Audio is by nature different
from text, numbers, or images.
Video
Video refers to the recording or broadcasting of a picture or movie. Video can either be
produced as a continuous entity (e.g., by a TV camera), or it can be a combination of
images, each a discrete entity, arranged to convey the idea of motion.
Simplex
In simplex mode, the communication is unidirectional, as on a one-way street. Only one
of the two devices on a link can transmit; the other can only receive (see Figure 1.2a).
Keyboards and traditional monitors are examples of simplex devices.
Half-Duplex
In half-duplex mode, each station can both transmit and receive, but not at the same time.
When one device is sending, the other can only receive, and vice versa (see Figure 1.2b).
Walkie-talkies and CB (citizens band) radios are both half-duplex systems.
The half-duplex mode is used in cases where there is no need for communication
in both directions at the same time.
Advantage of Half-duplex mode:
o In half-duplex mode, both devices can send and receive data, and each can
utilize the entire bandwidth of the communication channel while it is
transmitting.
Full-Duplex
In full-duplex mode (also called duplex), both stations can transmit and receive simultaneously
(see Figure 1.2c). The full-duplex mode is like a two-way street with traffic flowing in both
directions at the same time. One common example of full-duplex communication is the
telephone network. When two people are communicating by a telephone line, both can talk and
listen at the same time. The full-duplex mode is used when communication in both directions
is required all the time.
1.2 NETWORKS
A network is a set of devices (often referred to as nodes) connected by communication links.
A node can be a computer, printer, or any other device capable of sending and/or receiving data
generated by other nodes on the network.
Distributed Processing
Most networks use distributed processing, in which a task is divided among multiple
computers. Instead of one single large machine being responsible for all aspects of a
process, separate computers (usually a personal computer or workstation) handle a
subset.
Performance
Performance can be measured in many ways, including transit time and response time.
Transit time is the amount of time required for a message to travel from one device to another.
Response time is the elapsed time between an inquiry and a response. The performance of a
network depends on a number of factors, including the number of users,
the type of transmission medium, the capabilities of the connected hardware, and the
efficiency of the software. Performance is often evaluated by two networking metrics:
throughput and delay. Throughput is an actual measurement of how fast data can be
transmitted. Latency (delay) is the time required for a message to arrive completely at the
destination from the source. We often need more throughput and less delay. However, these two criteria are
often contradictory. If we try to send more data to the network, we may increase throughput
but we increase the delay because of traffic congestion in the network.
Reliability
In addition to accuracy of delivery, network reliability is measured by the frequency of
failure, the time it takes a link to recover from a failure, and the network's robustness in
a catastrophe.
Security
Network security issues include protecting data from unauthorized access, protecting
data from damage and loss, and implementing policies and procedures for
recovery from breaches and data losses.
The term physical topology refers to the way in which a network is laid out physically. Two or
more devices connect to a link; two or more links form a topology. The topology of a network
is the geometric representation of the relationship of all the links and linking devices (usually
called nodes) to one another. There are four basic topologies possible: mesh, star, bus, and ring
(see Figure 1.4).
Mesh Topology
Star Topology
• In a star topology, each device has a dedicated point-to-point link
only to a central controller, usually called a hub.
• The devices are not directly linked to one another.
• The controller/hub acts as an exchange.
• If one device wants to send data to another, it sends the data to the
controller/hub, which then relays the data to the other connected device.
Bus Topology
Hybrid Topology
• Hybrid Topology is a combination of one or more basic topologies.
• For example, if one department in an office uses ring topology and the other
departments use star and bus topologies, then connecting these topologies will
result in a Hybrid Topology.
• Hybrid Topology inherits the advantages and disadvantages of the topologies
included.
Advantages of Hybrid Topology
1. Reliable, as error detecting and troubleshooting is easy.
2. Effective.
3. Scalable, as size can be increased easily.
4. Flexible.
Disadvantages of Hybrid Topology
1. Complex in design.
2. Costly.
Switched WAN
A switched WAN is a network with more than two ends. It is used in the backbone of a
global communications network today. Figure 1.10 shows an example of a switched WAN.
Types of Internetwork
Extranet
An extranet is used for information sharing. Access to the extranet is restricted to only
those users who have login credentials. An extranet is the lowest level of internetworking.
It can be categorized as a MAN, WAN, or other computer network. An extranet cannot consist
of a single LAN; it must have at least one connection to an external network.
Intranet
An intranet belongs to an organization and is only accessible by the organization's
employees or members. The main aim of the intranet is to share information and resources
among the organization's employees. An intranet provides the facility to work in groups
and for teleconferences.
At the second level, there are smaller networks, called provider networks, that use the services
of the backbones for a fee. The provider networks are connected to backbones and sometimes
to other provider networks. The customer networks are networks at the edge of the Internet that
actually use the services provided by the Internet. They pay fees to provider networks for
receiving services.
Backbones and provider networks are also called Internet Service Providers (ISPs). The
backbones are often referred to as international ISPs; the provider networks are often referred
to as national or regional ISPs.
Today most residences and small businesses have telephone service, which means they are
connected to a telephone network. Because most telephone networks have already connected
themselves to the Internet, one option for residences and small businesses to connect to the
Internet is to change the voice line between the residence or business and the telephone center
to a point-to-point WAN. This can be done in two ways.
❏ Dial-up service. The first solution is to add to the telephone line a modem that converts
data to voice signals and vice versa. The software installed on the computer dials the ISP and
imitates making a telephone connection. Unfortunately, the dial-up service is very slow, and
when the line is used for an Internet connection, it cannot be used for a telephone (voice)
connection. It is only useful for small residences and businesses with occasional connection
to the Internet.
❏ DSL Service. Since the advent of the Internet, some telephone companies have upgraded
their telephone lines to provide higher-speed Internet services to residences or small businesses.
The digital subscriber line (DSL) service also allows the line to be used simultaneously for
voice and data communications.
More and more residents over the last two decades have begun using cable TV services instead
of antennas to receive TV broadcasting. The cable companies have been upgrading their cable
networks and connecting to the Internet. A residence or a small business can be connected to
the Internet by using this service. It provides a higher-speed connection, but the speed varies
depending on the number of neighbors that use the same cable.
A large organization or a large corporation can itself become a local ISP and be connected to
the Internet. This can be done if the organization or the corporation leases a high-speed WAN
from a carrier provider and connects itself to a regional ISP. For example, a large university
with several campuses can create an internetwork and then connect the internetwork to the
Internet.
1. Application layer
2. Transport Layer (TCP/UDP)
3. Network Layer
4. Datalink Layer
5. Physical Layer
1.5.1 Layered Architecture
To show how the layers in the TCP/IP protocol suite are involved in communication
between two hosts, we assume that we want to use the suite in a small internet made up of three
LANs (links), each with a link-layer switch. We also assume that the links are connected by
one router, as shown in Figure 1.18 (on next page). Let us assume that computer A
communicates with computer B.
As Figure 1.18 shows, we have five communicating devices in this communication:
source host (computer A), the link-layer switch in link 1, the router, the link-layer switch in
link 2, and the destination host (computer B). Each device is involved with a set of layers
depending on the role of the device in the internet. The two hosts are involved in all five layers.
After understanding the concept of logical communication, we are ready to briefly discuss the
duty of each layer.
Application Layer
The application layer incorporates the functions of the top three OSI layers.
The application layer is the topmost layer in the TCP/IP model.
It is responsible for handling high-level protocols and issues of representation.
This layer allows the user to interact with the application.
When one application layer protocol wants to communicate with another
application layer protocol, it forwards its data to the transport layer.
Protocols such as FTP, HTTP, SMTP, POP3, etc., running in the application layer
provide services to other programs running on top of the application layer.
Transport Layer
The transport layer is responsible for the reliability, flow control, and error
control of data being sent over the network.
The two protocols used in the transport layer are the User Datagram Protocol and
the Transmission Control Protocol.
o UDP – UDP provides connectionless service and end-to-end delivery of
transmissions. It is an unreliable protocol: it can detect errors (via its
checksum) but does not report or recover from them.
o TCP – TCP provides full transport-layer services to applications. TCP is
a reliable protocol, as it detects errors and retransmits damaged segments.
Network Layer
The network layer is the third layer of the TCP/IP model.
The main responsibility of the network layer is to send packets from any
network and ensure that they arrive at the destination irrespective of the route
they take. The network layer handles the transfer of information across multiple
networks through routers and gateways.
The IP protocol is used in this layer, and it is the most significant part of the entire
TCP/IP suite.
Data Link Layer
We have seen that an internet is made up of several links (LANs and WANs)
connected by routers. When the next link to travel is determined by the router, the
data-link layer is responsible for taking the datagram and moving it across the link.
Physical Layer
The physical layer is responsible for carrying individual bits in a frame across the
link.
The physical layer is the lowest level in the TCP/IP protocol suite.
The communication between two devices at the physical layer is still a logical
communication because there is another hidden layer, the transmission media, under
the physical layer.
1.6.1 Application Layer
This is the only layer that directly interacts with data from the user. Software
applications like web browsers and email clients rely on the application layer to initiate
communications. But it should be made clear that client software applications are not part of
the application layer; rather the application layer is responsible for the protocols and data
manipulation that the software relies on to present meaningful data to the user. Application
layer protocols include HTTP as well as SMTP (Simple Mail Transfer Protocol is one of the
protocols that enables email communications).
1.6.2 Presentation Layer
This layer is primarily responsible for preparing data so that it can be used by the
application layer; in other words, layer 6 makes the data presentable for applications to
consume. The presentation layer is responsible for translation, encryption, and compression of
data.
Finally the presentation layer is also responsible for compressing data it receives from
the application layer before delivering it to layer 5. This helps improve the speed and efficiency
of communication by minimizing the amount of data that will be transferred.
1.6.3 Session Layer
This is the layer responsible for opening and closing communication between the two
devices. The time between when the communication is opened and closed is known as the
session. The session layer ensures that the session stays open long enough to transfer all the
data being exchanged, and then promptly closes the session in order to avoid wasting resources.
1.6.4 Transport Layer
Layer 4 is responsible for end-to-end communication between the two devices. This
includes taking data from the session layer and breaking it up into chunks called segments
before sending it to layer 3. The transport layer on the receiving device is responsible for
reassembling the segments into data the session layer can consume.
The transport layer is also responsible for flow control and error control. Flow control
determines an optimal speed of transmission to ensure that a sender with a fast connection does
not overwhelm a receiver with a slow connection. The transport layer performs error control
on the receiving end by ensuring that the data received is complete, and requesting a
retransmission if it isn’t.
1.6.5 Network Layer
The network layer works for the transmission of data from one host to the other
located in different networks. It also takes care of packet routing i.e. selection of the shortest
path to transmit the packet, from the number of routes available. The sender & receiver’s IP
addresses are placed in the header by the network layer.
1. Routing: The network layer protocols determine which route is suitable from source
to destination. This function of the network layer is known as routing.
2. Logical Addressing: In order to identify each device on an internetwork uniquely, the
network layer defines an addressing scheme. The sender & receiver’s IP addresses are
placed in the header by the network layer. Such an address distinguishes each device
uniquely and universally.
1.6.6 Data Link Layer
The data link layer is responsible for the node-to-node delivery of the message. The main
function of this layer is to make sure data transfer is error-free from one node to another, over
the physical layer. When a packet arrives in a network, it is the responsibility of DLL to
transmit it to the Host using its MAC address.
Data Link Layer is divided into two sublayers: Logical Link Control (LLC) and Media
Access Control (MAC).
The packet received from the Network layer is further divided into frames depending on
the frame size of the NIC (Network Interface Card). The DLL also encapsulates the
sender's and receiver’s MAC addresses in the header.
1.6.7 Physical Layer
This layer includes the physical equipment involved in the data transfer, such as the
cables and switches. This is also the layer where the data gets converted into a bit stream,
which is a string of 1s and 0s. The physical layer of both devices must also agree on a signal
convention so that the 1s can be distinguished from the 0s on both devices.
Summary of Layers
COMPARISON - OSI MODEL AND TCP/IP MODEL
A socket is one endpoint of a two-way communication link between two programs
running on the network. The socket mechanism provides a means of inter-process
communication (IPC) by establishing named contact points between which the communication
takes place.
Just as a pipe is created using the ‘pipe’ system call, a socket is created using the ‘socket’
system call. The socket provides a bidirectional FIFO communication facility over the network.
A socket connected to the network is created at each end of the communication. Each socket
has a specific address. This address is composed of an IP address and a port number.
Sockets are generally employed in client-server applications. The server creates a socket,
attaches it to a network port address, then waits for the client to contact it. The client creates
a socket and then attempts to connect to the server socket. When the connection is established,
transfer of data takes place.
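The sequence described above (the server creates a socket, attaches it to a port, and waits; the client creates a socket and connects; data is then transferred) can be sketched in Python on the loopback interface. The port is chosen by the OS, and the echoed message is an arbitrary example:

```python
import socket
import threading

# Server side: create a socket, attach it to a local port, and wait for a client.
server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(("127.0.0.1", 0))      # port 0: let the OS assign a free port
server.listen(1)
host, port = server.getsockname()  # the server's local socket address

def serve():
    conn, client_addr = server.accept()  # client_addr is the remote socket address
    data = conn.recv(1024)
    conn.sendall(b"echo: " + data)       # transfer data back to the client
    conn.close()

t = threading.Thread(target=serve)
t.start()

# Client side: create a socket and attempt to connect to the server's socket.
client = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
client.connect((host, port))
client.sendall(b"hello")
reply = client.recv(1024)
print(reply)                       # b'echo: hello'
client.close()
t.join()
server.close()
```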
How can a client or a server find a pair of socket addresses for communication? The situation
is different for each site.
Server Site
The server needs a local (server) and a remote (client) socket address for communication.
Local Socket Address The local (server) socket address is provided by the operating system.
The operating system knows the IP address of the computer on which the server process is
running. The port number of a server process, however, needs to be assigned. If the server
process is a standard one defined by the Internet authority, a port number is already assigned
to it. When a server starts running, it knows the local socket address.
Remote Socket Address The remote socket address for a server is the socket address of the
client that makes the connection. Because the server can serve many clients, it does not know
beforehand the remote socket address for communication. The server can find this socket
address when a client tries to connect to the server. The client socket address, which is
contained in the request packet sent to the server, becomes the remote socket address that is
used for responding to the client.
Client Site
The client also needs a local (client) and a remote (server) socket address for communication.
Local Socket Address The local (client) socket address is also provided by the operating
system. The operating system knows the IP address of the computer on which the client is
running. The port number, however, is a 16-bit temporary integer that is assigned to a client
process each time the process needs to start the communication. The port number must be
assigned from a set of integers defined by the Internet authority and called the
ephemeral (temporary) port numbers. The operating system needs to guarantee that
the new port number is not used by any other running client process.
Remote Socket Address Finding the remote (server) socket address for a client, however, needs
more work. When a client process starts, it should know the socket address of the server it
wants to connect to. We will have two situations in this case.
Sometimes, the user who starts the client process knows both the server port number
and IP address of the computer on which the server is running. This usually occurs in situations
when we have written client and server applications and we want to test them
Although each standard application has a well-known port number, most of the time,
we do not know the IP address. This happens in situations such as when we need to contact a
web page, send an e-mail to a friend, or copy a file from a remote site. In these situations, the
server has a name, an identifier that uniquely defines the server process. Examples of these
identifiers are URLs, such as www.xxx.yyy, or e-mail addresses, such as [email protected].
The client process should now change this identifier (name) to the corresponding server socket
address.
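The last step (changing the server's name into the corresponding socket address) is normally done by the resolver. A minimal Python sketch, using "localhost" so that no external DNS lookup is needed, and the well-known HTTP port 80:

```python
import socket

# Resolve a host name to an (IP address, port) socket address.
# family=AF_INET forces an IPv4 answer for a deterministic result.
infos = socket.getaddrinfo("localhost", 80, family=socket.AF_INET,
                           proto=socket.IPPROTO_TCP)
family, socktype, proto, canonname, sockaddr = infos[0]
print(sockaddr)   # ('127.0.0.1', 80)
```

For a real server name such as www.xxx.yyy, the same call would query DNS instead of the local hosts database.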
o Each standard protocol is a pair of computer programs that interact with the
user and the transport layer to provide a specific service to the user.
Client-Server Paradigm
o The traditional paradigm is called the client-server paradigm.
o It was the most popular paradigm.
o In this paradigm, the service provider is an application program, called the server process; it
runs continuously, waiting for another application program, called the client process, to make
a connection through the Internet and ask for service.
o The server process must be running all the time; the client process is started when the client
needs to receive service.
o There are normally some server processes that can provide a specific type of service, but
there are many clients that request service from any of these server processes.
Peer-to-Peer(P2P) Paradigm
o A new paradigm, called the peer-to-peer paradigm, has emerged to respond to the needs of
some new applications.
o In this paradigm, there is no need for a server process to be running all the time and waiting
for the client processes to connect.
o The responsibility is shared between peers.
o A computer connected to the Internet can provide service at one time and receive service at
another time.
o A computer can even provide and receive services at the same time.
Mixed Paradigm
o An application may choose to use a mixture of the two paradigms by combining the
advantages of both.
o For example, a light-load client-server communication can be used to find the address of
the peer that can offer a service.
o When the address of the peer is found, the actual service can be received from the peer by
using the peer-to-peer paradigm.
• The HyperText Transfer Protocol (HTTP) is used to define how the client-server programs
can be written to retrieve web pages from the Web.
• It is a protocol used to access the data on the World Wide Web (WWW).
• The HTTP protocol can be used to transfer the data in the form of plain text, hypertext,
audio, video, and so on.
• HTTP is a stateless request/response protocol that governs client/server communication.
• An HTTP client sends a request; an HTTP server returns a response.
• The server uses the port number 80; the client uses a temporary port number.
• HTTP uses the services of TCP, a connection-oriented and reliable protocol.
• HTTP is a text-oriented protocol. It contains embedded URLs, known as links.
• When hypertext is clicked, browser opens a new connection, retrieves file from
the server and displays the file.
• Each HTTP message has the general form
START_LINE <CRLF>
MESSAGE_HEADER <CRLF>
<CRLF> MESSAGE_BODY <CRLF>
where <CRLF> stands for carriage-return-line-feed.
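As a sketch, the general form above can be assembled by hand in Python; the host name www.xxx.yyy is the placeholder identifier used elsewhere in these notes:

```python
CRLF = "\r\n"
request = (
    "GET /index.html HTTP/1.1" + CRLF   # START_LINE: method, URL, version
    + "Host: www.xxx.yyy" + CRLF        # MESSAGE_HEADER: name, colon, space, value
    + CRLF                              # blank line; a GET request has no MESSAGE_BODY
)
print(repr(request))
```

Sending `request.encode()` over a TCP connection to port 80 of the server would produce a response with the same START_LINE/HEADER/BODY shape.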
Features of HTTP
o Connectionless protocol:
HTTP is a connectionless protocol. The HTTP client initiates a request and waits for a response
from the server. When the server receives the request, it processes the request and
sends back the response to the HTTP client, after which the client disconnects the connection.
The connection between client and server exists only during the current request and
response.
o Media independent:
HTTP is media independent: data of any type can be sent as long as both the client and
server know how to handle the data content. Both the client and server are required to
specify the content type in the MIME-type header.
o Stateless:
HTTP is a stateless protocol, as the client and server know each other only during the
current request. Due to this nature of the protocol, neither the client nor the server retains
information across successive requests for web pages.
HTTP Request And Response Messages
• The HTTP protocol defines the format of the request and response messages.
• Request Message: The request message is sent by the client that consists of a request line,
headers, and sometimes a body.
• Response Message: The response message is sent by the server to the client that consists of
a status line, headers, and sometimes a body.
Request Line
• There are three fields in this request line - Method, URL and Version.
• The Method field defines the request types.
• The URL field defines the address and name of the corresponding web page.
• The Version field gives the version of the protocol; the most current version of
HTTP is 1.1.
• Some of the Method types are:
Request Header
• Each request header line sends additional information from the client to the server.
• Each header line has a header name, a colon, a space, and a header value.
• The value field defines the values associated with each header name.
• Headers defined for request message include:
Body
• The body can be present in a request message. It is optional.
• Usually, it contains the comment to be sent or the file to be published on the website when
the method is PUT or POST.
Conditional Request
• A client can add a condition in its request.
• In this case, the server will send the requested web page if the condition is met or inform
the client otherwise.
• One of the most common conditions imposed by the client is the time and date the web
page was last modified.
• The client can send the header line If-Modified-Since with the request to tell the server that
it needs the page only if it was modified after a certain point in time.
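Such a conditional header can be constructed as below. HTTP dates use a fixed GMT format, which Python's standard formatdate helper produces; the timestamp here is an arbitrary value chosen for illustration:

```python
from email.utils import formatdate

# HTTP dates are always expressed in GMT; usegmt=True yields that format.
# The timestamp 1000000000 is an arbitrary fixed value for illustration.
header = "If-Modified-Since: " + formatdate(1000000000, usegmt=True)
print(header)   # If-Modified-Since: Sun, 09 Sep 2001 01:46:40 GMT
```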
Response Header
• Each header provides additional information to the client.
• Each header line has a header name, a colon, a space, and a header value.
• Some of the response headers are:
Body
• The body contains the document to be sent from the server to the client.
• The body is present unless the response is an error message.
HTTP CONNECTIONS
• HTTP Clients and Servers exchange multiple messages over the same TCP connection.
• If some of the objects are located on the same server, we have two choices: to retrieve each
object using a new TCP connection or to make a TCP connection and retrieve them all.
• The first method is referred to as a non-persistent connection, the second as a persistent
connection.
• HTTP 1.0 uses non-persistent connections and HTTP 1.1 uses persistent connections.
Non-Persistent Connections
• In a non-persistent connection, one TCP connection is made for each request/response.
• Only one object can be sent over a single TCP connection.
• The client opens a TCP connection and sends a request.
• The server sends the response and closes the connection.
• The client reads the data until it encounters an end-of-file marker.
• It then closes the connection.
Persistent Connections
• HTTP version 1.1 specifies a persistent connection by default.
• Multiple objects can be sent over a single TCP connection.
• In a persistent connection, the server leaves the connection open for more requests after
sending a response.
• The server can close the connection at the request of a client or if a time-out has been
reached.
• Time and resources are saved using persistent connections. Only one set of buffers and
variables needs to be set for the connection at each site.
• The round trip time for connection establishment and connection termination is saved.
Http Cookies
• An HTTP cookie (also called web cookie, Internet cookie, browser cookie, or simply cookie)
is a small piece of data sent from a website and stored on the user's computer by the user's web
browser while the user is browsing.
• They can also be used to remember arbitrary pieces of information that the user previously
entered into form fields such as names, addresses, passwords, and credit card numbers.
Components of Cookie
A cookie consists of the following components:
1. Name
2. Value
3. Zero or more attributes (name/value pairs). Attributes store information such as
the cookie's expiration, domain, and flags.
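These components can be inspected with Python's standard http.cookies module; the cookie name, value, and attribute values below are made-up examples:

```python
from http.cookies import SimpleCookie

# Parse a made-up cookie string into its name, value, and attributes.
cookie = SimpleCookie()
cookie.load("sessionid=abc123; Domain=example.com; Max-Age=3600; Secure")
morsel = cookie["sessionid"]
print(morsel.key, "=", morsel.value)        # component 1 (name) and 2 (value)
print(morsel["domain"], morsel["max-age"])  # component 3: attribute name/value pairs
```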
Using Cookies
• When a client sends a request to a server, the browser looks in the cookie directory to see if
it can find a cookie sent by that server.
• If found, the cookie is included in the request.
• When the server receives the request, it knows that this is an old client, not a new one.
• The contents of the cookie are never read by the browser or disclosed to the user. It is a
cookie made by the server and eaten by the server.
Types of Cookies
1.Authentication cookies
These are the most common method used by web servers to know whether the user is logged
in or not, and which account they are logged in with. Without such a mechanism, the site
would not know whether to send a page containing sensitive information, or require the user
to authenticate themselves by logging in.
2.Tracking cookies
These are commonly used as ways to compile individuals' browsing histories.
3.Session cookie
A session cookie exists only in temporary memory while the user navigates the website. Web
browsers normally delete session cookies when the user closes the browser.
4.Persistent cookie
Instead of expiring when the web browser is closed as session cookies do, a persistent cookie
expires at a specific date or after a specific length of time. This means that, for the cookie's
entire lifespan, its information will be transmitted to the server every time the user visits the
website that it belongs to, or every time the user views a resource belonging to that website
from another website.
Http Caching
HTTP Caching enables the client to retrieve documents faster and reduces load on the
server.
HTTP Caching is implemented at Proxy server, ISP router and Browser.
Server sets expiration date (Expires header) for each page, beyond which it is not cached.
A cached document is returned to the client only if it is an up-to-date copy, verified by
checking against the If-Modified-Since header.
If the cached document is out-of-date, then the request is forwarded to the server and the
response is cached along the way.
A web page will not be cached if the no-cache directive is specified.
HTTP SECURITY
HTTP does not provide security.
However, HTTP can be run over the Secure Socket Layer (SSL).
In this case, HTTP is referred to as HTTPS.
HTTPS provides confidentiality, client and server authentication, and data
integrity.
FTP OBJECTIVES
It provides the sharing of files.
It is used to encourage the use of remote computers.
It transfers the data more reliably and efficiently.
FTP MECHANISM
FTP CONNECTIONS
There are two types of connections in FTP - Control Connection and Data Connection.
The control connection remains connected during the entire interactive FTP session.
The data connection is opened and then closed for each file transfer activity. When a user
starts an FTP session, the control connection opens.
While the control connection is open, the data connection can be opened and closed
multiple times if several files are transferred.
FTP COMMUNICATION
FTP Communication is achieved through commands and responses.
FTP Commands are sent from the client to the server
FTP responses are sent from the server to the client.
FTP Commands are in the form of ASCII uppercase, which may or may not be followed by
an argument.
Some of the most common commands are:
Every FTP command generates at least one response.
A response has two parts: a three-digit number followed by text.
The numeric part defines the code; the text part defines the needed parameters.
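The response format above (a three-digit code followed by text) can be illustrated with a small parser; the reply strings are typical examples, not an exhaustive list, and multi-line replies are not handled:

```python
def parse_ftp_response(line):
    """Split a single-line FTP reply into its numeric code and text part."""
    code, _, text = line.partition(" ")
    return int(code), text

# Typical replies seen on the control connection:
print(parse_ftp_response("220 Service ready for new user"))
print(parse_ftp_response("331 User name okay, need password"))
print(parse_ftp_response("226 Closing data connection"))
```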
FTP SECURITY
FTP requires a password, but the password is sent in plaintext (unencrypted). This
means it can be intercepted and used by an attacker.
The data transfer connection also transfers data in plaintext, which is insecure.
To be secure, one can add a Secure Socket Layer between the FTP application layer and the
TCP layer.
In this case FTP is called SSL-FTP.
When the sender and the receiver of an e-mail are on the same system, we need only two
user agents and no message transfer agents.
When the sender and the receiver of an e-mail are on different systems, we need two UAs,
two pairs of MTAs (client and server), and a pair of MAAs (client and server).
WORKING OF EMAIL
When Alice needs to send a message to Bob, she runs a UA program to prepare the
message and send it to her mail server.
The mail server at her site uses a queue (spool) to store messages waiting to be sent. The
message, however, needs to be sent through the Internet from Alice’s
site to Bob’s site using an MTA.
Here two message transfer agents are needed: one client and one server.
The server needs to run all the time because it does not know when a client will ask for a
connection.
The client can be triggered by the system when there is a message in the queue to be sent.
The user agent at Bob's site allows Bob to read the received message.
Bob later uses an MAA client to retrieve the message from an MAA server running on the
second server.
GUI-based
o Modern user agents are GUI-based.
o They allow the user to interact with the software by using both the keyboard and the mouse.
o They have graphical components such as icons, menu bars, and windows that make the
services easy to access.
o Some examples of GUI-based user agents are Eudora and Outlook.
Email was extended in 1993 to carry many different types of data: audio, video, images,
Word documents, and so on.
This extended version is known as MIME (Multipurpose Internet Mail Extensions).
SMTP also allows the use of Relays allowing other MTAs to relay the mail.
SMTP MAIL FLOW
SMTP Commands
Commands are sent from the client to the server. It consists of a keyword followed by zero
or more arguments. SMTP defines 14 commands.
SMTP Responses
Responses are sent from the server to the client.
A response is a three digit code that may be followed by additional textual information.
SMTP OPERATIONS
Connection Setup
An SMTP sender will attempt to set up a TCP connection with a target host
when it has one or more mail messages to deliver to that host.
The sequence is quite simple:
1. The sender opens a TCP connection with the receiver.
2. Once the connection is established, the receiver identifies itself with "Service Ready”.
3. The sender identifies itself with the HELO command.
4. The receiver accepts the sender's identification with "OK".
5. If the mail service on the destination is unavailable, the destination host returns a "Service
Not Available" reply in step 2, and the process is terminated.
Mail Transfer
Once a connection has been established, the SMTP sender may send one or more messages
to the SMTP receiver.
There are three logical phases to the transfer of a message:
1. A MAIL command identifies the originator of the message.
2. One or more RCPT commands identify the recipients for this message.
3. A DATA command transfers the message text.
Connection Termination
The SMTP sender closes the connection in two steps.
First, the sender sends a QUIT command and waits for a reply.
The second step is to initiate a TCP close operation for the TCP connection.
The receiver initiates its TCP close after sending its reply to the QUIT command.
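The connection setup, the three transfer phases, and the termination step can be sketched as the sequence of commands a client would emit. Server replies (220, 250, 354, 221) are omitted, and the host and mailbox names are illustrative:

```python
def smtp_commands(sender, recipients, body_lines, my_host="client.example"):
    """Yield the client-side command sequence for one SMTP mail transfer."""
    yield f"HELO {my_host}"                 # connection setup: identify ourselves
    yield f"MAIL FROM:<{sender}>"           # phase 1: originator of the message
    for rcpt in recipients:                 # phase 2: one RCPT per recipient
        yield f"RCPT TO:<{rcpt}>"
    yield "DATA"                            # phase 3: message text follows
    for line in body_lines:
        yield line
    yield "."                               # a lone period ends the message
    yield "QUIT"                            # connection termination

for cmd in smtp_commands("alice@a.example", ["bob@b.example"], ["Hello Bob"]):
    print(cmd)
```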
LIMITATIONS OF SMTP
SMTP cannot transmit executable files or other binary objects.
SMTP cannot transmit text data that includes national language characters, as these are
represented by 8-bit codes with values of 128 decimal or higher, and SMTP is limited to 7-bit
ASCII.
SMTP servers may reject mail messages over a certain size.
SMTP gateways that translate between ASCII and the character code EBCDIC do not use a
consistent set of mappings, resulting in translation problems.
Some SMTP implementations do not adhere completely to the SMTP standards
defined.
Common problems include the following:
1. Deletion, addition, or reordering of carriage return and linefeed characters.
2. Truncating or wrapping lines longer than 76 characters.
3. Removal of trailing white space (tab and space characters).
4. Padding of lines in a message to the same length.
5. Conversion of tab characters into multiple-space characters.
SMTP provides a basic email service, while MIME adds multimedia capability to
SMTP.
MIME is an extension to SMTP and is used to overcome the problems and limitations
of SMTP.
Email system was designed to send messages only in ASCII format.
MIME HEADERS
Using headers, MIME describes the type of message content and the encoding
used.
Headers defined in MIME are:
• MIME-Version - the current version, i.e., 1.0
• Content-Type - message type (text/html, image/jpeg, application/pdf)
• Content-Transfer-Encoding - message encoding scheme (e.g., base64)
• Content-Id - unique identifier for the message
• Content-Description - describes the type of the message body
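These headers can be seen by building a message with Python's standard email.mime package; the addresses and subject are illustrative:

```python
from email.mime.multipart import MIMEMultipart
from email.mime.text import MIMEText

# Build a multipart MIME message with one plain-text part.
msg = MIMEMultipart()
msg["From"] = "alice@example.com"
msg["To"] = "bob@example.com"
msg["Subject"] = "MIME demo"
msg.attach(MIMEText("Hello in plain text", "plain"))

print(msg["MIME-Version"])                       # 1.0
print(msg.get_content_type())                    # multipart/mixed
print(msg.get_payload()[0].get_content_type())   # text/plain
```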
MTA is a mail daemon (e.g., sendmail) active on hosts having a mailbox, used to send e-mail.
Mail passes through a sequence of gateways before it reaches the recipient mail server.
Each gateway stores and forwards the mail using Simple mail transfer protocol (SMTP).
SMTP defines communication between MTAs over TCP on port 25.
In an SMTP session, the sending MTA is the client and the receiving MTA is the server. In each exchange:
Client posts a command (HELO, MAIL, RCPT, DATA, QUIT, VRFY, etc.)
Server responds with a code (250, 550, 354, 221, 251, etc.) and an explanation.
Client is identified using the HELO command and verified by the server.
Client forwards the message to the server, if the server is willing to accept it.
The message is terminated by a line containing only a single period (.).
Eventually the client terminates the connection.
IMAP is an Application Layer Internet protocol that allows an e-mail client to access e-mail
on a remote mail server.
It is a method of accessing electronic mail messages that are kept on a possibly shared mail
server.
IMAP is a more capable wire protocol.
IMAP is similar to SMTP in many ways.
IMAP is a client/server protocol running over TCP on port 143.
IMAP allows multiple clients simultaneously connected to the same mailbox, and through
flags stored on the server, different clients accessing the same mailbox at the same or different
times can detect state changes made by other clients.
In other words, it permits a "client" email program to access remote message stores as if
they were local.
For example, email stored on an IMAP server can be manipulated from a desktop computer
at home, a workstation at the office, and a notebook computer while travelling, without the
need to transfer messages or files back and forth between these computers.
IMAP can support email serving in three modes:
Offline
In offline mode, the client periodically connects to the server, downloads the mail, and the
messages are then deleted from the server (this is the way POP works).
Online
Users may connect to the server, look at what email is available, and access it online. This
looks to the user very much like having local spool files, but they’re on the mail server.
Disconnected operation
A mail client connects to the server, can make a “cache” copy of selected messages, and
disconnects from the server. The user can then work on the messages offline, and connect to
the server later and resynchronize the server status with the cache.
OPERATION OF IMAP
The mail transfer begins with the client authenticating the user and identifying the mailbox
they want to access.
Client Commands
LOGIN, AUTHENTICATE, SELECT, EXAMINE, CLOSE, and LOGOUT
Server Responses
OK, NO (no permission), BAD (incorrect command),
When the user wishes to FETCH a message, the server responds in MIME format.
Message attributes such as size are also exchanged.
Flags are used by client to report user actions.
SEEN, ANSWERED, DELETED, RECENT
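The point that flags live on the server, so any client connected to the mailbox can observe changes made by another, can be sketched with a toy in-memory mailbox (a model only, not a real IMAP implementation):

```python
class Mailbox:
    """Toy model of server-side message flags shared by all clients."""
    def __init__(self):
        self.flags = {}                      # message number -> set of flags

    def store(self, msg_num, flag):          # like the IMAP STORE command
        self.flags.setdefault(msg_num, set()).add(flag)

    def fetch_flags(self, msg_num):          # like FETCH (FLAGS)
        return self.flags.get(msg_num, set())

inbox = Mailbox()                            # one mailbox kept on the server
inbox.store(1, r"\Seen")                     # client A reads message 1
print(inbox.fetch_flags(1))                  # client B observes the \Seen flag
```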
IMAP4
The latest version is IMAP4. IMAP4 is more powerful and more complex.
IMAP4 provides the following extra functions:
ADVANTAGES OF IMAP
With IMAP, the primary storage is on the server, not on the local machine.
Email being put away for storage can be foldered on local disk, or can be foldered on the
IMAP server.
The protocol allows full use of remote folders, including a remote folder hierarchy and
multiple inboxes.
It keeps track of explicit status of messages, and allows for user-defined status.
Supports new mail notification explicitly.
Extensible for non-email data, like netnews, document storage, etc.
Selective fetching of individual MIME body parts.
Server-based search to minimize data transfer.
Servers may have extensions that can be negotiated.
1.8.4.4 POST OFFICE PROTOCOL (POP3)
Post Office Protocol (POP3) is an application-layer Internet standard protocol used by local
e-mail clients to retrieve e-mail from a remote server over a TCP/IP connection.
There are two versions of POP.
• The first, called POP2, became a standard in the mid-1980s and requires SMTP to send
messages.
• The current version, POP3, can be used with or without SMTP. POP3 uses TCP/IP port 110.
POP is a much simpler protocol, making implementation easier.
POP supports offline access to the messages, thus requiring less internet usage time.
POP does not provide a search facility.
In order to access the messages, it is necessary to download them.
It allows only one mailbox to be created on server.
It is not suitable for accessing non mail data.
POP mail moves the message from the email server onto the local computer, although there
is usually an option to leave the messages on the email server as well.
POP treats the mailbox as one store, and has no concept of folders.
POP works in two modes namely, delete and keep mode.
• In delete mode, mail is deleted from the mailbox after retrieval. The delete mode is normally
used when the user is working at their permanent computer and can save and organize the
received mail after reading or replying.
• In keep mode, mail is kept in the mailbox after reading for later retrieval. The keep mode is
normally used when the user accesses their mail away from their primary computer.
POP3 client is installed on the recipient computer and POP server on the mail server.
Client opens a connection to the server using TCP on port 110.
Client sends username and password to access mailbox and to retrieve messages.
POP3 Commands
POP commands are generally abbreviated into codes of three or four letters.
The following describes some of the POP commands:
1. USER - This command identifies the user and opens the session
2. STAT - It is used to display the number of messages currently in the mailbox
3. LIST - It is used to get a summary of messages
4. RETR - This command retrieves a message from the mailbox
5. DELE - It is used to delete a message
6. RSET - It is used to reset the session to its initial state
7. QUIT - It is used to log off the session
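The commands above can be illustrated against a toy in-memory mailbox operating in delete mode, where messages marked with DELE are purged at QUIT (a sketch only, not a real POP3 server; the messages are illustrative):

```python
class Pop3Mailbox:
    """Toy POP3-style mailbox in delete mode."""
    def __init__(self, messages):
        self.messages = dict(enumerate(messages, start=1))
        self.deleted = set()

    def stat(self):                      # STAT: message count and total size
        return len(self.messages), sum(len(m) for m in self.messages.values())

    def retr(self, n):                   # RETR: retrieve message n
        return self.messages[n]

    def dele(self, n):                   # DELE: mark message n for deletion
        self.deleted.add(n)

    def quit(self):                      # QUIT: purge marked messages, end session
        for n in self.deleted:
            del self.messages[n]
        self.deleted.clear()

box = Pop3Mailbox(["hi bob", "meeting at 10"])
print(box.stat())        # (2, 19)
print(box.retr(1))       # 'hi bob'
box.dele(1)
box.quit()
print(box.stat())        # (1, 13) -- message 1 is gone after QUIT
```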
WORKING OF DNS
The following six steps shows the working of a DNS. It maps the host name to an IP
address:
1. The user passes the host name to the file transfer client.
2. The file transfer client passes the host name to the DNS client.
3. Each computer, after being booted, knows the address of one DNS server. The DNS client
sends a message to a DNS server with a query that gives the file transfer server name using
the known IP address of the DNS server.
4. The DNS server responds with the IP address of the desired file transfer server.
5. The DNS client passes the IP address to the file transfer client.
6. The file transfer client now uses the received IP address to access the file transfer server.
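From the application's point of view, all six steps are hidden behind a single stub-resolver call. A minimal sketch using Python's socket module (localhost is resolved from local tables, so no network is needed):

```python
import socket

# The application hands a host name to the resolver; the resolver
# consults local tables or the configured DNS server and returns an
# IP address the application can then use to open a connection.
addr = socket.gethostbyname("localhost")
print(addr)   # typically 127.0.0.1
```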
NAME SPACE
To be unambiguous, the names assigned to machines must be carefully selected from a
name space with complete control over the binding between the names and IP addresses.
The names must be unique because the addresses are unique.
A name space that maps each address to a unique name can be organized in two ways: flat
(or) hierarchical.
Each node in the tree has a label, which is a string with a maximum of 63 characters.
The root label is a null string (empty string). DNS requires that children of a node (nodes
that branch from the same node) have different labels, which guarantees the uniqueness of the
domain names.
Domain Name
• Each node in the tree has a label called as domain name.
• A full domain name is a sequence of labels separated by dots (.)
• The domain names are always read from the node up to the root.
• The last label is the label of the root (null).
• This means that a full domain name always ends in a null label, which means the last
character is a dot because the null string is nothing.
• If a label is terminated by a null string, it is called a fully qualified domain name (FQDN).
• If a label is not terminated by a null string, it is called a partially qualified domain name
(PQDN).
Domain
• A domain is a subtree of the domain name space.
• The name of the domain is the domain name of the node at the top of the sub- tree.
• A domain may itself be divided into domains.
ZONE
What a server is responsible for, or has authority over, is called a zone.
The server makes a database called a zone file and keeps all the information for every node
under that domain.
If a server accepts responsibility for a domain and does not divide the domains into smaller
domains, the domain and zone refer to the same thing.
But if a server divides its domain into sub domains and delegates parts of its authority to
other servers, domain and zone refer to different things.
The information about the nodes in the sub domains is stored in the servers at the lower
levels, with the original server keeping some sort of references to these lower level servers.
But still, the original server does not free itself from responsibility totally.
It still has a zone, but the detailed information is kept by the lower level servers.
ROOT SERVER
A root sever is a server whose zone consists of the whole tree.
A root server usually does not store any information about domains but delegates its
authority to other servers, keeping references to those servers.
Currently there are 13 root servers, each covering the whole domain name space.
Each root server is replicated at many sites, and the servers are distributed all around the
world.
Country Domains
The country domains section follows the same format as the generic domains but uses
two-character country abbreviations (e.g., in for India, us for the United States) in place of
the three-character organizational abbreviations at the first level.
Second-level labels can be organizational, or they can be more specific national
designations.
The United States, for example, uses state abbreviations as a subdivision of the country
domain us (e.g., ca.us.).
Inverse Domains
Mapping an address to a name is handled by the inverse domain.
The client can send an IP address to a server to be mapped to a domain name; this is called
a PTR (pointer) query.
To answer queries of this kind, DNS uses the inverse domain.
DNS RESOLUTION
Mapping a name to an address or an address to a name is called name address resolution.
DNS is designed as a client server application.
A host that needs to map an address to a name or a name to an address calls a DNS client
named a Resolver.
The Resolver accesses the closest DNS server with a mapping request.
If the server has the information, it satisfies the resolver; otherwise, it either refers the
resolver to other servers or asks other servers to provide the information.
After the resolver receives the mapping, it interprets the response to see if it is a real
resolution or an error and finally delivers the result to the process that requested it.
A resolution can be either recursive or iterative.
Recursive Resolution
• The application program on the source host calls the DNS resolver (client) to find the IP
address of the destination host. The resolver, which does not know this address, sends the
query to the local DNS server of the source (Event 1)
• The local server sends the query to a root DNS server (Event 2)
• The Root server sends the query to the top-level-DNS server(Event 3)
• The top-level DNS server knows only the IP address of the local DNS server at the
destination. So it forwards the query to the local server, which knows the IP address of the
destination host (Event 4)
• The IP address of the destination host is now sent back to the top-level DNS server(Event 5)
then back to the root server (Event 6), then back to the source DNS server, which may cache it
for the future queries (Event 7), and finally back to the source host (Event 8)
Iterative Resolution
• In iterative resolution, each server that does not know the mapping sends the IP address of
the next server back to the one that requested it.
• The iterative resolution takes place between two local servers.
• The original resolver gets the final answer from the destination local server.
• The messages shown by Events 2, 4, and 6 contain the same query.
• However, the message shown by Event 3 contains the IP address of the top- level domain
server.
• The message shown by Event 5 contains the IP address of the destination local DNS server
• The message shown by Event 7 contains the IP address of the destination.
• When the Source local DNS server receives the IP address of the destination, it sends it to
the resolver (Event 8).
DNS CACHING
Each time a server receives a query for a name that is not in its domain, it needs to search
its database for a server IP address.
DNS handles this with a mechanism called caching.
When a server asks for a mapping from another server and receives the response, it stores
this information in its cache memory before sending it to the client.
If the same or another client asks for the same mapping, the server can check its cache
memory and resolve the query.
However, to inform the client that the response is coming from the cache memory and not
from an authoritative source, the server marks the response as unauthoritative.
Caching speeds up resolution; reducing the search time increases efficiency, but caching
can also be problematic.
If a server caches a mapping for a long time, it may send an outdated mapping to the client.
To counter this, two techniques are used.
First, the authoritative server always adds information to the mapping called time to live
(TTL). It defines the time in seconds that the receiving server can cache the information. After
that time, the mapping is invalid and any query must be sent again to the authoritative server.
Second, DNS requires that each server keep a TTL counter for each mapping it caches. The
cache memory must be searched periodically and those mappings with an expired TTL must
be purged.
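The TTL mechanism can be sketched as a small cache whose entries carry an expiry time; the clock is injected so the example is self-contained, and the name and address are illustrative:

```python
class TtlCache:
    """Minimal TTL cache in the spirit of DNS caching."""
    def __init__(self, clock):
        self.clock = clock                   # injected clock, for testability
        self.entries = {}                    # name -> (address, expires_at)

    def put(self, name, address, ttl):
        self.entries[name] = (address, self.clock() + ttl)

    def get(self, name):
        entry = self.entries.get(name)
        if entry is None:
            return None
        address, expires_at = entry
        if self.clock() >= expires_at:       # TTL expired: purge, re-query needed
            del self.entries[name]
            return None
        return address

t = [0]                                      # fake clock for the demo
cache = TtlCache(lambda: t[0])
cache.put("example.com", "93.184.216.34", ttl=300)
print(cache.get("example.com"))              # 93.184.216.34
t[0] = 301                                   # advance past the TTL
print(cache.get("example.com"))              # None: mapping must be re-fetched
```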
DNS MESSAGES
DNS has two types of messages: query and response.
Both types have the same format.
The query message consists of a header and question section.
The response message consists of a header, question section, answer section,
authoritative section, and additional section .
Header
• Both query and response messages have the same header format with
some fields set to zero for the query messages.
• The header fields are as follows:
• The identification field is used by the client to match the response with the query.
• The flag field defines whether the message is a query or response. It also includes status of
error.
• The next four fields in the header define the number of each record type in the message.
Question Section
• The question section consists of one or more question records. It is present in both query and
response messages.
Answer Section
• The answer section consists of one or more resource records. It is present only in response
messages.
Authoritative Section
• The authoritative section gives information (domain name) about one or more authoritative
servers for the query.
Additional Information Section
• The additional information section provides additional information that may help the
resolver.
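The 12-byte header (identification, flags, then the four record counts) can be packed with Python's struct module; the identification value and flag bits shown are illustrative:

```python
import struct

def dns_query_header(ident, num_questions):
    """Pack a DNS query header as six 16-bit big-endian fields."""
    flags = 0x0100                       # standard query, recursion desired
    return struct.pack("!6H", ident, flags,
                       num_questions,    # question records
                       0,                # answer records (zero in a query)
                       0,                # authoritative records
                       0)                # additional records

header = dns_query_header(ident=0x1234, num_questions=1)
print(len(header))        # 12 -- the DNS header is always 12 bytes
print(header.hex())       # 123401000001000000000000
```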
DNS CONNECTIONS
DNS REGISTRARS
In DNS, when there is a change, such as adding a new host, removing a host, or changing
an IP address, the change must be made to the DNS master file.
The DNS master file must be updated dynamically.
The Dynamic Domain Name System (DDNS) is used for this purpose.
In DDNS, when a binding between a name and an address is determined, the information is
sent to a primary DNS server.
The primary server updates the zone.
The secondary servers are notified either actively or passively.
In active notification, the primary server sends a message to the secondary servers about the
change in the zone, whereas in passive notification, the secondary servers periodically check
for any changes.
In either case, after being notified about the change, the secondary server requests
information about the entire zone (called the zone transfer).
To provide security and prevent unauthorized changes in the DNS records, DDNS can use
an authentication mechanism.
DNS SECURITY
DNS is one of the most important systems in the Internet infrastructure; it provides crucial
services to Internet users.
Applications such as Web access or e-mail are heavily dependent on the proper operation of
DNS.
DNS can be attacked in several ways including:
To protect DNS, IETF has devised a technology named DNS Security (DNSSEC) that
provides message origin authentication and message integrity using a security service called
digital signature.
DNSSEC, however, does not provide confidentiality for the DNS messages.
There is no specific protection against the denial-of-service attack in the specification of
DNSSEC. However, the caching system protects the upper- level servers against this attack to
some extent.
The Simple Network Management Protocol (SNMP) is a framework for managing devices
in an internet using the TCP/IP protocol suite.
SNMP is an application layer protocol that monitors and manages routers, distributed over a
network.
It provides a set of operations for monitoring and managing the internet.
SNMP uses services of UDP on two well-known ports: 161 (Agent) and 162 (manager).
SNMP uses the concept of manager and agent.
SNMP MANAGER
• A manager is a host that runs the SNMP client program
• The manager has access to the values in the database kept by the agent.
• A manager checks the agent by requesting the information that reflects the behavior of the
agent.
• A manager also forces the agent to perform a certain function by resetting values in the
agent database.
• For example, a router can store in appropriate variables the number of packets received and
forwarded.
• The manager can fetch and compare the values of these two variables to see if the router is
congested or not.
SNMP AGENT
• The agent is a router that runs the SNMP server program.
• The agent is used to keep the information in a database while the manager is used to access
the values in the database.
• For example, a router can store the appropriate variables such as a number of packets
received and forwarded while the manager can compare these variables to determine whether
the router is congested or not.
• Agents can also contribute to the management process.
• A server program on the agent checks the environment, if something goes wrong, the agent
sends a warning message to the manager.
Name
SMI requires that each managed object (such as a router, a variable in a router, a value, etc.)
have a unique name. To name objects globally, SMI uses an object identifier, which is a
hierarchical identifier based on a tree structure.
The tree structure starts with an unnamed root. Each object can be defined using a sequence
of integers separated by dots.
The tree structure can also define an object using a sequence of textual names separated by
dots.
Type of data
The second attribute of an object is the type of data stored in it.
To define the data type, SMI uses Abstract Syntax Notation One (ASN.1) definitions.
SMI has two broad categories of data types: simple and structured.
The simple data types are atomic data types. Some of them are taken directly from ASN.1;
some are added by SMI.
SMI defines two structured data types: sequence and sequence of.
Encoding data
SMI uses another standard, Basic Encoding Rules (BER), to encode data to be transmitted
over the network.
BER specifies that each piece of data be encoded in triplet format (TLV): tag, length, value
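The TLV idea can be sketched for a small INTEGER; 0x02 is the universal BER tag for INTEGER, and this sketch handles only single-byte, non-negative values:

```python
def ber_encode_integer(value):
    """Encode a small non-negative integer as a BER TLV triplet."""
    assert 0 <= value < 128, "sketch handles single-byte values only"
    tag = 0x02                        # tag: universal INTEGER
    content = bytes([value])          # value: one content byte
    return bytes([tag, len(content)]) + content

print(ber_encode_integer(5).hex())    # 020105 (tag=02, length=01, value=05)
```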
The Management Information Base (MIB) is the second component used in network
management.
• Each agent has its own MIB, which is a collection of objects to be managed.
• MIB classifies objects under groups.
MIB Variables
GetRequest
The GetRequest PDU is sent from the manager (client) to the agent (server)
to retrieve the value of a variable or a set of variables.
GetNextRequest
The GetNextRequest PDU is sent from the manager to the agent to retrieve
the value of a variable.
GetBulkRequest
The GetBulkRequest PDU is sent from the manager to the agent to retrieve a
large amount of data. It can be used instead of multiple GetRequest and
GetNextRequest PDUs.
SetRequest
The SetRequest PDU is sent from the manager to the agent to set (store) a
value in a variable.
Response
The Response PDU is sent from the agent to the manager in response to GetRequest or
GetNextRequest. It contains the values of the variables requested by the manager.
Trap
The Trap PDU is sent from the agent to the manager to report an event. For
example, if the agent is rebooted, it informs the manager and reports the time
of rebooting.
InformRequest
The InformRequest PDU is sent from one manager to another remote manager to get the
value of some variables from agents under the control of the remote manager.
Report
The Report PDU is designed to report some types of errors between managers.
2.1. INTRODUCTION
The transport layer is the fourth layer of the OSI model and is the core of the Internet
model.
It responds to service requests from the session layer and issues service requests to
the network Layer.
The transport layer provides transparent transfer of data between hosts.
It provides end-to-end control and information transfer with the quality of service
needed by the application program.
It is the first true end-to-end layer, implemented in all End Systems (ES).
Process-to-Process Communication
The Transport Layer is responsible for delivering data to the appropriate application
process on the host computers.
This involves multiplexing of data from different application processes, i.e. forming
data packets, and adding source and destination port numbers in the header of each
Transport Layer data packet.
Together with the source and destination IP addresses, the port numbers constitute a
network socket, i.e. an identification address of the process-to-process
communication.
Flow Control
Flow Control is the process of managing the rate of data transmission between two
nodes to prevent a fast sender from overwhelming a slow receiver.
It provides a mechanism for the receiver to control the transmission speed, so that the
receiving node is not overwhelmed with data from transmitting node.
CS3591 – Computer Networks Unit 2
Error Control
Error control at the transport layer is responsible for
1. Detecting and discarding corrupted packets.
2. Keeping track of lost and discarded packets and resending them.
3. Recognizing duplicate packets and discarding them.
4. Buffering out-of-order packets until the missing packets arrive.
Error Control involves Error Detection and Error Correction
Congestion Control
Congestion in a network may occur if the load on the network (the number of
packets sent to the network) is greater than the capacity of the network (the number
of packets a network can handle).
Congestion control refers to the mechanisms and techniques that keep the load below
the capacity of the network, either by preventing congestion before it happens or by
removing congestion after it has happened.
Congestion control mechanisms are divided into two categories,
1. Open loop - prevent the congestion before it happens.
2. Closed loop - remove the congestion after it happens.
PORT NUMBERS
A transport-layer protocol usually has several responsibilities.
One is to create a process-to-process communication.
Processes are programs that run on hosts. It could be either server or client.
A process on the local host, called a client, needs services from a process usually
on the remote host, called a server.
Processes are assigned a unique 16-bit port number on that host.
Port numbers provide end-to-end addresses at the transport layer
They also provide multiplexing and demultiplexing at this layer.
ICANN (Internet Corporation for Assigned Names and Numbers) has divided the port
numbers into three ranges:
Well-known ports
Registered
Ephemeral ports (Dynamic Ports)
WELL-KNOWN PORTS
These are permanent port numbers used by the servers.
They range from 0 to 1023.
These port numbers cannot be chosen randomly.
These port numbers are universal port numbers for servers.
Every client process knows the well-known port number of the corresponding server
process.
For example, while the daytime client process, a well-known client program, can
use an ephemeral (temporary) port number, 52,000, to identify itself, the daytime
server process must use the well-known (permanent) port number 13.
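Asking the operating system for an ephemeral port can be demonstrated by binding a socket to port 0, which means "pick any free ephemeral port for me":

```python
import socket

# Bind to port 0: the OS assigns an ephemeral port, the same way a
# client process (e.g. a daytime client) gets its temporary port.
s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
s.bind(("127.0.0.1", 0))
host, port = s.getsockname()
print(port > 1023)       # True on typical systems (ephemeral ranges start above 1023)
s.close()
```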
REGISTERED PORTS
The ports ranging from 1024 to 49,151 are not assigned or controlled by ICANN; they can only be registered with ICANN to prevent duplication.
Each protocol provides a different type of service and should be used appropriately.
UDP - UDP is an unreliable connectionless transport-layer protocol used for its simplicity
and efficiency in applications where error control can be provided by the application-layer
process.
TCP - TCP is a reliable connection-oriented protocol that can be used in any application
where reliability is important.
SCTP - SCTP is a new transport-layer protocol designed to combine some features of UDP
and TCP in an effort to create a better protocol for multimedia communication.
UDP PORTS
Processes (server/client) are identified by an abstract locator known as port.
Server accepts message at well known port.
Some well-known UDP ports are 7–Echo, 53–DNS, 111–RPC, 161–SNMP, etc.
< port, host > pair is used as key for demultiplexing.
Ports are implemented as a message queue.
When a message arrives, UDP appends it to end of the queue.
When queue is full, the message is discarded.
When a message is read, it is removed from the queue.
When an application process wants to receive a message, one is removed from the
front of the queue.
If the queue is empty, the process blocks until a message becomes available.
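The queue behaviour described above can be sketched as a toy demultiplexer keyed by the <port, host> pair; real UDP sockets block on an empty queue, whereas this sketch simply returns None:

```python
from collections import deque

class UdpDemux:
    """Toy demultiplexer: one bounded message queue per <port, host> key."""
    def __init__(self, max_len=2):
        self.queues = {}
        self.max_len = max_len

    def deliver(self, host, port, msg):
        q = self.queues.setdefault((port, host), deque())
        if len(q) >= self.max_len:
            return False                   # queue full: message discarded
        q.append(msg)                      # append to the end of the queue
        return True

    def recv(self, host, port):
        q = self.queues.get((port, host))
        return q.popleft() if q else None  # removed from the front of the queue

d = UdpDemux(max_len=2)
print(d.deliver("10.0.0.1", 53, b"q1"))   # True
print(d.deliver("10.0.0.1", 53, b"q2"))   # True
print(d.deliver("10.0.0.1", 53, b"q3"))   # False (queue full, discarded)
print(d.recv("10.0.0.1", 53))             # b'q1'
```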
Length
This field denotes the total length of the UDP packet (header plus data).
The total length of any UDP datagram can be from 8 (header only) to 65,535 bytes.
Checksum
UDP computes its checksum over the UDP header, the contents of the message
body, and something called the pseudoheader.
The pseudoheader consists of three fields from the IP header (protocol number,
source IP address, and destination IP address) plus the UDP length field.
Data
The data field carries the actual payload to be transmitted.
Its size is variable.
UDP SERVICES
Process-to-Process Communication
UDP provides process-to-process communication using socket addresses, a
combination of IP addresses and port numbers.
Connectionless Services
UDP provides a connectionless service.
There is no connection establishment and no connection termination.
Each user datagram sent by UDP is an independent datagram.
There is no relationship between the different user datagrams even if they are
coming from the same source process and going to the same destination program.
The user datagrams are not numbered.
Each user datagram can travel on a different path.
Flow Control
UDP is a very simple protocol.
There is no flow control, and hence no window mechanism.
The receiver may overflow with incoming messages.
The lack of flow control means that the process using UDP should provide for this
service, if needed.
Error Control
There is no error control mechanism in UDP except for the checksum.
This means that the sender does not know if a message has been lost or duplicated.
When the receiver detects an error through the checksum, the user datagram is
silently discarded.
The lack of error control means that the process using UDP should provide for this
service, if needed.
Checksum
UDP checksum calculation includes three sections: a pseudoheader, the UDP header,
and the data coming from the application layer.
The pseudoheader is the part of the IP header in which the user datagram is to be
encapsulated, with some fields filled with 0s.
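As a sketch, the checksum over the pseudoheader, UDP header, and data can be computed with the standard 16-bit one's-complement Internet checksum (the helper names are illustrative):

```python
import struct

def internet_checksum(data: bytes) -> int:
    """Standard Internet checksum: one's-complement sum of 16-bit words."""
    if len(data) % 2:
        data += b"\x00"                           # pad odd-length data
    total = sum(struct.unpack("!%dH" % (len(data) // 2), data))
    while total >> 16:
        total = (total & 0xFFFF) + (total >> 16)  # fold the carries back in
    return ~total & 0xFFFF

def udp_checksum(src_ip: bytes, dst_ip: bytes, udp_segment: bytes) -> int:
    """Checksum over the pseudoheader (source IP, destination IP, a zero
    byte, protocol number 17, UDP length) followed by header and data."""
    pseudo = src_ip + dst_ip + struct.pack("!BBH", 0, 17, len(udp_segment))
    return internet_checksum(pseudo + udp_segment)
```

A receiver repeats the same computation with the transmitted checksum in place; a result of 0 means the segment passed the check.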
Congestion Control
Since UDP is a connectionless protocol, it does not provide congestion control.
UDP assumes that the packets sent are small and sporadic (sent occasionally or at
irregular intervals) and cannot create congestion in the network.
This assumption may not be true when UDP is used for interactive real-time
transfer of audio and video.
Queuing
In UDP, queues are associated with ports.
At the client site, when a process starts, it requests a port number from the operating
system.
Some implementations create both an incoming and an outgoing queue associated
with each process.
Other implementations create only an incoming queue associated with each process.
APPLICATIONS OF UDP
UDP is used for management processes such as SNMP.
UDP is used for route updating protocols such as RIP.
UDP is a suitable transport protocol for multicasting. Multicasting capability is
embedded in the UDP software
UDP is suitable for a process with internal flow and error control mechanisms such
as Trivial File Transfer Protocol (TFTP).
UDP is suitable for a process that requires simple request-response communication
with little concern for flow and error control.
UDP is normally used for interactive real-time applications that cannot tolerate
uneven delay between sections of a received message.
TCP SERVICES
Process-to-Process Communication
TCP provides process-to-process communication using port numbers.
Stream Delivery Service
TCP is a stream-oriented protocol.
TCP allows the sending process to deliver data as a stream of bytes and allows the
receiving process to obtain data as a stream of bytes.
TCP creates an environment in which the two processes seem to be connected by an
imaginary “tube” that carries their bytes across the Internet.
The sending process produces (writes to) the stream and the receiving process
consumes (reads from) it.
Full-Duplex Communication
TCP offers full-duplex service, where data can flow in both directions at the same
time.
Each TCP endpoint has its own sending and receiving buffers, and segments
move in both directions.
Connection-Oriented Service
TCP is a connection-oriented protocol.
A connection needs to be established for each pair of processes.
When a process at site A wants to send to and receive data from another
process at site B, the following three phases occur:
1. The two TCPs establish a logical connection between them.
2. Data are exchanged in both directions.
3. The connection is terminated.
Reliable Service
TCP is a reliable transport protocol.
It uses an acknowledgment mechanism to check the safe and sound arrival of data.
TCP SEGMENT
A packet in TCP is called a segment; segments are the data units exchanged
between TCP peers.
A TCP segment encapsulates the data received from the application layer.
The TCP segment is encapsulated in an IP datagram, which in turn is encapsulated in
a frame at the data-link layer.
TCP is a byte-oriented protocol, which means that the sender writes bytes into a TCP
connection and the receiver reads bytes out of the TCP connection.
TCP does not, itself, transmit individual bytes over the Internet.
TCP on the source host buffers enough bytes from the sending process to fill a
reasonably sized packet and then sends this packet to its peer on the destination host.
TCP on the destination host then empties the contents of the packet into a receive
buffer, and the receiving process reads from this buffer at its leisure.
TCP connection supports byte streams flowing in both directions.
The packets exchanged between TCP peers are called segments, since each one
carries a segment of the byte stream.
Connection Establishment
While opening a TCP connection, the two nodes (client and server) must agree on a
set of parameters: the starting sequence numbers to be used for their respective
byte streams.
Connection establishment in TCP is a three-way handshaking.
1. Client sends a SYN segment to the server containing its initial sequence number (Flags
= SYN, SequenceNum = x)
2. Server responds with a segment that acknowledges client’s segment and specifies its
initial sequence number (Flags = SYN + ACK, ACK = x + 1, SequenceNum = y).
3. Finally, client responds with a segment that acknowledges server’s sequence number
(Flags = ACK, ACK = y + 1).
The reason that each side acknowledges a sequence number that is one larger
than the one sent is that the Acknowledgment field actually identifies the “next
sequence number expected,”
A timer is scheduled for each of the first two segments, and if the expected
response is not received, the segment is retransmitted.
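The three segments and their sequence/acknowledgment numbers can be traced with a small simulation (a toy model, not a real TCP stack; x and y are the initial sequence numbers):

```python
def three_way_handshake(x, y):
    """Return the three handshake segments exchanged between client (ISN x)
    and server (ISN y); each ACK names the next sequence number expected."""
    return [
        {"dir": "client->server", "flags": "SYN",     "seq": x},
        {"dir": "server->client", "flags": "SYN+ACK", "seq": y, "ack": x + 1},
        {"dir": "client->server", "flags": "ACK",               "ack": y + 1},
    ]
```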
Data Transfer
After connection is established, bidirectional data transfer can take place.
The client and server can send data and acknowledgments in both directions.
The data traveling in the same direction as an acknowledgment are carried on the
same segment.
The acknowledgment is piggybacked with the data.
Connection Termination
Connection termination or teardown can be done in two ways :
Three-way Close and Half-Close
Send Buffer
Sending TCP maintains send buffer which contains 3 segments
(1) acknowledged data
(2) unacknowledged data
(3) data to be transmitted.
Send buffer maintains three pointers
(1) LastByteAcked, (2) LastByteSent, and (3) LastByteWritten
such that:
LastByteAcked ≤ LastByteSent ≤ LastByteWritten
A byte can be sent only after being written and only a sent byte can be
acknowledged.
Bytes to the left of LastByteAcked are not kept, since they have already been acknowledged.
Receive Buffer
Receiving TCP maintains receive buffer to hold data even if it arrives out-of-order.
Receive buffer maintains three pointers namely
(1) LastByteRead, (2) NextByteExpected, and (3) LastByteRcvd
such that:
LastByteRead ≤ NextByteExpected ≤ LastByteRcvd + 1
A byte cannot be read until that byte and all preceding bytes have been received.
If data is received in order, then NextByteExpected = LastByteRcvd + 1
Bytes to the left of LastByteRead are not buffered, since they have already been read by the application.
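The send-buffer pointer invariant can be illustrated with a toy model (the class is hypothetical; a receive buffer would mirror it with its own three pointers):

```python
class SendBuffer:
    """Toy TCP send buffer: LastByteAcked <= LastByteSent <= LastByteWritten."""
    def __init__(self):
        self.last_byte_acked = 0
        self.last_byte_sent = 0
        self.last_byte_written = 0

    def write(self, n):
        """Application writes n more bytes into the stream."""
        self.last_byte_written += n

    def send(self, n):
        """TCP transmits n bytes; only written bytes can be sent."""
        assert self.last_byte_sent + n <= self.last_byte_written
        self.last_byte_sent += n

    def ack(self, upto):
        """Cumulative ACK; only sent bytes can be acknowledged."""
        assert upto <= self.last_byte_sent
        self.last_byte_acked = max(self.last_byte_acked, upto)

    def invariant(self):
        return self.last_byte_acked <= self.last_byte_sent <= self.last_byte_written
```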
TCP TRANSMISSION
TCP has three mechanisms to trigger the transmission of a segment:
o Accumulating a Maximum Segment Size (MSS) worth of data
o A push operation invoked by the sending process
o The firing of a periodic timer
Two related issues are the Silly Window Syndrome and Nagle’s Algorithm, discussed below.
Nagle’s Algorithm
If there is data to send but less than one MSS of it, then we may want to wait some
amount of time before sending the available data.
If we wait too long, then it may delay the process.
If we don’t wait long enough, it may end up sending small segments resulting in
Silly Window Syndrome.
The solution is to introduce a timer and to transmit when the timer expires
Nagle introduced an algorithm for solving this problem
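Nagle's decision rule can be sketched as follows (a simplification: the real algorithm also considers the receiver's advertised window):

```python
def nagle_should_send(available_bytes, mss, unacked_in_flight):
    """Send at once if a full MSS is ready; for a small segment, send only
    when no data is in flight, otherwise wait for the outstanding ACK."""
    if available_bytes == 0:
        return False                 # nothing to send
    if available_bytes >= mss:
        return True                  # a full segment is always worth sending
    return not unacked_in_flight     # small segment: send only if idle
```

The effect is self-clocking: the returning ACK, rather than a fixed timer, releases the next small segment.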
Slow Start
Slow start is used to increase CongestionWindow exponentially from a cold start.
Source TCP initializes CongestionWindow to one packet.
TCP doubles the number of packets sent every RTT on successful transmission.
When the ACK for the first packet arrives, TCP adds 1 packet to CongestionWindow and
sends two packets.
When two ACKs arrive, TCP increments CongestionWindow by 2 packets and sends
four packets and so on.
Instead of sending entire permissible packets at once (bursty traffic), packets are sent
in a phased manner, i.e., slow start.
Initially TCP has no idea about congestion, hence it increases
CongestionWindow rapidly until there is a timeout. On timeout:
CongestionThreshold = CongestionWindow/ 2
CongestionWindow = 1
Slow start is repeated until CongestionWindow reaches CongestionThreshold, and
thereafter the window grows by 1 packet per RTT.
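The growth pattern can be traced with a small simulation (window counted in packets; a simplified model that ignores fast retransmit and recovery):

```python
def simulate_cwnd(rtts, timeout_at, initial_threshold=64):
    """Trace CongestionWindow per RTT: exponential growth in slow start,
    +1 per RTT past the threshold, and a reset on the single timeout."""
    cwnd, thresh, trace = 1, initial_threshold, []
    for rtt in range(rtts):
        trace.append(cwnd)
        if rtt == timeout_at:
            thresh = max(cwnd // 2, 1)   # CongestionThreshold = cwnd / 2
            cwnd = 1                     # restart from one packet
        elif cwnd < thresh:
            cwnd *= 2                    # slow start: double every RTT
        else:
            cwnd += 1                    # linear increase past the threshold
    return trace
```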
For example, packets 1 and 2 are received whereas packet 3 gets lost.
o The receiver sends a duplicate ACK for packet 2 when packet 4 arrives.
o After sending packet 6, the sender has received three duplicate ACKs and retransmits packet 3.
o When packet 3 is received, the receiver sends a cumulative ACK up to packet 6.
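The duplicate-ACK trigger can be sketched as a toy model (using packet numbers rather than byte sequence numbers; three duplicates is the conventional threshold):

```python
def fast_retransmit(acks, dup_threshold=3):
    """Return the packet to retransmit once dup_threshold duplicate ACKs
    for the same packet have been seen, or None otherwise."""
    last, dups = None, 0
    for ack in acks:
        if ack == last:
            dups += 1
            if dups >= dup_threshold:
                return ack + 1       # first unacknowledged packet
        else:
            last, dups = ack, 0
    return None
```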
DECbit
The idea is to evenly split the responsibility for congestion control between the
routers and the end nodes.
Each router monitors the load it is experiencing and explicitly notifies the end nodes
when congestion is about to occur.
This notification is implemented by setting a binary congestion bit in the packets that
flow through the router; hence the name DECbit.
The destination host then copies this congestion bit into the ACK it sends back to the
source.
The source checks how many ACKs for the previous window’s packets have the DECbit set.
If less than 50% of the ACKs have the bit set, the source increases its congestion
window by 1 packet; otherwise it decreases the window to 0.875 times its previous value.
A queue length of 1 is used as the trigger for setting the congestion bit: a router
sets this bit in a packet if its average queue length is greater than or equal to
1 at the time the packet arrives.
Average queue length is measured over a time interval that spans the
last busy + last idle cycle plus the current busy cycle.
The average is calculated by dividing the area under the queue-length curve by
the length of that time interval.
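The source's window adjustment can be sketched as follows (the additive increase of 1 and multiplicative decrease to 0.875 are the values usually quoted for DECbit):

```python
def decbit_adjust(window, acks_with_bit, total_acks):
    """DECbit source policy: additive increase if fewer than 50% of the
    ACKs carry the congestion bit, else decrease to 0.875 of the window."""
    if acks_with_bit < 0.5 * total_acks:
        return window + 1                # increase by one packet
    return max(int(window * 0.875), 1)   # multiplicative decrease
```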
Random Early Detection (RED)
Each router is programmed to monitor its own queue length, and when it detects that
there is congestion, it notifies the source to adjust its congestion window.
RED differs from the DECbit scheme in two ways:
a. In DECbit, explicit notification about congestion is sent to source, whereas
RED implicitly notifies the source by dropping a few packets.
b. DECbit with a simple FIFO queue leads to a tail drop policy, whereas
RED drops packets based on a drop probability in a random manner: each
arriving packet is dropped with some probability whenever the queue
length exceeds some drop level. This idea is called early random drop.
RED has two queue length thresholds that trigger certain activity: MinThreshold and
MaxThreshold
When a packet arrives at a gateway it compares Avglen with these two values
according to the following rules.
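The comparison can be sketched along the lines of the standard RED gateway rules (the linear drop-probability ramp between the two thresholds is the usual formulation; the MaxP value here is illustrative):

```python
import random

def red_decide(avg_len, min_th, max_th, max_p=0.02, rng=random.random):
    """Standard RED rules: queue below MinThreshold, probabilistic drop
    between the thresholds, always drop above MaxThreshold."""
    if avg_len < min_th:
        return "queue"
    if avg_len > max_th:
        return "drop"
    # drop probability grows linearly from 0 at MinThreshold to max_p
    p = max_p * (avg_len - min_th) / (max_th - min_th)
    return "drop" if rng() < p else "queue"
```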
SCTP SERVICES
Process-to-Process Communication
SCTP provides process-to-process communication.
Multiple Streams
SCTP allows multistream service in each connection, which is called an association
in SCTP terminology.
If one of the streams is blocked, the other streams can still deliver their data.
Multihoming
An SCTP association supports multihoming service.
The sending and receiving host can define multiple IP addresses in each end for an
association.
In this fault-tolerant approach, when one path fails, another interface can be used for
data delivery without interruption.
Full-Duplex Communication
SCTP offers full-duplex service, where data can flow in both directions at the same
time. Each SCTP endpoint has a sending and a receiving buffer, and packets are sent
in both directions.
Connection-Oriented Service
SCTP is a connection-oriented protocol.
In SCTP, a connection is called an association.
If a client wants to send messages to and receive messages from a server, the steps are:
Step1: The two SCTPs establish the connection with each other.
Step2: Once the connection is established, the data gets exchanged in both the
directions.
Step3: Finally, the association is terminated.
Reliable Service
SCTP is a reliable transport protocol.
It uses an acknowledgment mechanism to check the safe and sound arrival of data.
An SCTP packet has a mandatory general header and a set of blocks called chunks.
General Header
The general header (packet header) defines the end points of each association to
which the packet belongs
It guarantees that the packet belongs to a particular association
It also preserves the integrity of the contents of the packet including the header itself.
There are four fields in the general header.
Source port
This field identifies the sending port.
Destination port
This field identifies the receiving port that hosts use to route the packet to the
appropriate endpoint/application.
Verification tag
A 32-bit random value created during initialization to distinguish stale packets
from a previous connection.
Checksum
The next field is a checksum. The size of the checksum is 32 bits. SCTP uses
CRC-32 Checksum.
Chunks
Control information or user data are carried in chunks.
Chunks have a common layout.
The first three fields are common to all chunks; the information field depends on the
type of chunk.
The type field can define up to 256 types of chunks. Only a few have been defined so
far; the rest are reserved for future use.
The flag field defines special flags that a particular chunk may need.
The length field defines the total size of the chunk, in bytes, including the type, flag,
and length fields.
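The common chunk layout can be illustrated by packing and unpacking the fixed fields (a simplified sketch; real SCTP additionally pads each chunk to a 4-byte boundary):

```python
import struct

def pack_chunk(ctype: int, flags: int, payload: bytes) -> bytes:
    """Pack a chunk: 1-byte type, 1-byte flags, 2-byte length covering the
    type, flag, and length fields plus the payload, then the payload."""
    length = 4 + len(payload)            # length includes the 4 fixed bytes
    return struct.pack("!BBH", ctype, flags, length) + payload

def unpack_chunk(data: bytes):
    """Recover (type, flags, length, payload) from a packed chunk."""
    ctype, flags, length = struct.unpack("!BBH", data[:4])
    return ctype, flags, length, data[4:length]
```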
Types of Chunks
An SCTP association may send many packets, a packet may contain several chunks,
and chunks may belong to different streams.
SCTP defines two types of chunks - Control chunks and Data chunks.
A control chunk controls and maintains the association.
A data chunk carries user data.
SCTP ASSOCIATION
SCTP is a connection-oriented protocol.
A connection in SCTP is called an association to emphasize multihoming.
SCTP Associations consists of three phases:
Association Establishment
Data Transfer
Association Termination
Association Establishment
Association establishment in SCTP requires a four-way handshake.
In this procedure, a client process wants to establish an association with a server
process using SCTP as the transport-layer protocol.
The SCTP server needs to be prepared to receive any association (passive open).
Association establishment, however, is initiated by the client (active open).
The client sends the first packet, which contains an INIT chunk.
The server sends the second packet, which contains an INIT ACK chunk. The INIT
ACK also sends a cookie that defines the state of the server at this moment.
The client sends the third packet, which includes a COOKIE ECHO chunk. This is a
very simple chunk that echoes, without change, the cookie sent by the server. SCTP
allows the inclusion of data chunks in this packet.
The server sends the fourth packet, which includes the COOKIE ACK chunk that
acknowledges the receipt of the COOKIE ECHO chunk. SCTP allows the inclusion
of data chunks with this packet.
Data Transfer
The whole purpose of an association is to transfer data between two ends.
After the association is established, bidirectional data transfer can take place.
The client and the server can both send data.
SCTP supports piggybacking.
Multistream Delivery
SCTP can support multiple streams, which means that the sender process
can define different streams and a message can belong to one of these
streams.
Each stream is assigned a stream identifier (SI) which uniquely defines
that stream.
SCTP supports two types of data delivery in each stream: ordered (default)
and unordered.
Association Termination
In SCTP, either of the two parties involved in exchanging data (client or server) can
close the connection.
SCTP does not allow a “half closed” association. If one end closes the association,
the other end must stop sending new data.
If any data are left over in the queue of the recipient of the termination request, they
are sent and the association is closed.
Association termination uses three packets.
Receiver Site
The receiver has one buffer (queue) and three variables.
The queue holds the received data chunks that have not yet been read by the process.
The first variable holds the last TSN received, cumTSN.
The second variable holds the available buffer size, winSize.
The third variable holds the last accumulative acknowledgment, lastACK.
The following figure shows the queue and variables at the receiver site.
When the site receives a data chunk, it stores it at the end of the buffer (queue) and
subtracts the size of the chunk from winSize.
The TSN number of the chunk is stored in the cumTSN variable.
When the process reads a chunk, it removes it from the queue and adds the size of the
removed chunk to winSize (recycling).
When the receiver decides to send a SACK, it checks the value of lastAck; if it is less
than cumTSN, it sends a SACK with a cumulative TSN number equal to the
cumTSN.
It also includes the value of winSize as the advertised window size.
Sender Site
The sender has one buffer (queue) and three variables: curTSN, rwnd, and inTransit.
We assume each chunk is 100 bytes long. The buffer holds the chunks produced by
the process that either have been sent or are ready to be sent.
The first variable, curTSN, refers to the next chunk to be sent.
All chunks in the queue with a TSN less than this value have been sent, but not
acknowledged; they are outstanding.
The second variable, rwnd, holds the last value advertised by the receiver (in bytes).
The third variable, inTransit, holds the number of bytes in transit, bytes sent but not
yet acknowledged.
The following figure shows the queue and variables at the sender site.
A chunk pointed to by curTSN can be sent if the size of the data is less than or equal
to the quantity rwnd - inTransit.
After sending the chunk, the value of curTSN is incremented by 1 and now points to
the next chunk to be sent.
The value of inTransit is incremented by the size of the data in the transmitted chunk.
When a SACK is received, the chunks with a TSN less than or equal to the
cumulative TSN in the SACK are removed from the queue and discarded. The sender
does not have to worry about them anymore.
The value of inTransit is reduced by the total size of the discarded chunks.
The value of rwnd is updated with the value of the advertised window in the SACK.
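The sender-side bookkeeping above can be sketched with two toy helpers (the queue is modeled as a dict mapping TSN to chunk size in bytes; names are illustrative):

```python
def can_send(chunk_size, rwnd, in_transit):
    """A chunk may be sent only if its data fits in rwnd - inTransit."""
    return chunk_size <= rwnd - in_transit

def on_sack(queue, cum_tsn, advertised_rwnd, in_transit):
    """Process a SACK: discard chunks with TSN <= cumulative TSN, shrink
    inTransit by the freed bytes, and adopt the advertised window."""
    acked = [tsn for tsn in queue if tsn <= cum_tsn]
    freed = sum(queue.pop(tsn) for tsn in acked)
    return advertised_rwnd, in_transit - freed
```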
Receiver Site
The receiver stores all chunks that have arrived in its queue, including the
out-of-order ones. However, it leaves spaces for any missing chunks.
It discards duplicate messages, but keeps track of them for reports to the sender.
The following figure shows a typical design for the receiver site and the state of the
receiving queue at a particular point in time.
An array of variables keeps track of the beginning and the end of each block that is
out of order.
An array of variables holds the duplicate chunks received.
There is no need for storing duplicate chunks in the queue and they will be discarded.
Sender Site
The sender site needs two buffers (queues): a sending queue and a
retransmission queue.
Three variables are used, rwnd, inTransit, and curTSN, as described in the previous
section.
The following figure shows a typical design.
Quality of Service (QoS)
Quality of Service (QoS) is basically the ability to provide different priorities to
different applications, users, or data flows, in order to guarantee a certain level of
performance to the flow of data. In other words, we can also define Quality of Service as
something that the flow seeks to attain. QoS is basically the overall performance of the
computer network, as seen mainly by the users of the network.
Flow Characteristics
The following four characteristics are mainly attributed to a flow:
• Reliability
• Delay
• Jitter
• Bandwidth
Reliability
It is one of the main characteristics that a flow needs. A lack of reliability means
losing a packet or an acknowledgement, which entails retransmission. Reliability is
especially important for electronic mail, file transfer, and Internet access.
Delay
Another characteristic of the flow is the delay in transmission between the source and
destination. During audio conferencing, telephony, video conferencing, and remote conferencing
there should be a minimum delay.
Jitter
It is basically the variation in delay for packets belonging to the same flow. High
jitter means a large variation in the packet delay; low jitter means the variation
is small.
Bandwidth
The different applications need different bandwidth.
How to achieve Quality of Service?
Quality of Service can be achieved by using tools and techniques such as a jitter
buffer and traffic shaping.
Jitter buffer
This is a temporary storage buffer used to store incoming data packets. It is used in
packet-based networks to ensure that the continuity of the data stream is not
disturbed, by smoothing out packet arrival times during periods of network
congestion.
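A minimal jitter-buffer sketch (the class is hypothetical; the playout policy of holding a fixed number of packets before releasing them in order is illustrative):

```python
import heapq

class JitterBuffer:
    """Toy jitter buffer: hold arriving packets briefly and release them
    in sequence order, smoothing out uneven arrival times."""
    def __init__(self, depth=3):
        self.depth = depth          # packets to buffer before playout
        self.heap = []              # min-heap keyed on sequence number

    def arrive(self, seq, payload):
        heapq.heappush(self.heap, (seq, payload))

    def playout(self):
        """Release the lowest-sequence packet once the buffer is primed."""
        if len(self.heap) >= self.depth:
            return heapq.heappop(self.heap)
        return None
```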
Traffic shaping
This technique, also known as packet shaping, is a congestion control or
management technique that regulates network data transfer by delaying the flow of
less important or less necessary data packets.
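One classic shaping mechanism is the token bucket, which delays traffic that exceeds the configured rate; a minimal sketch (the parameters are illustrative, not tied to any particular product):

```python
class TokenBucket:
    """Token-bucket shaper: a packet may pass only when enough tokens have
    accumulated; excess traffic is held back until tokens refill."""
    def __init__(self, rate, capacity):
        self.rate = rate             # tokens added per second
        self.capacity = capacity     # maximum burst size
        self.tokens = capacity
        self.last = 0.0

    def allow(self, size, now):
        """Refill tokens for elapsed time, then admit if size fits."""
        elapsed = now - self.last
        self.last = now
        self.tokens = min(self.capacity, self.tokens + elapsed * self.rate)
        if size <= self.tokens:
            self.tokens -= size
            return True
        return False
```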
Stateless solution: Here, the server is not required to keep or store session
information. The routers maintain no fine-grained state about traffic; this makes the
solution scalable and robust. However, the services are weak, since there is no
guarantee about the performance delay a particular application will encounter. In
the stateless solution, the server and client are loosely coupled and can act independently.
Stateful solution: Here, the server is required to maintain the current state and session
information, and the routers maintain per-flow state. Since the flow is central to
providing Quality of Service, this approach offers powerful services such as
guaranteed service, high resource utilization, and protection, but it is much less
scalable and robust. Here, the server and client are tightly coupled.
Integrated Services: Also called IntServ, this QoS model reserves bandwidth along a
specific path on the network. Applications request resource reservations from the
network, and the network devices monitor the flow of packets to make sure network
resources can accept them. Implementing the Integrated Services model requires
IntServ-capable routers and a resource reservation protocol (RSVP). This model has
limited scalability and high consumption of network resources.
Differentiated Services: In this QoS model, network elements such as routers and
switches are configured to serve multiple categories of traffic with different
priorities. A company can categorize its network traffic based on its requirements,
e.g., assigning higher priority to audio traffic.
Difference between Integrated Services and Differentiated Services:

Integrated Services:
This architecture mainly specifies the elements needed to guarantee Quality of Service (QoS) on the network.
These services involve prior reservation of resources before sending, in order to achieve Quality of Service.
They involve per-flow setup.
An end-to-end service scope is available.

Differentiated Services:
This architecture mainly specifies a simple and scalable mechanism for classifying and managing network traffic, and also provides QoS on modern IP networks.
These services mark packets with a priority and then send them into the network; there is no concept of prior reservation.
They involve long-term setup.
A domain service scope is involved.
UNIT III NETWORK LAYER
NETWORK LAYER
• The network layer in the TCP/IP protocol suite is responsible for the host-to-host
delivery of datagrams.
• It provides services to the transport layer and receives services from the data-link
layer.
• The network layer translates the logical addresses into physical addresses.
• It determines the route from the source to the destination and also manages the traffic
problems such as switching, routing and controls the congestion of data packets.
• The main role of the network layer is to move the packets from sending host to the
receiving host.
Services provided by network layer
PACKETIZING
• The first duty of the network layer is definitely packetizing.
• This means encapsulating the payload (data received from upper layer) in a network-
layer packet at the source and decapsulating the payload from the network-layer
packet at the destination.
• The network layer is responsible for delivery of packets from a sender to a receiver
without changing or using the contents.
ROUTING AND FORWARDING
Routing
• The network layer is responsible for routing the packet from its source to the
destination.
• The network layer is responsible for finding the best one among these possible
routes.
• The network layer needs to have some specific strategies for defining the best
route.
• Routing is the concept of applying strategies and running routing protocols to
create the decision-making tables for each router.
• These tables are called routing tables.
Forwarding
• Forwarding can be defined as the action applied by each router when a packet
arrives at one of its interfaces.
• The decision-making table, a router normally uses for applying this action is
called the forwarding table.
• When a router receives a packet from one of its attached networks, it needs to
forward the packet to another attached network.
ERROR CONTROL
The network layer in the Internet does not directly provide error control.
It adds a checksum field to the datagram to detect corruption in the header, but
not in the whole datagram.
This checksum lets the receiver detect any changes or corruption in the header of the datagram.
The Internet uses an auxiliary protocol called ICMP, that provides some kind of error
control if the datagram is discarded or has some unknown information in the header.
FLOW CONTROL
Flow control regulates the amount of data a source can send without overwhelming
the receiver.
The network layer in the Internet, however, does not directly provide any flow
control.
The datagrams are sent by the sender when they are ready, without any attention to
the readiness of the receiver.
Since flow control is provided by most of the upper-layer protocols that use the
services of the network layer, another level of flow control would make the network
layer more complicated and the whole system less efficient.
CONGESTION CONTROL
Another issue in a network-layer protocol is congestion control.
Congestion in the network layer is a situation in which too many datagrams are
present in an area of the Internet.
Congestion may occur if the number of datagrams sent by source computers is beyond
the capacity of the network or routers.
In this situation, some routers may drop some of the datagrams.
SECURITY
Another issue related to communication at the network layer is security.
To provide security for a connectionless network layer, we need to have another
virtual level that changes the connectionless service to a connection oriented service.
This virtual layer is called IPSec (IP Security).
2.1 SWITCHING
• The technique of transferring the information from one computer network to another
network is known as switching.
• Switching in a computer network is achieved by using switches.
• A switch is a small hardware device which is used to join multiple computers
together with one local area network (LAN).
• Switches are devices capable of creating temporary connections between two or
more devices linked to the switch.
• Switches are used to forward the packets based on MAC addresses.
• A Switch is used to transfer the data only to the device that has been addressed. It
verifies the destination address to route the packet appropriately.
• It is operated in full duplex mode.
• It does not broadcast the message as it works with limited bandwidth.
Advantages of Switching:
Switch increases the bandwidth of the network.
It reduces the workload on individual PCs as it sends the information to only that
device which has been addressed.
It increases the overall performance of the network by reducing the traffic on the
network.
There will be less frame collision as switch creates the collision domain for each
connection.
Disadvantages of Switching:
A Switch is more expensive than network bridges.
A Switch cannot determine the network connectivity issues easily.
Proper designing and configuration of the switch are required to handle multicast
packets.
PACKET SWITCHING
The packet switching is a switching technique in which the message is sent in one
go, but it is divided into smaller pieces, and they are sent individually.
The message splits into smaller pieces known as packets and packets are given a
unique number to identify their order at the receiving end.
Every packet contains some information in its headers such as source address,
destination address and sequence number.
Packets will travel across the network, taking the shortest possible path.
All the packets are reassembled at the receiving end in correct order.
If any packet is missing or corrupted, then a message is sent asking the sender to
resend it.
If the correct order of the packets is reached, then the acknowledgment message
will be sent.
In this example, all four packets (or datagrams) belong to the same message, but may travel
different paths to reach their destination.
Routing Table
In this type of network, each switch (or packet switch) has a routing table which is
based on the destination address. The routing tables are dynamic and are updated
periodically. The destination addresses and the corresponding forwarding output
ports are recorded in the tables.
Delay in a datagram network
Virtual-Circuit Network vs Datagram Network

Virtual-Circuit Network:
• A dedicated path exists for data transfer.
• All the packets take the same path.
• Resources are allocated on demand using the 1st packet.
• Reliable.

Datagram Network:
• No dedicated path exists for data transfer.
• All the packets may not take the same path.
• No resources are allocated.
• Unreliable.
FIELD DESCRIPTION
Version: Specifies the version of IP. Two versions exist, IPv4 and IPv6.
HLen: Specifies the length of the header.
TOS (Type of Service): An indication of the quality-of-service parameters desired,
such as Precedence, Delay, Throughput and Reliability.
Length: Length of the entire datagram, including the header. The maximum size of an
IP datagram is 65,535 (2^16 - 1) bytes.
Ident (Identification): Uniquely identifies the datagram; all fragments of a datagram
carry the same value. Used for fragmentation and reassembly.
Flags: Used to control whether routers are allowed to fragment a packet. If a packet
is fragmented, this flag value is 1; if not, the flag value is 0.
Offset (Fragmentation offset): Indicates where in the datagram this fragment belongs.
The fragment offset is measured in units of 8 octets (64 bits). The first fragment
has offset zero.
TTL (Time to Live): Indicates the maximum time the datagram is allowed to remain in
the network. If this field contains the value zero, the datagram must be destroyed.
Protocol: Indicates the next-level protocol used in the data portion of the datagram.
Checksum: Used to detect processing errors introduced into the packet.
Source Address: The IP address of the original sender of the packet.
Destination Address: The IP address of the final destination of the packet.
Options: This is an optional field. It may contain values for options such as
Security, Record Route, Time Stamp, etc.
Pad: Used to ensure that the internet header ends on a 32-bit boundary. The padding
is zero.
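As an illustration of the field layout above, the fixed 20-byte IPv4 header can be decoded with Python's standard struct module. The field names follow the table; the function itself is a hypothetical sketch, not a library API.

```python
import struct

def parse_ipv4_header(raw: bytes) -> dict:
    """Decode the fixed 20-byte part of an IPv4 header."""
    ver_hlen, tos, length, ident, flags_off, ttl, proto, csum = struct.unpack(
        "!BBHHHBBH", raw[:12])
    return {
        "version": ver_hlen >> 4,
        "hlen": (ver_hlen & 0x0F) * 4,       # header length in bytes
        "tos": tos,
        "length": length,
        "ident": ident,
        "flags": flags_off >> 13,            # top 3 bits
        "offset": (flags_off & 0x1FFF) * 8,  # fragment offset in bytes
        "ttl": ttl,
        "protocol": proto,
        "checksum": csum,
        "src": ".".join(str(b) for b in raw[12:16]),
        "dst": ".".join(str(b) for b in raw[16:20]),
    }

# Build a sample header (version 4, HLen of 5 words, TTL 64, protocol 6 = TCP):
sample = struct.pack("!BBHHHBBH4s4s", 0x45, 0, 20, 1, 0, 64, 6, 0,
                     bytes([10, 0, 0, 1]), bytes([10, 0, 0, 2]))
fields = parse_ipv4_header(sample)
```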
IP DATAGRAM - FRAGMENTATION AND REASSEMBLY
Fragmentation :
Every network type has a maximum transmission unit (MTU), which is the
largest IP datagram that it can carry in a frame.
The original packet starts at the client; the fragments are reassembled at the
server.
The value of the identification field is the same in all fragments, as is the value
of the flags field with the more bit set for all fragments except the last.
Also, the value of the offset field for each fragment is shown.
Although the fragments arrived out of order at the destination, they can be
correctly reassembled.
The value of the offset field is always relative to the original datagram.
Even if each fragment follows a different path and arrives out of order, the
final destination host can reassemble the original datagram from the fragments
received (if none of them is lost) using the following strategy:
1) The first fragment has an offset field value of zero.
2) Divide the length of the first fragment by 8. The second fragment has
an offset value equal to that result.
3) Divide the total length of the first and second fragments by 8. The third
fragment has an offset value equal to that result.
4) Continue the process. The last fragment has its M (more) bit set to 0.
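The offset arithmetic in the steps above can be checked with a small Python sketch. The function name and the 1480-byte fragment payload are illustrative assumptions.

```python
def fragment_offsets(total_payload: int, mtu_payload: int) -> list:
    """Return (offset_in_8_byte_units, more_bit) for each fragment.
    mtu_payload must be a multiple of 8, as IP requires for non-final fragments."""
    assert mtu_payload % 8 == 0
    frags = []
    off = 0
    while off < total_payload:
        size = min(mtu_payload, total_payload - off)
        more = 1 if off + size < total_payload else 0   # M bit = 0 only on last
        frags.append((off // 8, more))                  # offset in 8-octet units
        off += size
    return frags

# A 4000-byte payload over links that carry 1480 payload bytes per fragment:
offsets = fragment_offsets(4000, 1480)
```

Note how the offsets (0, 185, 370) are relative to the original datagram, so the fragments can be reassembled even if they arrive out of order.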
Reassembly:
Reassembly is done at the receiving host and not at each router.
To enable these fragments to be reassembled at the receiving host, they all
carry the same identifier in the Ident field.
This identifier is chosen by the sending host and is intended to be unique
among all the datagrams that might arrive at the destination from this source
over some reasonable time period.
Since all fragments of the original datagram contain this identifier, the
reassembling host will be able to recognize those fragments that go together.
However, if even a single fragment is lost, the receiver will still attempt to
reassemble the datagram, and it will eventually give up and have to garbage-
collect the resources that were used to perform the failed reassembly.
Hosts are now strongly encouraged to perform “path MTU discovery,” a
process by which fragmentation is avoided by sending packets that are small
enough to traverse the link with the smallest MTU in the path from sender to
receiver.
IP SECURITY
There are three security issues that are particularly applicable to the IP protocol:
(1) Packet Sniffing (2) Packet Modification and (3) IP Spoofing.
Packet Sniffing
An intruder may intercept an IP packet and make a copy of it.
Packet sniffing is a passive attack, in which the attacker does not change the
contents of the packet.
This type of attack is very difficult to detect because the sender and the
receiver may never know that the packet has been copied.
Although packet sniffing cannot be stopped, encryption of the packet can
make the attacker’s effort useless.
The attacker may still sniff the packet, but the content is not detectable.
Packet Modification
The second type of attack is to modify the packet.
The attacker intercepts the packet, changes its contents, and sends the new
packet to the receiver.
The receiver believes that the packet is coming from the original sender.
This type of attack can be detected using a data integrity mechanism.
The receiver, before opening and using the contents of the message, can use
this mechanism to make sure that the packet has not been changed during the
transmission.
IP Spoofing
An attacker can masquerade as somebody else and create an IP packet that
carries the source address of another computer.
An attacker can send an IP packet to a bank pretending that it is coming from
one of the customers.
This type of attack can be prevented using an origin authentication mechanism
IP Sec
The IP packets today can be protected from the previously mentioned attacks
using a protocol called IPSec (IP Security).
This protocol is used in conjunction with the IP protocol.
The IPSec protocol creates a connection-oriented service between two entities in
which they can exchange IP packets without worrying about the three attacks
discussed above: packet sniffing, packet modification, and IP spoofing.
IP Sec provides the following four services:
1) Defining Algorithms and Keys : The two entities that want to create a
secure channel between themselves can agree on some available
algorithms and keys to be used for security purposes.
2) Packet Encryption : The packets exchanged between two parties can
be encrypted for privacy using one of the encryption algorithms and a
shared key agreed upon in the first step. This makes the packet sniffing
attack useless.
3) Data Integrity : Data integrity guarantees that the packet is not
modified during the transmission. If the received packet does not pass
the data integrity test, it is discarded. This prevents the second attack,
packet modification.
4) Origin Authentication : IPSec can authenticate the origin of the packet to be
sure that the packet is not created by an imposter. This can prevent IP
spoofing attacks.
2.3 IPV4 ADDRESSES
The identifier used in the IP layer of the TCP/IP protocol suite to identify the
connection of each device to the Internet is called the Internet address or IP address.
Internet Protocol version 4 (IPv4) is the fourth version in the development of the
Internet Protocol (IP) and the first version of the protocol to be widely deployed.
IPv4 is described in IETF RFC 791, published in September 1981.
The IP address is the address of the connection, not the host or the router. An IPv4
address is a 32-bit address that uniquely and universally defines the connection .
If the device is moved to another network, the IP address may be changed.
IPv4 addresses are unique in the sense that each address defines one, and only one,
connection to the Internet.
If a device has two connections to the Internet, via two networks, it has two IPv4
addresses.
IPv4 addresses are universal in the sense that the addressing system must be accepted
by any host that wants to be connected to the Internet.
In binary notation, an IPv4 address is displayed as 32 bits. To make the address more
readable, one or more spaces are usually inserted between bytes (8 bits).
In dotted-decimal notation, IPv4 addresses are usually written in decimal form with a
decimal point (dot) separating the bytes. Each number in the dotted-decimal notation
is between 0 and 255.
In hexadecimal notation, each hexadecimal digit is equivalent to four bits. This means that a
32-bit address has 8 hexadecimal digits. This notation is often used in network programming.
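The three notations can be converted with a few lines of Python. The helper names are hypothetical; this is a sketch, not a library API.

```python
def dotted_to_binary(addr: str) -> str:
    """Dotted-decimal -> 32-bit binary notation with a space between bytes."""
    return " ".join(f"{int(octet):08b}" for octet in addr.split("."))

def dotted_to_hex(addr: str) -> str:
    """Dotted-decimal -> 8 hexadecimal digits (two per byte)."""
    return "".join(f"{int(octet):02X}" for octet in addr.split("."))

binary = dotted_to_binary("192.168.0.1")
hexstr = dotted_to_hex("192.168.0.1")
```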
HIERARCHY IN IPV4 ADDRESSING
In any communication network that involves delivery, the addressing system is
hierarchical.
A 32-bit IPv4 address is also hierarchical, but divided only into two parts.
The first part of the address, called the prefix, defines the network(Net ID); the second
part of the address, called the suffix, defines the node (Host ID).
The prefix length is n bits and the suffix length is (32-n) bits.
There are two broad categories of IPv4 Addressing techniques. They are
Classful Addressing
Classless Addressing
CLASSFUL ADDRESSING
An IPv4 address is 32-bit long(4 bytes).
An IPv4 address is divided into sub-classes:
Classful Network Architecture
Class A
• In Class A, an IP address is assigned to networks that contain a large
number of hosts.
• The Network ID is 8 bits long.
• The Host ID is 24 bits long.
• In Class A, the higher order bit of the first octet is always set to 0, and
the remaining 7 bits determine the network ID.
• The total number of networks in Class A = 2^7 = 128 network address
• The total number of hosts in Class A = 2^24 - 2 = 16777214 host address
Class B
• In Class B, an IP address is assigned to those networks that range from small-
sized to large-sized networks.
• The Network ID is 16 bits long.
• The Host ID is 16 bits long.
• In Class B, the higher order bits of the first octet are always set to 10, and
the remaining 14 bits determine the network ID.
• The other 16 bits determine the Host ID.
• The total number of networks in Class B = 2^14 = 16384 network address
• The total number of hosts in Class B = 2^16 - 2 = 65534 host address
Class C
• In Class C, an IP address is assigned to only small-sized networks.
• The Network ID is 24 bits long.
• The host ID is 8 bits long.
• In Class C, the higher order bits of the first octet are always set to 110, and the
remaining 21 bits determine the network ID.
• The 8 bits of the host ID determine the host in a network.
• The total number of networks = 2^21 = 2097152 network address
• The total number of hosts = 2^8 - 2 = 254 host address
Class D
• In Class D, an IP address is reserved for multicast addresses.
• It does not possess subnetting.
• The higher order bits of the first octet are always set to 1110, and the remaining
bits determine the host ID in any network.
Class E
• In Class E, an IP address is used for the future use or for the research
and development purposes.
• It does not possess any subnetting.
• The higher order bits of the first octet are always set to 1111, and the remaining
bits determine the host ID in any network.
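Because the class is encoded in the leading bits of the first octet, it can be determined from the first octet's value alone. A small sketch (the function name is hypothetical):

```python
def address_class(addr: str) -> str:
    """Determine the class of an IPv4 address from its first octet."""
    first = int(addr.split(".")[0])
    if first < 128:
        return "A"     # leading bit 0   (0-127)
    if first < 192:
        return "B"     # leading bits 10  (128-191)
    if first < 224:
        return "C"     # leading bits 110 (192-223)
    if first < 240:
        return "D"     # leading bits 1110 (multicast, 224-239)
    return "E"         # leading bits 1111 (reserved, 240-255)
```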
Address Depletion in Classful Addressing
The reason that classful addressing has become obsolete is address depletion.
Since the addresses were not distributed properly, the Internet was faced with the
problem of the addresses being rapidly used up.
This results in no more addresses available for organizations and individuals that
needed to be connected to the Internet.
To understand the problem, let us think about class A.
This class can be assigned to only 128 organizations in the world, and each
organization receives a single network with 16,777,216 addresses.
Since there may be only a few organizations that are this large, most of the addresses
in this class were wasted (unused).
Class B addresses were designed for midsize organizations, but many of the addresses
in this class also remained unused.
Class C addresses have a completely different flaw in design. The number of
addresses that can be used in each network (256) was so small that most companies
were not comfortable using a block in this address class.
Class E addresses were almost never used, wasting the whole class.
Advantage of Classful Addressing
Although classful addressing had several problems and became obsolete, it had one
advantage.
Given an address, we can easily find the class of the address and, since the prefix
length for each class is fixed, we can find the prefix length immediately.
In other words, the prefix length in classful addressing is inherent in the address; no
extra information is needed to extract the prefix and the suffix.
Subnetting
In subnetting, a class A or class B block is divided into several subnets.
Each subnet has a larger prefix length than the original network.
For example, if a network in class A (prefix length 8) is divided into four subnets,
each subnet has a prefix length of nsub = 8 + 2 = 10.
At the same time, if all of the addresses in a network are not used, subnetting allows
the addresses to be divided among several organizations.
CLASSLESS ADDRESSING
In 1996, the Internet authorities announced a new architecture called classless
addressing.
In classless addressing, variable-length blocks are used that belong to no classes.
We can have a block of 1 address, 2 addresses, 4 addresses, 128 addresses, and so on.
In classless addressing, the whole address space is divided into variable length
blocks.
The prefix in an address defines the block (network); the suffix defines
the node (device).
Theoretically, we can have a block of 2^0, 2^1, 2^2, ..., 2^32 addresses.
The number of addresses in a block needs to be a power of 2. An organization
can be granted one block of addresses.
For example , 192.168.100.14 /24 represents the IP address 192.168.100.14 and, its subnet
mask 255.255.255.0, which has 24 leading 1-bits.
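This /24 example can be verified with Python's standard ipaddress module:

```python
import ipaddress

# strict=False lets us pass a host address (192.168.100.14) rather than
# requiring the network address itself
net = ipaddress.ip_network("192.168.100.14/24", strict=False)
```

The resulting network object exposes the mask, the network address, and the block size directly.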
Address Aggregation
One of the advantages of the CIDR strategy is address aggregation
(sometimes called address summarization or route summarization).
When blocks of addresses are combined to create a larger block, routing can be done
based on the prefix of the larger block.
ICANN assigns a large block of addresses to an ISP.
Each ISP in turn divides its assigned block into smaller subblocks and grants the
subblocks to its customers.
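Address aggregation can be demonstrated with ipaddress.collapse_addresses; the 203.0.113.0 blocks below are illustrative documentation addresses standing in for an ISP's customer subblocks.

```python
import ipaddress

# Four contiguous /26 customer subblocks summarize into a single /24 route,
# so routers outside the ISP need only one table entry for all four customers:
blocks = [ipaddress.ip_network(f"203.0.113.{i}/26") for i in (0, 64, 128, 192)]
aggregated = list(ipaddress.collapse_addresses(blocks))
```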
Special Addresses in IPv4
There are five special addresses that are used for special purposes: the this-host
address, the limited-broadcast address, the loopback address, private addresses,
and multicast addresses.
This-host Address
The only address in the block 0.0.0.0/32 is called the this-host address.
It is used whenever a host needs to send an IP datagram but it does
not know its own address to use as the source address.
Limited-broadcast Address
The only address in the block 255.255.255.255/32 is called
the limited- broadcast address.
It is used whenever a router or a host needs to send a datagram to all
devices in a network.
The routers in the network, however, block a packet having this
address as the destination; the packet cannot travel outside the
network.
Loopback Address
The block 127.0.0.0/8 is called the loopback address.
A packet with one of the addresses in this block as the
destination address never leaves the host; it will remain in the host.
Private Addresses
Four blocks are assigned as private addresses: 10.0.0.0/8,
172.16.0.0/12, 192.168.0.0/16, and 169.254.0.0/16.
Multicast Addresses
The block 224.0.0.0/4 is reserved for multicast addresses.
2.4 Subnetting
Designing Subnets
The subnetworks in a network should be carefully designed to enable the
routing of packets. We assume the total number of addresses granted to the
organization is N, the prefix length is n, the assigned number of addresses to each
subnetwork is Nsub, and the prefix length for each subnetwork is nsub. Then the
following steps need to be carefully followed to guarantee the proper operation of the
subnetworks.
The number of addresses in each subnetwork should be a power of 2.
The prefix length for each subnetwork should be found using the
following formula: nsub = 32 - log2(Nsub).
a. The number of addresses in the largest subblock, which requires 120 addresses,
is not a power of 2. We allocate 128 addresses. The subnet mask for this subnet
can be found as n1 = 32 − log2 128 = 25. The first address in this block is
14.24.74.0/25; the last address is 14.24.74.127/25.
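The calculation in step (a) generalizes to a short function, sketched here (the helper name is hypothetical): round the request up to a power of 2, then apply nsub = 32 - log2(Nsub).

```python
import math

def subnet_prefix(needed: int) -> tuple:
    """Round the requested address count up to a power of 2 and return
    (allocated_addresses, prefix_length) using n_sub = 32 - log2(N_sub)."""
    allocated = 1 << math.ceil(math.log2(needed))
    return allocated, 32 - int(math.log2(allocated))
```

For the 120-address subblock worked out above this gives an allocation of 128 addresses with a /25 prefix.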
• IPv6 evolved to solve the address space problem and offers a rich set
of services.
• Some hosts and routers will run IPv4 only, some will run IPv4 and IPv6 and
some will run IPv6 only.
DRAWBACKS OF IPV4
• Despite subnetting and CIDR, address depletion is still a long-term problem.
• Internet must accommodate real-time audio and video transmission that
requires minimum delay strategies and reservation of resources.
• Internet must provide encryption and authentication of data for some
applications
FEATURES OF IPV6
1. Better header format - IPv6 uses a new header format in which options are
separated from the base header and inserted, when needed, between the base
header and the data. This simplifies and speeds up the routing process because
most of the options do not need to be checked by routers.
2. New options - IPv6 has new options to allow for additional functionalities.
3. Allowance for extension - IPv6 is designed to allow the extension of the
protocol if required by new technologies or applications.
4. Support for resource allocation - In IPv6, the type-of-service field has been
removed, but two new fields, traffic class and flow label, have been added to
enable the source to request special handling of the packet. This mechanism
can be used to support traffic such as real-time audio and video.
Additional Features :
1. Need to accommodate scalable routing and addressing
2. Support for real-time services
3. Security support
4. Autoconfiguration -
The ability of hosts to automatically configure themselves with
such information as their own IP address and domain name.
5. Enhanced routing functionality, including support for mobile hosts
6. Transition from IPv4 to IPv6
GLOBAL UNICAST
* Large chunks (87%) of address space are left unassigned for future use.
* IPv6 defines two types of local addresses for private networks.
o Link local - enables a host to construct an address that
need not be globally unique.
o Site local - allows a valid local address for use in an
isolated site with several subnets.
* Reserved addresses start with prefix of eight 0's.
o Unspecified address is used when a host does not know its address
o Loopback address is used for testing purposes before connecting
o Compatible address is used when an IPv6 host uses an IPv4 network
o Mapped address is used when an IPv6 host communicates with an IPv4
host
* IPv6 defines anycast address, assigned to a set of interfaces.
* A packet with an anycast address is delivered to only the nearest one of those
interfaces.
ADVANTAGES OF IPV6
* Address space ― IPv6 uses 128-bit address whereas IPv4 uses 32-bit
address. Hence IPv6 has huge address space whereas IPv4 faces
address shortage problem.
* Header format ― Unlike IPv4, optional headers are separated from the
base header in IPv6. Each router thus need not process unwanted
additional information.
* Extensible ― Unassigned IPv6 addresses can accommodate needs of
future technologies.
All nodes except the destination discard the packet but update their ARP table.
The target host (System B) constructs an ARP Response packet.
The Response is unicast and sent back to the source host (System A).
The source host stores the target's logical & physical address pair in its ARP
table from the ARP Response.
If the target node does not exist on the same network, the ARP request is sent
to the default router.
ARP Packet
1. Hardware type: This is a 16-bit field defining the type of the network on which
ARP is running. Ethernet is given the type 1.
2. Protocol type: This is a 16-bit field defining the protocol. The value of this
field for the IPv4 protocol is 0800H.
3. Hardware length: This is an 8-bit field defining the length of the physical
address in bytes. For Ethernet the value is 6.
4. Protocol length: This is an 8-bit field defining the length of the logical address
in bytes. For the IPv4 protocol the value is 4.
5. Operation: This is a 16-bit field defining the type of packet. Packet types are
ARP request (1) and ARP reply (2).
6. Sender hardware address: This is a variable length field defining the physical
address of the sender. For example, for Ethernet this field is 6 bytes long.
7. Sender protocol address: This is also a variable length field defining the logical
address of the sender. For the IP protocol, this field is 4 bytes long.
8. Target hardware address: This is a variable length field defining the physical
address of the target. For Ethernet this field is 6 bytes long. For ARP request
message, this field is all 0’s because the sender does not know the physical
address of the target.
9. Target protocol address: This is also a variable length field defining the logical
address of the target. For the IPv4 protocol, this field is 4 bytes long.
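The nine fields above can be packed into a 28-byte ARP request with Python's struct module. This is a sketch; the MAC and IP addresses are illustrative, and the target hardware address is zeroed as the text describes.

```python
import struct

def build_arp_request(sender_mac: bytes, sender_ip: bytes, target_ip: bytes) -> bytes:
    """Pack an ARP request for IPv4 over Ethernet using the fields listed above."""
    return struct.pack(
        "!HHBBH6s4s6s4s",
        1,             # hardware type: Ethernet
        0x0800,        # protocol type: IPv4
        6,             # hardware address length in bytes
        4,             # protocol address length in bytes
        1,             # operation: ARP request
        sender_mac,
        sender_ip,
        b"\x00" * 6,   # target hardware address: unknown, so all 0's
        target_ip,
    )

pkt = build_arp_request(b"\xaa\xbb\xcc\xdd\xee\xff",
                        bytes([10, 0, 0, 1]), bytes([10, 0, 0, 2]))
```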
Device 1 connects to the local network and sends an RARP broadcast to all devices on
the subnet. In the RARP broadcast, the device sends its physical MAC address and
requests an IP address it can use.
Because a broadcast is sent, device 2 receives the broadcast request. However, since it is
not a RARP server, device 2 ignores the request.
The broadcast message also reaches the RARP server. The server processes the packet
and attempts to find device 1's MAC address in the RARP lookup table. If one is found,
the RARP server returns the IP address assigned to the device. In this case, the IP address
is 51.100.102.
CS3591-Computer Networks UNIT IV
UNIT IV ROUTING
Routing and protocols: Unicast routing - Distance Vector Routing - RIP - Link State
Routing - OSPF - Path-vector routing - BGP - Multicast Routing: DVMRP - PIM.
• A host or a router has a routing table with an entry for each destination, or a
combination of destinations, to route IP packets. Routing table can be either static or
dynamic.
• A static routing table contains information entered manually. The administrator enters
the route for each destination into the table.
• A dynamic routing table is updated periodically by using one of the dynamic
routing protocols such as RIP, OSPF, or BGP.
• The main function of the network layer is to route packets from source to destination.
To accomplish this, a route through the network must be selected; generally more
than one route is possible. The selection of a route is generally based on some
performance criterion. The simplest criterion is to choose the shortest route through
the network.
• The shortest route means a route that passes through the least number of nodes. This
shortest route selection results in least number of hops per packet. A routing algorithm
is designed to perform this task. The routing algorithm is a part of network layer
software.
JP COLLEGE OF ENGINEERING 1
Static Routing                                Dynamic Routing
1. In static routing, routes are              In dynamic routing, routes are updated
user-defined.                                 according to the topology.
2. Static routing does not use complex        Dynamic routing uses complex routing
routing algorithms.                           algorithms.
7. In static routing, failure of the link     In dynamic routing, failure of the link
disrupts the rerouting.                       does not interrupt the rerouting.
10. Another name for static routing is        Another name for dynamic routing is
non-adaptive routing.                         adaptive routing.
4.1.6 Design Goals
Routing algorithms often have one or more of the following design goals:
1. Optimality
2. Simplicity and low overhead
3. Robustness and stability
4. Rapid convergence
5. Flexibility.
1. Optimality
Optimality refers to the ability of the routing algorithm to select the best route.
The best route depends on the metrics and metric weightings used to make the
calculation. For example, one routing algorithm might use number of hops and delay,
but might weight delay more heavily in the calculation. Naturally, routing protocols
must strictly define their metric calculation algorithms.
2. Simplicity
Routing algorithms are also designed to be as simple as possible. In other words,
the routing algorithm must offer its functionality efficiently, with a minimum of
software and utilization overhead. Efficiency is particularly important when the
software implementing the routing algorithm must run on a computer with limited
physical resources.
3. Robustness
Routing algorithms must be robust. In other words, they should perform
correctly in the face of unusual or unforeseen circumstances such as hardware failures,
high load conditions and incorrect implementations. Because routers are located at
network junction points, they can cause considerable problems when they fail. The best
routing algorithms are often those that have withstood the test of time and proven
stable under a variety of network conditions.
4. Rapid convergence
Routing algorithms must converge rapidly. Convergence is the process of
agreement, by all routers, on optimal routes. When a network event causes routes to
either go down or become available, routers distribute routing update messages.
Routing update messages permeate networks, stimulating recalculation of optimal
routes and eventually causing all routers to agree on these routes. Routing algorithms
that converge slowly can cause routing loops or network outages.
5. Flexibility
Routing algorithms should also be flexible. In other words, routing algorithms
should quickly and accurately adapt to a variety of network circumstances. For example,
assume that a network segment has gone down. Many routing algorithms, on becoming
aware of this problem, will quickly select the next-best path for all routes normally
using that segment. Routing algorithms can be programmed to adapt to changes in
network bandwidth, router queue size, network delay, and other variables.
The following figure shows the subnet and sink tree, with the distance metric
measured as the number of hops.
Sink tree is not necessarily unique, other trees with the same path lengths
may exist.
Sink tree does not contain any loops, so each packet will be delivered within a
finite and bounded number of hops.
NETWORK AS A GRAPH
The Figure below shows a graph representing a network.
The nodes of the graph, labeled A through G, may be hosts, switches, routers, or
networks.
The edges of the graph correspond to the network links.
Each edge has an associated cost.
The basic problem of routing is to find the lowest-cost path between any two
nodes, where the cost of a path equals the sum of the costs of all the edges that
make up the path.
This static approach has several problems:
It does not deal with node or link failures.
It does not consider the addition of new nodes or links.
It implies that edge costs cannot change.
For these reasons, routing is achieved by running routing protocols among the
nodes.
These protocols provide a distributed, dynamic way to solve the problem of
finding the lowest-cost path in the presence of link and node failures and
changing edge costs.
Initial State
The initial table for all the nodes are given below
Each node sends its initial table (distance vector) to neighbors and receives
their estimate.
Node A sends its table to nodes B, C, E & F and receives tables from nodes B, C, E
& F.
Each node updates its routing table by comparing with each of its neighbor's
table
For each destination, Total Cost is computed as:
Total Cost = Cost (Node to Neighbor) + Cost (Neighbor to Destination)
If Total Cost < Cost then
Cost = Total Cost and NextHop = Neighbor
Node A learns from C's table to reach node D and from F's table to reach
node G.
Total Cost to reach node D via C = Cost (A to C) + Cost(C to D)
Cost = 1 + 1 = 2.
Since 2 < ∞, entry for destination D in A's table is changed to (D, 2, C)
Total Cost to reach node G via F = Cost(A to F) + Cost(F to G) = 1 + 1 = 2
Since 2 < ∞, entry for destination G in A's table is changed to (G, 2, F)
Each node builds its complete routing table after a few exchanges with its
neighbors.
System stabilizes when all nodes have complete routing information, i.e.,
convergence.
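The update rule above, Total Cost = Cost(Node to Neighbor) + Cost(Neighbor to Destination), iterated until no table changes, can be sketched in Python. The link costs come from the example above; all function and variable names are illustrative.

```python
INF = float("inf")

def distance_vector(links: dict) -> dict:
    """links maps (u, v) -> cost for each (undirected) direct link.
    Returns dist[u][dest] = (cost, next_hop) for every pair of nodes."""
    nodes = {n for edge in links for n in edge}
    dist = {u: {v: (0, u) if u == v else (INF, None) for v in nodes} for u in nodes}
    directed = list(links.items()) + [((v, u), c) for (u, v), c in links.items()]
    for (u, v), c in directed:
        dist[u][v] = (c, v)                       # direct neighbors
    changed = True
    while changed:                                # exchange until convergence
        changed = False
        for (u, n), c in directed:                # u hears neighbor n's vector
            for dest, (nd, _) in dist[n].items():
                if c + nd < dist[u][dest][0]:
                    dist[u][dest] = (c + nd, n)   # better route via neighbor n
                    changed = True
    return dist

links = {("A", "B"): 1, ("A", "C"): 1, ("C", "D"): 1, ("A", "F"): 1, ("F", "G"): 1}
tables = distance_vector(links)
```

As in the worked example, A learns to reach D via C and G via F, each at total cost 2.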
Routing tables are exchanged periodically or in case of triggered update.
The final distances stored at each node are given below:
Periodic Update
In this case, each node automatically sends an update message every so often,
even if nothing has changed.
The frequency of these periodic updates varies from protocol to protocol, but it
is typically on the order of several seconds to several minutes.
Triggered Update
In this case, whenever a node notices a link failure or receives an update from
one of its neighbors that causes it to change one of the routes in its routing table.
Whenever a node’s routing table changes, it sends an update to its neighbors,
which may lead to a change in their tables, causing them to send an update to
their neighbors.
Routers advertise the cost of reaching networks. The cost of reaching each link is 1
hop. For example, router C advertises to A that it can reach networks 2 and 3 at cost 0
(directly connected), networks 5 and 6 at cost 1, and network 4 at cost 2.
Each router updates cost and next hop for each network number.
Infinity is defined as 16, i.e., no route can have more than 15 hops. Therefore
RIP can be implemented on small-sized networks only.
Advertisements are sent every 30 seconds or in case of triggered update.
Reliable Flooding
Each node sends its LSP out on each of its directly connected links.
When a node receives the LSP of another node, it checks whether it already has
an LSP for that node.
If not, it stores and forwards the LSP on all other links except the incoming
one.
Else if the received LSP has a bigger sequence number, then it is stored and
forwarded. Older LSP for that node is discarded.
Otherwise discard the received LSP, since it is not latest for that node.
Thus recent LSP of a node eventually reaches all nodes, i.e., reliable flooding.
Flooding of LSP in a small network is as follows:
When node X receives Y’s LSP (fig a), it floods onto its neighbors A
and C (fig b)
Nodes A and C forward it to B, but do not send it back to X (fig c).
Node B receives two copies of the LSP with the same sequence number.
It accepts one LSP and forwards it to D (fig d). Flooding is complete.
LSP is generated either periodically or when there is a change in the topology.
Route Calculation
Each node knows the entire topology, once it has LSP from every other node.
Forward search algorithm is used to compute routing table from the received
LSPs.
Each node maintains two lists, namely Tentative and Confirmed with entries of
the form (Destination, Cost, NextHop).
Example :
4.6 OPEN SHORTEST PATH FIRST PROTOCOL (OSPF)
OSPF is a non-proprietary widely used link-state routing protocol.
OSPF Features are:
Authentication―Malicious host can collapse a network by advertising to
reach every host with cost 0. Such disasters are averted by authenticating
routing updates.
Additional hierarchy―Domain is partitioned into areas, i.e., OSPF is
more scalable.
Load balancing―Multiple routes to the same place are assigned same
cost. Thus traffic is distributed evenly.
Spanning Trees
In path-vector routing, the path from a source to all destinations is determined
by the best spanning tree.
The best spanning tree is not the least-cost tree.
It is the tree determined by the source when it imposes its own policy.
If there is more than one route to a destination, the source can choose the route
that meets its policy best.
A source may apply several policies at the same time.
One of the common policies uses the minimum number of nodes to be visited.
Another common policy is to avoid some nodes as the middle node in a route.
The spanning trees are made, gradually and asynchronously, by each node. When
a node is booted, it creates a path vector based on the information it can obtain
about its immediate neighbor.
A node sends greeting messages to its immediate neighbors to collect these
pieces of information.
Each node, after the creation of the initial path vector, sends it to all its
immediate neighbors.
Each node, when it receives a path vector from a neighbor, updates its path
vector using the formula
Example:
The Figure below shows a small internet with only five nodes.
Each source has created its own spanning tree that meets its policy.
The policy imposed by all sources is to use the minimum number of nodes to
reach a destination.
The spanning tree selected by A and E is such that the communication does not
pass through D as a middle node.
Similarly, the spanning tree selected by B is such that the communication does
not pass through C as a middle node.
4.8 BORDER GATEWAY PROTOCOL (BGP)
The Border Gateway Protocol version 4 (BGP4) is the only interdomain routing
protocol used in the Internet today.
BGP4 is based on the path-vector algorithm. It provides information about the
reachability of networks in the Internet.
BGP views the internet as a set of autonomous systems interconnected
arbitrarily.
Each AS has a border router (gateway), by which packets enter and leave that
AS. In the figure above, R3 and R4 are border routers.
One of the routers in each autonomous system is designated as the BGP speaker.
BGP speakers exchange reachability information with one another in what are
known as external BGP sessions.
BGP advertises complete path as enumerated list of AS (path vector) to reach a
particular network.
Paths must be loop-free, i.e., each AS appears in the list at most once.
For example, backbone network advertises that networks 128.96 and 192.4.153
can be reached along the path <AS1, AS2, AS4>.
If there are multiple routes to a destination, BGP speaker chooses one based on
policy.
Speakers need not advertise any route to a destination, even if one exists.
Advertised paths can be cancelled if a link/node on the path goes down. This
negative advertisement is known as a withdrawn route.
Routes are not repeatedly sent. If there is no change, keepalive messages are
sent.
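Loop freedom follows directly from the AS list carried in each advertisement: a speaker simply rejects any route whose AS path already contains its own AS number. A minimal sketch of that check (function name illustrative):

```python
def accept_advertisement(my_as, as_path):
    """A BGP speaker accepts a route only if its own AS number is absent
    from the advertised AS path; otherwise the route would form a loop."""
    return my_as not in as_path

# Using the backbone example above: networks reachable via <AS1, AS2, AS4>
print(accept_advertisement("AS3", ["AS1", "AS2", "AS4"]))  # True: loop-free
print(accept_advertisement("AS2", ["AS1", "AS2", "AS4"]))  # False: would loop
```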
INTERNET STRUCTURE
The Internet consists of a very large number of networks, so routing table
entries per router should be minimized.
A link-state routing protocol is used to partition a domain into areas.
A routing area is a set of routers configured to exchange link-state
information.
Areas introduce an additional level of hierarchy.
Thus domains can grow without burdening routing protocols.
Policies Used By Autonomous Systems :
Provider-Customer―The provider advertises the routes it knows to the customer,
and advertises the routes learned from the customer to everyone.
Customer-Provider―Customers want traffic destined to them to be directed to
them. So they advertise their own prefixes and the routes learned from their own
customers to the provider, and advertise the routes learned from the provider
only to their customers.
Peer―Two providers give each other access to each other’s customers without
having to pay.
4.10 MULTICASTING
In multicasting, a multicast router may have to send out copies of the same
datagram through more than one interface.
Hosts that are members of a group receive copies of any packets sent to that
group’s multicast address
A host can be in multiple groups
A host can join and leave groups
A host signals its desire to join or leave a multicast group by
communicating with its local router using a special protocol.
In IPv4, the protocol is Internet Group Management Protocol (IGMP)
In IPv6, the protocol is Multicast Listener Discovery (MLD)
Provides multicast routers with information about the membership status of
hosts connected to the network.
Enables a multicast router to create and update a list of loyal members for
each group.
MULTICAST ADDRESSING
Multicast address is associated with a group, whose members are dynamic.
Each group has its own IP multicast address.
IP addresses reserved for multicasting are Class D in IPv4 (224.0.0.0 to
239.255.255.255) and addresses with the 1111 1111 (ff00::/8) prefix in IPv6.
Hosts that are members of a group receive a copy of any packet whose
destination contains the group address.
Using IP multicast
Sending host does not send multiple copies of the packet
A host sends a single copy of the packet addressed to the group’s multicast
address
The sending host does not need to know the individual unicast IP address of
each member
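The reserved address range can be checked in a few lines of Python; the helper below is illustrative, and the standard `ipaddress` module applies the same Class D rule given above.

```python
import ipaddress

def is_ipv4_multicast(addr: str) -> bool:
    """IPv4 multicast (Class D) addresses have 1110 as their leading bits,
    i.e. a first octet in the range 224..239."""
    return 224 <= int(addr.split(".")[0]) <= 239

print(is_ipv4_multicast("224.0.0.1"))    # True  (Class D)
print(is_ipv4_multicast("192.4.153.7"))  # False (ordinary unicast)

# The standard library agrees:
print(ipaddress.ip_address("239.255.255.255").is_multicast)  # True
```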
TYPES OF MULTICASTING
Source-Specific Multicast - In source-specific multicast (one-to-many model),
the receiver specifies both the multicast group and the sender from which it is
interested in receiving packets. Example: Internet radio broadcasts.
Any-Source Multicast - In any-source multicast (many-to-many model), any host
may send to the group, and receivers accept packets from all senders. Example:
teleconferencing.
MULTICAST APPLICATIONS
Access to Distributed Databases
Information Dissemination
Teleconferencing.
Distance Learning
MULTICAST ROUTING
To support multicast, a router must additionally have multicast forwarding
tables that indicate, based on the multicast address, which links to use to
forward the multicast packet.
Unicast forwarding tables collectively specify a set of paths.
Multicast forwarding tables collectively specify a set of trees, known as
multicast distribution trees.
Multicast routing is the process by which multicast distribution trees are
determined.
Internet multicast is implemented on physical networks that support
broadcasting by extending forwarding functions.
Rendezvous-Point Tree: one router is the center of the group and
therefore the root of the tree. The RP multicasts to the receivers, and the
tree is then fixed up for optimization.
Multicasting is added to distance-vector routing in four stages.
Flooding
Reverse Path Forwarding (RPF)
Reverse Path Broadcasting (RPB)
Reverse Path Multicast (RPM)
Flooding
=> A router, on receiving a multicast packet from source S, forwards the packet
on all outgoing links.
=> The packet is flooded and even looped back to S.
=> The drawbacks are:
o It floods a network even if that network has no members for the group.
o Packets are forwarded by each router connected to a LAN, i.e., duplicate
flooding.
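The next stage, Reverse Path Forwarding (RPF), addresses these drawbacks by accepting a packet only when it arrives on the interface the router itself would use to reach the source. A minimal sketch, with an illustrative table and function name:

```python
def rpf_accept(unicast_next_if, source, arrival_interface):
    """Reverse Path Forwarding: accept (and flood) a multicast packet only
    if it arrived on the interface this router would itself use to send
    unicast traffic back toward the source; otherwise drop the duplicate."""
    return unicast_next_if.get(source) == arrival_interface

# Hypothetical router whose unicast table maps source -> outgoing interface
table = {"S": "eth0"}
print(rpf_accept(table, "S", "eth0"))  # True: shortest-path copy, forward it
print(rpf_accept(table, "S", "eth1"))  # False: looped/duplicate copy, drop
```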
Pruning:
Sent from routers receiving multicast traffic for which they have no active
group members
“Prunes” the tree created by DVMRP
Stops needless data from being sent
Grafting:
Used after a branch has been pruned back
Sent by a router that has a host that joins a multicast group
Goes from router to router until a router active on the multicast group is
reached
Sent for the following cases
A new host member joins a group
A new dependent router joins a pruned branch
A dependent router restarts on a pruned branch
4.12 Protocol Independent Multicast (PIM)
=> PIM divides the multicast routing problem into sparse mode and dense mode.
=> PIM sparse mode (PIM-SM) is widely used.
=> PIM does not rely on any type of unicast routing protocol, hence protocol
independent.
=> Routers explicitly join and leave multicast group using Join and Prune
messages.
=> One of the routers is designated as the rendezvous point (RP) for each group
in a domain, to receive PIM messages.
=> Multicast forwarding tree is built as a result of routers sending Join messages to
RP.
=> Two types of trees to be constructed:
Shared tree - used by all senders
Source-specific tree - used only by a specific sending host
=> The normal mode of operation creates the shared tree first, followed by one or
more source-specific trees
Shared Tree
=> When a router sends Join message for group G to RP, it goes through a set of
routers.
=> Join message is wildcarded (*), i.e., it is applicable to all senders.
=> Routers create an entry (*, G) in its forwarding table for the shared tree.
=> Interface on which the Join arrived is marked to forward packets for that
group.
=> Forwards Join towards rendezvous router RP.
=> Eventually, the message arrives at RP. Thus a shared tree with RP as root is
formed.
Example
=> Router R4 sends Join message for group G to rendezvous router RP.
=> Join message is received by router R2. It makes an entry (*, G) in its table and forwards
the message to RP.
=> When R5 sends a Join message for group G, R2 does not forward the Join. It
adds an outgoing interface to the forwarding table entry created for that group.
=> As routers send Join message for a group, branches are added to the tree, i.e.,
shared.
=> Multicast packets sent from hosts are forwarded to designated router RP.
=> Suppose router R1 receives a multicast packet sent to group G.
o R1 has no state for group G.
o Encapsulates the multicast packet in a Register message.
o Multicast packet is tunneled along the way to RP.
=> RP decapsulates the packet and sends multicast packet onto the shared tree,
towards R2.
=> R2 forwards the multicast packet to routers R4 and R5 that have members for
group G.
Source-Specific Tree
=> RP can force routers to know about group G, by sending Join message to the
sending host, so that tunneling can be avoided.
=> Intermediary routers create sender-specific entry (S, G) in their tables. Thus a
source-specific route from R1 to RP is formed.
=> If there is high rate of packets sent from a sender to a group G, then shared- tree
is replaced by source-specific tree with sender as root.
Example
=> Rendezvous router RP sends a Join message to the host router R1.
=> Router R3 learns about group G through the message sent by RP.
=> Router R4 sends a source-specific Join due to the high rate of packets from the sender.
=> Router R2 learns about group G through the message sent by R4.
=> Eventually a source-specific tree is formed with R1 as root.
Analysis of PIM
=> Protocol independent because the tree is built from Join messages that follow
unicast shortest paths, regardless of which unicast protocol computed them.
=> Shared trees are more scalable than source-specific trees.
=> Source-specific trees enable more efficient routing than shared trees.
UNIT V DATA LINK AND PHYSICAL LAYERS 12
Data Link Layer – Framing – Flow control – Error control – Data-Link Layer Protocols – HDLC – PPP -
Media Access Control – Ethernet Basics – CSMA/CD – Virtual LAN – Wireless LAN (802.11) - Physical
Layer: Data and Signals - Performance – Transmission media- Switching – Circuit Switching.
In the OSI model, the data link layer is the 2nd layer from the bottom.
It is responsible for transmitting frames from one node to the next node.
The main responsibility of the Data Link Layer is to transfer the datagram across an individual
link.
An important characteristic of a Data Link Layer is that datagram can be handled by different
link layer protocols on different links in a path.
The other responsibilities of this layer are
• Framing - Divides the stream of bits received into data units called frames.
• Physical addressing – If frames are to be distributed to different systems on the same
network, data link layer adds a header to the frame to define the sender and receiver.
• Flow control- If the rate at which the data are absorbed by the receiver is less than the
rate produced in the sender, the Data link layer imposes a flow control mechanism.
• Error control- Used for detecting and retransmitting damaged or lost frames and to
prevent duplication of frames. This is achieved through a trailer added at the end of the
frame.
• Medium Access control - Used to determine which device has control over the link at
any given time.
Nodes and Links
• Communication at the data-link layer is node-to-node.
• The communication channel that connects the adjacent nodes is known as links, and in order
to move the datagram from source to the destination, the datagram must be moved across an
individual link.
• A data unit from one point in the Internet needs to pass through many networks (LANs and
WANs) to reach another point.
• These LANs and WANs are connected by routers.
• The two end hosts and the routers are nodes, and the networks in-between are links.
• The first node is the source host; the last node is the destination host.
• The other four nodes are four routers.
• The first, the third, and the fifth links represent the three LANs; the second and the fourth
links represent the two WANs.
Two Categories of Links
Point- to-Point link and Broadcast link.
• In a point-to-point link, the link is dedicated to the two devices
• In a broadcast link, the link is shared between several pairs of devices.
Data Link Layer Services
• The data-link layer is located between the physical and the network layers.
• The data-link layer provides services to the network layer; it receives services from the
physical layer.
• When a packet is travelling, the data-link layer of a node (host or router) is responsible for
delivering a datagram to the next node in the path.
• For this purpose, the data-link layer of the sending node needs to encapsulate the
datagram and the data-link layer of the receiving node needs to decapsulate the datagram.
• The datagram received by the data-link layer of the source host is encapsulated in a frame.
• The frame is logically transported from the source host to the router.
• The frame is decapsulated at the data-link layer of the router and encapsulated in another
frame.
• The new frame is logically transported from the router to the destination host.
Sublayers in Data Link layer
• We can divide the data-link layer into two sublayers: data link control (DLC) and media
access control (MAC).
• The data link control sublayer deals with all issues common to both point- to-point and
broadcast links
• The media access control sublayer deals only with issues specific to broadcast links.
LINK-LAYER ADDRESSING
• A link-layer address is sometimes called a link address, sometimes a physical address, and
sometimes a MAC address.
• Since a link is controlled at the data-link layer, the addresses need to belong to the data-link
layer.
• When a datagram passes from the network layer to the data-link layer, the datagram will be
encapsulated in a frame and two data-link addresses are added to the frame header.
• These two addresses are changed every time the frame moves from one link to another.
THREE TYPES OF ADDRESSES
The link-layer protocols define three types of addresses: unicast, multicast, and broadcast.
Unicast Address : Each host or each interface of a router is assigned a unicast address. Unicasting
means one-to-one communication. A frame with a unicast address destination is destined only for one
entity in the link.
Multicast Address : Link-layer protocols define multicast addresses. Multicasting means one-to-
many communication, but not to all.
Broadcast Address : Link-layer protocols define a broadcast address. Broadcasting means one- to-
all communication. A frame with a destination broadcast address is sent to all entities in the link.
DLC SERVICES
• The data link control (DLC) deals with procedures for communication between two adjacent
nodes—node-to-node communication—no matter whether the link is dedicated or broadcast.
• Data link control service include
(1) Framing (2) Flow Control (3) Error Control
5.2. FRAMING
• The data-link layer packs the bits of a message into frames, so that each frame is distinguishable
from another.
• Although the whole message could be packed in one frame, that is not normally done.
• One reason is that a frame can be very large, making flow and error control very inefficient.
• When a message is carried in one very large frame, even a single-bit error would require the
retransmission of the whole frame.
• When a message is divided into smaller frames, a single-bit error affects only that small frame.
• Framing in the data-link layer separates a message from one source to a destination by adding a
sender address and a destination address.
• The destination address defines where the packet is to go; the sender address helps the recipient
acknowledge the receipt.
Frame Size
• Frames can be of fixed or variable size.
• Frames of fixed size are called cells. In fixed-size framing, there is no need for defining the
boundaries of the frames; the size itself can be used as a delimiter.
• In variable-size framing, we need a way to define the end of one frame and the beginning of the
next. Two approaches were used for this purpose: a character-oriented approach and a
bit-oriented approach.
Character-Oriented Framing
• In character-oriented (or byte-oriented) framing, data to be carried are 8-bit characters.
• To separate one frame from the next, an 8-bit (1-byte) flag is added at the beginning and the
end of a frame.
• The flag, composed of protocol-dependent special characters, signals the start or end of a
frame.
• Any character used for the flag could also be part of the information.
• If this happens, the receiver, when it encounters this pattern in the middle of the data,
thinks it has reached the end of the frame.
• To fix this problem, a byte-stuffing strategy was added to character-oriented framing.
Byte Stuffing (or) Character Stuffing
• Byte stuffing is the process of adding one extra byte whenever there is a flag or escape
character in the text.
• In byte stuffing, a special byte is added to the data section of the frame when there is a
character with the same pattern as the flag.
• The data section is stuffed with an extra byte. This byte is usually called the escape character
(ESC) and has a predefined bit pattern.
• Whenever the receiver encounters the ESC character, it removes it from the data section and
treats the next character as data, not as a delimiting flag.
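The stuffing and unstuffing steps can be sketched as follows. The flag and escape values are illustrative only (they happen to be the PPP-style bytes used later in this unit); the actual values are protocol-dependent.

```python
FLAG, ESC = 0x7E, 0x7D   # example flag and escape bytes; protocol-dependent

def byte_stuff(data: bytes) -> bytes:
    """Insert an ESC byte before every flag or escape byte in the payload."""
    out = bytearray()
    for b in data:
        if b in (FLAG, ESC):
            out.append(ESC)
        out.append(b)
    return bytes(out)

def byte_unstuff(data: bytes) -> bytes:
    """Receiver side: drop each ESC and treat the byte after it as data."""
    out, i = bytearray(), 0
    while i < len(data):
        if data[i] == ESC:
            i += 1               # skip the escape; the next byte is literal data
        out.append(data[i])
        i += 1
    return bytes(out)

payload = b"AB\x7eCD\x7dEF"      # payload containing both flag and escape bytes
assert byte_unstuff(byte_stuff(payload)) == payload
```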
Bit-Oriented Framing
• In bit-oriented framing, the data section of a frame is a sequence of bits to be interpreted by the
upper layer as text, graphic, audio, video, and so on.
• In addition to headers and trailers), we still need a delimiter to separate one frame from the
other.
• Most protocols use a special 8-bit pattern flag, 01111110, as the delimiter to define the
beginning and the end of the frame
• If the flag pattern appears in the data, the receiver must be informed that this is not the end of the
frame.
• This is done by stuffing 1 single bit (instead of 1 byte) to prevent the pattern from looking like
a flag. The strategy is called bit stuffing.
Bit Stuffing
• Bit stuffing is the process of adding one extra 0 whenever five consecutive 1s follow a 0 in the
data, so that the receiver does not mistake the data for the flag pattern 01111110.
• In bit stuffing, if a 0 and five consecutive 1 bits are encountered, an extra 0 is added.
• This extra stuffed bit is eventually removed from the data by the receiver.
• The extra bit is added after one 0 followed by five 1s regardless of the
value of the next bit.
• This guarantees that the flag field sequence does not inadvertently appear in the frame.
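Bit stuffing over a string of bits can be sketched as follows. This sketch uses the common variant that stuffs after any run of five consecutive 1s, which also covers the rule stated above:

```python
def bit_stuff(bits: str) -> str:
    """After any run of five consecutive 1s, insert a 0 so the data can
    never contain six 1s in a row (part of the flag 01111110)."""
    out, run = [], 0
    for b in bits:
        out.append(b)
        run = run + 1 if b == "1" else 0
        if run == 5:
            out.append("0")              # the stuffed bit
            run = 0
    return "".join(out)

def bit_unstuff(bits: str) -> str:
    """Receiver: remove the 0 that follows every run of five 1s."""
    out, run, i = [], 0, 0
    while i < len(bits):
        out.append(bits[i])
        run = run + 1 if bits[i] == "1" else 0
        if run == 5:
            i += 1                       # skip the stuffed 0
            run = 0
        i += 1
    return "".join(out)

data = "0111111"                         # six 1s would mimic the flag
print(bit_stuff(data))                   # 01111101
print(bit_unstuff(bit_stuff(data)) == data)  # True
```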
STOP-AND-WAIT
• If the acknowledgement is not received within the allotted time, the sender assumes that
the frame was lost during transmission, so it will retransmit the frame.
• The acknowledgement may not arrive because of the following three scenarios :
1. Original frame is lost
2. ACK is lost
3. ACK arrives after the timeout
Advantage of Stop-and-wait
The Stop-and-wait method is simple as each frame is checked and acknowledged before the
next frame is sent
Disadvantages of Stop-And-Wait
• In stop-and-wait, at any point in time, there is only one frame that is sent and waiting to be
acknowledged.
• This is not a good use of transmission medium.
• To improve efficiency, multiple frames should be in transition while waiting for ACK.
PIGGYBACKING
Piggybacking is the technique of delaying the acknowledgement and attaching it to the
next outgoing data frame, so that a separate acknowledgement frame is not needed.
SLIDING WINDOW
• The Sliding Window is a method of flow control in which a sender can transmit several
frames before getting an acknowledgement.
• In Sliding Window Control, multiple frames can be sent one after another due to which capacity
of the communication channel can be utilized efficiently.
• A single ACK can acknowledge multiple frames.
• Sliding Window refers to imaginary boxes at both the sender and receiver end.
• The window can hold the frames at either end, and it provides the upper limit on the number of
frames that can be transmitted before the acknowledgement.
• Frames can be acknowledged even when the window is not completely filled.
• The frames within the window are numbered modulo-n, which means they are
numbered from 0 to n-1.
• For example, if n = 8, the frames are numbered from
0,1,2,3,4,5,6,7,0,1,2,3,4,5,6,7,0,1........
• The size of the window is n-1. Therefore, a maximum of n-1 frames can be sent
before an acknowledgement.
• When the receiver sends the ACK, it includes the number of the next frame that it wants to
receive.
• For example, to acknowledge the string of frames ending with frame number 4, the receiver
will send the ACK containing the number 5.
• When the sender sees the ACK with the number 5, it knows that the frames from 0
through 4 have been received.
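The cumulative-ACK arithmetic of this example can be sketched as follows, with n = 8 as above (the helper name is illustrative):

```python
n = 8   # sequence numbers cycle 0..7, so at most n - 1 = 7 frames
        # may be outstanding at once

def frames_acknowledged(ack):
    """The ACK carries the number of the NEXT frame the receiver expects,
    so ACK 5 cumulatively confirms frames 0 through 4."""
    acked, seq = [], 0
    while seq != ack:
        acked.append(seq)
        seq = (seq + 1) % n
    return acked

print(frames_acknowledged(5))   # [0, 1, 2, 3, 4]
```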
BURST ERROR
The term Burst Error means that two or more bits in the data unit have changed
from 1 to 0 or from 0 to 1.
PARITY CHECK
• One bit, called parity bit is added to every data unit so that the total number
of 1’s in the data unit becomes even (or) odd.
• The source then transmits this data via a link, and bits are checked and
verified at the destination.
• Data is considered accurate if the number of 1s (even or odd) matches the
parity transmitted from the source.
• This technique is the most common and least complex method.
1. Even parity – Maintain even number of 1s
E.g., 1011 → 1011 1
2. Odd parity – Maintain odd number of 1s
E.g., 1011 → 1011 0
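Both parity variants can be sketched in a few lines; the examples above are reproduced in the comments (function names are illustrative):

```python
def add_parity(bits: str, even: bool = True) -> str:
    """Append one parity bit so the total number of 1s is even (or odd)."""
    ones = bits.count("1")
    if even:
        return bits + ("0" if ones % 2 == 0 else "1")
    return bits + ("1" if ones % 2 == 0 else "0")

def parity_ok(codeword: str, even: bool = True) -> bool:
    """Receiver check: count the 1s and compare against the chosen parity."""
    return codeword.count("1") % 2 == (0 if even else 1)

print(add_parity("1011", even=True))   # 10111 (even parity example above)
print(add_parity("1011", even=False))  # 10110 (odd parity example above)
print(parity_ok("10111"))              # True
print(parity_ok("10101"))              # False: a single flipped bit is caught
```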
Steps Involved :
• Consider the original message (dataword) as M(x) consisting of ‘k’ bits and
the divisor as C(x) consists of ‘n+1’ bits.
• The original message M(x) is appended with ‘n’ zero bits. Let us call
this zero-extended message T(x).
• Divide T(x) by C(x) and find the remainder.
• The division operation is performed using XOR operation.
• The resultant remainder is appended to the original message M(x) as CRC
and sent by the sender(codeword).
Example 1:
• Consider the Dataword / Message M(x) = 1001
• Divisor C(x) = 1011 (n+1=4)
• Appending ‘n’ zeros to the original Message M(x).
• The resultant message is called T(x) = 1001000 (here n = 3).
• Divide T(x) by the divisor C(x) using XOR operation.
Sender Side :
Dividing T(x) = 1001000 by C(x) = 1011 gives the remainder 110, so the CRC is
110 and the transmitted codeword is 1001110.
Receiver Side:
(For both cases – without error and with error) The receiver divides the
received codeword by the same divisor C(x); a remainder of 000 means no error
was detected, while a nonzero remainder means the codeword is corrupted and is
rejected.
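The XOR division in this example can be sketched as follows. Running it on the dataword 1001 with divisor 1011 yields the remainder 110, so the codeword is 1001110; the receiver-side check divides the whole codeword and expects an all-zero remainder. Function names are illustrative.

```python
def crc_remainder(dataword: str, divisor: str) -> str:
    """Append len(divisor)-1 zeros, then do modulo-2 (XOR) long division
    and return the remainder, which becomes the CRC."""
    n = len(divisor) - 1
    bits = list(dataword + "0" * n)
    for i in range(len(dataword)):
        if bits[i] == "1":                 # divide only when leading bit is 1
            for j, d in enumerate(divisor):
                bits[i + j] = "0" if bits[i + j] == d else "1"
    return "".join(bits[-n:])

def receiver_check(codeword: str, divisor: str) -> bool:
    """Receiver side: divide the whole codeword; an all-zero remainder
    means no error was detected."""
    bits = list(codeword)
    for i in range(len(codeword) - len(divisor) + 1):
        if bits[i] == "1":
            for j, d in enumerate(divisor):
                bits[i + j] = "0" if bits[i + j] == d else "1"
    return "1" not in bits

crc = crc_remainder("1001", "1011")
print(crc, "1001" + crc)                  # 110 1001110
print(receiver_check("1001110", "1011"))  # True  (no error)
print(receiver_check("1011110", "1011"))  # False (bit error detected)
```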
Polynomials
• A pattern of 0s and 1s can be represented as a polynomial with coefficients
of 0 and 1.
• The power of each term shows the position of the bit; the coefficient shows
the value of the bit.
INTERNET CHECKSUM
In the Internet checksum technique, the data is divided into 16-bit words, the
words are added using one’s-complement arithmetic, and the complement of the
sum (the checksum) is sent along with the data. The receiver adds the data and
the checksum together; a result of all 1s (which complements to 0) means no
error was detected.
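A minimal sketch of the 16-bit Internet checksum, using one's-complement (end-around-carry) addition followed by a final complement; the test bytes are an arbitrary example, not from the text.

```python
def internet_checksum(data: bytes) -> int:
    """16-bit Internet checksum: sum the data as 16-bit words with
    one's-complement (end-around-carry) addition, then complement."""
    if len(data) % 2:
        data += b"\x00"                  # pad odd-length data with a zero byte
    total = 0
    for i in range(0, len(data), 2):
        total += (data[i] << 8) | data[i + 1]
        total = (total & 0xFFFF) + (total >> 16)   # fold the carry back in
    return ~total & 0xFFFF

checksum = internet_checksum(b"\x00\x01\xf2\x03")
print(hex(checksum))                                         # 0xdfb
# Receiver: summing data plus checksum gives 0 after the final complement
print(internet_checksum(b"\x00\x01\xf2\x03" + checksum.to_bytes(2, "big")))  # 0
```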
ERROR CONTROL
o Lost Frame: The sender is equipped with a timer that starts when a frame is
transmitted. Sometimes the frame does not arrive at the receiving end, so
it can be acknowledged neither positively nor negatively. The sender
waits for an acknowledgement until the timer goes off. If the timer goes off, it
retransmits the last transmitted frame.
SLIDING WINDOW ARQ
1. GO-BACK-N ARQ
o In the Go-Back-N ARQ protocol, if one frame is lost or damaged, the sender
retransmits that frame and all the frames sent after it for which it does not
receive a positive ACK.
o In the above figure, three frames (Data 0,1,2) have been transmitted before
an error is discovered in the third frame.
o The receiver discovers the error in the Data 2 frame, so it returns the NAK 2
frame.
o The damaged frame and all frames transmitted after it (Data 2,3,4) are
discarded.
o Therefore, the sender retransmits the frames (Data 2,3,4).
2. SELECTIVE-REJECT(REPEAT) ARQ
1.SIMPLE PROTOCOL
o The first protocol is a simple protocol with neither flow nor error control.
o We assume that the receiver can immediately handle any frame it receives.
o In other words, the receiver can never be overwhelmed with incoming
frames.
o The data-link layers of the sender and receiver provide transmission
services for their network layers.
o The data-link layer at the sender gets a packet from its network layer, makes
a frame out of it, and sends the frame.
o The data-link layer at the receiver receives a frame from the link, extracts
the packet from the frame, and delivers the packet to its network layer.
2. STOP-AND-WAIT PROTOCOL
REFER STOP AND WAIT FROM FLOW CONTROL
3. GO-BACK-N PROTOCOL
REFER GO-BACK-N ARQ FROM ERROR CONTROL
4. SELECTIVE-REPEAT PROTOCOL
REFER SELECTIVE-REPEAT ARQ FROM ERROR CONTROL
HDLC FRAMES
HDLC defines three types of frames:
1. Information frames (I-frames) - used to carry user data
2. Supervisory frames (S-frames) - used to carry control information
3. Unnumbered frames (U-frames) – reserved for system management
Each type of frame serves as an envelope for the transmission of a different type of
message.
Each frame in HDLC may contain up to six fields:
1. Beginning flag field
2. Address field
3. Control field
4. Information field (User Information/ Management Information)
5. Frame check sequence (FCS) field
6. Ending flag field
In multiple-frame transmissions, the ending flag of one frame can serve as the
beginning flag of the next frame.
o Flag field - This field contains synchronization pattern 01111110, which
identifies both the beginning and the end of a frame.
o Address field - This field contains the address of the secondary station. If a
primary station created the frame, it contains a ‘to’ address. If a secondary
station creates the frame, it contains a ‘from’ address. The address field can
be one byte or several bytes long, depending on the needs of the network.
o Control field. The control field is one or two bytes used for flow and error
control.
o Information field. The information field contains the user’s data from the
network layer or management information. Its length can vary from one
network to another.
o FCS field. The frame check sequence (FCS) is the HDLC error detection
field. It can contain either a 16- bit or 32-bit CRC.
o The first bit defines the type. If the first bit of the control field is 0, this
means the frame is an I-frame.
o The next 3 bits, called N(S), define the sequence number of the frame.
o The last 3 bits, called N(R), correspond to the acknowledgment number
when piggybacking is used.
o The single bit between N(S) and N(R) is called the P/F bit. When set to 1, it
means poll if the frame is sent by a primary station to a secondary (requesting
a response), and final if the frame is sent by a secondary to a primary
(marking the end of its response).
o If the first 2 bits of the control field are 10, this means the frame is an S-
frame.
o The last 3 bits, called N(R),correspond to the acknowledgment number
(ACK) or negative acknowledgment number (NAK), depending on the type
of S-frame.
o The 2 bits called code are used to define the type of S-frame itself.
o With 2 bits, we can have four types of S-frames –
Receive ready (RR), Receive not ready (RNR), Reject (REJ) and
Selective reject (SREJ).
o If the first 2 bits of the control field are 11, this means the frame is a U-
frame.
o U-frame codes are divided into two sections: a 2-bit prefix before the P/F
bit and a 3-bit suffix after the P/F bit.
o Together, these two segments (5 bits) can be used to create up to 32
different types of U-frames.
5.7. POINT-TO-POINT PROTOCOL (PPP)
o Point-to-Point Protocol (PPP) was devised by the IETF (Internet Engineering
Task Force) in 1990 as a replacement for the Serial Line Internet Protocol (SLIP).
o PPP is a data link layer communications protocol used to establish a direct
connection between two nodes.
o It connects two routers directly without any host or any other networking
device in between.
o It is used to connect the Home PC to the server of ISP via a modem.
o It is a byte - oriented protocol that is widely used in broadband
communications having heavy loads and high speeds.
o Since it is a data link layer protocol, data is transmitted in frames. PPP is
defined in RFC 1661.
PPP Frame
PPP is a byte - oriented protocol where each field of the frame is composed of one
or more bytes.
1. Flag − 1 byte that marks the beginning and the end of the frame. The bit
pattern of the flag is 01111110.
2. Address − 1 byte which is set to 11111111 in case of broadcast.
3. Control − 1 byte set to a constant value of 00000011.
4. Protocol − 1 or 2 bytes that define the type of data contained in the payload
field.
5. Payload − This carries the data from the network layer. The maximum
length of the payload field is 1500 bytes.
6. FCS − It is a 2 byte(16-bit) or 4 bytes(32-bit) frame check sequence for
error detection. The standard code used is CRC.
Byte Stuffing in PPP Frame
Byte stuffing is used in the PPP payload field whenever the flag sequence appears in
the message, so that the receiver does not consider it the end of the frame. The
escape byte, 01111101, is stuffed before every byte that is the same as
the flag byte or the escape byte. The receiver, on receiving the message, removes
the escape byte before passing the data on to the network layer.
Dead: In dead phase the link is not used. There is no active carrier and the
line is quiet.
Establish: Connection goes into this phase when one of the nodes start
communication. In this phase, two parties negotiate the options. If
negotiation is successful, the system goes into authentication phase or
directly to networking phase.
Authenticate: This phase is optional. The two nodes may decide whether
they need this phase during the establishment phase. If they decide to
proceed with authentication, they send several authentication packets. If the
result is successful, the connection goes to the networking phase; otherwise,
it goes to the termination phase.
Network: In the network phase, negotiation for the network layer protocols
takes place. PPP specifies that the two nodes establish a network layer
agreement before data at the network layer can be exchanged. This is
because PPP supports several protocols at the network layer. If a node is
running multiple protocols simultaneously at the network layer, the
receiving node needs to know which protocol will receive the data.
Open: In this phase, data transfer takes place. The connection remains in
this phase until one of the endpoints wants to end the connection.
Terminate: In this phase connection is terminated.
Components/Protocols of PPP
Three sets of components/protocols are defined to make PPP powerful:
Link Control Protocol (LCP)
Authentication Protocols (AP)
Network Control Protocols (NCP)
PAP
The Password Authentication Protocol (PAP) is a simple authentication procedure
with a two-step process:
a. The user who wants to access a system sends an authentication
identification (usually the user name) and a password.
b. The system checks the validity of the identification and password and
either accepts or denies connection.
CHAP
The Challenge Handshake Authentication Protocol (CHAP) is a three-way
handshaking authentication protocol that provides greater security than PAP. In
this method, the password is kept secret; it is never sent online.
a. The system sends the user a challenge packet containing a challenge
value.
b. The user applies a predefined function that takes the challenge value and
the user’s own password and creates a result. The user sends the result in
the response packet to the system.
c. The system does the same. It applies the same function to the password of
the user (known to the system) and the challenge value to create a result.
If the result created is the same as the result sent in the response packet,
access is granted; otherwise, it is denied.
CHAP is more secure than PAP, especially if the system continuously changes the
challenge value. Even if the intruder learns the challenge value and the result, the
password is still secret.
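The three steps can be sketched as follows. This is an illustrative sketch only: real CHAP (RFC 1994) computes an MD5 hash over an identifier, the secret, and the challenge, while the sketch keeps just the challenge-plus-password shape of the handshake described above.

```python
import hashlib
import os

def chap_response(challenge: bytes, password: bytes) -> bytes:
    """Both ends apply the same one-way function to challenge + password,
    so the password itself never crosses the link."""
    return hashlib.md5(challenge + password).digest()

shared_password = b"secret"          # shared secret, known to both ends
challenge = os.urandom(16)           # step a: system sends a random challenge

response = chap_response(challenge, shared_password)   # step b: user replies
expected = chap_response(challenge, shared_password)   # step c: system recomputes

print("access granted" if response == expected else "access denied")
```

Because the challenge changes on every attempt, a captured response is useless for replay, which is the property claimed above.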
Goals of MAC
1. Fairness in sharing
2. Efficient sharing of bandwidth
3. Need to avoid packet collisions at the receiver due to interference
MAC Management
• Medium allocation (collision avoidance)
• Contention resolution (collision handling)
MAC Types
• Round-Robin : – Each station is given an opportunity to transmit in turn.
Either a central controller polls each station to permit it to go, or the
stations coordinate among themselves.
• Reservation : - Station wishing to transmit makes reservations for time
slots in advance. (Centralized or distributed).
• Contention (Random Access) : - No control on who tries; if a collision
occurs, retransmission takes place.
MECHANISMS USED
• Wired Networks :
o CSMA / CD – Carrier Sense Multiple Access / Collision Detection
• Wireless Networks :
o CSMA / CA – Carrier Sense Multiple Access / Collision Avoidance
Carrier Sense in CSMA/CD means that all the nodes sense the medium to
check whether it is idle or busy.
• If the carrier sensed is idle, then the node transmits the entire
frame.
• If the carrier sensed is busy, the transmission is postponed.
Collision Detect means that a node listens as it transmits and can therefore
detect when a frame it is transmitting has collided with a frame transmitted
by another node.
Non-Persistent Strategy
• In the non-persistent method, a station that has a frame to send senses the
line.
• If the line is idle, it sends immediately.
• If the line is not idle, it waits a random amount of time and then senses the
line again.
Persistent Strategy
1-Persistent :
• The 1-persistent method is simple and straightforward.
• In this method, after the station finds the line idle, it sends its frame
immediately (with probability 1).
• This method has the highest chance of collision because two or more
stations may find the line idle and send their frames immediately.
P-Persistent :
• In this method, after the station finds the line idle it follows these steps:
• With probability p, the station sends its frame.
• With probability q = 1 − p, the station waits for the beginning of the next
time slot and checks the line again.
• The p-persistent method is used if the channel has time slots with a slot
duration equal to or greater than the maximum propagation time.
• The p-persistent approach combines the advantages of the other two
strategies. It reduces the chance of collision and improves efficiency.
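The p-persistent decision above can be sketched as a small Python function. This is a simulation sketch; the helper name and the p = 0.3 example are our own, not part of any standard.

```python
import random

def p_persistent(line_is_idle: bool, p: float, rng: random.Random) -> str:
    # Sense the line first; while it is busy the station keeps deferring.
    if not line_is_idle:
        return "defer"
    # Line is idle: send with probability p; with probability q = 1 - p,
    # wait for the beginning of the next time slot and sense again.
    return "send" if rng.random() < p else "wait_next_slot"

rng = random.Random(42)
decisions = [p_persistent(True, 0.3, rng) for _ in range(10_000)]
sends = decisions.count("send")
# Over many idle slots, roughly 30% of them result in a transmission.
assert 0.28 < sends / 10_000 < 0.32
```

With p = 1 this degenerates to the 1-persistent method; smaller p trades delay for a lower chance of collision.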
EXPONENTIAL BACK-OFF
• Once an adaptor has detected a collision and stopped its transmission, it waits
a certain amount of time and tries again.
• Each time it tries to transmit but fails, the adaptor doubles the amount of time
it waits before trying again.
• This strategy of doubling the delay interval between each retransmission
attempt is a general technique known as exponential back-off.
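The doubling delay can be sketched in Python. The 51.2 µs slot time is classic 10-Mbps Ethernet's; the function name and the cap of 10 doublings are our own assumptions for illustration.

```python
import random

SLOT_TIME = 51.2e-6  # classic 10-Mbps Ethernet slot time, in seconds

def backoff_delay(collisions: int, rng: random.Random, cap: int = 10) -> float:
    # After the nth collision, wait k slot times with k chosen uniformly
    # from 0 .. 2^n - 1; the window doubles on every failure, up to a cap.
    n = min(collisions, cap)
    k = rng.randint(0, 2 ** n - 1)
    return k * SLOT_TIME

rng = random.Random(1)
for n in range(1, 6):
    d = backoff_delay(n, rng)
    # The chosen delay always stays within the current (doubled) window.
    assert 0 <= d <= (2 ** n - 1) * SLOT_TIME
```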
CARRIER SENSE MULTIPLE ACCESS / COLLISION AVOIDANCE
(CSMA/CA)
• Carrier sense multiple access with collision avoidance (CSMA/CA) was
invented for wireless networks.
• In theory, a wireless protocol could follow the same algorithm as
Ethernet: wait until the link becomes idle before transmitting, and back
off should a collision occur. But in a wireless network collisions cannot
easily be detected at the sender, so the protocol avoids them instead.
• Collisions are avoided through the use of CSMA/CA’s three strategies: the
interframe space, the contention window, and acknowledgments
EVOLUTION OF ETHERNET
Standard Ethernet (10 Mbps)
The original Ethernet technology with the data rate of 10 Mbps as the Standard
Ethernet.
Standard Ethernet types are
1. 10Base5: Thick Ethernet,
2. 10Base2: Thin Ethernet ,
3. 10Base-T: Twisted-Pair Ethernet
4. 10Base-F: Fiber Ethernet.
The 64-bit preamble allows the receiver to synchronize with the signal; it is
a sequence of alternating 0s and 1s.
Both the source and destination hosts are identified with a 48-bit address.
The packet type field serves as the demultiplexing key.
Each frame contains up to 1500 bytes of data (body).
CRC is used for Error detection
Ethernet Addresses
Every Ethernet host has a unique Ethernet address (48 bits – 6 bytes).
An Ethernet address is written as a sequence of six numbers separated by
colons.
Each number corresponds to 1 byte of the 6-byte address and is given by a
pair of hexadecimal digits.
Eg: 8:0:2b:e4:b1:2 is the representation of
00001000 00000000 00101011 11100100 10110001 00000010
Each frame transmitted on an Ethernet is received by every adaptor
connected to the Ethernet.
In addition to unicast addresses, the Ethernet address consisting of all 1s is
treated as the broadcast address.
Similarly, an address that has the first bit set to 1 but is not the broadcast
address is called a multicast address.
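The colon notation and the address classes can be checked with a short Python sketch. The helper names are our own; note that Ethernet transmits each byte least-significant bit first, so the "first bit" on the wire is bit 0 of the first byte.

```python
def parse_eth(addr: str) -> bytes:
    # "8:0:2b:e4:b1:2" -> six bytes (each colon-separated field is hex)
    return bytes(int(part, 16) for part in addr.split(":"))

def to_bit_string(mac: bytes) -> str:
    return " ".join(f"{b:08b}" for b in mac)

def classify(mac: bytes) -> str:
    if mac == b"\xff" * 6:
        return "broadcast"          # all 1s
    # Ethernet sends each byte least-significant bit first, so the first
    # bit transmitted is bit 0 of the first byte.
    return "multicast" if mac[0] & 1 else "unicast"

mac = parse_eth("8:0:2b:e4:b1:2")
assert to_bit_string(mac) == \
    "00001000 00000000 00101011 11100100 10110001 00000010"
assert classify(mac) == "unicast"
assert classify(b"\xff" * 6) == "broadcast"
```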
ADVANTAGES OF ETHERNET
Ethernets are successful because
It is extremely easy to administer and maintain. There are no switches that
can fail, no routing or configuration tables that have to be kept up-to-date,
and it is easy to add a new host to the network.
It is inexpensive: Cable is cheap, and the only other cost is the network
adaptor on each host.
Station Types
IEEE 802.11 defines three types of stations based on their mobility in a wireless
LAN:
1. No-transition - A station with no-transition mobility is either stationary
(not moving) or moving only inside a BSS.
2. BSS-transition - A station with BSS-transition mobility can move from
one BSS to another, but the movement is confined inside one ESS
3. ESS-transition - A station with ESS-transition mobility can move from
one ESS to another.
COLLISION AVOIDANCE IN WLAN / 802.11
In theory, a wireless protocol could follow the same algorithm as Ethernet:
wait until the link becomes idle before transmitting, and back off should a
collision occur. In practice, not every node can hear every other node, as the
following scenario shows.
• Consider four nodes A, B, C, and D placed in a line, where each node can
send and receive signals that reach just the nodes to its immediate left
and right.
• For example, B can exchange frames with A and C but it cannot reach D,
while C can reach B and D but not A.
• Suppose B is sending to A. Node C is aware of this communication because
it hears B’s transmission.
• If at the same time, C wants to transmit to node D.
• It would be a mistake, however, for C to conclude that it cannot transmit to
anyone just because it can hear B’s transmission.
• This is not a problem since C’s transmission to D will not interfere with A’s
ability to receive from B.
• This situation is called the exposed node problem.
• Although B and C are exposed to each other’s signals, there is no
interference if B transmits to A while C transmits to D.
Two nodes can communicate directly with each other if they are within
reach of each other,
When the nodes are not within reach of each other, for example when node A
wishes to communicate with node E, A first sends a frame to its access point
(AP-1), which forwards the frame across the distribution system to AP-3,
which finally transmits the frame to E.
Active Scanning
When node C moves from the cell serviced by AP-1 to the cell serviced by AP-2,
it sends Probe frames as it moves, which eventually result in a Probe Response
from AP-2. Since the node is actively searching for an access point, this is
called active scanning.
Passive Scanning
APs periodically send a Beacon frame that advertises the capabilities of the
access point, including the transmission rates it supports. This is called
passive scanning, and a node can switch to such an AP based on the Beacon
frame simply by sending an Association Request frame back to the access point.
When both the DS bits (ToDS and FromDS) are set to 1, it indicates that one
node is sending the message to another indirectly through the distribution
system.
Duration - contains the duration of time the medium is occupied by the nodes.
Addr 1 - identifies the ultimate (final) destination
Addr 2 - identifies the immediate sender (the one that forwarded the frame
from the distribution system to the ultimate destination)
Addr 3 - identifies the intermediate destination (the one that accepted the
frame from a wireless node and forwarded it across the distribution
system)
Addr 4 - identifies the original source
Sequence Control - to avoid duplication of frames, a sequence number is
assigned to each frame
Payload - Data from sender to receiver
CRC - used for Error detection of the frame.
5.12 Data and Signals in Physical layer
One of the major roles of the physical layer is to transfer data in the form of signals through a
transmission medium. It doesn't matter what data you are sending; be it text, audio, image or video,
everything is transferred in the form of signals. This happens because data cannot be sent as-is over a
transmission medium; it must be converted into a form that the transmission medium accepts, and
signals are what a transmission medium carries.
Both the data and the signal can be represented in analog or digital form.
Analog data is continuous data that keeps changing over time. For example, in an analog watch the
hour, minute and second hands keep moving, so you infer the time by looking at them. A digital
watch, on the other hand, shows discrete data such as 12:20 AM or 5:30 PM at a particular moment
of time.
Similar to data, a signal can be analog or digital. An analog signal can have an infinite number of
values in a given range; a digital signal has a limited number of values in a given range.
The following diagram shows analog and digital signals.
1. Analog Signals
A simple analog signal can be represented in the form of a sine wave, as shown in the above
diagram.
A simple analog signal is smooth, consistent and continuous: an arc above the time axis is
followed by a similar arc below the time axis, and so on.
Three parameters define a sine wave: peak amplitude, frequency and phase.
Frequency and Period: Period is the amount of time a signal takes to complete one cycle, it is denoted
by T. Frequency refers to the number of cycles in 1 second, it is denoted by f. They are inversely
proportional to each other which means f = 1/T.
Phase: Phase refers to the position of the sine wave relative to time 0. For example, if the sine
wave is at its peak amplitude at time zero, then its phase is 90 degrees. Phase is measured in
degrees or radians.
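The three parameters fit together as s(t) = A sin(2πft + φ). A small Python check (the function names and the 50 Hz example are our own):

```python
import math

def sine(t: float, amplitude: float, freq_hz: float, phase_rad: float = 0.0) -> float:
    # s(t) = A * sin(2*pi*f*t + phase): the three sine-wave parameters
    return amplitude * math.sin(2 * math.pi * freq_hz * t + phase_rad)

f = 50.0          # a 50 Hz signal
T = 1 / f         # period: f = 1/T, so T = 0.02 s
assert abs(T - 0.02) < 1e-12
# A 90-degree (pi/2 radian) phase puts the wave at peak amplitude at t = 0:
assert abs(sine(0, amplitude=1.0, freq_hz=f, phase_rad=math.pi / 2) - 1.0) < 1e-9
```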
Unlike a sine wave, which is smooth and consistent, composite analog signals
are not: an arc above the time axis is not necessarily followed by a similar
arc below the time axis. You can imagine them as a group of sine waves with
different frequencies, amplitudes and phases.
Bandwidth: The range of frequencies in a composite signal is called bandwidth. For example if a
composite signal contains waves with the frequencies ranging from 2000 to 4000 then you can say that
the bandwidth of this composite signal is 4000-2000 = 2000Hz. Bandwidth is measured in Hz.
2. Digital Signals
Similar to analog signals, data can be transmitted in the form of digital signals. For example,
data converted into machine language (a combination of 0s and 1s), such as 1001, can be
represented as a digital signal, where 1 represents high voltage and 0 represents low voltage.
Bit Rate: The bit rate is measured in bits per second; it is the number of bits sent in 1 second.
Bit Length: A bit length is the distance a bit occupies on the transmission medium.
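The two are related: bit length = propagation speed × bit duration. A quick Python sketch (the 2 × 10^8 m/s propagation speed is a typical value for copper/fiber, assumed here for illustration):

```python
def bit_length(propagation_speed_mps: float, bit_duration_s: float) -> float:
    # the distance one bit occupies on the medium while it travels
    return propagation_speed_mps * bit_duration_s

bit_rate = 1e6                 # 1 Mbps
duration = 1 / bit_rate        # each bit lasts 1 microsecond
# At 2e8 m/s, one bit stretches over 200 metres of the medium.
assert abs(bit_length(2e8, duration) - 200.0) < 1e-9
```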
5.13 NETWORK PERFORMANCE
Network performance is measured in using: Bandwidth, Throughput, Latency, Jitter,
RoundTrip Time
BANDWIDTH
The bandwidth of a network is given by the number of bits that can be transmitted
over the network in a certain period of time.
Bandwidth can be measured in two different values: bandwidth in hertz and
bandwidth in bits per second.
Bandwidth in Hertz
o Bandwidth in hertz refers to the range of frequencies contained in a composite
signal or the range of frequencies a channel can pass.
o For example, we can say the bandwidth of a subscriber telephone line is 4 kHz.
Relationship
o There is an explicit relationship between the bandwidth in hertz and bandwidth in
bits per second.
o Basically, an increase in bandwidth in hertz means an increase in bandwidth in
bits per second.
THROUGHPUT
Throughput is a measure of how fast we can actually send data through a network.
Bandwidth in bits per second and throughput may seem to be same, but they are
different.
A link may have a bandwidth of B bps, but we can only send T bps through this
link (T is always less than B).
In other words, the bandwidth is a potential measurement of a link; the throughput
is an actual measurement of how fast we can send data.
For example, we may have a link with a bandwidth of 1 Mbps, but the devices
connected to the end of the link may handle only 200 kbps. This means that we cannot
send more than 200 kbps through this link.
Problem :
A network with bandwidth of 10 Mbps can pass only an average of 12,000 frames per
minute with each frame carrying an average of 10,000 bits. What is the throughput
of this network?
Solution
We can calculate the throughput as
Throughput = (12,000 × 10,000) / 60 = 2,000,000 bps = 2 Mbps
The throughput here is almost one-fifth of the bandwidth.
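The same arithmetic in Python:

```python
frames_per_minute = 12_000
bits_per_frame = 10_000
# Total bits per minute, divided by 60 seconds, gives bits per second.
throughput_bps = frames_per_minute * bits_per_frame / 60
assert throughput_bps == 2_000_000.0   # 2 Mbps, one-fifth of the 10 Mbps bandwidth
```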
Propagation Time
o Propagation time measures the time required for a bit to travel from the source to
the destination.
o The propagation time is calculated by dividing the distance by the propagation
speed.
o The propagation speed of electromagnetic signals depends on the medium and on
the frequency of the signal.
Transmission Time
o In data communications we don’t send just 1 bit, we send a message.
o The first bit may take a time equal to the propagation time to reach its destination.
o The last bit also may take the same amount of time.
o However, there is a time between the first bit leaving the sender and the last bit
arriving at the receiver.
o The first bit leaves earlier and arrives earlier.
o The last bit leaves later and arrives later.
o The transmission time of a message depends on the size of the message and the
bandwidth of the channel.
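Both delays can be computed directly from the definitions above. The 2 × 10^8 m/s propagation speed and the 12,000 km / 2,500-byte / 1 Gbps figures below are assumed example values, not from the notes.

```python
def propagation_time(distance_m: float, speed_mps: float = 2e8) -> float:
    # time for one bit to travel the link: distance / propagation speed
    return distance_m / speed_mps

def transmission_time(message_bits: float, bandwidth_bps: float) -> float:
    # time to push the whole message onto the link: size / bandwidth
    return message_bits / bandwidth_bps

prop = propagation_time(12_000e3)           # 12,000 km link -> 0.06 s
trans = transmission_time(2_500 * 8, 1e9)   # 2,500 bytes at 1 Gbps -> 20 microseconds
assert abs(prop - 0.06) < 1e-12
assert abs(trans - 2e-5) < 1e-12
```

Note how, for a short message on a fast link, propagation time dominates transmission time.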
Queuing Time
o Queuing time is the time needed for each intermediate or end device to hold the
message before it can be processed.
o The queuing time is not a fixed factor. It changes with the load imposed on the
network. When there is heavy traffic on the network, the queuing time increases.
o An intermediate device, such as a router, queues the arrived messages and
processes them one by one.
o If there are many messages, each message will have to wait.
Processing Delay
o Processing delay is the time that the nodes take to process the packet header.
o Processing delay is a key component in network delay.
o During processing of a packet, nodes may check for bit-level errors in the packet that
occurred during transmission as well as determining where the packet's next
destination is.
Bandwidth - Delay Product
o Bandwidth and delay are two performance metrics of a link.
o The bandwidth-delay product defines the number of bits that can fill the
link.
o This measurement is important if we need to send data in bursts and wait for the
acknowledgment of each burst before sending the next one.
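The product is a one-line computation; the 1 Mbps / 5 ms figures below are an assumed example.

```python
def bandwidth_delay_product(bandwidth_bps: float, delay_s: float) -> float:
    # the number of bits that can be "in flight", filling the link at once
    return bandwidth_bps * delay_s

bdp = bandwidth_delay_product(1e6, 5e-3)   # 1 Mbps link, 5 ms delay
assert abs(bdp - 5000.0) < 1e-6            # 5,000 bits fill the pipe
```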
JITTER
o Another performance issue that is related to delay is jitter.
o Jitter becomes a problem when different packets of data encounter different
delays and the application using the data at the receiver site is
time-sensitive (audio and video data, for example).
o If the delay for the first packet is 20 ms, for the second is 45 ms, and for the third is 40 ms,
then the real-time application that uses the packets endures jitter.
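Using the delays from the example, the packet-to-packet variation can be computed directly. Measuring jitter as the difference between successive packet delays is one common convention (used, for instance, by RTP); this sketch assumes it.

```python
delays_ms = [20, 45, 40]   # delays of the first three packets, in milliseconds
# Jitter shows up as the variation between successive packet delays.
variation = [abs(b - a) for a, b in zip(delays_ms, delays_ms[1:])]
assert variation == [25, 5]   # non-zero variation: the application endures jitter
```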
TRANSMISSION IMPAIRMENTS
o Attenuation: Attenuation means loss of energy; the strength of the signal
decreases with increasing distance, which causes the loss of energy.
o Distortion: Distortion occurs when there is a change in the shape of the
signal. This type of distortion arises in composite signals made up of
different frequencies: each frequency component has its own propagation
speed, so the components arrive at different times, which leads to delay
distortion.
o Noise: When data travels over a transmission medium, some unwanted signal
is added to it, which creates noise.
GUIDED MEDIA
• It is defined as the physical medium through which the signals are transmitted.
• It is also known as Bounded media.
• Types of Guided media: Twisted Pair Cable, Coaxial Cable , Fibre Optic Cable
• Twisted pair is a physical media made up of a pair of cables twisted with each
other.
• A twisted pair cable is cheap as compared to other transmission media.
• Installation of the twisted pair cable is easy, and it is a lightweight cable.
• The frequency range for twisted pair cable is from 0 to 3.5 kHz.
• A twisted pair consists of two insulated copper wires arranged in a regular spiral
pattern.
Unshielded Twisted Pair
An unshielded twisted pair is widely used in telecommunication.
Following are the categories of the unshielded twisted pair cable:
o Category 1: Supports low-speed data.
o Category 2: It can support up to 4 Mbps.
o Category 3: It can support up to 16 Mbps.
o Category 4: It can support up to 20 Mbps.
o Category 5: It can support up to 200 Mbps.
Advantages :
o It is cheap.
o Installation of the unshielded twisted pair is easy.
o It can be used for high-speed LAN.
Disadvantage:
o This cable can only be used for shorter distances because of attenuation.
A shielded twisted pair is a cable that contains the mesh surrounding the wire that
allows the higher transmission rate.
Advantages :
o The cost of the shielded twisted pair cable is not very high and not very low.
o Installation of STP is easy.
o It has higher capacity as compared to unshielded twisted pair cable.
o Its shielding reduces noise and crosstalk, providing a higher data
transmission rate.
Disadvantages:
o It is more expensive as compared to UTP and coaxial cable.
o It has a higher attenuation rate.
COAXIAL CABLE
Disadvantages :
o It is more expensive as compared to twisted pair cable.
o A fault anywhere in the cable causes the failure of the entire network.
FIBRE OPTIC CABLE
Disadvantages :
o Requires Expertise for Installation and maintenance
o Unidirectional light propagation.
o Higher Cost.
Multimode Propagation
• Multimode is so named because multiple beams from a light source move through
the core in different paths.
• How these beams move within the cable depends on the structure of the core.
(Figure: multimode step-index fiber and multimode graded-index fiber)
Single-Mode Propagation
• Single-mode uses step-index fiber and a highly focused source of light that limits
beams to a small range of angles, all close to the horizontal.
• The single-mode fiber itself is manufactured with a much smaller diameter than that
of multimode fiber, and with substantially lower density (index of refraction).
• The decrease in density results in a critical angle that is close enough to 90° to make
the propagation of beams almost horizontal.
• In this case, propagation of different beams is almost identical, and delays are
negligible. All the beams arrive at the destination “together” and can be recombined
with little distortion to the signal.
UNGUIDED MEDIA
o An unguided transmission transmits the electromagnetic waves without using any
physical medium. Therefore it is also known as wireless transmission.
o In unguided media, air is the media through which the electromagnetic energy
can flow easily.
o Unguided transmission is broadly classified into three categories :
Radio Waves, Microwaves , Infrared
RADIO WAVES
o Radio waves are the electromagnetic waves that are transmitted in all the
directions of free space.
o Radio waves are omnidirectional, i.e., the signals are propagated in all the
directions.
o The frequency range of radio waves is from 3 kHz to 1 GHz.
o In the case of radio waves, the sending and receiving antenna are not aligned, i.e.,
the wave sent by the sending antenna can be received by any receiving antenna.
o An example of the radio wave is FM radio.
MICROWAVES
Terrestrial Microwave
o Terrestrial Microwave transmission is a technology that transmits the focused beam
of a radio signal from one ground-based microwave transmission antenna to another.
o Microwaves are the electromagnetic waves having the frequency in the range from
1GHz to 1000 GHz.
o Microwaves are unidirectional as the sending and receiving antenna is to be aligned,
i.e., the waves sent by the sending antenna are narrowly focused.
o In this case, antennas are mounted on towers to send a beam to another
antenna that is kilometres away.
o It works on the line of sight transmission, i.e., the antennas mounted on the towers
are at the direct sight of each other.
Satellite Microwave
o A satellite is a physical object that revolves around the earth at a known height.
o Satellite communication is more reliable nowadays as it offers more flexibility
than cable and fibre optic systems.
o We can communicate with any point on the globe by using
satellite communication.
o The satellite accepts the signal that is transmitted from the earth station, and it
amplifies the signal. The amplified signal is retransmitted to another earth station.
Advantages of Satellite Microwave:
o The coverage area of a satellite microwave is more than the terrestrial microwave.
o The transmission cost of the satellite is independent of the distance from the
centre of the coverage area.
o Satellite communication is used in mobile and wireless
communication applications.
o It is easy to install.
o It is used in a wide variety of applications such as weather forecasting, radio/TV
signal broadcasting, mobile communication, etc.
INFRARED WAVES
Characteristics of Infrared:
o It supports high bandwidth, and hence the data rate will be very high.
o Infrared waves cannot penetrate walls. Therefore, infrared communication in
one room cannot be interfered with by communication in nearby rooms.
o An infrared communication provides better security with minimum interference.
o Infrared communication is unreliable outside the building because the sun rays
will interfere with the infrared waves.
5.15 SWITCHING
Advantages of Switching:
o Switch increases the bandwidth of the network.
o It reduces the workload on individual PCs as it sends the information to only that
device which has been addressed.
o It increases the overall performance of the network by reducing the traffic on the
network.
o There will be less frame collision as switch creates the collision domain for each
connection.
Disadvantages of Switching:
o A Switch is more expensive than network bridges.
o A Switch cannot determine the network connectivity issues easily.
o Proper designing and configuration of the switch are required to handle multicast
packets.
Types of Switching Techniques
2. Data transfer - Once the circuit has been established, data and voice are transferred
from the source to the destination. The dedicated connection remains as long as the
end parties communicate.
Disadvantages
• Circuit switching establishes a dedicated connection between the end parties. This
dedicated connection cannot be used for transmitting any other data, even if the data
load is very low.
• Bandwidth requirement is high even in cases of low data volume.
• There is underutilization of system resources. Once resources are allocated to a
particular connection, they cannot be used for other connections.
• Time required to establish connection may be high.
• It is more expensive than other switching techniques as a dedicated path is required
for each connection.
COMPARISON – CIRCUIT SWITCHING AND PACKET SWITCHING

                               PACKET SWITCHING
CIRCUIT SWITCHING              Virtual Circuit Switching      Datagram Switching
Ensures in-order delivery      Ensures in-order delivery      Packets may be delivered out of order
No reordering is required      No reordering is required      Reordering is required
• Virtual LAN (VLAN) is a concept in which we can divide the devices logically on layer 2
(data link layer).
• Generally, layer 3 devices divide the broadcast domain but the broadcast domain can be
divided by switches using the concept of VLAN.
• A broadcast domain is a network segment in which, if a device broadcasts a
packet, all the devices in the same broadcast domain will receive it.
• To forward packets from one VLAN to another (i.e., between broadcast
domains), inter-VLAN routing is needed.
VLAN ranges:
• VLAN 0, 4095: These are reserved VLANs which cannot be seen or used.
• VLAN 1: It is the default VLAN of switches. By default, all switch ports are in VLAN 1. This
VLAN can't be deleted or edited, but it can be used.
• VLAN 2-1001: This is the normal VLAN range. We can create, edit and delete these VLANs.
• VLAN 1002-1005: These are Cisco defaults for FDDI and Token Ring. These VLANs can't be
deleted.
• VLAN 1006-4094: This is the extended range of VLANs.
Configuration –
We can create a VLAN by simply assigning it a VLAN ID and a VLAN name:
Switch1(config)#vlan 2
Switch1(config-vlan)#name accounts
Here, 2 is the VLAN ID and accounts is the VLAN name. Now, we assign the VLAN to switch
ports, e.g.-
Switch(config)#int fa0/0
Switch(config-if)#switchport mode access
Switch(config-if)#switchport access vlan 2
Also, a range of switchports can be assigned to the required VLAN:
Switch(config)#int range fa0/0-2
Switch(config-if-range)#switchport mode access
Switch(config-if-range)#switchport access vlan 2
By this, switchports fa0/0, fa0/1 and fa0/2 will be assigned VLAN 2.
Example –
Assign IP addresses 192.168.1.1/24, 192.168.1.2/24 and 192.168.2.1/24 to the PCs. Now, we
will create VLAN 2 and VLAN 3 on the switch.
Switch(config)#vlan 2
Switch(config)#vlan 3
We have made the VLANs, but the most important part is to assign switch ports to the VLANs.
Switch(config)#int fa0/0
Switch(config-if)#switchport mode access
Switch(config-if)#switchport access vlan 2
Switch(config)#int fa0/1
Switch(config-if)#switchport mode access
Switch(config-if)#switchport access vlan 3
Switch(config)#int fa0/2
Switch(config-if)#switchport mode access
Switch(config-if)#switchport access vlan 2
As seen, we have assigned VLAN 2 to fa0/0 and fa0/2, and VLAN 3 to fa0/1.
• Improved network security: VLANs can be used to separate network traffic and limit
access to specific network resources. This improves security by preventing unauthorized
access to sensitive data and network resources.
• Better network performance: By segregating network traffic into smaller logical
networks, VLANs can reduce the amount of broadcast traffic and improve network
performance.
• Simplified network management: VLANs allow network administrators to group devices
together logically, rather than physically, which can simplify network management tasks
such as configuration, troubleshooting, and maintenance.
• Flexibility: VLANs can be configured dynamically, allowing network administrators to
quickly and easily adjust network configurations as needed.
• Cost savings: VLANs can help reduce hardware costs by allowing multiple virtual networks
to share a single physical network infrastructure.
• Scalability: VLANs can be used to segment a network into smaller, more manageable groups
as the network grows in size and complexity.
• VLAN tagging: VLAN tagging is a way to identify and distinguish VLAN traffic from other
network traffic. This is typically done by adding a VLAN tag to the Ethernet frame header.
• VLAN membership: VLAN membership determines which devices are assigned to which
VLANs. Devices can be assigned to VLANs based on port, MAC address, or other criteria.
• VLAN trunking: VLAN trunking allows multiple VLANs to be carried over a single physical
link. This is typically done using a protocol such as IEEE 802.1Q.
• VLAN management: VLAN management involves configuring and managing VLANs,
including assigning devices to VLANs, configuring VLAN tags, and configuring VLAN
trunking.
Types of connections in VLAN –
There are three ways to connect devices on a VLAN, the type of connections are based on the
connected devices i.e. whether they are VLAN-aware(A device that understands VLAN formats
and VLAN membership) or VLAN-unaware(A device that doesn’t understand VLAN format and
VLAN membership).
1. Trunk Link –
All connected devices to a trunk link must be VLAN-aware. All frames on this should have a
special header attached to it called tagged frames.
2. Access link –
It connects VLAN-unaware devices to a VLAN-aware bridge. All frames on the access link
must be untagged.
3. Hybrid link –
It is a combination of the Trunk link and Access link. Here both VLAN-unaware and VLAN-
aware devices are attached and it can have both tagged and untagged frames.
Advantages –
• Performance –
The network traffic is full of broadcast and multicast. VLAN reduces the need to send such
traffic to unnecessary destinations. e.g.-If the traffic is intended for 2 users but as 10
devices are present in the same broadcast domain, therefore, all will receive the traffic i.e.
wastage of bandwidth but if we make VLANs, then the broadcast or multicast packet will
go to the intended users only.
• Formation of virtual groups –
As there are different departments in every organization namely sales, finance etc., VLANs
can be very useful in order to group the devices logically according to their departments.
• Security –
In the same network, sensitive data can be broadcast which can be accessed by the
outsider but by creating VLAN, we can control broadcast domains, set up firewalls, restrict
access. Also, VLANs can be used to inform the network manager of an intrusion. Hence,
VLANs greatly enhance network security.
• Flexibility –
VLANs provide the flexibility to add or remove as many hosts as we want.
• Cost reduction –
VLANs can be used to create broadcast domains, which eliminates the need for expensive
routers. By using VLANs, the number of small broadcast domains can be increased; these are
easier to handle than one big broadcast domain.
Disadvantages of VLAN
1. Complexity: VLANs can be complex to configure and manage, particularly in large or
dynamic cloud computing environments.
2. Limited scalability: VLANs are limited by the number of available VLAN IDs, which can be
a constraint in larger cloud computing environments.
3. Limited security: VLANs do not provide complete security and can be compromised by
malicious actors who are able to gain access to the network.
4. Limited interoperability: VLANs may not be fully compatible with all types of network
devices and protocols, which can limit their usefulness in cloud computing environments.
5. Limited mobility: VLANs may not support the movement of devices or users between
different network segments, which can limit their usefulness in mobile or remote cloud
computing environments.
6. Cost: Implementing and maintaining VLANs can be costly, especially if specialized hardware
or software is required.
7. Limited visibility: VLANs can make it more difficult to monitor and troubleshoot network
issues, as traffic is isolated in different segments.