Operating System
In this article, you will learn about the multiprogramming operating system, its
working, advantages, and disadvantages.
When one application is waiting for an I/O transfer, another is always ready
to use the processor, so numerous programs may share CPU time. The jobs do
not all run simultaneously; rather, several jobs reside in memory at once,
and the CPU executes a part of one process, then a segment of another, and
so on. As a result, the overall goal of a multiprogramming system is to keep
the CPU busy as long as jobs are available in the job pool. Thus, numerous
programs can run on a single-processor computer, and the CPU is never idle.
Time-Sharing Systems
The ability for multiple users to use the system at various terminals
simultaneously is one advantage of using a time-sharing operating system. All
users’ response times can be cut down, and the system’s resources can be
used more effectively.
Additionally, because they permit multiple users to use the system without
needing to purchase individual licenses, time-sharing operating systems may
be more cost-effective for businesses.
Advantages
1. Each task gets an equal opportunity.
2. Fewer chances of duplication of software.
3. CPU idle time can be reduced.
Disadvantages
1. Reliability problem.
2. One must take care of the security and integrity of user programs
and data.
3. Data communication problem.
For example, suppose a robot is hired to weld a car body. If the robot welds
too early or too late, the car cannot be sold, so this is a hard real-time
system: the welding must be completed exactly on time. Other examples of
real-time systems include scientific experiments, medical imaging systems,
industrial control systems, weapon systems, robots, and air traffic control
systems.
Advantages:
The advantages of real-time operating systems are as follows-
1. Maximum consumption: Maximum utilization of devices and
systems. Thus more output from all the resources.
Disadvantages:
The disadvantages of real-time operating systems are as follows-
1. Complex algorithms: The algorithms used are very complex and difficult
for a designer to write.
In operating systems, performance can be improved by using more than one
CPU within one computer system; such a system is called a multiprocessor
operating system.
Multiple CPUs are interconnected so that a job can be divided among them for
faster execution. When a job finishes, results from all CPUs are collected and
compiled to give the final output. Jobs may need to share main memory, and
they may also share other system resources among themselves. Multiple CPUs
can also be used to run multiple jobs simultaneously.
For Example: UNIX Operating system is one of the most widely used
multiprocessing systems.
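The idea of dividing a job among CPUs, then collecting and compiling the results, can be sketched with Python's standard multiprocessing module. The job here (summing a list) and the chunking scheme are illustrative choices, not part of the original text.

```python
from multiprocessing import Pool

def partial_sum(chunk):
    # Each worker (potentially on a separate CPU) computes its share of the job.
    return sum(chunk)

if __name__ == "__main__":
    data = list(range(1, 101))
    # Divide the job into 4 parts, one per processor.
    chunks = [data[i::4] for i in range(4)]
    with Pool(processes=4) as pool:
        results = pool.map(partial_sum, chunks)
    # Collect results from all workers and compile the final output.
    print(sum(results))  # 5050
```

The chunks are processed in parallel, and the final `sum` plays the role of compiling the partial results into the final output.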
In this type of system, each processor is assigned a specific task, and there is
a designated master processor that controls the activities of other processors.
Program Execution
It is the Operating System that manages how a program is going to be
executed. It loads the program into the memory after which it is executed. The
order in which they are executed depends on the CPU Scheduling Algorithms.
A few are FCFS, SJF, etc. While programs are in execution, the Operating
System also handles deadlocks, i.e., situations in which processes block one
another indefinitely while waiting for resources. The Operating System is
responsible for the smooth execution of
both user and system programs. The Operating System utilizes various
resources available for the efficient running of all types of functionalities.
File System:
A computer file is defined as a medium used for saving and managing data in
the computer system. The data stored in the computer system is completely in
digital format, and there are various types of files that help us to store
the data.
What is a File System?
A file system is a method an operating system uses to store, organize, and
manage files and directories on a storage device. Some common types of file
systems include:
1. FAT (File Allocation Table): An older file system used by older
versions of Windows and other operating systems.
2. NTFS (New Technology File System): A modern file system used
by Windows. It supports features such as file and folder permissions,
compression, and encryption.
3. ext (Extended File System): A file system commonly used on Linux
and Unix-based operating systems.
4. HFS (Hierarchical File System): A file system used by macOS.
5. APFS (Apple File System): A new file system introduced by Apple
for their Macs and iOS devices.
A file is a collection of related information that is recorded on secondary
storage; alternatively, a file is a collection of logically related entities.
From the user’s perspective, a file is the smallest allotment of logical
secondary storage.
The name of the file is divided into two parts as shown below:
• name
• extension, separated by a period.
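The split into name and extension can be demonstrated with Python's standard `os.path.splitext`, which divides a filename at the final period. The filename used here is just an example.

```python
import os

filename = "report.txt"
# splitext separates the name from the extension at the last period.
name, extension = os.path.splitext(filename)
print(name)       # report
print(extension)  # .txt
```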
Issues Handled By File System
We’ve seen a variety of data structures where the file could be kept. The file
system’s job is to keep the files organized in the best way possible.
A free space is created on the hard drive whenever a file is deleted from it. To
reallocate them to other files, many of these spaces may need to be
recovered. Choosing where to store the files on the hard disk is the main
issue with files: a file may or may not fit in a single block, and it may be
kept in non-contiguous blocks of the disk. We must therefore keep track of
all the blocks in which a file is partially located.
Files Attributes And Their Operations
Common file attributes include the name, type, size, and author of the file;
common operations on files include create, open, read, write, append, and
close. Common file types, their usual extensions, and their functions are:
• Object (obj, o) – compiled, machine language, not linked
• Batch (bat, sh) – commands to the command interpreter
• Text (txt, doc) – textual data, documents
• Multimedia (mpeg, mov, rm) – for containing audio/video information
• Library (lib, a, so, dll) – contains libraries of routines for programmers
Tree-Structured Directory
The directory is maintained in the form of a tree. Searching is efficient and
also there is grouping capability. We have absolute or relative path name for a
file.
File Allocation Methods
There are several types of file allocation methods. These are mentioned
below.
• Continuous Allocation
• Linked Allocation(Non-contiguous allocation)
• Indexed Allocation
Continuous Allocation
A single continuous set of blocks is allocated to a file at the time of file
creation. Thus, this is a pre-allocation strategy, using variable size portions.
The file allocation table needs just a single entry for each file, showing the
starting block and the length of the file. This method is best from the point of
view of the individual sequential file. Multiple blocks can be read in at a time to
improve I/O performance for sequential processing. It is also easy to retrieve a
single block. For example, if a file starts at block b, and the ith block of the file
is wanted, its location on secondary storage is simply b+i-1.
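The b+i-1 address calculation can be written out directly; the starting block and block index below are example values.

```python
def block_location(start_block, i):
    """Return the disk address of the ith block of a contiguously
    allocated file that starts at start_block (blocks numbered from 1)."""
    return start_block + i - 1

# A file stored contiguously starting at block 12:
print(block_location(12, 1))  # 12 (the first block is the start block)
print(block_location(12, 5))  # 16 (the fifth block)
```

Because the address is a single arithmetic expression, retrieving any one block of a contiguously allocated file costs no extra disk accesses.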
Disadvantages of Continuous Allocation
• External fragmentation will occur, making it difficult to find contiguous
blocks of space of sufficient length. A compaction algorithm will be
necessary to free up additional space on the disk.
• Also, with pre-allocation, it is necessary to declare the size of the file
at the time of creation.
Linked Allocation(Non-Contiguous Allocation)
Allocation is on an individual block basis. Each block contains a pointer to the
next block in the chain. Again the file table needs just a single entry for each
file, showing the starting block and the length of the file. Although pre-
allocation is possible, it is more common simply to allocate blocks as needed.
Any free block can be added to the chain. The blocks need not be continuous.
An increase in file size is always possible if a free disk block is available.
There is no external fragmentation because only one block is needed at a
time. There can be internal fragmentation, but it exists only in the last
disk block of the file.
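A minimal in-memory sketch of linked allocation: each disk block stores its data plus the number of the next block in the chain, with -1 marking the end. The block numbers and contents are hypothetical.

```python
# Each entry maps a block number to (data, pointer to next block).
# -1 marks the end of the chain.
disk = {
    7:  ("chunk-1", 19),
    19: ("chunk-2", 4),
    4:  ("chunk-3", -1),
}

def read_file(start_block):
    """Follow the chain of pointers from the starting block."""
    data, block = [], start_block
    while block != -1:
        contents, block = disk[block]
        data.append(contents)
    return data

print(read_file(7))  # ['chunk-1', 'chunk-2', 'chunk-3']
```

Note that the blocks 7, 19, and 4 are not contiguous, yet the file is read in order simply by following the pointers, which is why growing the file only requires finding any free block.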
File Protection:
In computer systems, a lot of user information is stored, and an objective
of the operating system is to keep the user's data safe from improper access
to the system. Protection can be provided in a number of ways. For a single
laptop system, we might provide protection by locking the computer in a desk
drawer or file cabinet. For multi-user systems, different mechanisms are
used for protection.
Types of Access :
Files that can be accessed directly by any user need protection. Files that
are not accessible to other users do not require any kind of protection. The
protection mechanism provides controlled access by limiting the types of
access to a file. Whether access is granted to a user depends on several
factors, one of which is the type of access required. Several different
types of operations can be controlled:
• Read – Reading from a file.
• Write – Writing or rewriting the file.
• Execute – Loading the file into memory and starting its execution.
• Append – Writing new information at the end of the already existing
file.
• Delete – Deleting a file which is of no use, and reusing its space
for other data.
• List – Listing the name and attributes of the file.
Operations like renaming, editing, and copying the existing file can also be
controlled. There are many protection mechanisms; each has different
advantages and disadvantages, and each must be appropriate for its intended
application.
Access Control :
There are different methods by which different users may access a file. The
general way of providing protection is to associate identity-dependent
access with all files and directories through a list called the
access-control list (ACL), which specifies the names of the users and the
types of access allowed for each user. The main problem with access lists is
their length. If we want to allow everyone to read a file, we must list all
users with read access. This technique has two undesirable consequences:
Constructing such a list may be a tedious and unrewarding task, especially
if we do not know in advance the list of users in the system.
Previously, directory entries were of fixed size, but now they must be of
variable size, which complicates space management.
These problems can be resolved by use of a condensed version of the access
list. To condense the length of the access-control list, many systems
recognize three classification of users in connection with each file:
• Owner – Owner is the user who has created the file.
• Group – A group is a set of members who has similar needs and
they are sharing the same file.
• Universe – In the system, all other users are under the category
called universe.
The most common recent approach is to combine access-control lists with the
normal general owner, group, and universe access control scheme. For
example: Solaris uses the three categories of access by default but allows
access-control lists to be added to specific files and directories when more
fine-grained access control is desired.
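The owner/group/universe scheme can be sketched as follows; the file names, users, and groups are hypothetical, and the three permission classes mirror the classification described above.

```python
# Each file stores three permission sets, one per user class.
file_acl = {
    "report.txt": {
        "owner": {"read", "write", "execute"},
        "group": {"read"},
        "universe": set(),
    }
}
file_owner = {"report.txt": "alice"}
file_group = {"report.txt": "staff"}
user_groups = {"alice": {"staff"}, "bob": {"staff"}, "eve": set()}

def can_access(user, filename, operation):
    """Classify the user as owner, group member, or universe,
    then check the corresponding permission set."""
    if user == file_owner[filename]:
        cls = "owner"
    elif file_group[filename] in user_groups.get(user, set()):
        cls = "group"
    else:
        cls = "universe"
    return operation in file_acl[filename][cls]

print(can_access("alice", "report.txt", "write"))  # True  (owner)
print(can_access("bob", "report.txt", "read"))     # True  (group member)
print(can_access("eve", "report.txt", "read"))     # False (universe)
```

Condensing the full per-user access list down to three fixed classes is exactly what keeps the directory entry small.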
Other Protection Approaches:
Access to any system can also be controlled by a password. If the passwords
are chosen randomly and changed often, this may effectively limit access to
a file.
The use of passwords has a few disadvantages:
• The number of passwords can be very large, making them difficult to
remember.
• If one password is used for all the files, then once it is discovered,
all files are accessible; protection is on an all-or-none basis.
Unit II:
CPU Scheduling:-
CPU Scheduling
It is the task of the short-term scheduler to schedule the CPU for the
processes present in the job pool. Whenever the running process requests an
I/O operation, the short-term scheduler saves the current context of the
process (in its PCB, or Process Control Block) and changes its state from
running to waiting. While the process is in the waiting state, the
short-term scheduler picks another process from the ready queue and assigns
the CPU to it. This procedure is called context switching.
If most of the running processes change their state from running to waiting,
then there may always be a possibility of deadlock in the system. Hence, to
reduce this overhead, the OS needs to schedule the jobs to get optimal
utilization of the CPU and to avoid the possibility of deadlock.
Process Management in OS
Process States
State Diagram
The process, from its creation to completion, passes through various states.
The minimum number of states is five.
The names of the states are not standardized although the process may be in
one of the following states during execution.
1. New
A process that is in the course of being created, and has not yet been
admitted to main memory, is in the new state.
2. Ready
The processes which are ready for the execution and reside in the main
memory are called ready state processes. There can be many processes
present in the ready state.
3. Running
One of the processes from the ready state will be chosen by the OS
depending upon the scheduling algorithm. Hence, if we have only one CPU in
our system, the number of running processes for a particular time will always
be one. If we have n processors in the system then we can have n processes
running simultaneously.
4. Block or wait
From the Running state, a process can make the transition to the block or wait
state depending upon the scheduling algorithm or the intrinsic behavior of the
process.
When a process waits for a certain resource to be assigned or for the input
from the user then the OS move this process to the block or wait state and
assigns the CPU to the other processes.
5. Completion or termination
When a process finishes its execution, it comes to the termination state.
All the context of the process (its Process Control Block) is deleted, and
the process is terminated by the Operating System.
6. Suspend ready
A process in the ready state, which is moved to secondary memory from the
main memory due to lack of the resources (mainly primary memory) is called
in the suspend ready state.
If the main memory is full and a higher-priority process arrives for
execution, the OS has to make room for it in main memory by moving a
lower-priority process out into secondary memory. The suspend-ready
processes remain in secondary memory until main memory becomes available.
7. Suspend wait
Instead of removing a process from the ready queue, it is better to remove a
blocked process that is waiting for some resource in main memory. Since it
is already waiting for a resource to become available, it is better if it
waits in secondary memory and makes room for a higher-priority process.
These processes complete their execution once main memory becomes available
and their wait is finished.
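The states above can be sketched as a transition table; the event names chosen here (admit, dispatch, io wait, and so on) are illustrative labels, not standard terminology.

```python
# Hypothetical encoding of the process life cycle as a transition table:
# TRANSITIONS[state][event] -> next state.
TRANSITIONS = {
    "new":           {"admit": "ready"},
    "ready":         {"dispatch": "running", "suspend": "suspend ready"},
    "running":       {"io wait": "block", "preempt": "ready", "exit": "terminated"},
    "block":         {"io done": "ready", "suspend": "suspend wait"},
    "suspend ready": {"resume": "ready"},
    "suspend wait":  {"io done": "suspend ready"},
}

def next_state(state, event):
    return TRANSITIONS[state][event]

# A process is created, scheduled, blocks on I/O, resumes, and finishes.
state = "new"
for event in ["admit", "dispatch", "io wait", "io done", "dispatch", "exit"]:
    state = next_state(state, event)
print(state)  # terminated
```

Any event missing from a state's entry is an illegal transition, which matches the idea that, for example, a blocked process cannot be dispatched directly to the CPU.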
1. Creation
Once the process is created, it will be ready and come into the ready queue
(main memory) and will be ready for the execution.
2. Scheduling
Out of the many processes present in the ready queue, the Operating system
chooses one process and start executing it. Selecting the process which is to
be executed next, is known as scheduling.
3. Execution
Once the process is scheduled for the execution, the processor starts
executing it. Process may come to the blocked or wait state during the
execution then in that case the processor starts executing the other
processes.
4. Deletion/killing
Once the purpose of the process gets over then the OS will kill the process.
The Context of the process (PCB) will be deleted and the process gets
terminated by the Operating system.
There are various algorithms which are used by the Operating System to
schedule the processes on the processor in an efficient way.
There are the following algorithms which can be used to schedule the jobs.
1. FCFS (First Come First Serve)
It is the simplest algorithm to implement. The process with the minimal
arrival time gets the CPU first. The smaller the arrival time, the sooner
the process gets the CPU. It is a non-preemptive type of scheduling.
2. SJF (Shortest Job First)
The job with the shortest burst time gets the CPU first. The smaller the
burst time, the sooner the process gets the CPU. It is a non-preemptive type
of scheduling.
3. SRTF (Shortest Remaining Time First)
It is the preemptive form of SJF. In this algorithm, the OS schedules the
job according to its remaining execution time.
4. Priority Scheduling
In this algorithm, a priority is assigned to each process. The higher the
priority, the sooner the process gets the CPU. If the priority of two
processes is the same, they are scheduled according to their arrival time.
5. HRRN (Highest Response Ratio Next)
In this scheduling algorithm, the process with the highest response ratio is
scheduled next. This reduces starvation in the system.
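A minimal FCFS simulation makes the non-preemptive, arrival-order behavior concrete; the process names and times below are made up for illustration.

```python
def fcfs(processes):
    """processes: list of (name, arrival_time, burst_time).
    Returns each process's completion time under First Come First Serve."""
    completion = {}
    time = 0
    for name, arrival, burst in sorted(processes, key=lambda p: p[1]):
        time = max(time, arrival)  # the CPU may sit idle until the job arrives
        time += burst              # run the job to completion (non-preemptive)
        completion[name] = time
    return completion

jobs = [("P1", 0, 4), ("P2", 1, 3), ("P3", 2, 1)]
print(fcfs(jobs))  # {'P1': 4, 'P2': 7, 'P3': 8}
```

Note that P3 has the shortest burst but finishes last; SJF would have reordered the queue to run it sooner, which is the essential difference between the two algorithms.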
One approach is for all scheduling decisions and I/O processing to be
handled by a single processor called the Master Server, while the other
processors execute only user code. This is simple and reduces the need for
data sharing. This scenario is called Asymmetric Multiprocessing. A second
approach uses Symmetric Multiprocessing, where each processor is
self-scheduling. All processes may be in a common ready queue, or each
processor may have its own private queue of ready processes. Scheduling
proceeds by having the scheduler for each processor examine the ready queue
and select a process to execute.
Processor Affinity – A process has an affinity for the processor on which it
is currently running, since migrating it to another processor invalidates
the contents of that processor's cache.
Load Balancing – Load balancing attempts to keep the workload evenly
distributed across all processors in an SMP system.
Multicore Processors – A recent trend is to place multiple processor cores
on the same physical chip; each core appears to the operating system as a
separate processor.
4. Paging is faster in comparison to segmentation; segmentation is slower.
Demand paging follows the principle of lazy loading. This principle says
that we should not load any pages into main memory until we need them;
pages are kept in secondary memory until they are required.
What is Demand Paging?
Demand paging can be described as a memory management technique that is
used in operating systems to improve memory usage and system
performance. Demand paging is a technique used in virtual memory systems
where pages enter main memory only when requested or needed by the CPU.
In demand paging, the operating system loads only the necessary pages of a
program into memory at runtime, instead of loading the entire program into
memory at the start. A page fault occurs when the program needs to access a
page that is not currently in memory. The operating system then loads the
required page from the disk into memory and updates the page tables
accordingly. This process is transparent to the running program, which
continues to run as if the page had always been in memory.
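Demand paging can be simulated by counting page faults over a page reference string. The sketch below uses FIFO replacement, which is one possible policy (the text does not specify one), and a made-up reference string.

```python
def count_page_faults(reference_string, frames):
    """Simulate demand paging with FIFO page replacement.
    Pages are loaded only on demand; each miss counts as a page fault."""
    memory, queue, faults = set(), [], 0
    for page in reference_string:
        if page not in memory:
            faults += 1                   # page fault: bring page in from disk
            if len(memory) == frames:     # memory full: evict the oldest page
                memory.remove(queue.pop(0))
            memory.add(page)
            queue.append(page)
    return faults

# 3 frames: pages 1, 2, 3 fault, 1 hits, then 4 and 5 each fault.
print(count_page_faults([1, 2, 3, 1, 4, 5], frames=3))  # 5
```

Only referenced pages ever occupy a frame, which is precisely what lets a program run with more pages than there is physical memory.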
What is Page Fault?
The term “page miss” or “page fault” refers to a situation where a referenced
page is not found in the main memory.
When a program tries to access a page (a fixed-size block of memory) that
isn't currently loaded in physical memory (RAM), an exception known as a
page fault happens. To handle a page fault, the operating system must bring
the required page into memory from secondary storage (such as a hard drive)
before enabling the program to access it.
In modern operating systems, page faults are a common component of virtual
memory management. By enabling programs to operate with more data than
can fit in physical memory at once, they enable the efficient use of physical
memory. The operating system is responsible for coordinating the transfer of
data between physical memory and secondary storage as needed.
What is Thrashing?
Thrashing is the term used to describe a state in which excessive paging
activity takes place in computer systems, especially in operating systems that
use virtual memory, severely impairing system performance. Thrashing occurs
when a system’s high memory demand and low physical memory capacity
cause it to spend a large amount of time rotating pages between main
memory (RAM) and secondary storage, which is typically a hard disc.
Examples Of Deadlock
1. The system has 2 tape drives. P0 and P1 each hold one tape drive
and each needs another one.
2. Semaphores A and B, initialized to 1; P0 and P1 enter deadlock as
follows:
• P0 executes wait(A) and is then preempted.
• P1 executes wait(B).
• Now P0 waits for B while P1 waits for A, so P0 and P1 are
deadlocked.
P0: wait(A); wait(B);
P1: wait(B); wait(A);
3. Assume 200K bytes of space are available for allocation, and the
following sequence of events occurs:
P0 requests 80KB; P1 requests 70KB.
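The circular wait in the semaphore example above can be detected by following the wait-for chain: each blocked process waits for whoever holds the resource it wants. This is a simplified sketch, not a full deadlock-detection algorithm.

```python
# holds[p] = resources the process currently owns;
# wants[p] = the resource it is blocked on (absent if not blocked).
holds = {"P0": {"A"}, "P1": {"B"}}
wants = {"P0": "B", "P1": "A"}

def is_deadlocked(start):
    """Follow the wait-for chain from `start`; a cycle back to the
    starting process means there is a circular wait, i.e., deadlock."""
    seen, p = set(), start
    while True:
        needed = wants.get(p)
        if needed is None:
            return False            # chain ends: some process can proceed
        holder = next(q for q, r in holds.items() if needed in r)
        if holder == start:
            return True             # circular wait detected
        if holder in seen:
            return False
        seen.add(holder)
        p = holder

print(is_deadlocked("P0"))  # True: P0 -> B (held by P1) -> A (held by P0)
```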
In this article, you will learn about implementing the access matrix in the
operating system. But before discussing the implementation of the access
matrix, you must know about the access matrix in the operating system.
There are various methods of implementing the access matrix in the operating
system. These methods are as follows:
1. Global Table
2. Access Lists for Objects
3. Capability Lists for Domains
4. Lock-Key Mechanism
Access Lists for Objects
Every access matrix column may be used as a single object's access list. It is
possible to delete the blank entries. For each object, the resulting list contains
ordered pairs <domain, rights-set> that define all domains for that object and
a nonempty set of access rights.
We may start by searching the access list for the requested entry. If the
entry is found, we permit the operation; if it is not, we check a default
set. If M is in the default set, we grant access. Otherwise, access is
denied and an exception condition occurs.
Capability Lists for Domains
A domain's capability list is a collection of objects and the actions that
can be performed on them. A capability is a name or address that is used to
refer to an object. If a process wants to perform operation M on object Oj,
it executes operation M, specifying the capability for object Oj. The simple
possession of the capability implies that access is allowed.
In most cases, capabilities are separated from other data in one of two ways.
Every object has a tag to indicate its type as capability data. Alternatively, a
program's address space can be divided into two portions. The programs may
access one portion, including the program's normal instructions and data. The
other portion is a capability list that is only accessed by the operating system.
Lock-Key Mechanism
It is a compromise between access lists and capability lists. Each object
has a list of locks, which are unique bit patterns, and each domain has a
list of keys, which are also unique bit patterns. A process executing in a
domain can access an object only if that domain has a key that matches one
of the object's locks. The process is not allowed to modify its own keys.
As an example, consider an access matrix with four domains and four objects:
three files (F1, F2, and F3) and one printer. A process running in domain D1
can read files F1 and F3. A process running in domain D4 has the same rights
as one in D1, but it may also write to those files. Only a process running
in domain D2 can access the printer. The access matrix mechanism is made up
of various policies and semantic properties. Specifically, we must ensure
that a process running in domain Di can access only the objects listed in
row i.
The protection policies in the access matrix determine which rights are
included in the (i, j)th entry. We must also decide the domain in which each
process runs; this policy is usually decided by the OS. The users determine
the contents of the access-matrix entries.
The relationship between a domain and a process might be static or dynamic.
The access matrix provides a mechanism for defining the control of this
domain-process association. When we switch a process from one domain to
another, we perform a switch operation on an object (the domain). We can
regulate domain switching by including domains among the objects of the
access matrix. A process can switch from one domain (Di) to another (Dj)
only if it has switch rights for Dj.
According to the matrix, a process running in domain D2 can switch to
domains D3 and D4. A process in domain D4 may switch to domain D1, and a
process in domain D1 may switch to domain D2.
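The matrix described above can be encoded as a nested dictionary: rows are domains, columns are objects (including the domains themselves, so that switching is just another right). The exact rights sets below follow the example in the text; the encoding itself is a sketch.

```python
# Rows are domains, columns are objects, entries are sets of rights.
access_matrix = {
    "D1": {"F1": {"read"}, "F3": {"read"}, "D2": {"switch"}},
    "D2": {"printer": {"print"}, "D3": {"switch"}, "D4": {"switch"}},
    "D3": {"F2": {"read"}},
    "D4": {"F1": {"read", "write"}, "F3": {"read", "write"}, "D1": {"switch"}},
}

def allowed(domain, obj, right):
    """A process in `domain` may act on `obj` only if the right appears
    in the (domain, obj) entry of the matrix."""
    return right in access_matrix.get(domain, {}).get(obj, set())

print(allowed("D1", "F1", "read"))        # True
print(allowed("D4", "F1", "write"))       # True  (D4 also has write rights)
print(allowed("D2", "printer", "print"))  # True
print(allowed("D2", "D3", "switch"))      # True  (domain switch as a right)
print(allowed("D1", "F1", "write"))       # False
```

Storing empty entries implicitly (missing keys) is what the access-list implementation exploits: each column keeps only its nonempty <domain, rights-set> pairs.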
Unit V:
Windows NT:-
1. Use FDISK to create a 2GB partition on the fixed disk and set the
partition active. When exiting FDISK the computer will reboot to save the
partition information.
2. Insert the Windows NT 4.0 Server CD-ROM and type "D:\i386\winnt /b". This
starts the installation process.
3. After the files have been copied you will be asked if you want to reboot.
Press Enter.
4. When setup lists the computer information, press Enter to accept it.
5. Insert the Windows NT 4.0 Server CD-ROM and click [OK] when prompted.
6. Click [Next] at Windows NT Server version 4.0 setup.
7. At the name and organization window, enter what you want. Be imaginative.
Click [Next].
8. Enter "NT2" for the computer name (NetBIOS name), specify the server type
"BDC", and click [Next].
9. Enter the administrator account password and confirm it, then click
[Next].
10. Select "No" when asked if you would like to create an emergency repair
disk. Click [Next].
11. Select "Wired to the network" when asked for a connection option and
click [Next].
12. Uncheck "Install the Microsoft Internet Information Server (IIS)" and
click [Next].
13. When asked for NIC drivers, insert driver disk #1 in floppy A: and click
[Have Disk]. Select "3Com Fast Etherlink/Etherlink XL PCI Bus Master NIC
(3C905B-TX)" and click [OK].
14. When the TCP/IP properties window appears, provide the following:
a. IP address = 172.16.102.3
b. Subnet Mask = 255.255.254.0
Click [Finish], then click [OK].
15. When the "Installation Complete" screen appears, remove all disks and
click [Restart Computer].
During NT Server installation, you must designate the role that servers will
play in a domain. NT gives you three choices for this role: PDC, BDC, and
member server (i.e., a standalone server). You create a domain when you
designate a PDC. PDCs and BDCs are crucial elements in domain theory and
practice. To maintain control of and get the most out of the domains you
establish in your NT network, you need to understand what PDCs and
BDCs are, how to synchronize the directory database from a PDC to the
BDCs in its domain, how to promote a BDC to a PDC when the PDC is offline,
how to determine the optimum number of BDCs for a domain, and how to
manage trust relationships between the PDCs of separate domains.
A domain can have multiple BDCs. Each BDC in a domain maintains a read-
only copy of the PDC's master directory database. You can't make changes to
a BDC's copy of the directory database. Because directory database
duplication occurs between the PDC's master directory database and the
BDCs' directory database copies, you can promote any BDC in a domain to
the PDC if the original PDC fails or you must shut it down for maintenance.
BDCs also help share the load of authenticating network logons.
Having at least one BDC in a domain is crucial. If the PDC fails, you can keep
the domain functioning by promoting the BDC to PDC. Promoting the BDC
ensures that you can make changes to the directory database and propagate
those changes throughout the network. BDC promotion also guarantees
access to network resources and keeps the directory database accessible to
the domain. If the directory database isn't accessible to the domain, users
can't log on and become authenticated to the domain. Computers can't
identify themselves to the domain and therefore can't create the secure
channel necessary for communication between machines in the domain.
Group accounts won't have access to resources in the domain. In short,
without a BDC to promote to PDC, you'll have a lot of explaining to do when
your network comes to a halt.
Standalone Server
A standalone server is a server that runs alone and is not a part of a group. In
fact, in the context of Microsoft Windows networks, a standalone server is one
that does not belong to or is not governed by a Windows domain. This kind of
server is not a domain member and functions more as a workgroup server, so
its use makes more sense in local settings where complex security and
authentication may not be required.
If all that is needed is a server for read-only files, or for printers alone, it may
not make sense to effect a complex installation. For example, a drafting office
needs to store old drawings and reference standards. Nobody can write files
to the server because it is legislatively important that all documents remain
unaltered. A share-mode read-only standalone server is an ideal solution.
Another situation that warrants simplicity is an office that has many printers
that are queued off a single central server. Everyone needs to be able to print
to the printers, there is no need to effect any access controls, and no files will
be served from the print server. Again, a share-mode standalone server
makes a great solution.
Background
The term standalone server means that it will provide local authentication and
access control for all resources that are available from it. In general this
means that there will be a local user database. In more technical terms, it
means resources on the machine will be made available in either share mode
or in user mode.
Samba tends to blur the distinction a little in defining a standalone server. This
is because the authentication database may be local or on a remote server,
even if from the SMB protocol perspective the Samba server is not a member
of a domain security context.
Through the use of Pluggable Authentication Modules (PAM) (see the chapter
on PAM) and the name service switcher (NSS), which maintains the UNIX-
user database, the source of authentication may reside on another server. We
would be inclined to call this the authentication server. This means that the
Samba server may use the local UNIX/Linux system password database
(/etc/passwd or /etc/shadow), a local smbpasswd file, an LDAP backend, or
even, via PAM and Winbind, another CIFS/SMB server for authentication.
NT system policies are useful for managing user and machine Registry
changes in the enterprise. They help systems administrators centralize
configuration control in large and small NT environments. They also ease
problems associated with desktop configuration management, such as
delivering icons to your users' desktops. However, NT system policies can be
difficult to configure, can cause widespread damage, and can become
unmanageable if you're not careful.
Web Server
A web server is software and hardware that uses HTTP (Hypertext Transfer
Protocol) and other protocols to respond to client requests made over the
World Wide Web. The main job of a web server is to display website content
through storing, processing and delivering webpages to users. Besides HTTP,
web servers also support SMTP (Simple Mail Transfer Protocol) and FTP (File
Transfer Protocol), used for email, file transfer and storage.
Web servers are used in web hosting, or the hosting of data for websites and
web-based applications -- or web applications.
When a web browser, like Google Chrome or Firefox, needs a file that's
hosted on a web server, the browser will request the file by HTTP. When the
request is received by the web server, the HTTP server will accept the
request, find the content and send it back to the browser through HTTP.
More specifically, when a browser requests a page from a web server, the
process will follow a series of steps. First, a person will specify a URL in a web
browser's address bar. The web browser will then obtain the IP address of the
domain name -- either translating the URL through DNS (Domain Name
System) or by searching in its cache. This will bring the browser to a web
server. The browser will then request the specific file from the web server by
an HTTP request. The web server will respond, sending the browser the
requested page, again, through HTTP. If the requested page does not exist or
if something goes wrong, the web server will respond with an error message.
The browser will then be able to display the webpage.
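The accept-request, find-content, send-response cycle described above can be sketched with Python's standard http.server module. The page content and address are placeholders; the commented-out serve_forever call is what would actually keep the server handling requests.

```python
from http.server import BaseHTTPRequestHandler, HTTPServer

class Handler(BaseHTTPRequestHandler):
    def do_GET(self):
        # Accept the request, find the content, and send it back over HTTP.
        body = b"<html><body>Hello from the web server</body></html>"
        self.send_response(200)
        self.send_header("Content-Type", "text/html")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):
        pass  # keep the example quiet

server = HTTPServer(("127.0.0.1", 0), Handler)  # port 0 = any free port
print("would serve on port", server.server_address[1])
# server.serve_forever()  # uncomment to handle requests until interrupted
server.server_close()
```

A browser pointed at the printed port would receive the HTML body with a 200 status line, following exactly the request/response steps outlined above.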
Many basic web servers will also support server-side scripting, which is used
to employ scripts on a web server that can customize the response to the
client. Server-side scripting runs on the server machine and typically has a
broad feature set, which includes database access. The server-side scripting
process will also use Active Server Pages (ASP), Hypertext Preprocessor
(PHP) and other scripting languages. This process also allows HTML
documents to be created dynamically.
Dynamic web servers consist of a web server and other software such
as an application server and a database. They are considered dynamic because the
application server can be used to update any hosted files before they are sent
to a browser. The web server can generate content when it is requested from
the database. Though this process is more flexible, it is also more
complicated.
Considerations in choosing a web server include how well it works with the
operating system and other servers; its ability to handle server-side
programming; security characteristics; and the publishing, search engine and
site-building tools that come with it. Web servers may also have different
configurations and set default values. To create high performance, a web
server, high throughput and low latency will help.
The Domain Name System (DNS) is the phonebook of the Internet. Humans
access information online through domain names, like nytimes.com or
espn.com. Web browsers interact through Internet Protocol (IP) addresses.
DNS translates domain names to IP addresses so browsers can load Internet
resources.
Each device connected to the Internet has a unique IP address which other
machines use to find the device. DNS servers eliminate the need for humans
to memorize IP addresses such as 192.168.1.1 (in IPv4), or more complex
newer alphanumeric IP addresses such as 2400:cb00:2048:1::c629:d7a2 (in
IPv6).
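The name-to-address translation DNS performs can be demonstrated through the resolver exposed by Python's socket module; "localhost" is used here because its resolution does not depend on network access.

```python
import socket

# Translate a host name into an IP address, as DNS does for domain names.
ip = socket.gethostbyname("localhost")
print(ip)  # typically 127.0.0.1
```

For a real domain such as nytimes.com, the same call would consult the system's configured DNS resolver rather than the local hosts file.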
The Windows Internet Naming Service (WINS) converts NetBIOS host names
into IP addresses. It allows Windows machines on a given LAN segment to
recognize Windows machines on other LAN segments.
The WINS service resolves NetBIOS names into IP addresses and is therefore
an elementary component of a Windows network. Given this fact, the service
should already be installed on the Windows server located in the network.
Router:
Routers allow devices to connect and share data over the Internet or an intranet.
A router is a gateway that passes data between one or more local area networks
(LANs). Routers use the Internet Protocol (IP) to send IP packets containing data
and IP addresses of sending and destination devices located on separate local
area networks. Routers reside between these LANs where the sending and
receiving devices are connected. Devices may be connected over multiple router
“hops” or may reside on separate LANs directly connected to the same router.
Once an IP packet from a sending device reaches a router, the router identifies
the packet’s destination and calculates the best way to forward it there. The router
maintains a set of route-forwarding tables, which are rules that identify how to
forward data to reach the destination device’s LAN. A router will determine the
best router interface (or next hop) to send the packet closer to the destination
device’s LAN. Once a device sends an IP packet, routers determine that packet’s
best route over the Internet or intranet to reach its destination most efficiently and
in accordance with quality-of-service agreements.
Routers provide the essential building blocks network operators need to build
robust networks. Operators can use routers to configure performance metrics with
sophisticated routing algorithms and create traffic engineering policies to alleviate
network congestion and maintain quality of service for subscribers.
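The route-forwarding table lookup described above can be sketched with Python's standard ipaddress module. The prefixes and next-hop addresses below are hypothetical (drawn from documentation ranges), and the rule applied is longest-prefix match, the usual way a router picks the most specific route.

```python
import ipaddress

# Hypothetical forwarding table: destination prefix -> next-hop address.
forwarding_table = {
    ipaddress.ip_network("10.0.0.0/8"):  "192.0.2.1",
    ipaddress.ip_network("10.1.0.0/16"): "192.0.2.2",
    ipaddress.ip_network("0.0.0.0/0"):   "192.0.2.254",  # default route
}

def next_hop(destination):
    """Pick the most specific (longest) prefix containing the address."""
    addr = ipaddress.ip_address(destination)
    matches = [net for net in forwarding_table if addr in net]
    best = max(matches, key=lambda net: net.prefixlen)
    return forwarding_table[best]

print(next_hop("10.1.2.3"))     # 192.0.2.2  (the /16 beats the /8)
print(next_hop("10.9.9.9"))     # 192.0.2.1
print(next_hop("203.0.113.5"))  # 192.0.2.254 (falls through to the default)
```

The default route (0.0.0.0/0) matches every address, so it is chosen only when no more specific prefix in the table applies.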