
Unit-1

Introduction to Operating System


Introduction
An Operating System (OS) is an interface between the computer user and the computer hardware. An operating system is software that performs all the basic tasks, such as file management, memory management, process management, handling input and output, and controlling peripheral devices such as disk drives and printers.

Some popular operating systems include Linux, Windows, VMS, OS/400, AIX, and z/OS.

The Operating System is a program with the following features −


• An operating system is a program that acts as an interface between the software and the
computer hardware.
• It is an integrated set of specialized programs used to manage overall resources and operations
of the computer.
• It is specialized software that controls and monitors the execution of all other programs that
reside in the computer, including application programs and other system software.

Fig: Layers of operating system

Objectives of Operating System


There are two primary objectives of an operating system:
1. Operating system as an extended machine:
The architecture (instruction set, memory organization, I/O, and bus structure) of most
computers at the machine language level is primitive and awkward to program, especially for
input/output. The program that hides the truth about the hardware from the programmer and presents a nice, simple view of named files that can be read and written is the operating system. Just
as the operating system shields the programmer from the disk hardware and presents a simple file-
oriented interface, it also conceals a lot of unpleasant business concerning interrupts, timers,
memory management, and other low-level features. In each case, the abstraction offered by the
operating system is simpler and easier to use than that offered by the underlying hardware.
In this view, the function of the operating system is to present the user with the equivalent of
an extended machine or virtual machine that is easier to program than the underlying hardware.
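This "extended machine" view can be made concrete with a few lines of C. The following is a minimal sketch, assuming a POSIX-style system; the file name notes.txt is purely illustrative. The program reads from a named file through simple library calls, while the operating system takes care of sectors, controllers, and interrupts behind the scenes.

/* Minimal sketch of the "extended machine" view, assuming a POSIX-style
 * system: the program sees a named file and simple read calls, while the
 * OS hides sectors, controllers, and interrupts. The file name
 * "notes.txt" is only an illustration. */
#include <stdio.h>
#include <stdlib.h>

int main(void) {
    FILE *f = fopen("notes.txt", "r");    /* open by name, not by disk address */
    if (f == NULL) {
        perror("fopen");
        return EXIT_FAILURE;
    }
    char buf[128];
    size_t n = fread(buf, 1, sizeof buf - 1, f);  /* read bytes, not sectors */
    buf[n] = '\0';
    printf("first %zu bytes: %s\n", n, buf);
    fclose(f);                            /* the OS releases the underlying device */
    return EXIT_SUCCESS;
}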
2. The Operating System as a Resource Manager
The most important objective of an operating system is to manage the various resources of the
computer system. This involves performing such tasks as keeping track of who is using which
resources, granting resource requests, accounting for resource usage, and mediating conflicting
requests from different programs and users.
Executing a job on a computer system often requires several of its resources, such as CPU time,
memory space, file storage space, I/O devices and so on. The operating system acts as the manager
of the various resources of a computer system and allocates them to specific programs and users
to execute their jobs successfully. When a computer system is used to simultaneously handle
several applications, there may be many, possibly conflicting, requests for resources. In such a
situation, the operating system must decide which requests are allocated resources to operate the
computer system efficiently and fairly (providing due attention to all users). The efficient and fair
sharing of resources among users and/or programs is a key goal of most operating systems.

Fig: Operating System as a resource manager

History of Operating System


Operating systems have been evolving through the years. Since operating systems have
historically been closely tied to the architecture of the computers on which they run, we will look
at successive generations of computers to see what their operating systems were like. This mapping
of operating system generations to computer generations is crude, but it does provide some
structure where there would otherwise be none.

The first true digital computer was designed by the English mathematician Charles Babbage
(1792-1871). Although Babbage spent most of his life and fortune trying to build his “analytical
engine,” he never got it working properly because it was purely mechanical, and the technology
of his day could not produce the required wheels, gears, and cogs to the high precision that he
needed. Needless to say, the analytical engine did not have an operating system.
As an interesting historical aside, Babbage realized that he would need software for his analytical
engine, so he hired a young woman named Ada Lovelace, who was the daughter of the famed
British poet Lord Byron, as the world’s first programmer. The programming language Ada® is
named after her.

The First Generation (1945-55) Vacuum Tubes and Plugboards


After Babbage’s unsuccessful efforts, little progress was made in constructing digital computers
until World War II. Around the mid-1940s, Howard Aiken at Harvard, John von Neumann at the
Institute for Advanced Study in Princeton, J. Presper Eckert and John Mauchly at the
University of Pennsylvania, and Konrad Zuse in Germany, among others, all succeeded in building
calculating engines. The first ones used mechanical relays but were very slow, with cycle times
measured in seconds. Relays were later replaced by vacuum tubes. These machines were
enormous, filling up entire rooms with tens of thousands of vacuum tubes, but they were still
millions of times slower than even the cheapest personal computers available today.
In these early days, a single group of people designed, built, programmed, operated, and
maintained each machine. All programming was done in absolute machine language, often by
wiring up plugboards to control the machine’s basic functions. Programming languages were unknown (even assembly language was unknown). Operating systems were unheard of. The usual mode of operation was for the programmer to sign up for a block of time on the signup sheet on the wall, then come down to the machine room, insert his or her plugboard into the computer, and
spend the next few hours hoping that none of the 20,000 or so vacuum tubes would burn out during
the run. Virtually all the problems were straightforward numerical calculations, such as grinding
out tables of sines, cosines, and logarithms.
By the early 1950s, the routine had improved somewhat with the introduction of punched cards. It
was now possible to write programs on cards and read them in instead of using plugboards;
otherwise, the procedure was the same.

The Second Generation (1955-65) Transistors and Batch Systems


The introduction of the transistor in the mid-1950s changed the picture radically. Computers
became reliable enough that they could be manufactured and sold to paying customers with the
expectation that they would continue to function long enough to get some useful work done. For
the first time, there was a clear separation between designers, builders, operators, programmers,
and maintenance personnel.

These machines, now called mainframes, were locked away in specially air-conditioned computer
rooms, with staffs of professional operators to run them. Only big corporations or major
government agencies or universities could afford the multimillion dollar price tag. To run a job
(i.e., a program or set of programs), a programmer would first write the program on paper (in
FORTRAN or assembler), then punch it on cards. He would then bring the card deck down to the
input room and hand it to one of the operators and go drink coffee until the output was ready.
When the computer finished whatever job it was currently running, an operator would go over to
the printer and tear off the output and carry it over to the output room, so that the programmer
could collect it later. Then he would take one of the card decks that had been brought from the
input room and read it in. If the FORTRAN compiler was needed, the operator would have to get
it from a file cabinet and read it in. Much computer time was wasted while operators were walking
around the machine room.
The solution generally adopted was the batch system. The idea behind it was to collect a tray
full of jobs in the input room and then read them onto a magnetic tape using a small, relatively inexpensive computer, such as the IBM 1401, which was very good at reading cards, copying
tapes, and printing output, but not at all good at numerical calculations. Other, much more
expensive machines, such as the IBM 7094, were used for the real computing.

Fig: An Early Batch System


(a) Programmers bring cards to 1401
(b) 1401 reads batch of jobs onto tape
(c) Operator carries input tape to 7094
(d) 7094 does computing
(e) Operator carries output tape to 1401
(f) 1401 prints output
The Third Generation (1965-1980) ICs and Multiprogramming
By the early 1960s, most computer manufacturers had two distinct, and totally incompatible,
product lines. On the one hand there were the word-oriented, large-scale scientific computers, such
as the 7094, which were used for numerical calculations in science and engineering. On the other
hand, there were the character-oriented, commercial computers, such as the 1401, which were
widely used for tape sorting and printing by banks and insurance companies.
IBM attempted to solve the problem of maintaining two incompatible product lines at a single stroke by introducing the System/360.
The 360 was a series of software-compatible machines ranging from 1401-sized to much more
powerful than the 7094. The machines differed only in price and performance (maximum
memory, processor speed, number of I/O devices permitted, and so forth). Since all the machines
had the same architecture and instruction set, programs written for one machine could run on all
the others, at least in theory. Furthermore, the 360 was designed to handle both scientific (i.e.,
numerical) and commercial computing. Thus a single family of machines could satisfy the needs
of all customers. In subsequent years, IBM has come out with compatible successors to the 360
line, using more modern technology, known as the 370, 4300, 3080, and 3090 series.

The greatest strength of the “one family” idea was simultaneously its greatest weakness. The
intention was that all software, including the operating system OS/360, had to work on all models. It had to run on small systems, which often just replaced 1401s for copying cards to
tape, and on very large systems, which often replaced 7094s for doing weather forecasting and
other heavy computing. It had to be good on systems with few peripherals and on systems with
many peripherals. It had to work in commercial environments and in scientific environments.
Above all, it had to be efficient for all of these different uses.
Despite its enormous size and problems, OS/360 and the similar third-generation operating
systems produced by other computer manufacturers actually satisfied most of their customers
reasonably well. They also popularized several key techniques absent in second-generation
operating systems. Probably the most important of these was multiprogramming. On the 7094,
when the current job paused to wait for a tape or other I/O operation to complete, the CPU simply
sat idle until the I/O finished. With heavily CPU-bound scientific calculations, I/O is infrequent,
so this wasted time is not significant. With commercial data processing, the I/O wait time can often
be 80 or 90 percent of the total time, so something had to be done to avoid having the (expensive)
CPU be idle so much.

Fig: A multiprogramming system with three jobs in memory.
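The benefit the figure illustrates can be quantified with a simple probabilistic model: if a typical process spends a fraction p of its time waiting for I/O, and n processes are in memory at once, the probability that all n are waiting simultaneously is p^n, so CPU utilization is approximately 1 - p^n. The short C program below tabulates this; the chosen p of 80 percent (typical of commercial data processing, per the discussion above) and the range of n are illustrative.

/* Back-of-the-envelope CPU utilization under multiprogramming:
 * with each job waiting for I/O a fraction p of the time and n jobs
 * in memory, utilization is approximately 1 - p^n.
 * The values of p and n below are illustrative only. */
#include <stdio.h>
#include <math.h>

int main(void) {
    double p = 0.80;                 /* 80% I/O wait, as in commercial workloads */
    for (int n = 1; n <= 5; n++)
        printf("n = %d  utilization = %.0f%%\n", n, 100.0 * (1.0 - pow(p, n)));
    return 0;
}

With p = 0.80, one job keeps the CPU busy only 20 percent of the time, three jobs about 49 percent, and five jobs about 67 percent, which is why keeping several jobs in memory at once pays off.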


The Fourth Generation (1980-Present) Personal Computers
With the development of LSI (Large Scale Integration) circuits, chips containing thousands of
transistors on a square centimeter of silicon, the age of the personal computer dawned. In terms of
architecture, personal computers (initially called microcomputers) were not all that different from
minicomputers of the PDP-11 class, but in terms of price they certainly were different. Where the
minicomputer made it possible for a department in a company or university to have its own
computer, the microprocessor chip made it possible for a single individual to have his or her own
personal computer.
In 1974, when Intel came out with the 8080, the first general-purpose 8-bit CPU, it wanted an
operating system for the 8080, in part to be able to test it. Intel asked one of its consultants, Gary
Kildall, to write one. Kildall and a friend first built a controller for the newly-released Shugart
Associates 8-inch floppy disk and hooked the floppy disk up to the 8080, thus producing the first
microcomputer with a disk. Kildall then wrote a disk-based operating system called CP/M
(Control Program for Microcomputers) for it.
In the early 1980s, IBM designed the IBM PC and looked around for software to run on it. People
from IBM contacted Bill Gates to license his BASIC interpreter. They also asked him if he knew of an operating system to run on the PC. Gates suggested that IBM contact Digital Research, then
the world’s dominant operating systems company. Making what was surely the worst business
decision in recorded history, Kildall refused to meet with IBM, sending a subordinate instead. To
make matters worse, his lawyer even refused to sign IBM’s nondisclosure agreement covering the not-yet-announced PC. Consequently, IBM went back to Gates asking if he could provide them
with an operating system.
When IBM came back, Gates realized that a local computer manufacturer, Seattle Computer Products, had a suitable operating system, DOS (Disk Operating System). He approached them and asked to buy it (allegedly for $50,000), which they readily accepted. Gates
then offered IBM a DOS/BASIC package which IBM accepted. IBM wanted certain
modifications, so Gates hired the person who wrote DOS, Tim Paterson, as an employee of Gates’
fledgling company, Microsoft, to make them. The revised system was renamed MS-DOS
(MicroSoft Disk Operating System) and quickly came to dominate the IBM PC market.
CP/M, MS-DOS, and other operating systems for early microcomputers were all based on users
typing in commands from the keyboard. That eventually changed due to research done by Doug
Engelbart at Stanford Research Institute in the 1960s. Engelbart invented the GUI (Graphical
User Interface), pronounced “gooey,” complete with windows, icons, menus, and mouse. These
ideas were adopted by researchers at Xerox PARC and incorporated into machines they built.
One day, Steve Jobs, who co-invented the Apple computer in his garage, visited PARC, saw a
GUI, and instantly realized its potential value; something Xerox management famously did not
(Smith and Alexander, 1988). Jobs then embarked on building an Apple with a GUI. This project
led to the Lisa, which was too expensive and failed commercially. Jobs’ second attempt, the Apple
Macintosh, was a huge success, not only because it was much cheaper than the Lisa, but also
because it was user friendly, meaning that it was intended for users who not only knew nothing
about computers but furthermore had absolutely no intention whatsoever of learning.
Another Microsoft operating system is Windows NT (NT stands for New Technology), which is
compatible with Windows 95 at a certain level, but a complete rewrite from scratch internally. It
is a full 32-bit system.
An interesting development that began taking place during the mid-1980s is the growth of
networks of personal computers running network operating systems and distributed operating
systems. In a network operating system, the users are aware of the existence of multiple computers
and can log in to remote machines and copy files from one machine to another. Each machine runs
its own local operating system and has its own local user (or users).
The Fifth Generation (1990–Present): Mobile Computers
Ever since detective Dick Tracy started talking to his “two-way radio wrist watch” in the 1940s
comic strip, people have craved a communication device they could carry around wherever they
went. The first real mobile phone appeared in 1946 and weighed some 40 kilos. You could take it
wherever you went as long as you had a car in which to carry it.
The first true handheld phone appeared in the 1970s and, at roughly one kilogram, was positively
featherweight. It was affectionately known as “the brick.” Pretty soon everybody wanted one.
Today, mobile phone penetration is close to 90% of the global population. We can make calls not
just with our portable phones and wrist watches, but soon with eyeglasses and other wearable items.
Moreover, the phone part is no longer that interesting. We receive email, surf the Web, text our
friends, play games, navigate around heavy traffic—and do not even think twice about it.
While the idea of combining telephony and computing in a phone-like device has been around
since the 1970s also, the first real smartphone did not appear until the mid-1990s when Nokia
released the N9000, which literally combined two, mostly separate devices: a phone and a PDA
(Personal Digital Assistant). In 1997, Ericsson coined the term smartphone for its GS88
“Penelope.”
Now that smartphones have become ubiquitous, the competition between the various operating
systems is fierce and the outcome is even less clear than in the PC world. At the time of writing,
Google’s Android is the dominant operating system with Apple’s iOS a clear second, but this was
not always the case and all may be different again in just a few years. If anything is clear in the
world of smartphones, it is that it is not easy to stay king of the mountain for long.
After all, most smartphones in the first decade after their inception were running Symbian OS. It
was the operating system of choice for popular brands like Samsung, Sony Ericsson, Motorola,
and especially Nokia. However, other operating systems like RIM’s Blackberry OS (introduced
for smartphones in 2002) and Apple’s iOS (released for the first iPhone in 2007) started eating
into Symbian’s market share. Many expected that RIM would dominate the business market, while
iOS would be the king of the consumer devices. Symbian’s market share plummeted. In 2011,
Nokia ditched Symbian and announced it would focus on Windows Phone as its primary platform.
For some time, Apple and RIM were the toast of the town (although not nearly as dominant as
Symbian had been), but it did not take very long for Android, a Linux-based operating system
released by Google in 2008, to overtake all its rivals.
For phone manufacturers, Android had the advantage that it was open source and available under
a permissive license. As a result, they could tinker with it and adapt it to their own hardware with
ease. Also, it has a huge community of developers writing apps, mostly in the familiar Java
programming language. Even so, the past years have shown that the dominance may not last, and
Android’s competitors are eager to claw back some of its market share.

Types of Operating System


1. Mainframe Operating Systems

At the high end are the operating systems for mainframes, those room-sized computers still found
in major corporate data centers. These computers differ from personal computers in terms of their
I/O capacity. A mainframe with 1000 disks and millions of gigabytes of data is not unusual; a
personal computer with these specifications would be the envy of its friends. Mainframes are also
making something of a comeback as high-end Web servers, servers for large-scale electronic
commerce sites, and servers for business-to-business transactions.
The operating systems for mainframes are heavily oriented toward processing many jobs at once,
most of which need prodigious amounts of I/O. They typically offer three kinds of services: batch,
transaction processing, and timesharing. A batch system is one that processes routine jobs without
any interactive user present. Claims processing in an insurance company or sales reporting for a
chain of stores is typically done in batch mode. Transaction-processing systems handle large
numbers of small requests, for example, check processing at a bank or airline reservations. Each
unit of work is small, but the system must handle hundreds or thousands per second. Timesharing
systems allow multiple remote users to run jobs on the computer at once, such as querying a big
database. These functions are closely related; mainframe operating systems often perform all of
them. An example mainframe operating system is OS/390, a descendant of OS/360. However,
mainframe operating systems are gradually being replaced by UNIX variants such as Linux.

2. Server Operating Systems

One level down are the server operating systems. They run on servers, which are either very large
personal computers, workstations, or even mainframes. They serve multiple users at once over a
network and allow the users to share hardware and software resources. Servers can provide print
service, file service, or Web service. Internet providers run many server machines to support their customers, and Web sites use servers to store the Web pages and handle the incoming requests. Typical server operating systems are Solaris, FreeBSD, Linux, and Windows Server 201x.
3. Multiprocessor Operating Systems

An increasingly common way to get major-league computing power is to connect multiple CPUs
into a single system. Depending on precisely how they are connected and what is shared, these
systems are called parallel computers, multicomputers, or multiprocessors. They need special
operating systems, but often these are variations on the server operating systems, with special
features for communication, connectivity, and consistency.
With the recent advent of multicore chips for personal computers, even conventional desktop and
notebook operating systems are starting to deal with at least small-scale multiprocessors and the
number of cores is likely to grow over time. Luckily, quite a bit is known about multiprocessor
operating systems from years of previous research, so using this knowledge in multicore systems
should not be hard. The hard part will be having applications make use of all this computing power.
Many popular operating systems, including Windows and Linux, run on multiprocessors.

4. Personal Computer Operating Systems

The next category is the personal computer operating system. Modern ones all support
multiprogramming, often with dozens of programs started up at boot time. Their job is to provide
good support to a single user. They are widely used for word processing, spreadsheets, games, and
Internet access. Common examples are Linux, FreeBSD, Windows 7, Windows 8, and Apple’s OS
X. Personal computer operating systems are so widely known that probably little introduction is
needed. In fact, many people are not even aware that other kinds exist.

5. Handheld Computer Operating Systems

Continuing on down to smaller and smaller systems, we come to tablets, smartphones and other
handheld computers. A handheld computer, originally known as a PDA (Personal Digital
Assistant), is a small computer that can be held in your hand during operation. Smartphones and
tablets are the best-known examples. As we have already seen, this market is currently dominated
by Google’s Android and Apple’s iOS, but they have many competitors. Most of these devices
boast multicore CPUs, GPS, cameras and other sensors, copious amounts of memory, and
sophisticated operating systems. Moreover, all of them have more third-party applications (“apps”) than you can shake a (USB) stick at.
6. Embedded Operating Systems

Embedded systems run on the computers that control devices that are not generally thought of as
computers and which do not accept user-installed software. Typical examples are microwave
ovens, TV sets, cars, DVD recorders, traditional phones, and MP3 players. The main property
which distinguishes embedded systems from handhelds is the certainty that no untrusted software will ever run on them. You cannot download new applications to your microwave oven—all the
software is in ROM. This means that there is no need for protection between applications, leading
to design simplification. Systems such as Embedded Linux, QNX and VxWorks are popular in this
domain.

7. Sensor-Node Operating Systems

Networks of tiny sensor nodes are being deployed for numerous purposes. These nodes are tiny
computers that communicate with each other and with a base station using wireless
communication. Sensor networks are used to protect the perimeters of buildings, guard national
borders, detect fires in forests, measure temperature and precipitation for weather forecasting,
glean information about enemy movements on battlefields, and much more.
The sensors are small battery-powered computers with built-in radios. They have limited power
and must work for long periods of time unattended outdoors, frequently in environmentally harsh
conditions. The network must be robust enough to tolerate failures of individual nodes, which
happen with ever-increasing frequency as the batteries begin to run down.
Each sensor node is a real computer, with a CPU, RAM, ROM, and one or more environmental
sensors. It runs a small, but real operating system, usually one that is event driven, responding to
external events or making measurements periodically based on an internal clock. The operating
system has to be small and simple because the nodes have little RAM and battery lifetime is a
major issue. Also, as with embedded systems, all the programs are loaded in advance; users do not
suddenly start programs they downloaded from the Internet, which makes the design much simpler.
TinyOS is a well-known operating system for a sensor node.
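The event-driven style described above can be sketched in ordinary C. Note that this is not TinyOS code (TinyOS programs are written in the nesC dialect); it is a generic illustration only, and the event source is faked so the sketch runs on its own.

/* Generic sketch of an event-driven sensor-node loop; NOT TinyOS code
 * (TinyOS uses the nesC dialect). The event source below is faked so
 * the example is self-contained; a real node would sleep in
 * wait_for_event() with the CPU halted until an interrupt fires. */
#include <stdio.h>

enum event { EV_TIMER, EV_RADIO, EV_SHUTDOWN };

static enum event wait_for_event(void) {
    static int ticks = 0;
    return (ticks++ < 3) ? EV_TIMER : EV_SHUTDOWN;   /* fake three timer ticks */
}

static void sample_sensor_and_send(void) { puts("sample + transmit"); }

int main(void) {
    for (;;) {
        switch (wait_for_event()) {
        case EV_TIMER:    sample_sensor_and_send(); break;  /* periodic measurement */
        case EV_RADIO:    /* a real node would process the packet here */ break;
        case EV_SHUTDOWN: return 0;
        }
    }
}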

8. Real-Time Operating Systems

Another type of operating system is the real-time system. These systems are characterized by
having time as a key parameter. For example, in industrial process-control systems, real-time
computers have to collect data about the production process and use it to control machines in the
factory. Often there are hard deadlines that must be met. For example, if a car is moving down an
assembly line, certain actions must take place at certain instants of time. If, for example, a welding
robot welds too early or too late, the car will be ruined. If the action absolutely must occur at a
certain moment (or within a certain range), we have a hard real-time system. Many of these are
found in industrial process control, avionics, military, and similar application areas. These systems
must provide absolute guarantees that a certain action will occur by a certain time.
A soft real-time system is one where missing an occasional deadline, while not desirable, is acceptable and does not cause any permanent damage. Digital audio or multimedia systems fall in this category. Smartphones are also soft real-time systems.
Since meeting deadlines is crucial in (hard) real-time systems, sometimes the operating system is simply a library linked in with the application programs, with everything tightly coupled and no protection between parts of the system. An example of this type of real-time system is eCos.
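The basic building block of such systems is the periodic task. Below is a hedged sketch assuming a POSIX system with clock_nanosleep (e.g., Linux): the loop does its work and then sleeps until an absolute deadline 10 ms later, so timing drift does not accumulate. On a general-purpose kernel this is at best soft real-time, since no deadline is actually guaranteed; the period and iteration count are illustrative.

/* Sketch of a periodic task: do some work every 10 ms, sleeping until
 * the next absolute deadline. Assumes POSIX clock_nanosleep (e.g.,
 * Linux); without an RTOS underneath, deadlines are not guaranteed. */
#include <stdio.h>
#include <time.h>

int main(void) {
    struct timespec next;
    clock_gettime(CLOCK_MONOTONIC, &next);
    for (int i = 0; i < 5; i++) {
        printf("tick %d\n", i);            /* control or sampling work goes here */
        next.tv_nsec += 10 * 1000 * 1000;  /* next deadline: +10 ms */
        if (next.tv_nsec >= 1000000000L) { /* normalize the timespec */
            next.tv_nsec -= 1000000000L;
            next.tv_sec += 1;
        }
        /* sleep until the absolute deadline so drift does not accumulate */
        clock_nanosleep(CLOCK_MONOTONIC, TIMER_ABSTIME, &next, NULL);
    }
    return 0;
}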
The categories of handhelds, embedded systems, and real-time systems overlap considerably.
Nearly all of them have at least some soft real-time aspects. The embedded and real-time systems
run only software put in by the system designers; users cannot add their own software, which makes
protection easier. The handhelds and embedded systems are intended for consumers, whereas real-
time systems are more for industrial usage. Nevertheless, they have a certain amount in common.

9. Smart Card Operating Systems

The smallest operating systems run on smart cards, which are credit-card-sized devices containing
a CPU chip. They have very severe processing power and memory constraints. Some are powered
by contacts in the reader into which they are inserted, but contactless smart cards are inductively
powered, which greatly limits what they can do. Some of them can handle only a single function,
such as electronic payments, but others can handle multiple functions. Often these are proprietary
systems.
Some smart cards are Java oriented. This means that the ROM on the smart card holds an interpreter
for the Java Virtual Machine (JVM). Java applets (small programs) are downloaded to the card and
are interpreted by the JVM interpreter. Some of these cards can handle multiple Java applets at the
same time, leading to multiprogramming and the need to schedule them. Resource management
and protection also become an issue when two or more applets are present at the same time. These
issues must be handled by the (usually extremely primitive) operating system present on the card.

Functions of Operating System
Following are some of the important functions of an operating system.

1. Memory Management
2. Processor Management
3. Device Management
4. File Management
5. Security
6. Control over system performance
7. Job accounting
8. Error detecting aids
9. Coordination between other software and users

1. Memory Management
Memory management refers to management of Primary Memory or Main Memory. Main memory
is a large array of words or bytes where each word or byte has its own address.
Main memory provides fast storage that can be accessed directly by the CPU. For a program to be executed, it must be in main memory. An Operating System does the following activities for memory management (a short sketch follows the list) −
• Keeps track of primary memory, i.e., which parts of it are in use and by whom, and which parts are free.
• In multiprogramming, the OS decides which process will get memory, when, and how much.
• Allocates memory when a process requests it.
• De-allocates memory when a process no longer needs it or has terminated.
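As a concrete illustration of the last two activities, the minimal sketch below, assuming a POSIX system, asks the kernel for one page of anonymous memory with mmap and returns it with munmap; this is the kind of primitive that library allocators such as malloc are built on.

/* Minimal sketch of a process asking the OS for memory and giving it
 * back, assuming a POSIX system with mmap/munmap. The OS records which
 * pages belong to which process and reclaims them when released. */
#include <stdio.h>
#include <sys/mman.h>

int main(void) {
    size_t len = 4096;                   /* one page, a typical allocation unit */
    char *p = mmap(NULL, len, PROT_READ | PROT_WRITE,
                   MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);   /* allocate */
    if (p == MAP_FAILED) {
        perror("mmap");
        return 1;
    }
    p[0] = 'A';                          /* the memory is now usable */
    printf("page mapped at %p, first byte %c\n", (void *)p, p[0]);
    munmap(p, len);                      /* de-allocate: return the page to the OS */
    return 0;
}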

2. Processor Management
In a multiprogramming environment, the OS decides which process gets the processor, when, and for how much time. This function is called process scheduling. An Operating System does the following activities for processor management (a toy example follows the list) −
• Keeps track of the processor and the status of each process. The program responsible for this task is known as the traffic controller.
• Allocates the processor (CPU) to a process.
• De-allocates the processor when a process no longer requires it.
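The allocation decision can be illustrated with a toy round-robin scheduler in C: each ready "process" receives a fixed time slice (quantum) in turn until it finishes. Real schedulers track process state, priorities, and blocked queues; the processes and numbers here are invented for the example.

/* Toy round-robin scheduling: each unfinished "process" runs for at
 * most one quantum per turn. This only illustrates the allocation
 * decision; the workloads below are invented for the example. */
#include <stdio.h>

int main(void) {
    int remaining[3] = {5, 3, 8};        /* time units each process still needs */
    int quantum = 2, done = 0;
    while (done < 3) {
        for (int p = 0; p < 3; p++) {
            if (remaining[p] <= 0) continue;         /* skip finished processes */
            int run = remaining[p] < quantum ? remaining[p] : quantum;
            remaining[p] -= run;                     /* allocate the CPU for one slice */
            printf("P%d runs %d unit(s), %d left\n", p, run, remaining[p]);
            if (remaining[p] == 0) done++;           /* process finished */
        }
    }
    return 0;
}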

3. Device Management
An Operating System manages device communication via the devices’ respective drivers. It does the following activities for device management (a short sketch follows the list) −
• Keeps track of all devices. The program responsible for this task is known as the I/O controller.
• Decides which process gets the device, when, and for how much time.
• Allocates devices efficiently.
• De-allocates devices.
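Device communication through a driver can be sketched on a Unix-like system, where devices appear as files under /dev and the driver sits behind ordinary open/read/close calls; /dev/urandom is used here only as a convenient example device.

/* Sketch of device communication via a driver, assuming a Unix-like
 * system where devices appear as files under /dev. The device
 * "/dev/urandom" is only a convenient example. */
#include <stdio.h>
#include <fcntl.h>
#include <unistd.h>

int main(void) {
    int fd = open("/dev/urandom", O_RDONLY);   /* request the device from the OS */
    if (fd < 0) {
        perror("open");
        return 1;
    }
    unsigned char byte;
    if (read(fd, &byte, 1) == 1)       /* the driver does the actual hardware I/O */
        printf("one byte from the device: %u\n", byte);
    close(fd);                         /* release (de-allocate) the device */
    return 0;
}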

4. File Management
A file system is normally organized into directories for easy navigation and usage. These directories may contain files and other directories.
An Operating System does the following activities for file management (a short sketch follows the list) −
• Keeps track of information: location, usage, status, etc. The collective facilities are often known as the file system.
• Decides who gets the resources.
• Allocates the resources.
• De-allocates the resources.
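These services can be seen from a program with a few POSIX calls: create a directory, create a file inside it, and ask the OS for the file’s status. A minimal sketch; the names demo_dir and demo.txt are illustrative.

/* Minimal sketch of file-management services on a POSIX system:
 * make a directory, create a file in it, and query its status.
 * The names "demo_dir" and "demo.txt" are illustrative only. */
#include <stdio.h>
#include <sys/stat.h>
#include <fcntl.h>
#include <unistd.h>

int main(void) {
    mkdir("demo_dir", 0755);                     /* directories organize files */
    int fd = open("demo_dir/demo.txt", O_CREAT | O_WRONLY, 0644);
    if (fd < 0) {
        perror("open");
        return 1;
    }
    write(fd, "hello\n", 6);                     /* the OS tracks location and size */
    close(fd);

    struct stat st;
    if (stat("demo_dir/demo.txt", &st) == 0)     /* ask the file system for status */
        printf("size: %lld bytes\n", (long long)st.st_size);
    return 0;
}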

5. Security
The operating system uses password protection and other similar techniques to protect user data. It also prevents unauthorized access to programs and user data.

6. Control over system performance


The OS monitors overall system health to help improve performance. It records the response time between service requests and system responses to get a complete view of system health. This information can help improve performance by revealing what is needed to troubleshoot problems.

7. Job accounting
The operating system keeps track of the time and resources used by various tasks and users. This information can be used to track resource usage for a particular user or group of users.

8. Error detecting aids


The operating system constantly monitors the system to detect errors and avoid malfunctioning of the computer system.

9. Coordination between other software and users


Operating systems also coordinate and assign interpreters, compilers, assemblers and other
software to the various users of the computer systems.
