
COSC 0120 LECTURE NOTES

Operating System - Overview


An Operating System (OS) is an interface between a computer user and the computer
hardware. An operating system is software that performs all the basic tasks
like file management, memory management, process management, handling input
and output, and controlling peripheral devices such as disk drives and printers.
Some popular operating systems include Linux, Windows, VMS, OS/400, AIX, z/OS, etc.
Definition
An operating system is a program that acts as an interface between the user and
the computer hardware and controls the execution of all kinds of programs.
Following are some of the important functions of an operating system.

 Memory Management
 Processor Management
 Device Management
 File Management
 Security
 Control over system performance
 Job accounting
 Error detecting aids
 Coordination between other software and users

Memory Management
Memory management refers to management of Primary Memory or Main
Memory. Main memory is a large array of words or bytes where each word or
byte has its own address.
Main memory provides fast storage that can be accessed directly by the CPU.
For a program to be executed, it must be in main memory. An Operating System
does the following activities for memory management −
 Keeps track of primary memory, i.e., which parts of it are in use, by whom, and
which parts are not in use.
 In multiprogramming, the OS decides which process will get memory, when,
and how much.
 Allocates memory when a process requests it (see the sketch after this list).
 De-allocates the memory when a process no longer needs it or has been
terminated.
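
For illustration, the sketch below (a minimal C program; the array size is arbitrary) shows these activities from a process's point of view: the program requests memory, the system allocates it or reports failure, and the memory is de-allocated when it is no longer needed or when the process terminates.

#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    /* The process requests memory; the system allocates a region of the
       process's address space and records that it is now in use. */
    int *table = malloc(100 * sizeof *table);
    if (table == NULL) {            /* the request can fail if no memory is free */
        perror("malloc");
        return 1;
    }

    for (int i = 0; i < 100; i++)   /* use the allocated region */
        table[i] = i * i;
    printf("table[10] = %d\n", table[10]);

    /* The memory is no longer needed, so it is de-allocated; anything still
       held when the process terminates is reclaimed by the operating system. */
    free(table);
    return 0;
}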
Processor Management
In a multiprogramming environment, the OS decides which process gets the
processor, when, and for how much time. This function is called process
scheduling. An Operating System does the following activities for processor
management −
 Keeps track of the processor and the status of each process. The program
responsible for this task is known as the traffic controller.
 Allocates the processor (CPU) to a process (see the round-robin sketch after this list).
 De-allocates the processor when a process no longer requires it.
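
As a rough illustration of time-sliced scheduling (not any particular OS's implementation; the process table and time units below are made up), the following C sketch keeps a status entry for each process and shares the CPU among them round-robin:

#include <stdio.h>

enum state { READY, RUNNING, TERMINATED };   /* states the OS might track */

struct pcb {                 /* a simplified process control block */
    int        pid;
    enum state state;
    int        remaining;    /* units of work the process still needs */
};

int main(void)
{
    struct pcb procs[] = { {1, READY, 3}, {2, READY, 5}, {3, READY, 2} };
    int n = 3, finished = 0, quantum = 1;

    /* Round-robin scheduling: each READY process gets the CPU for one
       quantum in turn, then the CPU is handed to the next process. */
    while (finished < n) {
        for (int i = 0; i < n; i++) {
            if (procs[i].state == TERMINATED)
                continue;
            procs[i].state = RUNNING;           /* allocate the CPU */
            procs[i].remaining -= quantum;      /* the process runs */
            printf("pid %d ran; %d units left\n", procs[i].pid, procs[i].remaining);
            if (procs[i].remaining <= 0) {
                procs[i].state = TERMINATED;    /* process is done */
                finished++;
            } else {
                procs[i].state = READY;         /* de-allocate, back in the queue */
            }
        }
    }
    return 0;
}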
Device Management
An Operating System manages device communication via their respective drivers.
It does the following activities for device management −
 Keeps track of all devices. The program responsible for this task is known as
the I/O controller.
 Decides which process gets the device, when, and for how much time.
 Allocates devices in an efficient way.
 De-allocates devices.
File Management
A file system is normally organized into directories for easy navigation and usage.
These directories may contain files and other directories.
An Operating System does the following activities for file management −
 Keeps track of information, location, uses, status, etc. These collective
facilities are often known as the file system.
 Decides who gets the resources.
 Allocates the resources (see the short example after this list).
 De-allocates the resources.
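
The short C example below (the file name notes.txt is purely hypothetical) shows the user-level side of these activities: the program asks the operating system to locate and open a file, reads it through the file system, and then releases it.

#include <stdio.h>

int main(void)
{
    /* Ask the OS to locate and open the file for reading. */
    FILE *fp = fopen("notes.txt", "r");
    if (fp == NULL) {
        perror("fopen");            /* the OS reports why the request failed */
        return 1;
    }

    char line[256];
    while (fgets(line, sizeof line, fp) != NULL)   /* read through the file system */
        fputs(line, stdout);

    fclose(fp);                     /* release the file resource back to the OS */
    return 0;
}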
Other Important Activities
Following are some of the important activities that an Operating System performs −
 Security − By means of passwords and similar techniques, it prevents
unauthorized access to programs and data.
 Control over system performance − Recording delays between a request for
a service and the system's response.
 Job accounting − Keeping track of time and resources used by various jobs
and users.
 Error detecting aids − Production of dumps, traces, error messages, and
other debugging and error-detecting aids.
 Coordination between other software and users − Coordination and
assignment of compilers, interpreters, assemblers and other software to the
various users of the computer system.
History of Operating Systems
The First Generation (1940's to early 1950's)
When electronic computers were first introduced in the 1940s, they were created without any
operating systems. All programming was done in absolute machine language, often by wiring up
plugboards to control the machine's basic functions. Since these computers were generally used
to solve simple math calculations, operating systems were not needed.

The Second Generation (1955-1965)


The first operating system was introduced in the early 1950s. It was called GMOS and was
created by General Motors for IBM's machine, the 701. Operating systems in the 1950s were
called single-stream batch processing systems because the data was submitted in groups. These
new machines were called mainframes, and they were used by professional operators. Because
there was such a high price tag on these machines, only government agencies or large
corporations could afford them.

The Third Generation (1965-1980)


By the late 1960s, operating system designers were able to develop the system of
multiprogramming, in which a computer can perform multiple jobs at the same time. The
introduction of multiprogramming was a major step in the development of operating systems
because it allowed a CPU to be busy nearly 100 percent of the time that it was in operation.
Another major development during the third generation was the phenomenal growth of
minicomputers, starting with the DEC PDP-1. The PDP-1 had only 4K of 18-bit words, but at
$120,000 per machine (less than 5 percent of the price of a 7094) it sold like hotcakes. These
minicomputers helped create a whole new industry and the development of more PDPs, which
eventually led to the creation of the personal computers of the fourth generation.
The Fourth Generation (1980-Present Day)
The fourth generation of operating systems saw the creation of personal computing. Although
these computers were very similar to the minicomputers developed in the third generation,
personal computers cost a very small fraction of what minicomputers cost. A personal computer
was so affordable that a single individual could own one, while minicomputers were still at such
a high price that only corporations could afford them. One of the major factors in the creation of
personal computing was the birth of Microsoft and the Windows operating system. The company
was created in 1975 when Paul Allen and Bill Gates had a vision to take personal computing to
the next level. They introduced MS-DOS in 1981; although it was effective, it created much
difficulty for people who tried to understand its cryptic commands. Windows went on to become
the largest operating system used in technology today, with releases of Windows 95, Windows 98,
Windows XP (which is currently the most used operating system to this day), and their newest
operating systems. Apple is the other major operating system created in the 1980s. Steve Jobs,
co-founder of Apple, created the Apple Macintosh, which was a huge success due to the fact that
it was so user friendly. Windows development throughout the later years was influenced by the
Macintosh, and it created a strong competition between the two companies. Today all of our
electronic devices have operating systems, from our computers and smartphones to ATM
machines and motor vehicles. And as technology advances, so do operating systems.
QUESTION:
What is the difference between a kernel and an OS?

//END OF LECTURE 1

LEC II:

Classification of Operating System


i) Multiuser OS:

In a multiuser OS, more than one user can use the same system at the same time
through multiple I/O terminals or through the network.
For example: Windows, Linux, Mac, etc.
A multiuser OS uses timesharing to support multiple users.

ii) Multiprocessing OS:

A multiprocessing OS can support the execution of multiple processes at the same
time. It uses multiple CPUs. It is more expensive, but its processing speed is
faster. It is complex in its execution. Operating systems like Unix and the 64-bit
and server editions of Windows are multiprocessing.

iii) Multiprogramming OS:

In a multiprogramming OS, more than one program can be used at the same time.
It may or may not be multiprocessing. In a single-CPU system, multiple programs
are executed one after another by dividing CPU time into small time slices.
For example: Windows, Mac, Linux, etc.

iv) Multitasking OS:

In a multitasking system, more than one task can be performed at the same time, but
they are executed one after another through a single CPU by time sharing. For
example: Windows, Linux, Mac, Unix, etc.
Multitasking OS are of two types:
a) Pre-emptive multitasking
b) Co-operative multitasking
In pre-emptive multitasking, the OS allots a CPU time slice to each program.
After each time slice, the CPU switches to another task. Example: Windows XP.
In co-operative multitasking, a task can control the CPU for as long as it requires;
however, it frees the CPU to execute another program when it no longer needs it.
Example: Windows 3.x, MultiFinder, etc.
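
The contrast can be sketched in a few lines of C (the tasks below are invented for the example). In co-operative multitasking, the next task can run only when the current one voluntarily returns control; in pre-emptive multitasking, a timer interrupt would force the switch instead.

#include <stdio.h>

/* Each task does one small unit of work and then returns, voluntarily
   handing the CPU back to the scheduler (the co-operative part). */
static void task_a(void) { puts("task A: one unit of work, then yield"); }
static void task_b(void) { puts("task B: one unit of work, then yield"); }

int main(void)
{
    void (*tasks[])(void) = { task_a, task_b };

    /* A co-operative scheduler can only move on when a task returns;
       a task that never returned would monopolise the CPU, which is the
       weakness pre-emption removes by forcing a switch after each slice. */
    for (int round = 0; round < 3; round++)
        for (int i = 0; i < 2; i++)
            tasks[i]();

    return 0;
}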

v) Multithreading:

A program in execution is known as a process. A process can be further divided into
multiple sub-processes. These sub-processes are known as threads. A multi-
threading OS can divide a process into threads and execute those threads. This
increases operating speed but also increases complexity. For example: Unix, and
server editions of Linux and Windows.
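
A minimal sketch of this idea using POSIX threads on a Unix-like system (compile with -pthread; the worker function and the labels are invented for the example):

#include <pthread.h>
#include <stdio.h>

/* The work carried out by each thread; the argument is just a label. */
static void *worker(void *arg)
{
    const char *name = arg;
    for (int i = 0; i < 3; i++)
        printf("thread %s: step %d\n", name, i);
    return NULL;
}

int main(void)
{
    pthread_t t1, t2;

    /* The single process is divided into two threads, which the OS
       schedules independently (possibly in parallel on multiple CPUs). */
    pthread_create(&t1, NULL, worker, "A");
    pthread_create(&t2, NULL, worker, "B");

    /* Wait for both threads to finish before the process exits. */
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    return 0;
}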

vi) Batch Processing:

Batch processing is a group processing system in which all the required input
for all the processing tasks is provided initially. The results of all the tasks are
provided after the completion of all the processing. Its main features are:

1. Multiple tasks are processed
2. The user cannot provide input in between the processing
3. It is appropriate only when all the inputs are known in advance
4. It requires large memory
5. CPU idle time is less
6. The printer is the appropriate output device
7. It is an old processing technique and is rarely used at present

vii) Online Processing:

It is an individual processing system in which each task is processed on an individual
basis as soon as it is provided by the user. It has features like:

1. An individual task is processed at a time
2. The user can provide input in between processing
3. It is appropriate when all inputs are not known in advance
4. It doesn’t require large memory
5. CPU idle time is more
6. The monitor is the appropriate output device
7. It is a modern processing technique and is mostly used at present

Types of Operating System


Operating systems have existed since the very first computer generation, and they keep
evolving with time. In this chapter, we will discuss some of the important types of
operating systems which are most commonly used.
Batch operating system
The users of a batch operating system do not interact with the computer directly.
Each user prepares his job on an off-line device like punch cards and submits it to
the computer operator. To speed up processing, jobs with similar needs are
batched together and run as a group. The programmers leave their programs with
the operator and the operator then sorts the programs with similar requirements
into batches.
The problems with Batch Systems are as follows −

 Lack of interaction between the user and the job.
 CPU is often idle, because the speed of the mechanical I/O devices is slower
than the CPU.
 Difficult to provide the desired priority.
Time-sharing operating systems
Time-sharing is a technique which enables many people, located at various
terminals, to use a particular computer system at the same time. Time-sharing or
multitasking is a logical extension of multiprogramming. Processor's time which
is shared among multiple users simultaneously is termed as time-sharing.
The main difference between Multiprogrammed Batch Systems and Time-Sharing
Systems is that in case of Multiprogrammed batch systems, the objective is to
maximize processor use, whereas in Time-Sharing Systems, the objective is to
minimize response time.
Multiple jobs are executed by the CPU by switching between them, but the
switches occur so frequently that the user can receive an immediate response.
For example, in transaction processing, the processor executes each user
program in a short burst or quantum of computation. That is, if n users are present,
then each user gets a time quantum in turn. When the user submits a command, the
response time is a few seconds at most.
The operating system uses CPU scheduling and multiprogramming to provide
each user with a small portion of time. Computer systems that were designed
primarily as batch systems have been modified to time-sharing systems.
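As a rough worked example (the numbers are assumed purely for illustration), if 10 users are
active and each receives a quantum of 100 ms, the CPU cycles through all of them in about one
second, so a user waits at most roughly 900 ms between turns; keeping the number of users
times the quantum small is what makes the system feel responsive.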
Advantages of Timesharing operating systems are as follows −

 Provides the advantage of quick response.
 Avoids duplication of software.
 Reduces CPU idle time.
Disadvantages of Time-sharing operating systems are as follows −

 Problem of reliability.
 Question of security and integrity of user programs and data.
 Problem of data communication.
Distributed operating System
Distributed systems use multiple central processors to serve multiple real-time
applications and multiple users. Data processing jobs are distributed among the
processors accordingly.
The processors communicate with one another through various communication
lines (such as high-speed buses or telephone lines). These are referred to as loosely
coupled systems or distributed systems. Processors in a distributed system may
vary in size and function. These processors are referred to as sites, nodes, computers,
and so on.
The advantages of distributed systems are as follows −

 With resource sharing facility, a user at one site may be able to use the
resources available at another.
 Speeds up the exchange of data with one another, for example via electronic mail.
 If one site fails in a distributed system, the remaining sites can potentially
continue operating.
 Better service to the customers.
 Reduction of the load on the host computer.
 Reduction of delays in data processing.
Network operating System
A Network Operating System runs on a server and provides the server the
capability to manage data, users, groups, security, applications, and other
networking functions. The primary purpose of the network operating system is to
allow shared file and printer access among multiple computers in a network,
typically a local area network (LAN), a private network or to other networks.
Examples of network operating systems include Microsoft Windows Server 2003,
Microsoft Windows Server 2008, UNIX, Linux, Mac OS X, Novell NetWare, and
BSD.
The advantages of network operating systems are as follows −

 Centralized servers are highly stable.
 Security is server managed.
 Upgrades to new technologies and hardware can be easily integrated into the
system.
 Remote access to servers is possible from different locations and types of
systems.
The disadvantages of network operating systems are as follows −

 High cost of buying and running a server.
 Dependency on a central location for most operations.
 Regular maintenance and updates are required.
Real Time operating System
A real-time system is defined as a data processing system in which the time
interval required to process and respond to inputs is so small that it controls the
environment. The time taken by the system to respond to an input and display the
required updated information is termed the response time. So in this method, the
response time is much lower than in online processing.
Real-time systems are used when there are rigid time requirements on the
operation of a processor or the flow of data and real-time systems can be used as a
control device in a dedicated application. A real-time operating system must have
well-defined, fixed time constraints, otherwise the system will fail. Examples include
scientific experiments, medical imaging systems, industrial control systems,
weapon systems, robots, air traffic control systems, etc.
There are two types of real-time operating systems.
Hard real-time systems
Hard real-time systems guarantee that critical tasks complete on time. In hard real-
time systems, secondary storage is limited or missing and the data is stored in
ROM. In these systems, virtual memory is almost never found.
Soft real-time systems
Soft real-time systems are less restrictive. A critical real-time task gets priority
over other tasks and retains that priority until it completes. Soft real-time systems
have more limited utility than hard real-time systems. Examples include multimedia,
virtual reality, and advanced scientific projects like undersea exploration and
planetary rovers.
ARCHITECTURE OF OPERATING SYSTEMS:
OS ARCHITECTURE DEFINITION:

For an operating system to be a useful and convenient interface between the user
and the hardware, it must provide certain basic services, such as the ability to read
and write files, allocate and manage memory, make access control decisions, and
so forth. These services are provided by a number of routines that collectively
make up the operating system kernel. Applications invoke these routines through
the use of specific system calls. Because the kernel routines exist for the purposes
of supplying specific services, the operating system has an underlying structure
defined by these services. This underlying structure and its design are called
the system architecture. The terms system architecture and system structure are
used somewhat synonymously.
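
As a small illustration (a POSIX system is assumed here; other operating systems expose their kernel services through different system calls), an application can request the kernel's write service through the corresponding system call:

#include <unistd.h>      /* POSIX system call wrappers */
#include <string.h>

int main(void)
{
    const char msg[] = "hello from user space\n";

    /* write() traps into the kernel, which performs the output on file
       descriptor 1 (standard output) and returns the number of bytes written. */
    ssize_t n = write(1, msg, strlen(msg));

    return (n == (ssize_t)strlen(msg)) ? 0 : 1;
}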

Software engineers design and implement the system architecture of an operating
system so that its parts work well together. System administrators, system
programmers, applications programmers, and users refer to the system architecture
to gain a conceptual understanding of the parts of the operating system and the
relationships among them.

Sound system architecture is an important aspect of ensuring that the operating
system is secure.

Simple Structure
There are many operating systems that have a rather simple structure. These
started as small systems and rapidly expanded much further than their original scope.
A common example of this is MS-DOS. It was designed for a small niche of
users; there was no indication that it would become so popular.
An image to illustrate the structure of MS-DOS is as follows:
It is better that operating systems have a modular structure, unlike MS-DOS. That
would lead to greater control over the computer system and its various
applications. The modular structure would also allow the programmers to hide
information as required and implement internal routines as they see fit without
changing the outer specifications.

Layered Structure

One way to achieve modularity in the operating system is the layered approach. In
this, the bottom layer is the hardware and the topmost layer is the user interface.

An image demonstrating the layered approach is as follows:


The core software components of an operating system are collectively known as
the kernel. The kernel has unrestricted access to all of the resources on the system.
In early monolithic systems, each component of the operating system was
contained within the kernel, could communicate directly with any other
component, and had unrestricted system access. While this made the operating
system very efficient, it also meant that errors were more difficult to isolate, and
there was a high risk of damage due to erroneous or malicious code.
As operating systems became larger and more complex, this approach was largely
abandoned in favour of a modular approach which grouped components with
similar functionality into layers to help operating system designers to manage the
complexity of the system. In this kind of architecture, each layer communicates
only with the layers immediately above and below it, and lower-level layers
provide services to higher-level ones using an interface that hides their
implementation.

The modularity of layered operating systems allows the implementation of each
layer to be modified without requiring any modification to adjacent layers.
Although this modular approach imposes structure and consistency on the
operating system, simplifying debugging and modification, a service request from
a user process may pass through many layers of system software before it is
serviced and performance compares unfavourably to that of a monolithic kernel.
Also, because all layers still have unrestricted access to the system, the kernel is
still susceptible to errant or malicious code. Many of today’s operating systems,
including Microsoft Windows and Linux, implement some level of layering.
A microkernel architecture includes only a very small number of services within
the kernel in an attempt to keep it small and scalable. The services typically
include low-level memory management, inter-process communication and basic
process synchronisation to enable processes to cooperate. In microkernel designs,
most operating system components, such as process management and device
management, execute outside the kernel with a lower level of system access.

Microkernels are highly modular, making them extensible, portable and scalable.
Operating system components outside the kernel can fail without causing the
operating system to fall over. Once again, the downside is an increased level of
inter-module communication which can degrade system performance.
