Module I - OS

The document provides an introduction to operating systems. It discusses that the operating system acts as an intermediary between the user and computer hardware by controlling and coordinating the use of resources among applications and users. It describes the various components of a computer system including hardware, operating system, application programs, and users. It also outlines the goals and views of an operating system from the user and system perspectives.

Uploaded by

Prakash Hegde

Operating Systems

Module I

INTRODUCTION TO OPERATING SYSTEM


What is an Operating System?
An operating system is system software that acts as an intermediary between a user of a computer and the
computer hardware.
Operating system goals:
 Make the computer system convenient to use, by hiding the difficulty of managing the
hardware.
 Use the computer hardware in an efficient manner.
 Provide an environment in which users can easily interact with the computer.
 Act as a resource allocator.
Computer System Structure (Components of Computer System)
 Computer system can be divided into four components:
 Hardware – provides basic computing resources: CPU, memory, I/O devices
 Operating system – controls and coordinates use of hardware among various applications and
users
 Application programs – define the ways in which the system resources are used to solve the
computing problems of the users: word processors, compilers, web browsers, database
systems, video games
 Users – people, machines, other computers

The OS controls and co-ordinates the use of hardware, among various application programs (like
compiler, word processor etc.) for various users.
The OS allocates the resources among the programs such that the hardware is efficiently
used.

The operating system is the one program running at all times on the computer. It is usually called
the kernel.

An OS consists of the kernel (the core of the OS, providing system-necessary functions) and
non-kernel parts (user-necessary functions).

Dept.Of ISE, APSCE 1



Kernel functions are needed constantly by the system, so they are always kept in memory. Non-kernel
functions are stored on the hard disk and retrieved whenever required.

Views of OS
Operating System can be viewed from two viewpoints–

User Views:-
The user’s view of the operating system depends on the type of user.
i. If the user is using a standalone system, the OS is designed for ease of use and high
performance. Here resource utilization is given little importance.

ii. If the users are at different terminals connected to a mainframe or minicomputer, sharing
information and resources, then the OS is designed to maximize resource utilization. The OS
ensures that CPU time, memory, and I/O are used efficiently and that no single user takes
more than the share allotted to them.

iii. If the users are at workstations connected to networks and servers, then each user has a
system unit of their own and shares resources and files with other systems. Here the OS is
designed for both ease of use and resource availability (files).

iv. Users of handheld systems expect the OS to be designed for ease of use and performance
per amount of battery life.
v. Other systems, like embedded systems used in home devices (like washing machines) and
automobiles, have little or no user interaction; a few LEDs may show the status of their work.

System Views:-
i. Resource allocator – The OS acts as a manager of hardware and software resources. The OS
assigns resources to requesting programs depending on their priority.
ii. Control program – The OS is a control program that manages the execution of user programs
to prevent errors and improper use of the computer.

Computer System Organization


Computer-system operation
 One or more CPUs and device controllers are connected through a common bus that provides
access to shared memory. Each device controller is in charge of a specific type of device.
 To ensure orderly access to the shared memory, a memory controller is provided, whose
function is to synchronize access to the memory.
 The CPU and the devices execute concurrently, competing for memory cycles.


When the system is switched on, the bootstrap program is executed; it is the initial program to run in
the system. The bootstrap program:
 Initializes the registers, memory, and I/O devices
 Locates and loads the kernel into memory
 Starts the ‘init’ process
 Waits for an interrupt to occur

Interrupt handling –
 The occurrence of an event is usually signaled by an interrupt. The interrupt can either
be from the hardware or the software.
 Hardware may trigger an interrupt at any time by sending a signal to the CPU.
 Software triggers an interrupt by executing a special operation called a system call (also
called a monitor call).
 When the CPU is interrupted, it stops what it is doing and immediately transfers
execution to a fixed location. The fixed location (Interrupt Vector Table) contains the
starting address where the service routine for the interrupt is located.
 After the execution of interrupt service routine, the CPU resumes the interrupted
computation.
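The dispatch sequence above can be sketched as a small simulation. This is a conceptual model, not real hardware: the handler names and interrupt numbers are illustrative, and a real interrupt vector table holds the addresses of service routines rather than Python functions.

```python
# Hypothetical interrupt service routines (names are illustrative).
def timer_handler():
    return "timer tick serviced"

def keyboard_handler():
    return "key press serviced"

# The interrupt vector table maps an interrupt number to the service
# routine located at that fixed table entry.
INTERRUPT_VECTOR_TABLE = {
    0: timer_handler,
    1: keyboard_handler,
}

def cpu_interrupted(interrupt_number):
    """On an interrupt, control transfers to the fixed location given by
    the vector table; the service routine runs; the CPU then resumes."""
    handler = INTERRUPT_VECTOR_TABLE[interrupt_number]
    return handler()
```

For example, `cpu_interrupted(0)` dispatches through entry 0 of the table to the timer routine.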


Storage Structure
 Computer programs must be in main memory (RAM) to be executed. Main memory is
the large memory that the processor can access directly. It commonly is implemented in a
semiconductor technology called dynamic random-access memory (DRAM).
 Computers also provide read-only memory (ROM), whose data cannot be changed.
 All forms of memory provide an array of memory words. Each word has its own address.
 Interaction is achieved through a sequence of load or store instructions to specific
memory addresses.
 A typical instruction-execution cycle, as executed on a system with a Von Neumann
architecture, first fetches an instruction from memory and stores that instruction in the
instruction register.
 The instruction is then decoded and may cause operands to be fetched from memory and
stored in some internal register.
 After the instruction on the operands has been executed, the result may be stored back in
memory.
 Ideally, we want the programs and data to reside in main memory permanently. This
arrangement usually is not possible for the following two reasons:
1. Main memory is usually too small to store all needed programs and data permanently.
2. Main memory is a volatile storage device that loses its contents when power is turned
off.
 Thus, most computer systems provide secondary storage as an extension of main
memory. The main requirement for secondary storage is that it will be able to hold large
quantities of data permanently.
 The most common secondary-storage device is a magnetic disk, which provides storage
for both programs and data. Most programs are stored on a disk until they are loaded into
memory. Many programs then use the disk as both a source and a destination of the
information for their processing.


 The wide variety of storage systems in a computer system can be organized in a hierarchy
as shown in the figure, according to speed, cost and capacity.
 The higher levels are expensive, but they are fast. As we move down the hierarchy, the
cost per bit generally decreases, whereas the access time and the capacity of storage
generally increase.
 In addition to differing in speed and cost, the various storage systems are either volatile
or nonvolatile.
 Volatile storage loses its contents when the power to the device is removed. In the
absence of expensive battery and generator backup systems, data must be written to
nonvolatile storage for safekeeping.
 In the hierarchy shown in figure, the storage systems above the electronic disk are
volatile, whereas those below are nonvolatile.

I/O Structure
 A large portion of operating system code is dedicated to managing I/O.
 Every device has a device controller, which maintains a local buffer and a set of special-
purpose registers.
 The device controller is responsible for moving data between the peripheral device and its
local buffer. The operating system has a device driver for each device controller.
 To start an I/O operation, the device driver loads the appropriate registers within the device controller.
 The device controller examines the contents of these registers to determine what action to
take (such as "read a character from the keyboard").
 The controller starts the transfer of data from the device to its local buffer.
 Once the transfer of data is complete, the device controller informs the device driver (part of the OS)
via an interrupt that it has finished its operation.
 The device driver then returns control to the operating system, also returning the data. For
other operations, the device driver returns status information.

This form of interrupt-driven I/O is fine for moving small amounts of data but is inefficient for
bulk data movement. To solve this problem, direct memory access (DMA) is used.


 DMA is used for high-speed I/O devices, able to transmit information at close to memory
speeds
 Device controller transfers blocks of data from buffer storage directly to main memory
without CPU intervention
 Only one interrupt is generated per block, rather than one interrupt per byte
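The saving can be made concrete with a back-of-the-envelope calculation. This sketch only counts interrupts under the two schemes described above; the transfer and block sizes are illustrative.

```python
def interrupts_per_transfer(total_bytes, block_size, use_dma):
    """Count interrupts for moving total_bytes to memory.

    Interrupt-driven I/O interrupts the CPU once per byte moved;
    DMA moves whole blocks and interrupts once per block.
    """
    if use_dma:
        return -(-total_bytes // block_size)  # ceiling division: one per block
    return total_bytes                        # one per byte
```

For a 4 KB transfer with 512-byte blocks, interrupt-driven I/O costs 4096 interrupts while DMA costs only 8.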
Computer System Architecture
Categorized roughly according to the number of general-purpose processors used –

1. Single-Processor Systems –
 Most systems use a single processor. The variety of single-processor systems range from PDAs
through mainframes.
 On a single-processor system, there is one main CPU capable of executing instructions from user
processes
 Such a system may also contain special-purpose processors, in the form of device-specific processors, for devices such
as disk, keyboard, and graphics controllers.
 The use of special-purpose microprocessors is common and does not turn a single-processor system into a
multiprocessor. If there is only one general-purpose CPU, then the system is a single-processor system.

2. Multiprocessor Systems (parallel systems or tightly coupled systems)


Systems that have two or more processors in close communication, sharing the computer bus, the
clock, memory, and peripheral devices are the multiprocessor systems.

Multiprocessor systems have three main advantages:


1. Increased throughput
2. Economy of scale
3. Increased reliability – In multiprocessor systems, functions are shared among several
processors. The job of a failed processor is taken up by the other processors.
Two techniques to maintain increased reliability are graceful degradation and fault tolerance:
Graceful degradation – as there are multiple processors, when one processor
fails the others take up its work, and the system degrades slowly rather than failing outright.
Fault tolerance – when one processor fails, its operations are stopped, and the
failure is detected, diagnosed, and corrected while the system continues to operate.
There are two types of multiprocessor systems –
 Asymmetric multiprocessing
 Symmetric multiprocessing

1) Asymmetric multiprocessing (master/slave architecture) – Each processor is

assigned a specific task by the master processor. The master processor controls the other
processors in the system; it schedules and allocates work to the slave processors.
2) Symmetric multiprocessing (SMP) – All the processors are considered peers. There
is no master–slave relationship. Each processor has its own set of registers; the
memory is shared among all processors.


The benefit of this model is that many processes can run simultaneously: N processes can
run if there are N CPUs, without causing a significant deterioration of performance.
Virtually all modern operating systems, including Windows, Mac OS X, and Linux, provide
support for SMP.
A recent trend in CPU design is to include multiple compute cores on a single chip.
Communication between processors within a chip is faster than communication between
two separate processors.

3. Clustered Systems
 Clustered systems are two or more individual systems connected together via a network,
sharing software resources.
 Clustering provides high-availability of resources and services.
There are two types of Clustered systems
I. Asymmetric clustering – one system is in hot-standby mode while the others
run the applications. The hot-standby host does nothing but
monitor the active server. If that server fails, the hot-standby host becomes the
active server.
II. Symmetric clustering – two or more systems run applications and
monitor each other. This mode is more efficient, as it uses all of the available
hardware. If any system fails, its job is taken up by a monitoring system.

 Other forms of clusters include parallel clusters and clustering over a wide-area network
(WAN).
 Parallel clusters allow multiple hosts to access the same data on shared storage.
Cluster technology is changing rapidly with the help of storage-area networks (SANs).
Using a SAN, resources can be shared by dozens of systems in a cluster, even when they are
separated by miles.
Operating-System Structure
 One of the most important aspects of operating systems is the ability to multiprogram.
 A single user cannot keep either the CPU or the I/O devices busy at all times.
Multiprogramming increases CPU utilization by organizing jobs, so that the CPU
always has one to execute.
 The operating system keeps several jobs in memory simultaneously as shown in figure.
This set of jobs is a subset of the jobs kept in the job pool.
 The operating system picks and begins to execute one of the jobs in memory.
 Eventually, the job may have to wait for some task, such as an I/O operation, to
complete.


 In a non-multiprogrammed system, the CPU would sit idle. In a multiprogrammed


system, the operating system simply switches to, and executes, another job. When that
job needs to wait, the CPU is switched to another job, and so on.
 Eventually, the first job finishes waiting and gets the CPU back. Thus the CPU is never
idle.

 Timesharing (multitasking) is logical extension of multiprogramming.


 Here a single CPU executes multiple jobs by switching among them, in which CPU
switches jobs so frequently that users can interact with each job while it is running.
 Time sharing requires an interactive (or hands-on) computer system, which provides
direct communication between the user and the system and the response time should be
short—typically less than one second.
 Each user has at least one program executing in memory. If several jobs are ready to run at the same
time and do not all fit in memory, swapping moves them in and out of memory to run.
 Virtual memory allows execution of processes that are not completely in memory.
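The core idea of multiprogramming, switching the CPU to another job whenever the current one must wait, can be sketched as a small simulation. The job names and the `'cpu'`/`'io'` step labels are illustrative; real scheduling is far more involved.

```python
from collections import deque

def run_jobs(jobs):
    """Simulate multiprogramming over jobs: {name: list of 'cpu'/'io' steps}.

    The CPU executes one step of a job; if the job then needs to wait
    (or simply has more work), the CPU switches to the next ready job
    instead of sitting idle. Returns the execution trace.
    """
    ready = deque(jobs)              # jobs kept in memory, ready to run
    trace = []
    while ready:
        job = ready.popleft()        # pick a job and execute its next step
        step = jobs[job].pop(0)
        trace.append((job, step))
        if jobs[job]:                # unfinished job rejoins the ready set;
            ready.append(job)        # meanwhile the CPU runs someone else
    return trace
```

With jobs `{"A": ["cpu", "io", "cpu"], "B": ["cpu"]}`, the trace interleaves B's work into A's waiting time, so the CPU is never idle while any job is runnable.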

Operating-System Operations
 Modern operating systems are interrupt driven. If there are no processes to execute, an
operating system will wait for events to occur.
 Events are signaled by the occurrence of an interrupt or a trap.
 A trap (or an exception) is a software-generated interrupt caused either by an error
(such as division by zero) or by a specific request from a user program (such as a request
for an operating-system service).
 For each type of interrupt, an interrupt service routine is provided that is responsible for
dealing with the interrupt.
 Since the operating system and the user programs share the hardware and software
resources of the computer system, it must be ensured that an error in a user program
cannot cause problems for other programs or for the operating system running in the
system.
 Dual-mode operation allows OS to protect itself and other system components
Dual-Mode Operation
The approach taken is to use hardware support that allows us to differentiate among various
modes of execution.
The system can be assumed to work in two separate modes of operation:
 user mode and
 kernel mode (supervisor mode, system mode, or privileged mode).


 A hardware bit of the computer, called the mode bit, is used to indicate the current mode:
kernel (0) or user (1).
 With the mode bit, we are able to distinguish between a task that is executed by the operating
system and one that is executed by the user.
 When the computer system is executing a user application, the system is in user mode.
 When a user application requests a service from the operating system (via a system call), the
transition from user to kernel mode takes place.

 At system boot time, the hardware starts in kernel mode. The operating system is then
loaded and starts user applications in user mode.
 Whenever a trap or interrupt occurs, the hardware switches from user mode to kernel mode
(that is, changes the mode bit from 1 to 0). Thus, whenever the operating system gains
control of the computer, it is in kernel mode.
 The hardware allows privileged instructions to be executed only in kernel mode. If an
attempt is made to execute a privileged instruction in user mode, the hardware treats it as
illegal and traps to the operating system.
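The mode-bit check described above can be sketched as follows. The instruction names and the set of privileged operations are hypothetical; only the kernel = 0 / user = 1 convention comes from the text.

```python
KERNEL, USER = 0, 1                      # mode bit: kernel (0), user (1)
PRIVILEGED = {"set_timer", "io_control", "halt"}   # illustrative instruction set

def execute(instruction, mode_bit):
    """Return (outcome, resulting mode bit) for one instruction."""
    if instruction in PRIVILEGED and mode_bit == USER:
        # Illegal attempt in user mode: the hardware traps to the OS,
        # switching the mode bit from 1 to 0 as control enters the kernel.
        return ("trap", KERNEL)
    return ("done", mode_bit)
```

An ordinary instruction runs in either mode, while a privileged one succeeds only in kernel mode and traps otherwise.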
Timer
 The operating system uses a timer to control the CPU. A user program cannot hold the CPU for a long
time; this is prevented with the help of the timer.
 A timer can be set to interrupt the computer after a specified period. The period may be:
Fixed timer – after a fixed time, the process under execution is interrupted.
Variable timer – the interrupt occurs after a varying interval.
 Before changing to the user mode, the operating system ensures that the timer is set to
interrupt. If the timer interrupts, control transfers automatically to the operating system.


Process Management
 A program under execution is a process. A process needs resources like CPU time, memory,
files, and I/O devices for its execution.
 These resources are given to the process when it is created or at run time.
 When the process terminates, the operating system reclaims the resources.
 The program stored on a disk is a passive entity and the program under execution is an
active entity.
 A single-threaded process has one program counter specifying the next instruction to
execute. The CPU executes one instruction of the process after another, until the process
completes.
 A multithreaded process has multiple program counters, each pointing to the next
instruction to execute for a given thread.
The operating system is responsible for the following activities in connection with process
management:
 Scheduling process and threads on the CPU
 Creating and deleting both user and system processes
 Suspending and resuming processes
 Providing mechanisms for process synchronization
 Providing mechanisms for process communication
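Several of the activities listed above can be sketched as a toy process table. The state names and the `ProcessManager` class are illustrative, not any real OS interface; the point is that the OS tracks each process and reclaims its entry on termination.

```python
class Process:
    """A toy process-table entry: a PID and a scheduling state."""
    def __init__(self, pid):
        self.pid = pid
        self.state = "ready"

class ProcessManager:
    def __init__(self):
        self.table = {}                    # pid -> Process (the process table)

    def create(self, pid):                 # creating a process
        self.table[pid] = Process(pid)

    def suspend(self, pid):                # suspending a process
        self.table[pid].state = "suspended"

    def resume(self, pid):                 # resuming a process
        self.table[pid].state = "ready"

    def terminate(self, pid):              # deleting a process: the OS
        del self.table[pid]                # reclaims its resources
```

A typical lifecycle runs create, suspend, resume, terminate, with the table entry disappearing at the end.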

Memory Management
 Main memory is a large array of words or bytes. Each word or byte has its own address.
 As the program executes, the central processor reads instructions and also reads and writes
data from main memory.
 To improve both the utilization of the CPU and the speed of the computer's response to its
users, general-purpose computers must keep several programs in memory, creating a need
for memory management.
 The operating system is responsible for the following activities in connection with memory
management:
 Keeping track of which parts of memory are currently being used, and by whom.
 Deciding which processes and data to move into and out of memory.
 Allocating and deallocating memory space as needed.
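The last activity, allocating and deallocating memory as needed, can be sketched with a simple first-fit free list. This is one classic policy chosen for illustration (the section itself does not name an algorithm); addresses and sizes are arbitrary units.

```python
def first_fit(free_list, size):
    """Allocate `size` units from a list of (start, length) holes.

    Scans the holes in order and carves the request out of the first
    hole large enough. Returns (address, new free list); address is
    None when no hole fits.
    """
    for i, (start, length) in enumerate(free_list):
        if length >= size:
            # Shrink the hole (or drop it entirely on an exact fit).
            hole = [] if length == size else [(start + size, length - size)]
            return start, free_list[:i] + hole + free_list[i + 1:]
    return None, free_list
```

Deallocation would add the freed (start, length) back and merge adjacent holes; that step is omitted here for brevity.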

Storage Management
There are three types of storage management i) File system management ii) Mass-storage
management iii) Cache management.
File-System Management
 File management is one of the most visible components of an operating system.
 A file is a collection of related information defined by its creator. Commonly, files represent
programs and data.
 The operating system implements the abstract concept of a file by managing mass storage
media. Files are normally organized into directories to make them easier to use.


 When multiple users have access to files, it may be desirable to control by whom and in
what ways (read, write, execute) files may be accessed.
The operating system is responsible for the following activities in connection with file
management:
 Creating and deleting files
 Creating and deleting directories to organize files
 Supporting primitives for manipulating files and directories
 Mapping files onto secondary storage
 Backing up files on stable (nonvolatile) storage media
Mass-Storage Management
 As the main memory is too small to accommodate all data and programs, and as the data that
it holds are erased when power is lost, the computer system must provide secondary storage
to back up main memory.
 Most modern computer systems use disks as the storage medium for both programs and data.
 Most programs—including compilers, assemblers, word processors, editors, and
formatters—are stored on a disk until loaded into memory and then use the disk as both the
source and destination of their processing.
 Hence, the proper management of disk storage is of central importance to a computer
system. The operating system is responsible for the following activities in connection with
disk management:
 Free-space management
 Storage allocation
 Disk scheduling

Caching
 Caching is an important principle of computer systems. Frequently used data are copied
temporarily to a faster storage system, the cache. When a particular piece of
information is required, we first check the cache.
 Because caches have limited size, cache management is an important design problem.
Careful selection of the cache size and page replacement policy can result in greatly
increased performance.
 Data transfer from the cache to the CPU and registers is usually a hardware function,
implicit and with no operating-system intervention.
 In contrast, transfer of data from disk to memory is usually controlled explicitly by the
operating system.
In a hierarchical storage structure, the same data may appear at different levels of the
storage system. For example, suppose an integer A is to be retrieved from a magnetic disk by a
program. The operation proceeds by first issuing an I/O operation to copy the disk
block on which A resides to main memory. This operation is followed by copying A to the cache
and then to an internal register. Thus, the copy of A appears in several places: on the magnetic disk, in
main memory, in the cache, and in an internal register.


In a multiprocessor environment, in addition to maintaining internal registers, each CPU
also contains a local cache. In such an environment, a copy of A may exist simultaneously in
several caches. Since the various CPUs can all execute concurrently, an update to the
value of A in one cache must be immediately reflected in all other caches where A resides. This
requirement is called cache coherency, and it is usually a hardware problem (handled below the
operating-system level).
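The migration of A up the hierarchy can be sketched with levels modeled as dictionaries ordered fast to slow. This is a single-CPU sketch only: it shows how a value comes to exist at several levels at once, and deliberately ignores the coherency problem just described.

```python
def read(name, levels):
    """Look up `name` in storage levels ordered fast-to-slow,
    e.g. [cache, main_memory, disk].

    On a miss at the fast levels, the value found deeper down is
    copied into every faster level it passed through, so the same
    datum ends up residing at several places in the hierarchy.
    """
    for depth, level in enumerate(levels):
        if name in level:
            value = level[name]
            for faster in levels[:depth]:
                faster[name] = value       # promote a copy upward
            return value
    raise KeyError(name)
```

After one read of A from disk, a copy of A is also present in main memory and in the cache, exactly the situation the integer-A example describes.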
I/O Systems
One of the purposes of an operating system is to hide the peculiarities of specific hardware
devices from the user. The I/O subsystem consists of several components:
 A memory-management component that includes buffering, caching, and
spooling
 A general device-driver interface
 Drivers for specific hardware devices
Only the device driver knows the peculiarities of the specific device to which it is assigned.

Protection and Security


Protection – any mechanism for controlling access of processes or users to resources defined by the OS
Protection improves reliability. A protection-oriented system provides a means to distinguish between
authorized and unauthorized usage
Security – defense of the system against internal and external attacks. Such attacks span a huge
range and include viruses, worms, denial-of-service attacks, etc.
Protection and security require the system to be able to distinguish among all its users. Most
operating systems maintain
 User identities (user IDs, security IDs) include name and associated number, one per user
User ID then associated with all files, processes of that user to determine access control.
 Group identifier (group ID) allows set of users to be defined and controls managed, then
also associated with each process, file.
 Privilege escalation allows a user to change to an effective ID with more rights, to gain extra
permissions for an activity.
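The use of user and group IDs for access control can be sketched as below. The field names and the owner/group/other permission layout are illustrative (loosely Unix-like); the text above does not prescribe this exact structure.

```python
def allowed(file, user_id, user_groups, op):
    """Decide whether a user may perform `op` on `file`.

    `file` records an owner ID, a group ID, and the set of operations
    permitted to the owner, to group members, and to everyone else.
    """
    if user_id == file["owner_id"]:
        return op in file["owner_perms"]
    if file["group_id"] in user_groups:
        return op in file["group_perms"]
    return op in file["other_perms"]
```

For a file owned by user 10 and group 5 that grants the owner read/write and the group read-only, user 11 in group 5 may read but not write.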
Distributed Systems
 A distributed system is a collection of systems that are networked to provide the users with
access to the various resources in the network.
 Access to a shared resource increases computation speed, functionality, data availability,
and reliability.
 A network is a communication path between two or more systems. Networks vary by the
protocols used (TCP/IP, UDP, FTP, etc.), the distances between nodes, and the transport
media (copper wire, fiber optic, wireless).
 Networks are characterized based on the distances between their nodes. A local-area
network (LAN) connects computers within a room, a floor, or a building.
 A wide-area network (WAN) usually links buildings, cities, or countries
 A metropolitan-area network (MAN) connects buildings within a city.
 Bluetooth and 802.11 devices use wireless technology to communicate over a distance of

several feet, in essence creating a small-area network such as might be found in a home.
 The transportation media to carry networks are also varied. They include copper wires, fiber
strands, and wireless transmissions between satellites, microwave dishes, and radios.

Special-Purpose Systems
There are different classes of computer systems whose functions are more limited and specific
and that deal with limited computation domains.
1. Real-Time Embedded Systems
 Embedded computers are the most prevalent form of computers in existence. These devices
are found everywhere, from car engines and manufacturing robots to DVDs and
microwave ovens. They tend to have very specific tasks.
 The systems they run on are usually primitive, and so the operating systems provide
limited features. Usually, they have little or no user interface, preferring to spend their time
monitoring and managing hardware devices, such as automobile engines and robotic arms.
2. Handheld Systems
 include personal digital assistants (PDAs), such as Palm and Pocket PCs, and cellular
telephones, many of which use special-purpose embedded operating systems.
 Developers of handheld systems and applications face many challenges, most of which are
due to the limited size of such devices. For example, a PDA is typically about 5 inches in
height and 3 inches in width, and it weighs less than one-half pound. Because of their size,
most handheld devices have small amounts of memory, slow processors, and small display
screens.
3. Multimedia Systems
 Most operating systems are designed to handle conventional data such as text files,
programs, word-processing documents, and spreadsheets. However, a recent trend in
technology is the incorporation of multimedia data into computer systems.
 Multimedia data consist of audio and video files as well as conventional files. These data
differ from conventional data in that multimedia data-such as frames of video-must be
delivered (streamed) according to certain time restrictions (for example, 30 frames per
second).
Computing Environments
The different computing environments are –
1. Traditional computing
 PCs connected to a network, and terminals attached to mainframes or minicomputers providing
batch and timesharing
 Now, portals allow networked and remote systems to access the same resources
 Home networks used to be single systems connected by modems; now they are firewalled and networked
2. Client-Server Computing
 Dumb terminals supplanted by smart PCs
 Many systems now servers, responding to requests generated by clients
 Compute-server provides an interface for clients to request services (e.g., database queries)
 File-server provides interface for clients to store and retrieve files


3. Peer-to-Peer Computing
 Another model of distributed system
 P2P does not distinguish clients and servers; instead, all nodes are considered peers
 Each node may act as a client, a server, or both; a node must join the P2P network by either:
 Registering its service with a central lookup service on the network, or
 Broadcasting a request for service and responding to requests for service via a discovery protocol
 Examples include Napster and Gnutella

4. Web-Based Computing
 The Web has become ubiquitous, and PCs are the most prevalent devices
 More devices are becoming networked to allow web access
 A new category of devices manages web traffic among similar servers: load balancers
 Client-side operating systems like Windows 95 have evolved into Linux and
Windows XP, which can act as both clients and servers
5. Open-Source Operating Systems
 Operating systems made available in source-code format rather than just as closed-source
binaries; a counter to the copy-protection and Digital Rights Management (DRM) movement
 Started by the Free Software Foundation (FSF), which has the “copyleft” GNU Public License
(GPL)
 Examples include GNU/Linux, BSD UNIX (including the core of Mac OS X), and Sun Solaris
Operating-System Structures
Operating-System Services
An operating system provides an environment for the execution of programs. It provides certain services
to programs and to the users of those programs.


The OS provides services for the users of the system, including:


1. User interface. Almost all operating systems have a user interface (UI), which can take several forms:
 A command-line interface uses text commands and a method for entering them (say, a
program that allows entering and editing of commands).
 A batch interface, in which commands and directives to control those commands are entered into files,
and those files are executed.
 A graphical user interface (GUI) is a window system with a pointing device to direct I/O, choose
from menus, and make selections, and a keyboard to enter text.
Some systems provide two or all three of these variations.
2. Program execution. The system must be able to load a program into memory and to run that
program. The program must be able to end its execution, either normally or abnormally
(indicating error).
3. File-system manipulation. The file system is of particular interest. Obviously, programs need
to read and write files and directories.
4. Communications. There are many circumstances in which one process needs to exchange
information with another process. Such communication may occur between processes that are
executing on the same computer or between processes that are executing on different computer
systems.
Communications may be implemented via shared memory or through message passing.
5. Error detection. The operating system needs to be constantly aware of possible errors. Errors
may occur in the CPU and memory hardware (such as a memory error or a power failure) or in
I/O devices.
Debugging facilities can greatly enhance the user's and programmer's abilities to use the system
efficiently.
Another set of operating-system functions exists not for helping the user but rather for ensuring the
efficient operation of the system itself. Systems with multiple users can gain efficiency by sharing the
computer resources among the users
1. Resource Allocation – Resources like CPU cycles, main memory, storage space, and I/O
devices must be allocated to multiple users and multiple jobs at the same time.
2. Accounting – The OS provides services to keep track of system activity and resource
usage, either for billing purposes or for statistical record keeping that can be used to
optimize future performance.
3. Protection and Security – The owners of information (files) in a multiuser or networked
computer system may want to control the use of that information. When several separate
processes execute concurrently, one process should not be able to interfere with the others
or with the OS. Protection involves ensuring that all access to system resources is
controlled. Security of the system from outsiders must also be ensured, for example by
means of passwords.

User Operating-System Interface


There are several ways for users to interface with the operating system.
1. Command-line interface, or command interpreter –
 allows users to directly enter commands to be performed by the operating system.


 There are multiple command interpreters known as shells. UNIX and Linux
systems offer several different shells, such as the Bourne shell, C shell,
Bourne-Again shell, Korn shell, and others.
 The main function of the command interpreter is to get and execute the user-specified
command. Many of the commands manipulate files: create, delete, list, print, copy, execute,
and so on.

The commands can be implemented in two general ways-


I. The command interpreter itself contains the code to execute the command.
II. The code to implement the command is in a function in a separate file.
2. Graphical User Interface
 GUI allows users to interface with the operating system using a pointing device and a
menu system.
 rather than entering commands directly via a command-line interface, users employ a
mouse-based window and menu system.
 Graphical user interfaces first appeared on the Xerox Alto computer in 1973.
 Most modern systems allow individual users to select their desired interface, and to
customize its operation, as well as the ability to switch between different interfaces as
needed.

Bourne Shell Command Interpreter; The Mac OS X GUI


System Calls
A system call is the means by which a program requests services from the operating system.
System calls are generally written in C or C++, although some are written in assembly language where performance is critical.

The figure below illustrates the sequence of system calls required to copy the contents of one
file (the input file) to another file (the output file).

 Most programmers do not use the low-level system calls directly, but instead use an
"Application Programming Interface", API.
 Using APIs instead of direct system calls provides greater program portability between different
systems.
 Three most common APIs are Win32 API for Windows, POSIX API for POSIX-based systems
(including virtually all versions of UNIX, Linux, and Mac OS X), and Java API for the Java
virtual machine (JVM)

Example of Standard API


 Consider the ReadFile() function in the Win32 API—a function for reading from a file

 A description of the parameters passed to ReadFile():
 HANDLE file—the file to be read

 LPVOID buffer—a buffer where the data will be read into and written from
 DWORD bytesToRead—the number of bytes to be read into the buffer
 LPDWORD bytesRead—the number of bytes read during the last read
 LPOVERLAPPED ovl—indicates if overlapped I/O is being used


 The API then makes the appropriate system calls through the system call interface, using a
system call table to access specific numbered system calls, as shown in Figure.
 Each system call is associated with a specific number. The system call table (consisting of
the system call number and the address of the particular service routine) invokes a particular
service routine for a specific system call.
 The caller need know nothing about how the system call is implemented or what it does
during execution.

System Call Parameter Passing


Often, more information is required than simply the identity of the desired system call. The exact
type and amount of information vary according to the OS and call.
Three general methods are used to pass parameters to the OS:
 Simplest: pass the parameters in registers. In some cases, there may be more parameters
than registers.
 Parameters stored in a block, or table, in memory, with the address of the block passed as a
parameter in a register. This approach is taken by Linux and Solaris.
 Parameters placed, or pushed, onto the stack by the program and popped off the stack by the
operating system.
The block and stack methods do not limit the number or length of parameters being passed.

Parameter Passing via Table


Types of System Calls


The system calls can be categorized into five major categories:

a) Process Control
 Process control system calls include end, abort, load, execute, create process, terminate
process, get/set process attributes, wait for time or event, signal event, and allocate and free
memory.
 Processes must be created, launched, monitored, paused, resumed, and eventually stopped.
 When one process pauses or stops, then another must be launched or resumed
 Process attributes like process priority, max. allowable execution time etc. are set and
retrieved by OS.
 After creating the new process, the parent process may have to wait (wait time), or wait for
an event to occur(wait event).
 The process sends back a signal when the event has occurred (signal event).


b) File Management
The file management functions of OS are –
 File management system calls include create file, delete file, open, close, read, write,
reposition, get file attributes, and set file attributes.
 After creating a file, the file is opened. Data is read or written to a file.
 The file pointer may need to be repositioned to a point.
 The file attributes like filename, file type, permissions, etc. are set and retrieved using
system calls.
 These operations may also be supported for directories as well as ordinary files.

c) Device Management
 Device management system calls include request device, release device, read, write,
reposition, get/set device attributes, and logically attach or detach devices.
 When a process needs a resource, a request for resource is done. Then the control is granted
to the process. If requested resource is already attached to some other process, the requesting
process has to wait.
 In multiprogramming systems, after a process uses the device, it has to be returned to OS, so
that another process can use the device.
 Devices may be physical ( e.g. disk drives ), or virtual / abstract ( e.g. files, partitions, and
RAM disks ).

d) Information Maintenance
 Information maintenance system calls include calls to get/set the time, date, system data, and
process, file, or device attributes.
 These system calls are used to transfer information between the user and the OS.
Information like the current time and date, number of current users, version number of the OS,
amount of free memory, disk space, etc. is passed from the OS to the user.
e) Communication
 Communication system calls create/delete communication connection, send/receive
messages, transfer status information, and attach/detach remote devices.
 The message passing model must support calls to:
o Identify a remote process and/or host with which to communicate.
o Establish a connection between the two processes.
o Open and close the connection as needed.
o Transmit messages along the connection.
o Wait for incoming messages, in either a blocking or non-blocking state.
o Delete the connection when no longer needed.
 The shared memory model must support calls to:
o Create and access memory that is shared amongst processes (and threads. )
o Free up shared memory and/or dynamically allocate it as needed.
 Message passing is simpler and easier (particularly for inter-computer communication) and is
generally appropriate for small amounts of data. It is easy to implement, but it requires a system call for
each read and write operation.


System Programs
A collection of programs that provide a convenient environment for program development and execution
(other than the OS itself) are called system programs or system utilities. They are not part of the kernel or
command interpreter.
System programs may be divided into six categories:
1. File management - programs to create, delete, copy, rename, print, list, and generally
manipulate files and directories.
2. Status information - Utilities to check on the date, time, number of users, processes running,
data logging, etc. System registries are used to store and recall configuration information for
particular applications.
3. File modification - e.g. text editors and other tools which can change file contents.
4. Programming-language support - E.g. Compilers, linkers, debuggers, profilers, assemblers,
library archive management, interpreters for common languages, and support for make.
5. Program loading and execution - loaders, dynamic loaders, overlay loaders, etc., as well as
interactive debuggers.
6. Communications - Programs for providing connectivity between processes and users,
including mail, web browsers, remote logins, file transfers, and remote command execution.

Operating-System Design and Implementation


Design Goals
Any system to be designed must have its own goals and specifications. Similarly, the OS to be built
will have its own goals depending on the type of system in which it will be used, the type of
hardware used in the system etc.
Requirements define properties which the finished system must have, and are a necessary step in
designing any large complex system. The requirements may be of two basic groups:
1. User goals (user requirements)
are features that users care about and understand: the system should be convenient to use, easy
to learn, reliable, safe, and fast.
2. System goals (system requirements)
are written for the developers, i.e., the people who design the OS. These requirements include:
easy to design, implement, and maintain; flexible; reliable; error-free; and efficient.

Mechanisms and Policies


 Policies determine what is to be done. Mechanisms determine how it is to be
implemented.
 Example: in a timer, the counter and the code that decrements it are the mechanism;
deciding how long the timer is to be set is the policy.
 Policies change over time. In the worst case, each change in policy would require a
change in the underlying mechanism.
 If properly separated and implemented, policy changes can be easily adjusted without re-
writing the code, just by adjusting parameters or possibly loading new data /
configuration files.


Implementation
 Traditionally OS were written in assembly language.
 In recent years, OS are written in C, or C++. Critical sections of code are still written in
assembly language.
 The first OS that was not written in assembly language was the Master Control Program
(MCP) for Burroughs computers, written in a variant of ALGOL.
 The advantages of using a higher-level language for implementing operating systems are:
The code can be written faster, more compact, easy to port to other systems and is easier to
understand and debug.
 The only disadvantages of implementing an operating system in a higher-level language are
reduced speed and increased storage requirements.

Operating-System Structure
OS structure must be carefully designed. The task of OS is divided into small components and then
interfaced to work together.

Simple Structure
Many operating systems do not have well-defined structures. They started as small, simple, and
limited systems and then grew beyond their original scope.
MS-DOS – written to provide the most functionality in the least space; it is not divided into modules.
Although MS-DOS has some structure, its interfaces and levels of functionality are not well separated.

Layered Approach
The operating system is divided into a number of layers (levels), each built on top of lower layers. The
bottom layer (layer 0), is the hardware; the highest (layer N) is the user interface.
With modularity, layers are selected such that each uses functions (operations) and services of only lower-
level layers
UNIX OS consists of two separable parts: the kernel and the system programs. The kernel is further
separated into a series of interfaces and device drivers. The kernel provides the file system, CPU
scheduling, memory management, and other operating-system functions through system calls.

UNIX System Structure


MS-DOS Layer Structure


Layered Approach
 The OS is broken into a number of layers (levels). Each layer rests on the layer below it, and
relies on the services provided by the next lower layer.
 The bottom layer (layer 0) is the hardware and the topmost layer is the user interface.
 A typical layer consists of data structures and routines that can be invoked by higher-level
layers.
 An advantage of the layered approach is simplicity of construction and debugging.
 The layers are selected so that each uses functions and services of only lower-level layers. This
simplifies debugging and system verification.
 The layers are debugged one by one from the lowest; if any layer doesn't work, the error is due
to that layer only, as the lower layers are already debugged. Thus, the design and implementation
are simplified.
 A layer need not know how its lower-level layers are implemented. Thus, it hides those operations
from higher layers.

Disadvantages of layered approach:


 The various layers must be appropriately defined, as a layer can use only lower-level
layers.
 Less efficient than other approaches, because any interaction with layer 0 requested from the top
layer must pass through all the intermediate layers before reaching layer 0. This is an overhead
on every system call.
Microkernels
 The basic idea behind micro kernels is to remove all non-essential services from the kernel,
thus making the kernel as small and efficient as possible.
 The removed services are implemented as system applications.
 Most microkernels provide basic process and memory management, and message passing
between other services.
Benefit of the microkernel approach – system expansion is easier, because it only involves adding more
system applications, not rebuilding a new kernel.
Mach was the first and most widely known microkernel, and now forms a major component of Mac
OS X.
Disadvantage of the microkernel approach – it suffers from reduced performance due to increased
system-function (message-passing) overhead.


MAC OS X Structure

Modules
Modern OS development is object-oriented, with a relatively small core kernel and a set of modules
which can be linked in dynamically.
 Each core component is separate.
 Each component talks to the others over known interfaces.
 Each is loadable as needed within the kernel.
Overall, this approach is similar to layers but with more flexibility.

Solaris Modular Approach

Virtual Machines
The fundamental idea behind a virtual machine is to abstract the hardware of a single computer (the CPU,
memory, disk drives, network interface cards, and so forth) into several different execution environments,
thereby creating the illusion that each separate execution environment is running its own private
computer.
The host OS is the main OS installed in the system; the other OS installed in the system are called
guest OS.


System modes: (a) Nonvirtual machine (b) Virtual machine

Virtual machines first appeared as the VM operating system for IBM mainframes in 1972.

Benefits
 Able to share the same hardware and run several different execution environments(OS).
 Host system is protected from the virtual machines and the virtual machines are protected
from one another. A virus in guest OS, will corrupt that OS but will not affect the other guest
systems and host systems.
 Even though the virtual machines are separated from one another, software resources can be
shared among them. Two ways of sharing software resources for communication are: a) sharing a
file-system volume (part of the disk), and b) developing a virtual communication network to
communicate between the virtual machines.
 The operating system runs on and controls the entire machine. Therefore, the current system
must be stopped and taken out of use while changes are made and tested. This period is
commonly called system development time. In virtual machines such problem is eliminated.
User programs are executed in one virtual machine and system development is done in
another environment.
 Multiple OS can be running on the developer’s system concurrently. This helps in rapid
porting and testing of programmers code in different environments.
 System consolidation – two or more systems are made to run in a single system.

Simulation –
Here the host system has one system architecture and the guest system was compiled for a
different architecture. The compiled guest programs can be run in an emulator that
translates each instruction of the guest program into the native instruction set of the host system.

Para-Virtualization –
This presents the guest with a system that is similar but not identical to the guest’s preferred
system. The guest must be modified to run on the para-virtualized hardware.


Examples
VMware
VMware runs as an application on a host operating system such as Windows or Linux and allows
this host system to concurrently run several different guest operating systems as independent
virtual machines.

In the scenario below, Linux is running as the host operating system; FreeBSD, Windows NT, and
Windows XP are running as guest operating systems. The virtualization layer is the heart of
VMware, as it abstracts the physical hardware into isolated virtual machines running as guest
operating systems. Each virtual machine has its own virtual CPU, memory, disk drives, network
interfaces, and so forth.

The Java Virtual Machine


 Java was designed from the beginning to be platform independent, by running Java only on a
Java Virtual Machine, JVM, of which different implementations have been developed for
numerous different underlying HW platforms.
 Java source code is compiled into Java bytecode in .class files. Java bytecode consists of
binary instructions that run on the JVM.
 The JVM implements memory management and garbage collection.
 The JVM consists of a class loader and a Java interpreter. The class loader loads compiled
.class files from both the Java program and the Java API for execution by the Java interpreter.
It also checks each .class file for validity.


OPERATING SYSTEM GENERATION


 Operating systems are designed to run on any of a class of machines; the system must be configured
for each specific computer site
 SYSGEN program obtains information concerning the specific configuration of the hardware system
 Booting – starting a computer by loading the kernel
 Bootstrap program – code stored in ROM that is able to locate the kernel, load it into memory, and
start its execution

SYSTEM BOOT
 The operating system must be made available to the hardware so the hardware can start it.
 A small piece of code – the bootstrap loader – locates the kernel, loads it into memory, and starts it.
Sometimes a two-step process is used, where a boot block at a fixed location loads the bootstrap loader.
 When power is initialized on the system, execution starts at a fixed memory location. Firmware is
used to hold the initial boot code.

PROCESS MANAGEMENT
Processes Concept
An operating system executes a variety of programs: in batch systems – jobs; in time-shared systems –
user programs or tasks. The terms job and process can be used almost interchangeably.
A process is a program in execution; process execution must progress in sequential fashion. A process
includes: the program counter, stack, and data section.

The Process
Process memory is divided into four sections:
 The stack is used to store local variables, function parameters, function return values,
return addresses, etc.
 The heap is used for dynamic memory allocation.
 The data section stores global and static variables.
 The text section comprises the compiled program code.
Note that there is a free space between the stack and the heap. As the stack needs more room,
it grows downwards into this space; as the heap needs more room, it grows upwards into it.


Process State
As a process executes, it changes state. A process has five states. Each process may be in one of the
following states –

New - The process is in the stage of being created.


Ready - The process has all the resources it needs to run. It is waiting to be assigned to the
processor.
Running – Instructions are being executed.
Waiting - The process is waiting for some event to occur. For example the process may be waiting
for keyboard input, disk access request, inter-process messages, a timer to go off, or a child process
to finish.
Terminated - The process has completed its execution.



Process Control Block
For each process there is a Process Control Block (PCB), which stores the process-specific
information.
Process State – The state of the process may be new, ready, running, waiting, and so on.
Program counter – The counter indicates the address of the next instruction to be executed for
this process.
CPU registers - The registers vary in number and type, depending on the computer architecture.
They include accumulators, index registers, stack pointers, and general-purpose registers. Along
with the program counter, this state information must be saved when an interrupt occurs, to allow
the process to be continued correctly afterward.
CPU scheduling information- This information includes a process priority, pointers to
scheduling queues, and any other scheduling parameters.
Memory-management information – This include information such as the value of the base
and limit registers, the page tables, or the segment tables.
Accounting information – This information includes the amount of CPU and real time used,
time limits, account numbers, job or process numbers, and so on.
I/O status information – This information includes the list of I/O devices allocated to the
process, a list of open files, and so on.

The PCB simply serves as the repository for any information that may vary from process to process.


Process Scheduling
 The objective of multiprogramming is to have some process running at all times, to
maximize CPU utilization.
 The objective of time sharing is to switch the CPU among processes so frequently that users
can interact with each program while it is running.
 To meet these objectives, the process scheduler selects an available process (possibly from
a set of several available processes) for program execution on the CPU.
 The main objective of process scheduling is to keep the CPU busy at all times.

Process Scheduling Queues


1. Job queue – set of all processes in the system
2. Ready queue – set of all processes residing in main memory, ready and waiting to execute
3. Device queues – set of processes waiting for an I/O device
Processes migrate among the various queues.
 These queues are generally stored as a linked list of PCBs.
 A queue header will contain two pointers – the head pointer pointing to the first PCB and the tail
pointer pointing to the last PCB in the list.
 Each PCB has a pointer field that points to the next process in the queue.
 A common representation of process scheduling is a queueing diagram. Each rectangular box in
the diagram represents a queue.
 Two types of queues are present: the ready queue and a set of device queues.
 The circles represent the resources that serve the queues, and the arrows indicate the flow of
processes in the system.


 A new process is initially put in the ready queue. It waits in the ready queue until it is selected for
execution and is given the CPU.
 Once the process is allocated the CPU and is executing, one of several events could occur:
 The process could issue an I/O request, and then be placed in an I/O queue.
 The process could create a new subprocess and wait for its termination.
 The process could be removed forcibly from the CPU, as a result of an interrupt, and be put
back in the ready queue.
 In the first two cases, the process eventually switches from the waiting state to the ready state, and
is then put back in the ready queue.
 A process continues this cycle until it terminates, at which time it is removed from all queues.


Schedulers
A scheduler is software that selects an available process to be assigned to the CPU.
1. A long-term scheduler or Job scheduler – selects jobs from the job pool (of secondary memory,
disk) and loads them into the memory. It is invoked very infrequently (seconds, minutes). The
long-term scheduler controls the degree of multiprogramming
2. The short-term scheduler, or CPU scheduler – selects a job from memory and assigns the CPU to
it. The short-term scheduler is invoked very frequently (milliseconds) and so must be fast.
 The medium-term scheduler – swaps processes out of memory and later reintroduces (swaps in)
them to the ready queue.

Processes can be described as either:


1. I/O-bound process – spends more time doing I/O than computations,
2. CPU-bound process – spends more time doing computations and few I/O operations.

An efficient scheduling system will select a good mix of CPU-bound processes and I/O
bound processes.
 If the scheduler selects more I/O bound process, then I/O queue will be full and ready queue
will be empty.
 If the scheduler selects more CPU bound process, then ready queue will be full and I/O queue
will be empty.

 Time sharing systems employ a medium-term scheduler


 It swaps processes out of the ready queue and later swaps them back in.
 When system loads get high, this scheduler will swap one or more processes out of the ready
queue for a few seconds, in order to allow smaller faster jobs to finish up quickly and clear the
system.
 This process is called swapping.

Context Switch
 The task of switching a CPU from one process to another process is called context switching.
 When CPU switches to another process, the system must save the state of the old process and
load the saved state for the new process via a context switch
 Context of a process is represented in the PCB
 Context-switch time is overhead; the system does no useful work while switching. The
time depends on hardware support.

Operations on Processes
1. Process Creation
 A process may create several new processes.
 The creating process is called a parent process, and the new processes are called the children
of that process.
 Each of these new processes may in turn create other processes. Every process has a unique
process ID.


 On typical Solaris systems, the process at the top of the tree is the ‘sched’ process with PID of
0.
 The ‘sched’ process creates several children processes – init, pageout and fsflush. Pageout and
fsflush are responsible for managing memory and file systems.
 The init process with a PID of 1, serves as a parent process for all user processes.

 A process will need certain resources (CPU time, memory, files, I/O devices) to accomplish its
task.
 When a process creates a subprocess, the subprocess may be able to obtain its resources in two
ways
 directly from the operating system.
 Subprocess may take the resources of the parent process.
 The resource can be taken from parent in two ways –
 The parent may have to partition its resources among its children
 Share the resources among several children.
 There are two options for the parent process after creating the child:
 Wait for the child process to terminate and then continue execution. The parent makes a
wait() system call.
 Run concurrently with the child, continuing to execute without waiting.
 Two possibilities for the address space of the child relative to the parent:
 The child process is a duplicate of the parent process (it has the same program and data
as the parent).
 The child process has a new program loaded into it.


UNIX example
 fork system call creates new process
 exec system call used after a fork to replace the process’ memory space with a new
program
Process Creation

C Program Forking Separate Process


#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <sys/types.h>
#include <sys/wait.h>

int main()
{
    pid_t pid;

    /* fork another process */
    pid = fork();
    if (pid < 0) {
        /* error occurred */
        fprintf(stderr, "Fork Failed");
        exit(-1);
    }
    else if (pid == 0) {
        /* child process */
        execlp("/bin/ls", "ls", NULL);
    }
    else {
        /* parent process: wait for the child to complete */
        wait(NULL);
        printf("Child Complete");
        exit(0);
    }
}

Windows example
 In Windows, processes are created in the Win32 API using the CreateProcess() function,
which is similar to fork().
 Two parameters passed to CreateProcess () are instances of the STARTUPINFO and
PROCESS_INFORMATION structures.
 STARTUPINFO specifies many properties of the new process, such as window size and
appearance, and handles to standard input and output files.


 The PROCESS_INFORMATION structure contains a handle and the identifiers to the newly
created process and its thread.
 The ZeroMemory() function is used to zero out each of these structures before use.
 The first two parameters passed to CreateProcess () are the application name and command-line
parameters. If the application name is NULL (as it is in this case), the command-line parameter
specifies the application to load.
 WaitForSingleObject() is passed a handle of the child process – pi.hProcess – and waits for this
process to complete. Once the child process exits, control returns from the WaitForSingleObject()
function in the parent process.

#include <stdio.h>
#include <windows.h>

int main(VOID)
{
    STARTUPINFO si;
    PROCESS_INFORMATION pi;

    // allocate memory
    ZeroMemory(&si, sizeof(si));
    si.cb = sizeof(si);
    ZeroMemory(&pi, sizeof(pi));

    // create child process
    if (!CreateProcess(NULL, // use command line
        "C:\\WINDOWS\\system32\\mspaint.exe", // command line
        NULL,  // don't inherit process handle
        NULL,  // don't inherit thread handle
        FALSE, // disable handle inheritance
        0,     // no creation flags
        NULL,  // use parent's environment block
        NULL,  // use parent's existing directory
        &si,
        &pi))
    {
        fprintf(stderr, "Create Process Failed");
        return -1;
    }

    // parent will wait for the child to complete
    WaitForSingleObject(pi.hProcess, INFINITE);
    printf("Child Complete");

    // close handles
    CloseHandle(pi.hProcess);
    CloseHandle(pi.hThread);
}


2. Process Termination
 A process terminates when it finishes executing its last statement and asks the operating
system to delete it, by using the exit( ) system call.
 All of the resources assigned to the process like memory, open files, and I/O buffers, are
deallocated by the operating system.
 A process can cause the termination of another process by using appropriate system call.
The parent process can terminate its child processes by knowing of the PID of the child.
 A parent may terminate the execution of children for a variety of reasons, such as:
 The child has exceeded its usage of the resources, it has been allocated.
 The task assigned to the child is no longer required.
 The parent is exiting, and the operating system terminates all the children. This is
called cascading termination.

Note: Processes which are trying to terminate but cannot, because their parent is not
waiting for them, are termed zombies. These are eventually inherited by init as orphans and
killed off. (Modern UNIX shells do not produce as many orphans and zombies as older
systems used to.)
Interprocess Communication
Processes executing concurrently in the system may be either independent or cooperating processes.
Independent Processes – processes that cannot affect other processes or be affected by other
processes executing in the system.
Cooperating Processes – processes that can affect other processes or be affected by other processes
executing in the system.
Co-operation among processes is allowed for the following reasons –
 Information Sharing - There may be several processes which need to access the same file. So
the information must be accessible at the same time to all users.
 Computation speedup - Often a problem can be solved faster if it can
be broken down into sub-tasks which are solved simultaneously (particularly when multiple
processors are involved).
 Modularity - A system can be divided into cooperating modules that execute by sending
information among one another.
 Convenience - Even a single user can work on multiple tasks through information sharing.
 Cooperating processes require some type of inter-process communication, which is provided by two
models: 1) Shared-memory systems 2) Message-passing systems.

 In the shared-memory model, a region of memory that is shared by cooperating processes is
established. Processes can then exchange information by reading and writing data to the shared
region.
 In the message-passing model, communication takes place by means of messages exchanged
between the cooperating processes. The two communication models are contrasted in the figure.

Sl. No.  Shared Memory                                  Message Passing

1.       A region of memory is shared by the            Message exchange is done among the
         communicating processes, into which            processes by using objects.
         information is written and read.

2.       Useful for sending large blocks of data.       Useful for sending small amounts of data.

3.       System calls are used only to create the       A system call is used during every read and
         shared memory.                                 write operation.

4.       Communication is faster, as no system          Communication is slower, as every message
         calls are needed after setup.                  is transferred via system calls.
1. Shared-Memory Systems
A region of shared memory is created within the address space of a process that needs to
communicate. Other processes that need to communicate use this shared memory.
The processes must take care that two processes do not write data to the shared memory at
the same time.
 Consider a Producer-Consumer Problem. A producer process produces information that is
consumed by a consumer process. For example, a compiler may produce assembly code, which is
consumed by an assembler. The assembler, in turn, may produce object modules, which are
consumed by the loader.
 One solution to the producer-consumer problem uses shared memory where a buffer of items that
can be filled by the producer and emptied by the consumer is available in a region of memory that
is shared by the producer and consumer processes.
 Two types of buffers can be used
o unbounded-buffer places no practical limit on the size of the buffer
o bounded-buffer assumes that there is a fixed buffer size
 The shared buffer is implemented as a circular array with two logical pointers: in and out. The
variable in points to the next free position in the buffer; out points to the first full position in the
buffer. The buffer is empty when in == out; the buffer is full when ((in + 1) % BUFFER_SIZE) ==
out.

#define BUFFER_SIZE 10
typedef struct
{
...
} item;
item buffer[BUFFER_SIZE];
int in = 0;
int out = 0;

 The producer process has a local variable nextProduced in which the new item to be produced is
stored. The consumer process has a local variable nextConsumed in which the item to be
consumed is stored. This scheme allows at most BUFFER_SIZE - 1 items in the buffer at the same
time.

Producer process
while (true) {
    /* produce an item in nextProduced */
    while (((in + 1) % BUFFER_SIZE) == out)
        ; /* do nothing -- no free buffers */
    buffer[in] = nextProduced;
    in = (in + 1) % BUFFER_SIZE;
}

Consumer process
while (true) {
    while (in == out)
        ; /* do nothing -- nothing to consume */
    /* remove an item from the buffer */
    nextConsumed = buffer[out];
    out = (out + 1) % BUFFER_SIZE;
    /* consume the item in nextConsumed */
}

2. Message-Passing Systems
 Mechanism for processes to communicate and to synchronize their actions.
 Message system – processes communicate with each other without resorting to shared variables.
The IPC facility provides two operations:
o send(message) – message size fixed or variable
o receive(message)
 If two processes P and Q wish to communicate, they need to:
o establish a communication link between them
o exchange messages via send/receive
Implementation of the communication link: physical (e.g., shared memory, hardware bus)
or logical (e.g., logical properties)

 There are several methods for logically implementing a link and the send() and receive() operations:
o Direct or indirect communication
o Synchronous or asynchronous communication
o Automatic or explicit buffering
The following issues are related to each of these factors:

1. Naming
Processes that want to communicate must have a way to refer to each other. They can use either direct or
indirect communication.
i. Direct communication - each process that wants to communicate must explicitly name the
recipient or sender of the communication.
send(P, message) – send a message to process P
receive(Q, message) – receive a message from process Q
Properties of the communication link used in this scheme:
o Links are established automatically.
o A link is associated with exactly one pair of communicating processes.
o Between each pair there exists exactly one link.
o The link may be unidirectional, but is usually bi-directional.
This scheme exhibits symmetry in addressing; that is, both the sender process and the receiver
process must name the other to communicate. A variant employs asymmetry in addressing:
only the sender names the recipient; the recipient need not name the sender.
o send(P, message) - Send a message to process P.
o receive(id, message) - Receive a message from any process; the variable id is set to the
name of the process with which communication has taken place.
ii. Indirect Communication -
o Messages are sent to and received from mailboxes (also referred to as ports). Each
mailbox has a unique id.
o Processes can communicate only if they share a mailbox.
Properties of the communication link:
o A link is established only if processes share a common mailbox.
o A link may be associated with many processes.
o Each pair of processes may share several communication links.
o A link may be unidirectional or bi-directional.
Operations
Now suppose that processes P1, P2, and P3 all share mailbox A. Process P1 sends a message to
A, while both P2 and P3 execute a receive() from A. Which process will receive the message
sent by P1? The answer depends on which of the following methods we choose
o Allow a link to be associated with two processes at most.
o Allow at most one process at a time to execute a receive() operation.
o Allow the system to select arbitrarily which process will receive the message (that is,
either P2 or P3, but not both, will receive the message).
A mailbox may be owned either by a process or by the operating system. If the mailbox is
owned by a process, then we distinguish between the owner (which can only receive messages
through this mailbox) and the user (which can only send messages to the mailbox).
Since each mailbox has a unique owner, there can be no confusion about which process
should receive a message sent to this mailbox.

A mailbox that is owned by the operating system has an existence of its own. It is
independent and is not attached to any particular process.
The operating system then must provide a mechanism that allows a process to do the
following:
o Create a new mailbox.
o Send and receive messages through the mailbox.
o Delete a mailbox.

2. Synchronization
 Message passing may be either blocking or non-blocking
 Blocking is considered synchronous
 Blocking send has the sender block until the message is received
 Blocking receive has the receiver block until a message is available
 Non-blocking is considered asynchronous
 Non-blocking send has the sender send the message and continue
 Non-blocking receive has the receiver receive a valid message or null

3. Buffering
 Queue of messages attached to the link; implemented in one of three ways
o Zero capacity – queue has 0 messages capacity. Thus, the link cannot have any messages
waiting in it. In this case, the sender must block until the recipient receives the message.
o Bounded capacity – The queue has finite length n; thus, at most n messages can reside in it.
If the queue is not full when a new message is sent, the message is placed in the queue and
the sender can continue execution without waiting. The link's capacity is finite, however. If
the link is full, the sender must block until space is available in the queue.
o Unbounded capacity – The queue's length is potentially infinite; thus, any number of
messages can wait in it. The sender never blocks

Questions
1) Define an Operating System? What is system's viewpoint of an Operating System?
2) What is OS? Explain multiprogramming and time sharing systems.
3) Explain dual mode operation in OS with a neat block diagram
4) What are system calls? Briefly Explain its types. Write the system call sequence to copy a file from source
to destination
5) What are virtual machines? Explain with block diagram. Point out its benefits
6) Explain the advantages of layered approach, with a diagram.
7) Explain the types of multiprocessor systems and the types of clustering. What are fault tolerant systems?
8) What are the activities for which the operating system is responsible for, in connection with:
i) Process management ii) File management
9) Differentiate between multiprogramming and multiprocessing.
10) What are the different ways in which a Pthread terminates?
11) Explain any two facilities provided for implementing interacting process in programming language and
operating system.
12) What are the essential properties of batch, real time and distributed operating systems?
13) Is separation of mechanism and policy desirable while designing an operating system? Discuss with
example.
14) Explain how an Operating System can be viewed as a resource manager.
15) What is a distributed operating system? What are the advantages of the distributed operating system? (6)
16) What are system calls ? With examples explain different categories of system calls
17) Briefly explain the clustered systems and real time systems.
18) Explain the ‘graceful degradation’ and ‘fault tolerant’ in a multiprocessor system
19) What is a ‘virtual machine’? Explain the just-in-time (JIT) compiler, used in a java virtual machine
20) Define: (i) Micro Kernel (ii) Bootstrap program (iii) Caching (iv) trap (v) Job Pool
21) What are the OS operations? Explain. (6)
22) Give the features of symmetric and asymmetric multiprocessing systems. (4)
23) List and explain the advantages of a multiprocessor system.
24) Differentiate between direct and indirect inter-process communication.
25) Describe the actions an operating system takes to context switch between processes.
26) What is a process? With a state diagram, explain the states of a process. Also write the structure of a process
control block. (8)
27) Define IPC (Inter Process Communication). What are the different methods used for logical
implementation of a message passing system?
28) Describe the implementation of IPC using shared memory and message passing.
29) Briefly explain the common classes of services provided by the various operating systems for helping the
user and for ensuring the efficient operation of the system.
