OS Guides 1-2
Study Guide
Module 1
INTRODUCTION TO OPERATING SYSTEM
What is a Kernel?
The kernel is the central part of an OS; it manages system resources and is always resident in
memory. It also acts as a bridge between applications and the computer's hardware, and it is
the first program that loads after the bootloader.
The bootloader is a program that loads and starts the boot-time tasks and processes of an OS. It
also places the operating system into memory.
An operating system provides services to users and programs, including the following:
1. User Interface (UI) refers to the part of an OS or device that allows a user to enter and
receive information.
Types of UI:
Command line interface
Batch based interface
Graphical User Interface
2. Program Execution. The OS must have the capability to load a program into memory
and execute that program.
3. File system manipulation. Programs need to read and write files and directories. The
file-handling portion of the OS also allows users to create and delete files by a specific
name and extension, search for a given file, and/or list file information.
4. Input / Output Operations. A program which is currently executing may require I/O, which
may involve a file or another I/O device. The OS is responsible for reading and/or writing data
from I/O devices such as disks, tapes, printers, and keyboards.
5. Communication. Processes often need to exchange information with other processes.
Processes executing on the same computer system or on different computer systems can
communicate using operating system support.
6. Resource Allocation. The OS manages the different computer resources such as CPU
time, memory space, file storage space, I/O devices, etc. and allocates them to different
application programs and users.
7. Error Detection. The operating system should be able to detect errors within the computer
system (CPU, memory, I/O, or user program) and take the appropriate action.
8. Job Accounting. The OS keeps track of the time and resources used by various tasks and
users; this information can be used to track resource usage for a particular user or group of users.
9. Security and Protection. Protection is any mechanism for controlling access of
processes or users to resources defined by the OS. Security is a defense of the system
against internal and external attacks (denial-of-service, worms, viruses, identity theft, theft
of service)
Time-sharing Operating System
Time-sharing, or multitasking, is a logical extension of multiprogramming in which the CPU
switches among jobs so frequently that users can interact with each job while it is running,
creating interactive computing.
• Examples: Unix, Linux, Multics and Windows
The important points about computer system organization are:
One or more CPUs and device controllers connect through a common bus providing access to
shared memory.
The I/O devices and the CPU both execute concurrently.
Some of the processes are scheduled for the CPU and at the same time, some are
undergoing input/output operations.
There are multiple device controllers, each in charge of a particular device such as
keyboard, mouse, printer etc.
There is a buffer available for each of the devices. The input and output data can be stored
in these buffers. A buffer is a region of memory used to temporarily hold data while it is being
moved from one place to another.
The data is moved from memory to the respective device buffers by the CPU for I/O
operations and then this data is moved back from the buffers to memory.
The device controllers use an interrupt to inform the CPU that an I/O operation is completed.
What is an Interrupt?
An operating system is interrupt driven.
An interrupt is a signal emitted by hardware or software when a process or an event needs
immediate attention.
It temporarily alerts the processor to a high-priority event, interrupting the current working
process; once the event has been serviced, the processor returns to its previous task.
Types of Interrupts:
Hardware interrupt
Software interrupt
Hardware interrupt is a signal created and sent to the CPU that is caused by some action taken
by a hardware device.
Example: When a key is pressed or when the mouse is moved.
A software interrupt arises from the illegal or erroneous use of an instruction or data. It often
occurs when application software terminates or when it requests some service from the
operating system.
Example: stack overflow, division by zero, invalid opcode, etc. These are also called traps.
Interrupt Handling
The operating system preserves the state of the CPU by storing the registers and the program
counter. It then determines which type of interrupt has occurred:
Polling – the operating system sends a signal to each device, asking if it has a request
Vectored interrupt system – the requesting device sends the interrupt to the operating system
Separate segments of code determine what action should be taken for each type of interrupt.
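The vectored scheme above can be modelled as a table of handler routines indexed by interrupt number. The following is an illustrative sketch, not part of the guide; the vector numbers and handler names are invented:

```python
# Interrupt vector table: interrupt number -> handler routine.
# In a real OS these would be addresses of kernel code, not Python functions.
def divide_error_trap():
    return "deliver arithmetic-error signal to the offending process"

def keyboard_handler():
    return "read scancode from the keyboard controller"

def timer_handler():
    return "update clock, maybe preempt the current process"

IVT = {0: divide_error_trap, 1: keyboard_handler, 2: timer_handler}

def dispatch(irq):
    """Save CPU state, run the handler for this vector, then resume."""
    saved_state = "registers + program counter"  # preserved by the OS
    handler = IVT[irq]       # vectored: direct lookup, no polling of devices
    result = handler()
    # ...restore saved_state and return to the interrupted task...
    return result
```

With vectoring, the cost of finding the right handler is one table lookup, which is why it is preferred over polling every device.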
A system call is a way for programs to interact with the OS. A computer program makes a system
call when it requests a service from the OS's kernel.
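As a concrete illustration (not from the guide), Python's os module wraps several POSIX system calls directly; the file name used below is arbitrary:

```python
import os
import tempfile

pid = os.getpid()   # getpid() system call: ask the kernel for our process ID
cwd = os.getcwd()   # getcwd() system call: current working directory

# Low-level file I/O goes through the open/write/close system calls.
path = os.path.join(tempfile.gettempdir(), "syscall_demo.txt")
fd = os.open(path, os.O_CREAT | os.O_WRONLY | os.O_TRUNC, 0o644)  # open()
os.write(fd, b"hello via system calls\n")                          # write()
os.close(fd)                                                       # close()
os.remove(path)                                                    # unlink()
```

Each of these calls traps into the kernel, which performs the requested service on the program's behalf.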
Single-Processor System
There is one main CPU capable of executing a general-purpose instruction set, including
instructions from user processes.
Multiprocessor System
Also known as parallel-system or multicore.
First appeared in servers and now in smartphones and tablet computers.
A recent trend in CPU design is to include multiple computing cores on a single chip. Such
multiprocessor systems are termed multicore. They can be more efficient than multiple chips with
single cores.
A dual-core design has two cores on the same chip. Each core has its own register set as well
as its own local cache.
Clustered System
Like multiprocessor systems, but multiple systems working together
Usually sharing storage via a storage-area network (SAN)
Provides a high-availability service which survives failures
Asymmetric clustering has one machine in hot-standby mode
Symmetric clustering has multiple nodes running applications, monitoring each other
Some clusters are for high-performance computing (HPC).
Computing Environment
Office computing environment
o PCs connected to a network, terminals attached to mainframe or minicomputers
providing batch and timesharing
o Now portals allowing networked and remote systems access to same resources
Mobile computing
o Refers to computing on handheld smartphones and tablet computers.
Distributed system
o It is a collection of physically separate, possibly heterogeneous computer systems
that are networked to provide users with access to the various resources that the
system maintains.
o Network operating system is an operating system that provides services across
the network.
Virtualization
o It is a technology that allows operating systems to run as applications within other
operating systems.
o Virtualization plays a major role in cloud computing, as it provides virtual storage
and computing services to cloud clients.
o It is one member of a class of software that also includes emulation. Emulation is
used when the source CPU type is different from the target CPU type.
Example: virtual machines, Oracle VirtualBox
Cloud Computing
It is a type of computing that delivers computing, storage and even applications as a
service across a network.
It is a logical extension of virtualization.
Types of Cloud
Public cloud – cloud available via the Internet
Private cloud – cloud run by a company for that company’s own use
Hybrid cloud – cloud that includes both public and private
Applied Operating System
Study Guide
Module 2
PROCESS CONCEPT
A process is a program in execution.
PROCESS ARCHITECTURE
To put it in simple terms, we write our computer programs in a text file and when we
execute this program, it becomes a process which performs all the tasks mentioned in the
program.
When a program is loaded into memory and becomes a process, it can be divided
into four sections – stack, heap, text, and data. The figure shows a simplified layout of a
process inside main memory.
Stack. This contains temporary data such as function parameters, return addresses, and local variables.
Heap. This is memory that is dynamically allocated to the process during its run time.
Text. This contains the compiled program code.
Data. This contains the global and static variables.
PROCESS STATE
As a process executes, it changes state. The current activity of a process partly defines its state.
Each sequential process may be in one of the following states:
[Figure: process state diagram – states: new, ready, running, waiting, terminated. A new
process is admitted to the ready state, dispatched to running, may be interrupted back to
ready or move to waiting for I/O, and exits to terminated.]
Each process is represented in the operating system by a process control block (PCB) – also
called a task control block. A PCB is a data block or record containing many pieces of
information associated with a specific process, including:
1. Process state. The state may be new, ready, running, waiting, or halted.
2. Program Counter. The program counter indicates the address of the next instruction to
be executed for this process.
3. CPU Registers. These include accumulators, index registers, stack pointers, and
general-purpose registers, plus any condition-code information. Along with the program
counter, this information must be saved when an interrupt occurs, to allow the process to
be continued correctly afterward.
4. CPU Scheduling Information. This information includes a process priority, pointers to
scheduling queues, and any other scheduling parameters.
5. Memory Management Information. This information includes limit registers or page
tables.
6. Accounting Information. This information includes the amount of CPU and real time
used, time limits, account numbers, job or process numbers, and so on.
7. I/O Status Information. This information includes outstanding I/O requests, I/O devices
(such as disks) allocated to this process, a list of open files, and so on.
• The PCB simply serves as the repository for any information that may vary from process to
process.
[Figure: PCB layout – pointer, process state, process number, program counter, registers,
memory limits, …]
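The PCB fields listed above can be sketched as a record. The following is a minimal illustrative Python version; the field names and defaults are my own, not from the guide:

```python
from dataclasses import dataclass, field
from enum import Enum, auto

class State(Enum):
    NEW = auto()
    READY = auto()
    RUNNING = auto()
    WAITING = auto()
    TERMINATED = auto()

@dataclass
class PCB:
    """Process control block: per-process bookkeeping kept by the OS."""
    pid: int                                         # process number
    state: State = State.NEW                         # process state
    program_counter: int = 0                         # next instruction address
    registers: dict = field(default_factory=dict)    # saved CPU registers
    priority: int = 0                                # CPU-scheduling information
    memory_limits: tuple = (0, 0)                    # memory-management information
    cpu_time_used: int = 0                           # accounting information
    open_files: list = field(default_factory=list)   # I/O status information

# On an interrupt, the OS would save the CPU state into the running
# process's PCB and later restore it to resume the process.
pcb = PCB(pid=42)
pcb.state = State.READY
```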
PROCESS CONCEPT
Example of the CPU being switched from one process to another. This is also known as a
context-switch diagram.
[Figure: context switch – process P0 is executing; an interrupt or system call occurs; the OS
saves P0's state into its PCB and reloads P1's state; P1 executes.]
CONCURRENT PROCESSES
• The processes in the system can execute concurrently; that is, many processes may be
multitasked on a CPU.
• A process may create several new processes, via a create-process system call, during
the course of execution. Each of these new processes may in turn create other processes.
• The creating process is the parent process whereas the new processes are the children
of that process.
• When a process creates a sub-process, the sub-process may be able to obtain its
resources directly from the OS or it may use a subset of the resources of the parent
process.
Restricting a child process to a subset of the parent’s resources prevents any process from
overloading the system by creating too many processes.
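Process creation via a create-process system call can be demonstrated with POSIX fork(). This is a minimal sketch using Python's os module (POSIX-only; the exit code 7 is an arbitrary choice for illustration):

```python
import os

def spawn_child():
    """Fork a child process, wait for it, and return its exit status."""
    pid = os.fork()                # create-process system call
    if pid == 0:
        # Child: starts as a copy of the parent's address space.
        os._exit(7)                # terminate via the exit system call
    # Parent: wait for this specific child to terminate.
    _, status = os.waitpid(pid, 0)
    return os.WEXITSTATUS(status)

status = spawn_child()             # 7 once the child has exited
```

Here the parent waits for its child, illustrating the second of the execution alternatives; it could instead continue running concurrently with the child.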
When a process creates a new process, two common implementations exist in terms of execution:
1. The parent continues to execute concurrently with its children.
2. The parent waits until some or all of its children have terminated.
• A process terminates when it finishes its last statement and asks the operating system to
delete it using the exit system call.
• A parent may terminate the execution of one of its children for a variety of reasons, such as:
1. The child has exceeded its usage of some of the resources it has been allocated.
2. The task assigned to the child is no longer required.
3. The parent is exiting, and the OS does not allow a child to continue if its parent
terminates. In such systems, if a process terminates, then all its children must also be
terminated by the operating system. This phenomenon is referred to as cascading
termination.
• A process is independent if it cannot affect or be affected by the other processes. Such a
process has the following characteristics:
1. Its execution is deterministic; that is, the result of the execution depends solely on the
input state.
2. Its execution is reproducible; that is, the result of the execution will always be the same
for the same input.
3. Its execution can be stopped and restarted without causing ill effects.
• A process is cooperating if it can affect or be affected by the other processes. Clearly, any
process that shares data with other processes is a cooperating process. Such a process has
the following characteristics:
1. The results of its execution cannot be predicted in advance, since they depend on the
relative execution sequence.
2. The result of its execution is nondeterministic since it will not always be the same for the
same input.
SCHEDULING CONCEPTS
• The objective of multiprogramming is to have some process running at all times, to maximize
CPU utilization.
• Multiprogramming also increases throughput, which is the amount of work the system
accomplishes in a given time interval (for example, 17 processes per minute).
Example:
Given two processes, P0 and P1.
[Figure: P0 and P1 each alternate CPU bursts (start … stop) with idle periods spent waiting
for input.]
If the system runs the two processes sequentially, then CPU utilization is only 50%.
• The idea of multiprogramming is that if one process is in the waiting state, another process
which is in the ready state goes to the running state.
[Figure: with multiprogramming, P1's CPU bursts fill P0's idle (input-wait) periods and vice
versa, so the CPU stays busy.]
TYPES OF SCHEDULER
• Long-term scheduler (or Job scheduler) selects processes from the secondary
storage and loads them into memory for execution.
• The long-term scheduler executes much less frequently.
• There may be minutes between the creation of new processes in the system.
• The long-term scheduler controls the degree of multiprogramming – the number of
processes in memory.
• Because of the longer interval between executions, the long-term scheduler can afford to
take more time to select a process for execution.
• Short-term scheduler (or CPU scheduler) selects process from among the processes
that are ready to execute, and allocates the CPU to one of them.
• The short-term scheduler must select a new process for the CPU frequently.
• A process may execute for only a few milliseconds before waiting for an I/O request.
• Because of the brief time between executions, the short-term scheduler must be very
fast.
• Medium-term scheduler removes (swaps out) certain processes from memory to
lessen the degree of multiprogramming (particularly when thrashing occurs).
• At some later time, the process can be reintroduced into memory and its execution can
be continued where it left off.
• This scheme is called swapping.
SCHEDULING CONCEPTS
• Switching the CPU to another process requires some time to save the state of the old
process and to load the saved state for the new process. This task is known as a context
switch.
• Context-switch time is pure overhead, because the system does no useful work while
switching and should therefore be minimized.
• Whenever the CPU becomes idle, the OS (particularly the CPU scheduler) must select
one of the processes in the ready queue for execution.
CPU SCHEDULER
CPU scheduling decisions may take place under the following four circumstances:
1. When a process switches from the running state to the waiting state (for example,
I/O request, invocation of wait for the termination of one of the child processes).
2. When a process switches from the running state to the ready state (for example,
when an interrupt occurs).
3. When a process switches from the waiting state to the ready state (for example,
completion of I/O).
4. When a process terminates.
• For circumstances 1 and 4, there is no choice in terms of scheduling. A new process (if
one exists in the ready queue) must be selected for execution. There is a choice,
however, for circumstances 2 and 3.
• When scheduling takes place only under circumstances 1 and 4, the scheduling scheme
is non-preemptive; otherwise, the scheduling scheme is preemptive.
CPU SCHEDULER
• Under non-preemptive scheduling, once the CPU has been allocated to a process, the
process keeps the CPU until it releases it, either by terminating or by switching
states. No process is interrupted until it completes, after which the processor
switches to another process.
• Preemptive scheduling works by dividing the CPU's time into slots and granting a slot to
a process at a time. The time slot may or may not be long enough for the process to
complete. When the burst time of the process is greater than the CPU cycle, the process
is placed back into the ready queue and will execute in its next turn. This scheduling is
used when a process switches to the ready state.
• Different CPU-scheduling algorithms have different properties and may favour one class
of processes over another.
• Many criteria have been suggested for comparing CPU-scheduling algorithms.
• The characteristics used for comparison can make a substantial difference in the
determination of the best algorithm. The criteria should include: CPU Utilization,
Throughput, Turnaround Time, Waiting Time, and Response Time
1. CPU Utilization measures how busy the CPU is. CPU utilization may range from 0 to 100
percent. In a real system, it should range from 40% (for a lightly loaded system) to 90% (for
a heavily loaded system).
2. Throughput is the amount of work completed in a unit of time; in other words, it is the
number of jobs completed per unit of time. The scheduling algorithm must look to maximize
the number of jobs processed per time unit.
3. Turnaround Time measures how long it takes to execute a process. Turnaround time is
the interval from the time of submission to the time of completion. It is the sum of the
periods spent waiting to get into memory, waiting in the ready queue, executing in the CPU,
and doing I/O.
4. Waiting Time is the time a job waits for resource allocation when several jobs are
competing in a multiprogramming system. Waiting time is the total amount of time a process
spends waiting in the ready queue.
5. Response Time is the time from the submission of a request until the system makes the
first response. It is the amount of time it takes to start responding but not the time that it
takes to output that response.
A good CPU scheduling algorithm maximizes CPU utilization and throughput and minimizes
turnaround time, waiting time and response time.
• In most cases, the average measure is optimized. However, in some cases, it is desired
to optimize the minimum or maximum values, rather than the average.
• For example, to guarantee that all users get good service, it may be better to minimize
the maximum response time.
• For interactive systems (time-sharing systems), some analysts suggest that
minimizing the variance in the response time is more important than minimizing the
average response time.
• A system with a reasonable and predictable response may be considered more
desirable than a system that is faster on the average, but is highly variable.
Non-preemptive:
• First-Come First-Served (FCFS)
• Shortest Job First (SJF)
• Priority Scheduling (Non-preemptive)
Preemptive:
• Shortest Remaining Time First (SRTF)
• Priority Scheduling (Preemptive)
• Round-robin (RR)
SUBTOPIC 2: CPU SCHEDULING TECHNIQUES – NON PREEMPTIVE
First-Come First-Served (FCFS) scheduling allocates the CPU to processes in the order in
which they request it.
Example 1:
• Consider the following set of processes that arrive at time 0, with the length of the CPU
burst given in milliseconds (ms).
• Illustrate the Gantt chart and compute for the average waiting time and average
turnaround time
Given:
Process | Arrival Time | Burst Time | Waiting Time   | Turnaround Time
P1      | 0            | 5          | 0 - 0 = 0      | 5 - 0 = 5
P2      | 0            | 3          | 5 - 0 = 5      | 8 - 0 = 8
P3      | 0            | 8          | 8 - 0 = 8      | 16 - 0 = 16
P4      | 0            | 6          | 16 - 0 = 16    | 22 - 0 = 22
Average |              |            | 29/4 = 7.25 ms | 51/4 = 12.75 ms
Gantt Chart: P1 (0-5) | P2 (5-8) | P3 (8-16) | P4 (16-22)
Formulas:
Waiting Time = Start Time - Arrival Time
Turnaround Time = Completion Time - Arrival Time
Example 2:
Process | Arrival Time | Burst Time | Waiting Time   | Turnaround Time
P1      | 0            | 5          | 0 - 0 = 0      | 5 - 0 = 5
P2      | 1            | 3          | 5 - 1 = 4      | 8 - 1 = 7
P3      | 2            | 8          | 8 - 2 = 6      | 16 - 2 = 14
P4      | 3            | 6          | 16 - 3 = 13    | 22 - 3 = 19
Average |              |            | 23/4 = 5.75 ms | 45/4 = 11.25 ms
Gantt Chart: P1 (0-5) | P2 (5-8) | P3 (8-16) | P4 (16-22)
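The FCFS computations in the two examples can be reproduced with a short simulation. This is an illustrative sketch, not part of the guide; the function name and the (name, arrival, burst) tuple layout are my own:

```python
def fcfs(processes):
    """Simulate First-Come First-Served scheduling.

    processes: list of (name, arrival_time, burst_time), given in arrival
    order. Returns {name: (waiting_time, turnaround_time)}.
    """
    clock = 0
    results = {}
    for name, arrival, burst in processes:
        start = max(clock, arrival)            # CPU may idle until arrival
        completion = start + burst
        results[name] = (start - arrival,      # waiting = start - arrival
                         completion - arrival) # turnaround = completion - arrival
        clock = completion
    return results

# Example 2 from the guide: arrivals 0, 1, 2, 3 and bursts 5, 3, 8, 6
r = fcfs([("P1", 0, 5), ("P2", 1, 3), ("P3", 2, 8), ("P4", 3, 6)])
avg_wait = sum(w for w, _ in r.values()) / len(r)   # 23/4 = 5.75 ms
avg_tat = sum(t for _, t in r.values()) / len(r)    # 45/4 = 11.25 ms
```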
• The SJF (Shortest Job First) algorithm associates with each process the length of the
process's next CPU burst.
• When the CPU is available, it is assigned to the process that has the smallest next
CPU burst.
• If two processes have the same length next CPU burst, FCFS scheduling is used to
break the tie.
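Non-preemptive SJF can be sketched in the same style, with the FCFS tie-breaking rule stated above. This is an illustrative sketch; the usage data below reuses the burst times from FCFS Example 1 (my choice, since the original SJF example data did not survive):

```python
def sjf(processes):
    """Non-preemptive Shortest Job First.

    processes: list of (name, arrival, burst). Among the ready processes,
    the smallest next CPU burst wins; arrival time breaks ties (FCFS).
    """
    pending = list(processes)
    clock = 0
    results = {}
    while pending:
        ready = [p for p in pending if p[1] <= clock]
        if not ready:                       # CPU idles until the next arrival
            clock = min(p[1] for p in pending)
            continue
        job = min(ready, key=lambda p: (p[2], p[1]))
        pending.remove(job)
        name, arrival, burst = job
        clock += burst                      # runs to completion: non-preemptive
        results[name] = (clock - arrival - burst, clock - arrival)
    return results

# Bursts 5, 3, 8, 6 all arriving at 0 -> execution order P2, P1, P4, P3
r = sjf([("P1", 0, 5), ("P2", 0, 3), ("P3", 0, 8), ("P4", 0, 6)])
```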
• Priority scheduling (non-preemptive) algorithm is one of the most common scheduling
algorithms in batch systems.
• Priority can be decided based on memory requirements, time requirements or any other
resource requirement.
Example 1:
Consider that the lowest priority value denotes the highest priority. In this case, the process
with priority value (1) has the highest priority.
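Non-preemptive priority scheduling follows the same pattern, selecting by priority value instead of burst length. This is an illustrative sketch; the priority values in the usage line are invented, since the original example data did not survive:

```python
def priority_np(processes):
    """Non-preemptive priority scheduling; a lower value means higher priority.

    processes: list of (name, arrival, burst, priority).
    """
    pending = list(processes)
    clock = 0
    results = {}
    while pending:
        ready = [p for p in pending if p[1] <= clock]
        if not ready:
            clock = min(p[1] for p in pending)
            continue
        job = min(ready, key=lambda p: (p[3], p[1]))  # priority 1 beats 2
        pending.remove(job)
        name, arrival, burst, _ = job
        clock += burst
        results[name] = (clock - arrival - burst, clock - arrival)
    return results

# Assumed data: bursts 5, 3, 8, 6 with priorities 3, 1, 4, 2
# -> execution order P2, P4, P1, P3
r = priority_np([("P1", 0, 5, 3), ("P2", 0, 3, 1),
                 ("P3", 0, 8, 4), ("P4", 0, 6, 2)])
```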
SUBTOPIC 3: CPU SCHEDULING TECHNIQUES–PREEMPTIVE
• Shortest Remaining Time First (SRTF) is the preemptive version of SJF: a newly arriving
process may have a shorter next CPU burst than what is left of the currently executing
process, in which case it preempts the running process.
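SRTF can be sketched as a simple 1 ms tick simulation. This is illustrative only; the usage data reuses the arrival/burst values from FCFS Example 2 (my choice, since the original SRTF example data did not survive):

```python
def srtf(processes):
    """Shortest Remaining Time First (preemptive SJF), simulated 1 ms at a time.

    processes: list of (name, arrival, burst).
    Returns {name: (waiting_time, turnaround_time)}.
    """
    burst = {n: b for n, _, b in processes}
    arrival = {n: a for n, a, _ in processes}
    remaining = dict(burst)
    clock = 0
    done = {}
    while remaining:
        ready = [n for n in remaining if arrival[n] <= clock]
        if not ready:                        # CPU idles until an arrival
            clock += 1
            continue
        # each tick, run the ready job with the least remaining time
        n = min(ready, key=lambda x: (remaining[x], arrival[x]))
        remaining[n] -= 1
        clock += 1
        if remaining[n] == 0:
            del remaining[n]
            done[n] = (clock - arrival[n] - burst[n], clock - arrival[n])
    return done

# Arrivals 0, 1, 2, 3 and bursts 5, 3, 8, 6: P2 preempts P1 at t = 1
r = srtf([("P1", 0, 5), ("P2", 1, 3), ("P3", 2, 8), ("P4", 3, 6)])
```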
PRIORITY SCHEDULING (PREEMPTIVE)
ROUND-ROBIN (RR)
• The CPU scheduler goes around the ready queue, allocating the CPU to each process
for a time interval of up to 1 time quantum.
Example: time quantum = 3 ms. [Gantt chart not recoverable from source]
• The performance of the RR algorithm depends heavily on the size of the time
quantum.
• If the time quantum is too large (infinite), the RR policy degenerates into the FCFS
policy.
• If the time quantum is too small, then the effect of the context-switch time becomes a
significant overhead.
• As a general rule, 80 percent of the CPU bursts should be shorter than the time
quantum.
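Round-robin can be sketched with a ready queue and the 3 ms quantum mentioned above. This is illustrative only; the process set in the usage line (bursts 5, 3, 8, 6 arriving at time 0) is an assumption, since the original example data did not survive:

```python
from collections import deque

def round_robin(processes, quantum):
    """Round-robin scheduling.

    processes: list of (name, arrival, burst), sorted by arrival time.
    New arrivals join the queue before a preempted process re-joins it.
    Returns {name: (waiting_time, turnaround_time)}.
    """
    burst = {n: b for n, _, b in processes}
    arrival = {n: a for n, a, _ in processes}
    remaining = dict(burst)
    queue = deque()
    clock, i, results = 0, 0, {}
    while len(results) < len(processes):
        # admit every process that has arrived by now
        while i < len(processes) and processes[i][1] <= clock:
            queue.append(processes[i][0])
            i += 1
        if not queue:                     # CPU idles until the next arrival
            clock = processes[i][1]
            continue
        n = queue.popleft()
        run = min(quantum, remaining[n])  # up to one time quantum
        clock += run
        remaining[n] -= run
        # admit anyone who arrived during this slice, then requeue n
        while i < len(processes) and processes[i][1] <= clock:
            queue.append(processes[i][0])
            i += 1
        if remaining[n]:
            queue.append(n)               # preempted: back to the ready queue
        else:
            results[n] = (clock - arrival[n] - burst[n], clock - arrival[n])
    return results

r = round_robin([("P1", 0, 5), ("P2", 0, 3), ("P3", 0, 8), ("P4", 0, 6)], 3)
```

Rerunning with a very large quantum makes every slice run to completion, which is exactly the FCFS degeneration described above.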