
Module 1

COMPUTER SYSTEM AND OPERATING SYSTEM OVERVIEW


OVERVIEW OF OPERATING SYSTEM
What is an Operating System?
A program that acts as an intermediary between a user of a computer and the computer hardware
Operating system goals:
Execute user programs and make solving user problems easier
Make the computer system convenient to use
Use the computer hardware in an efficient manner
Computer System Structure
Computer system can be divided into four components
Hardware – provides basic computing resources
CPU, memory, I/O devices
Operating system
Controls and coordinates use of hardware among various applications and users
Application programs – define the ways in which the system resources are used to solve the computing
problems of the users
Word processors, compilers, web browsers, database systems, video games
Users
People, machines, other computers
Four Components of a Computer System

Operating System Definition

OS is a resource allocator
Manages all resources
Decides between conflicting requests for efficient and fair resource use
OS is a control program
Controls execution of programs to prevent errors and improper use of the computer
No universally accepted definition
“Everything a vendor ships when you order an operating system” is a good approximation
But varies wildly



“The one program running at all times on the computer” is the kernel. Everything else is either a
system program (ships with the operating system) or an application program
Computer Startup
bootstrap program is loaded at power-up or reboot
Typically stored in ROM or EPROM, generally known as firmware
Initializes all aspects of system
Loads operating system kernel and starts execution
Computer System Organization
Computer-system operation
One or more CPUs, device controllers connect through common bus providing access to shared memory
Concurrent execution of CPUs and devices competing for memory cycles

Computer-System Operation A modern general-purpose computer system consists of one or more CPUs and a
number of device controllers connected through a common bus that provides access to shared memory (Figure 2).
Each device controller is in charge of a specific type of device (for example, disk drives, audio devices, or video
displays). The CPU and the device controllers can execute in parallel, competing for memory cycles. To ensure
orderly access to the shared memory, a memory controller synchronizes access to the memory.

How a Computer Works:

Initially, when the computer is powered up or rebooted, it needs to have an initial program to run. This initial
program, or bootstrap program, tends to be simple. Typically, it is stored within the computer hardware in read-
only memory (ROM) or electrically erasable programmable read-only memory (EEPROM), known by the general
term firmware. It initializes all aspects of the system, from CPU registers to device controllers to memory contents.
The bootstrap program must know how to load the operating system and how to start executing that system. To
accomplish this goal, the bootstrap program must locate the operating-system kernel and load it into memory. Once
the kernel is loaded and executing, it can start providing services to the system and its users. Some services are
provided outside of the kernel, by system programs that are loaded into memory at boot time to become system
processes, or system daemons that run the entire time the kernel is running. On UNIX, the first system process is
“init,” and it starts many other daemons. Once this phase is complete, the system is fully booted, and the system
waits for some event to occur. The occurrence of an event is usually signaled by an interrupt from either the
hardware or the software. Hardware may trigger an interrupt at any time by sending a signal to the CPU, usually by



way of the system bus. Software may trigger an interrupt by executing a special operation called a system call (also
called a monitor call).

Computer-System Operation
I/O devices and the CPU can execute concurrently
Each device controller is in charge of a particular device type
Each device controller has a local buffer
CPU moves data from/to main memory to/from local buffers
I/O is from the device to local buffer of controller
Device controller informs CPU that it has finished its operation by causing an interrupt

Common Functions of Interrupts


Interrupt transfers control to the interrupt service routine generally, through the interrupt vector, which
contains the addresses of all the service routines
Interrupt architecture must save the address of the interrupted instruction
Incoming interrupts are disabled while another interrupt is being processed to prevent a lost interrupt
A trap is a software-generated interrupt caused either by an error or a user request
An operating system is interrupt driven
Interrupt Handling
The operating system preserves the state of the CPU by storing registers and the program counter
Determines which type of interrupt has occurred:
polling



vectored interrupt system
Separate segments of code determine what action should be taken for each type of interrupt

Interrupt Timeline

I/O Structure
Synchronous I/O: after I/O starts, control returns to user program only upon I/O completion
Wait instruction idles the CPU until the next interrupt
Wait loop (contention for memory access)
At most one I/O request is outstanding at a time, no simultaneous I/O processing
Asynchronous I/O: after I/O starts, control returns to user program without waiting for I/O completion
System call – request to the operating system to allow user to wait for I/O completion
Device-status table contains entry for each I/O device indicating its type, address, and state
Operating system indexes into I/O device table to determine device status and to modify table entry to include interrupt

Direct Memory Access Structure


Used for high-speed I/O devices able to transmit information at close to memory speeds
Device controller transfers blocks of data from buffer storage directly to main memory without CPU
intervention
Only one interrupt is generated per block, rather than the one interrupt per
byte
Storage Structure
Main memory – only large storage media that the CPU can access directly
Secondary storage – extension of main memory that provides large nonvolatile storage capacity
Magnetic disks – rigid metal or glass platters covered with magnetic recording material
Disk surface is logically divided into tracks, which are subdivided into sectors
The disk controller determines the logical interaction between the device and the computer

The CPU can load instructions only from memory, so any programs to run must be stored there. General-purpose
computers run most of their programs from rewritable memory, called main memory (also called random-access
memory, or RAM). Main memory commonly is implemented in a semiconductor technology called dynamic
random-access memory (DRAM). A typical instruction–execution cycle, as executed on a system with a von
Neumann architecture, first fetches an instruction from memory and stores that instruction in the instruction
register. Ideally, we want the programs and data to reside in main memory permanently. This arrangement usually
is not possible for the following two reasons: 1. Main memory is usually too small to store all needed programs
and data permanently. 2. Main memory is a volatile storage device that loses its contents when power is turned off
or otherwise lost. Thus, most computer systems provide secondary storage as an extension of main memory. The
main requirement for secondary storage is that it be able to hold large quantities of data permanently.



The most common secondary-storage device is a magnetic disk, which provides storage for both programs and
data. Most programs (system and application) are stored on a disk until they are loaded into memory.

Storage Hierarchy
Storage systems organized in hierarchy
Speed
Cost
Volatility

Caching – copying information into a faster storage system; main memory can be viewed as the last cache for
secondary storage

Important principle, performed at many levels in a computer (in hardware, operating system, software)
Information in use copied from slower to faster storage temporarily
Faster storage (cache) checked first to determine if information is there
If it is, information used directly from the cache
(fast) If not, data copied to cache and used there
Cache smaller than storage being cached
Cache management important design problem
Cache size and replacement policy
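
The check-the-fast-copy-first idea can be sketched in a few lines of C. This is only an illustration of the principle, not operating-system code; the direct-mapped cache array and the read_from_backing_store() stand-in are hypothetical.

#include <stdbool.h>

#define CACHE_LINES 256

/* hypothetical direct-mapped cache: each line holds one value and the address it came from */
struct cache_line { bool valid; unsigned long tag; int data; };
static struct cache_line cache[CACHE_LINES];

/* stand-in for the slower storage level (assumed) */
static int read_from_backing_store(unsigned long addr) { return (int)(addr * 2); }

int cached_read(unsigned long addr)
{
    unsigned long index = addr % CACHE_LINES;        /* which cache line to check first */
    if (cache[index].valid && cache[index].tag == addr)
        return cache[index].data;                    /* hit: use the fast copy directly */

    /* miss: copy the information from slower storage into the cache, then use it there */
    int value = read_from_backing_store(addr);
    cache[index] = (struct cache_line){ .valid = true, .tag = addr, .data = value };
    return value;
}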

Computer-System Architecture
Most systems use a single general-purpose processor (PDAs through mainframes)
Most systems have special-purpose processors as well
Multiprocessor systems growing in use and importance
Also known as parallel systems, tightly-coupled systems

Advantages include
1. Increased throughput: by increasing the number of processors, we expect to get more work done in less time. The speed-
up ratio with N processors is not N, however; rather, it is less than N.
2. Economy of scale: Multiprocessor systems can cost less than equivalent multiple single-processor systems, because
they can share peripherals, mass storage, and power supplies. If several programs operate on the same set of data, it is cheaper



to store those data on one disk and to have all the processors share them than to have many computers with local disks and
many copies of the data.
3. Increased reliability – graceful degradation or fault tolerance

1. Asymmetric Multiprocessing
2.Symmetric Multiprocessing

How a Modern Computer Works


Symmetric Multiprocessing Architecture

A Dual-Core Design

Clustered Systems

Like multiprocessor systems, but multiple systems working together


Usually sharing storage via a storage-area network (SAN)
Provides a high-availability service which survives failures
Asymmetric clustering has one machine in hot-standby mode
Symmetric clustering has multiple nodes running applications, monitoring each other
Some clusters are for high-performance computing (HPC)
Applications must be written to use parallelization



Operating System Structure
Multiprogramming needed for efficiency
Single user cannot keep CPU and I/O devices busy at all times
Multiprogramming organizes jobs (code and data) so CPU always has one to execute
A subset of total jobs in system is kept in memory



One job selected and run via job scheduling
When it has to wait (for I/O for example), OS switches to another job
Timesharing (multitasking) is logical extension in which CPU switches jobs so frequently that users
can interact with each job while it is running, creating interactive computing
Response time should be < 1 second
Each user has at least one program executing in memory ⇒ process
If several jobs are ready to run at the same time ⇒ CPU scheduling
If processes don’t fit in memory, swapping moves them in and out to run
Virtual memory allows execution of processes not completely in memory
Memory Layout for Multiprogrammed System

Operating-System Operations
Interrupt driven by hardware
Software error or request creates exception or trap
Division by zero, request for operating system service
Other process problems include infinite loop, processes modifying each other or the operating system
Dual-mode operation allows OS to protect itself and other system components
User mode and kernel mode
Mode bit provided by hardware

Provides ability to distinguish when system is running user code or kernel code
Some instructions designated as privileged, only executable in kernel mode
System call changes mode to kernel, return from call resets it to user
Transition from User to Kernel Mode
Timer to prevent infinite loop / process hogging resources
Set interrupt after specific period
Operating system decrements counter
When counter zero generate an interrupt
Set up before scheduling process to regain control or terminate program that exceeds allotted time
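
There is no portable way for an ordinary program to program the hardware timer itself, but the idea (set a timer, regain control when it fires) has a user-space analogy using the POSIX setitimer() call and the SIGALRM signal. This is only an analogy; the one-second period is arbitrary.

#include <signal.h>
#include <sys/time.h>
#include <unistd.h>

static void on_timer(int sig)
{
    (void)sig;
    /* regain control here each time the timer expires */
    write(STDOUT_FILENO, "timer fired: control returns to the handler\n", 44);
}

int main(void)
{
    struct itimerval period = { {1, 0}, {1, 0} };   /* fire every 1 second (arbitrary choice) */
    signal(SIGALRM, on_timer);                      /* install the handler that regains control */
    setitimer(ITIMER_REAL, &period, NULL);

    for (;;)                                        /* simulate a program that never yields */
        pause();                                    /* wait for the next signal */
}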



UNIT - 1

OPERATING SYSTEM FUNCTIONS

Process Management
A process is a program in execution. It is a unit of work within the system. Program is a passive entity,
process is an active entity.
Process needs resources to accomplish its task
CPU, memory, I/O, files
Initialization data
Process termination requires reclaim of any reusable resources
Single-threaded process has one program counter specifying location of next instruction to execute
Process executes instructions sequentially, one at a time, until completion
Multi-threaded process has one program counter per thread
Typically system has many processes, some user, some operating system running concurrently on one or
more CPUs
Concurrency by multiplexing the CPUs among the processes / threads

Process Management Activities


The operating system is responsible for the following activities in connection with process
management:
Creating and deleting both user and system processes
Suspending and resuming processes
Providing mechanisms for process synchronization
Providing mechanisms for process communication
Providing mechanisms for deadlock handling

Memory Management
All data in memory before and after processing
All instructions in memory in order to execute
Memory management determines what is in memory when
Optimizing CPU utilization and computer response to users



Memory management activities
Keeping track of which parts of memory are currently being used and by whom
Deciding which processes (or parts thereof) and data to move into and out of memory
Allocating and deallocating memory space as needed
Storage Management
OS provides uniform, logical view of information storage
Abstracts physical properties to logical storage unit - file
Each medium is controlled by device (i.e., disk drive, tape drive)
Varying properties include access speed, capacity, data-transfer rate, access method (sequential or
random)
File-System management
Files usually organized into directories
Access control on most systems to determine who can access what
OS activities include
Creating and deleting files and directories
Primitives to manipulate files and dirs
Mapping files onto secondary storage
Backup files onto stable (non-volatile) storage media
Mass-Storage Management

Usually disks used to store data that does not fit in main memory or data that must be kept for a “long”
period of time
Proper management is of central importance
Entire speed of computer operation hinges on disk subsystem and its algorithms
MASS STORAGE activities
Free-space management
Storage allocation
Disk scheduling
Some storage need not be fast
Tertiary storage includes optical storage, magnetic tape
Still must be managed
Varies between WORM (write-once, read-many-times) and RW (read-write)
Performance of Various Levels of Storage

Migration of Integer A from Disk to Register
Multitasking environments must be careful to use most recent value, no matter where it is stored in the
storage hierarchy

Multiprocessor environment must provide cache coherency in hardware such that all CPUs have the
most recent value in their cache
Distributed environment situation even more complex
Several copies of a datum can exist

I/O Subsystem
One purpose of OS is to hide peculiarities of hardware devices from the user
I/O subsystem responsible for
Memory management of I/O including buffering (storing data temporarily while it is being transferred),
caching (storing parts of data in faster storage for performance), spooling (the overlapping of output of
one job with input of other jobs)
General device-driver interface
Drivers for specific hardware devices
Protection and Security
Protection – any mechanism for controlling access of processes or users to resources defined by the OS
Security – defense of the system against internal and external attacks
Huge range, including denial-of-service, worms, viruses, identity theft, theft of service
Systems generally first distinguish among users, to determine who can do what
User identities (user IDs, security IDs) include name and associated number, one per user
User ID then associated with all files, processes of that user to determine access control
Group identifier (group ID) allows set of users to be defined and controls managed, then also associated
with each process, file
Privilege escalation allows user to change to effective ID with more rights
DISTRIBUTED SYSTEMS
Computing Environments
Traditional computer
Blurring over time
Office environment
PCs connected to a network, terminals attached to mainframe or minicomputers providing batch
and timesharing
Now portals allowing networked and remote systems access to same resources
Home networks
Used to be single system, then modems
Now firewalled, networked
Client-Server Computing



Dumb terminals supplanted by smart PCs
Many systems now servers, responding to requests generated by clients
Compute-server provides an interface to client to request services (i.e. database)
File-server provides interface for clients to store and retrieve files

Peer-to-Peer Computing

Another model of distributed system


P2P does not distinguish clients and servers
Instead all nodes are considered peers
May each act as client, server or both
Node must join P2P network
Registers its service with central lookup service on network, or
Broadcast request for service and respond to requests for service via discovery protocol
Examples include Napster and Gnutella
Web-Based Computing
Web has become ubiquitous
PCs most prevalent devices
More devices becoming networked to allow web access
New category of devices to manage web traffic among similar servers: load balancers
Client-side operating systems like Windows 95 have evolved into Linux and Windows XP, which can act as both
clients and servers

Open-Source Operating Systems


Operating systems made available in source-code format rather than just binary closed-source
Counter to the copy protection and Digital Rights Management (DRM) movement
Started by Free Software Foundation (FSF), which has the “copyleft” GNU General Public License (GPL)
Examples include GNU/Linux, BSD UNIX (including core of Mac OS X), and Sun Solaris
Operating System Services
One set of operating-system services provides functions that are helpful to the user:
User interface - Almost all operating systems have a user interface (UI)
Varies between Command-Line (CLI), Graphics User Interface (GUI), Batch
Program execution - The system must be able to load a program into memory and to run that program,
end execution, either normally or abnormally (indicating error)
I/O operations - A running program may require I/O, which may involve a file or an I/O device



File-system manipulation - The file system is of particular interest. Obviously, programs need to read
and write files and directories, create and delete them, search them, list file Information, permission
management.

A View of Operating System Services

Operating System Services


One set of operating-system services provides functions that are helpful to the user
Communications – Processes may exchange information, on the same computer or between computers
over a network Communications may be via shared memory or through message passing (packets
moved by the OS)
Error detection – OS needs to be constantly aware of possible errors. They may occur in the CPU and memory
hardware, in I/O devices, and in user programs. For each type of error, the OS should take the appropriate action
to ensure correct and consistent computing. Debugging facilities can greatly enhance the user’s and
programmer’s abilities to efficiently use the system

Another set of OS functions exists for ensuring the efficient operation of the system itself via resource
sharing
Resource allocation - When multiple users or multiple jobs running concurrently, resources must be
allocated to each of them
Many types of resources - Some (such as CPU cycles, main memory, and file storage) may have special
allocation code, others (such as I/O devices) may have general request and release code
Accounting - To keep track of which users use how much and what kinds of computer resources
Protection and security - The owners of information stored in a multiuser or networked computer
system may want to control use of that information, concurrent processes should not interfere with each
other
Protection involves ensuring that all access to system resources is controlled
Security of the system from outsiders requires user authentication, extends to defending external I/O
devices from invalid access attempts
If a system is to be protected and secure, precautions must be instituted throughout it. A chain is only as
strong as its weakest link.
User Operating System Interface - CLI
Command Line Interface (CLI) or command interpreter allows direct command entry



Sometimes implemented in kernel, sometimes by systems program
Sometimes multiple flavors implemented – shells
Primarily fetches a command from user and executes it
Sometimes commands built-in, sometimes just names of programs
If the latter, adding new features doesn’t require shell modification
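
The fetch-and-execute loop of a command interpreter can be sketched with fork() and exec(). This is a bare-bones illustration, not any real shell: commands are assumed to be single program names with no arguments, pipes, or built-ins, and the prompt string is made up.

#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void)
{
    char line[256];
    for (;;) {
        printf("mysh> ");                      /* hypothetical prompt */
        fflush(stdout);
        if (fgets(line, sizeof line, stdin) == NULL)
            break;                             /* end of input ends the shell */
        line[strcspn(line, "\n")] = '\0';      /* strip the trailing newline */
        if (line[0] == '\0')
            continue;                          /* empty command: prompt again */
        if (fork() == 0) {                     /* child executes the command */
            execlp(line, line, (char *)NULL);  /* command is just a program name here */
            perror("exec failed");             /* reached only if exec fails */
            exit(1);
        }
        wait(NULL);                            /* parent waits for the command to finish */
    }
    return 0;
}
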
User Operating System Interface - GUI

User-friendly desktop metaphor interface


Usually mouse, keyboard, and monitor
Icons represent files, programs, actions, etc
Various mouse buttons over objects in the interface cause various actions (provide information, options,
execute function, open directory (known as a folder))
Invented at Xerox PARC
Many systems now include both CLI and GUI interfaces
Microsoft Windows is GUI with CLI “command” shell
Apple Mac OS X as “Aqua” GUI interface with UNIX kernel underneath and shells available
Solaris is CLI with optional GUI interfaces (Java Desktop, KDE)
Bourne Shell Command Interpreter



The Mac OS X GUI

System Calls

Programming interface to the services provided by the OS


Typically written in a high-level language (C or C++)
Mostly accessed by programs via a high-level Application Program Interface (API) rather than direct system call use
Three most common APIs are Win32 API for Windows, POSIX API for POSIX-based systems (including virtually all
versions of UNIX, Linux, and Mac OS X), and Java API for the Java virtual machine (JVM)
Why use APIs rather than system calls? (Note that the system-call names used throughout this text are generic)
Example of System Calls



Example of Standard API
Consider the ReadFile() function in the
Win32 API—a function for reading from a file

A description of the parameters passed to ReadFile():
HANDLE file—the file to be read


LPVOID buffer—a buffer where the data will be read into and written from
DWORD bytesToRead—the number of bytes to be read into the buffer
LPDWORD bytesRead—the number of bytes read during the last read
LPOVERLAPPED ovl—indicates if overlapped I/O is being used
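
A minimal sketch of calling ReadFile() through this API follows. It assumes a Windows build environment and a file named example.txt; error handling is kept to the bare minimum.

#include <windows.h>
#include <stdio.h>

int main(void)
{
    char buffer[128];
    DWORD bytesRead = 0;

    /* open the file to obtain a HANDLE (example.txt is an assumed name) */
    HANDLE file = CreateFileA("example.txt", GENERIC_READ, FILE_SHARE_READ,
                              NULL, OPEN_EXISTING, FILE_ATTRIBUTE_NORMAL, NULL);
    if (file == INVALID_HANDLE_VALUE) {
        fprintf(stderr, "could not open file\n");
        return 1;
    }

    /* read up to sizeof(buffer) bytes; bytesRead reports how many were actually read */
    if (ReadFile(file, buffer, sizeof buffer, &bytesRead, NULL))
        printf("read %lu bytes\n", (unsigned long)bytesRead);

    CloseHandle(file);
    return 0;
}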

System Call Implementation


Typically, a number associated with each system call
System-call interface maintains a table indexed according to these numbers
The system call interface invokes intended system call in OS kernel and returns status of the system call
and any return values
The caller need know nothing about how the system call is implemented
Just needs to obey API and understand what OS will do as a result of the call
Most details of OS interface hidden from programmer by API
Managed by run-time support library (set of functions built into libraries included with compiler)
API – System Call – OS Relationship

System Call Parameter Passing


Often, more information is required than simply identity of desired system call
Exact type and amount of information vary according to OS and call
Three general methods used to pass parameters to the OS
Simplest: pass the parameters in registers
In some cases, may be more parameters than registers
Parameters stored in a block, or table, in memory, and address of block passed as a parameter in a
register
This approach taken by Linux and Solaris
Parameters placed, or pushed, onto the stack by the program and popped off the stack by the operating
system
Block and stack methods do not limit the number or length of parameters being passed

Standard C Library

Parameter Passing via Table
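
On Linux, the register-passing method described above can be observed from user space through the syscall() wrapper, which loads the call number and the parameters into registers before trapping into the kernel. A small, Linux-specific sketch (the message text is arbitrary):

#define _GNU_SOURCE
#include <unistd.h>
#include <sys/syscall.h>

int main(void)
{
    const char msg[] = "hello from a raw system call\n";
    /* the call number (SYS_write) and its parameters are placed in registers,
       then the kernel is entered to perform the write */
    syscall(SYS_write, STDOUT_FILENO, msg, sizeof msg - 1);
    return 0;
}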



Types of System Calls
Process control
File management
Device management
Information maintenance
Communications
Protection

Examples of Windows and Unix System Calls

MS-DOS execution

(a) At system startup (b) running a program



FreeBSD Running Multiple Programs

System Programs
System programs provide a convenient environment for program development and execution. They can be
divided into:
File manipulation
Status information
File modification
Programming language support
Program loading and execution
Communications
Application programs
Most users’ view of the operating system is defined by system programs, not the actual system calls
Provide a convenient environment for program development and execution
Some of them are simply user interfaces to system calls; others are considerably more complex
File management - Create, delete, copy, rename, print, dump, list, and generally manipulate files and
directories
Status information
Some ask the system for info - date, time, amount of available memory, disk space, number of users
Others provide detailed performance, logging, and debugging information
Typically, these programs format and print the output to the terminal or other output devices
Some systems implement a registry - used to store and retrieve configuration information
File modification
Text editors to create and modify files
Special commands to search contents of files or perform transformations of the text
Programming-language support - Compilers, assemblers, debuggers and interpreters sometimes
provided
Program loading and execution- Absolute loaders, relocatable loaders, linkage editors, and overlay-
loaders, debugging systems for higher-level and machine language



Communications - Provide the mechanism for creating virtual connections among processes, users, and
computer systems
Allow users to send messages to one another’s screens, browse web pages, send electronic-mail
messages, log in remotely, transfer files from one machine to another
Operating System Design and Implementation
Design and Implementation of OS not “solvable”, but some approaches have proven successful
Internal structure of different Operating Systems can vary widely
Start by defining goals and specifications
Affected by choice of hardware, type of system
User goals and System goals
User goals – operating system should be convenient to use, easy to learn, reliable, safe, and fast
System goals – operating system should be easy to design, implement, and maintain, as well as flexible,
reliable, error-free, and efficient
Important principle to separate
Policy: What will be done?
Mechanism: How to do it?
Mechanisms determine how to do something, policies decide what will be done
The separation of policy from mechanism is a very important principle, it allows maximum flexibility if
policy decisions are to be changed later
Simple Structure
MS-DOS – written to provide the most functionality in the least space
Not divided into modules
Although MS-DOS has some structure, its interfaces and levels of functionality are not well separated

MS-DOS Layer Structure

Layered Approach

The operating system is divided into a number of layers (levels), each built on top of lower layers. The
bottom layer (layer 0), is the hardware; the highest (layer N) is the user interface.
With modularity, layers are selected such that each uses functions (operations) and services of only
lower-level layers
Traditional UNIX System Structure

UNIX

UNIX – limited by hardware functionality, the original UNIX operating system had limited structuring.
The UNIX OS consists of two separable parts
Systems programs
The kernel
Consists of everything below the system-call interface and above the physical hardware
Provides the file system, CPU scheduling, memory management, and other operating-system
functions; a large number of functions for one level
Layered Operating System

Micro kernel System Structure


Moves as much from the kernel into “user” space
Communication takes place between user modules using message passing
Benefits:
Easier to extend a microkernel
Easier to port the operating system to new architectures
More reliable (less code is running in kernel mode)
More secure
Detriments:
Performance overhead of user space to kernel space communication

Mac OS X Structure

Modules

Most modern operating systems implement kernel modules


Uses object-oriented approach
Each core component is separate
Each talks to the others over known interfaces
Each is loadable as needed within the kernel
Overall, similar to layers but more flexible

Solaris Modular Approach


Virtual Machines
A virtual machine takes the layered approach to its logical conclusion. It treats hardware and the
operating system kernel as though they were all hardware


A virtual machine provides an interface identical to the underlying bare hardware


The host operating system creates the illusion that a process has its own processor (and virtual memory)
Each guest provided with a (virtual) copy of underlying computer
Virtual Machines History and Benefits
First appeared commercially in IBM mainframes in 1972
Fundamentally, multiple execution environments (different operating systems) can share the same
hardware
Protect from each other
Some sharing of files can be permitted, controlled
Communicate with each other and with other physical systems via networking
Useful for development, testing
Consolidation of many low-resource use systems onto fewer busier systems
“Open Virtual Machine Format”, standard format of virtual machines, allows a VM to run within many
different virtual machine (host) platforms

Para-virtualization
Presents guest with system similar but not identical to hardware
Guest must be modified to run on paravirtualized hardware
Guest can be an OS, or, in the case of Solaris 10, applications running in containers
Solaris 10 with Two Containers

VMware Architecture

The Java Virtual Machine

Operating-System Debugging

Debugging is finding and fixing errors, or bugs


OSes generate log files containing error information
Failure of an application can generate core dump file capturing memory of the process
Operating system failure can generate crash dump file containing kernel memory
Beyond crashes, performance tuning can optimize system performance
Kernighan’s Law: “Debugging is twice as hard as writing the code in the first place. Therefore, if you
write the code as cleverly as possible, you are, by definition, not smart enough to debug it.”
DTrace tool in Solaris, FreeBSD, Mac OS X allows live instrumentation on production systems
Probes fire when code is executed, capturing state data and sending it to consumers of those probes

Solaris 10 dtrace Following System Call

Operating System Generation


Operating systems are designed to run on any of a class of machines; the system must be configured for
each specific computer site
SYSGEN program obtains information concerning the specific configuration of the hardware system
Booting – starting a computer by loading the kernel
Bootstrap program – code stored in ROM that is able to locate the kernel, load it into memory, and start
its execution
System Boot
Operating system must be made available to hardware so hardware can start it
Small piece of code – bootstrap loader, locates the kernel, loads it into memory, and starts it
Sometimes two-step process where boot block at fixed location loads bootstrap loader
When power is initialized on the system, execution starts at a fixed memory location. Firmware is used to hold
the initial boot code


UNIT - 2

PROCESS MANAGEMENT

Process Concept
An operating system executes a variety of programs:
Batch system – jobs
Time-shared systems – user programs or tasks
Textbook uses the terms job and process almost interchangeably
Process – a program in execution; process execution must progress in sequential fashion
A process includes:
program counter
stack
data section
Process in Memory

Process State

As a process executes, it changes state
new: The process is being created
running: Instructions are being executed
waiting: The process is waiting for some event to occur
ready: The process is waiting to be assigned to a processor
terminated: The process has finished execution
Diagram of Process State

Process Control Block (PCB)

Information associated with each process


Process state
Program counter
CPU registers
CPU scheduling information
Memory-management information
Accounting information
I/O status information
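
The fields above are often pictured as a structure in C. The sketch below is purely illustrative; real kernels keep far more state (Linux's task_struct, for example), and every field name here is made up.

/* hypothetical, simplified process control block */
enum proc_state { NEW, READY, RUNNING, WAITING, TERMINATED };

struct pcb {
    int              pid;              /* process identifier */
    enum proc_state  state;            /* process state */
    unsigned long    program_counter;  /* address of the next instruction */
    unsigned long    registers[16];    /* saved CPU registers */
    int              priority;         /* CPU-scheduling information */
    void            *page_table;       /* memory-management information */
    unsigned long    cpu_time_used;    /* accounting information */
    int              open_files[16];   /* I/O status information */
};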

CPU Switch From Process to Process

Process Scheduling Queues

Job queue – set of all processes in the system


Ready queue – set of all processes residing in main memory, ready and waiting to execute
Device queues – set of processes waiting for an I/O device
Processes migrate among the various queues

Ready Queue and Various I/O Device Queues

Representation of Process Scheduling

Schedulers
Long-term scheduler (or job scheduler) – selects which processes should be brought into the ready
queue
Short-term scheduler (or CPU scheduler) – selects which process should be executed next and
allocates CPU


Addition of Medium Term Scheduling

Short-term scheduler is invoked very frequently (milliseconds) ⇒ (must be fast)


Long-term scheduler is invoked very infrequently (seconds, minutes) ⇒ (may be slow)
The long-term scheduler controls the degree of multiprogramming
Processes can be described as either:
I/O-bound process – spends more time doing I/O than computations, many short CPU bursts
CPU-bound process – spends more time doing computations; few very long CPU bursts
Context Switch
When CPU switches to another process, the system must save the state of the old process and load the
saved state for the new process via a context switch
Context of a process represented in the PCB
Context-switch time is overhead; the system does no useful work while switching
Time dependent on hardware support
Process Creation
Parent process create children processes, which, in turn create other processes, forming a tree of
processes
Generally, process identified and managed via a process identifier (pid)
Resource sharing
Parent and children share all resources
Children share subset of parent’s resources
Parent and child share no resources
Execution
Parent and children execute concurrently
Parent waits until children terminate
Address space
Child duplicate of parent
Child has a program loaded into it
UNIX examples
fork system call creates new process
exec system call used after a fork to replace the process’ memory space with a new program


Process Creation

C Program Forking Separate Process

#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <sys/types.h>
#include <sys/wait.h>

int main()
{
pid_t pid;
/* fork another process */
pid = fork();
if (pid < 0) { /* error occurred */
fprintf(stderr, "Fork Failed");
exit(-1);
}
else if (pid == 0) { /* child process */
execlp("/bin/ls", "ls", NULL);
}
else { /* parent process */
/* parent will wait for the child to complete */
wait (NULL);
printf ("Child Complete");
exit(0);
}
}

A tree of processes on a typical Solaris

Process Termination
Process executes last statement and asks the operating system to delete it (exit)
Output data from child to parent (via wait)
Process’ resources are deallocated by operating system
Parent may terminate execution of children processes (abort)
Child has exceeded allocated resources
Task assigned to child is no longer required
If parent is exiting, some operating systems do not allow the child to continue if its parent terminates
All children terminated - cascading termination
Interprocess Communication
Processes within a system may be independent or cooperating
Cooperating process can affect or be affected by other processes, including sharing data
Reasons for cooperating processes:
Information sharing
Computation speedup
Modularity
Convenience
Cooperating processes need interprocess communication (IPC)
Two models of IPC
Shared memory
Message passing

Communications Models

Cooperating Processes
Independent process cannot affect or be affected by the execution of another process
Cooperating process can affect or be affected by the execution of another process
Advantages of process cooperation
Information sharing
Computation speed-up
Modularity
Convenience

Producer-Consumer Problem
Paradigm for cooperating processes, producer process produces information that is consumed by a
consumer process
unbounded-buffer places no practical limit on the size of the buffer
bounded-buffer assumes that there is a fixed buffer size
Bounded-Buffer – Shared-Memory Solution
Shared data
#define BUFFER_SIZE 10
typedef struct {
...
} item;
item buffer[BUFFER_SIZE];
int in = 0;
int out = 0;
Solution is correct, but can only use BUFFER_SIZE-1 elements

Bounded-Buffer – Producer
item next_produced;
while (true) {
   /* produce an item in next_produced */
   while (((in + 1) % BUFFER_SIZE) == out)
      ; /* do nothing -- no free buffers */
   buffer[in] = next_produced;
   in = (in + 1) % BUFFER_SIZE;
}

Bounded Buffer – Consumer


item next_consumed;
while (true) {
   while (in == out)
      ; /* do nothing -- nothing to consume */
   /* remove an item from the buffer */
   next_consumed = buffer[out];
   out = (out + 1) % BUFFER_SIZE;
   /* consume the item in next_consumed */
}
Interprocess Communication – Message Passing
Mechanism for processes to communicate and to synchronize their actions
Message system – processes communicate with each other without resorting to shared variables
IPC facility provides two operations:
send(message) – message size fixed or variable
receive(message)
If P and Q wish to communicate, they need to:
establish a communication link between them
exchange messages via send/receive
Implementation of communication link
physical (e.g., shared memory, hardware bus)
logical (e.g., logical properties)

Direct Communication
Processes must name each other explicitly:
send (P, message) – send a message to process P
receive(Q, message) – receive a message from process Q
Properties of communication link
Links are established automatically
A link is associated with exactly one pair of communicating processes
Between each pair there exists exactly one link
The link may be unidirectional, but is usually bi-directional

Indirect Communication
Messages are directed and received from mailboxes (also referred to as ports)
Each mailbox has a unique id
Processes can communicate only if they share a mailbox
Properties of communication link
Link established only if processes share a common mailbox
A link may be associated with many processes
Each pair of processes may share several communication links
Link may be unidirectional or bi-directional
Operations
create a new mailbox
send and receive messages through mailbox
destroy a mailbox
Primitives are defined as:
send(A, message) – send a message to mailbox A
receive(A, message) – receive a message from mailbox A
Mailbox sharing
P1, P2, and P3 share mailbox A
P1, sends; P2 and P3 receive
Who gets the message?
Solutions
Allow a link to be associated with at most two processes
Allow only one process at a time to execute a receive operation
Allow the system to select arbitrarily the receiver. Sender is notified who the receiver was.
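
POSIX message queues behave much like the mailboxes described here: a queue is created under a name, and any process that opens the same name can send to or receive from it. A minimal sketch, with the queue name /demo_mq and the message text chosen only for illustration (link with -lrt on older Linux systems):

#include <fcntl.h>
#include <mqueue.h>
#include <stdio.h>
#include <sys/types.h>

int main(void)
{
    /* create (or open) the mailbox; other processes open the same name to share it */
    mqd_t mq = mq_open("/demo_mq", O_CREAT | O_RDWR, 0600, NULL);
    if (mq == (mqd_t)-1) { perror("mq_open"); return 1; }

    /* send a message into the mailbox ... */
    const char msg[] = "hello via a mailbox";
    mq_send(mq, msg, sizeof msg, 0);

    /* ... and receive it back (normally a different process would do this) */
    char buf[8192];                      /* at least the queue's message size (8192 by default on Linux) */
    unsigned prio;
    ssize_t n = mq_receive(mq, buf, sizeof buf, &prio);
    if (n >= 0) printf("received: %s\n", buf);

    mq_close(mq);
    mq_unlink("/demo_mq");               /* destroy the mailbox */
    return 0;
}
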
Synchronization
Message passing may be either blocking or non-blocking
Blocking is considered synchronous
Blocking send has the sender block until the message is received
Blocking receive has the receiver block until a message is available
Non-blocking is considered asynchronous
Non-blocking send has the sender send the message and continue
Non-blocking receive has the receiver receive a valid message or null

Buffering
Queue of messages attached to the link; implemented in one of three ways
1. Zero capacity – 0 messages
Sender must wait for receiver (rendezvous)
2. Bounded capacity – finite length of n messages
Sender must wait if link full
3. Unbounded capacity – infinite length
Sender never waits
Examples of IPC Systems - POSIX
POSIX Shared Memory
Process first creates shared memory segment
segment_id = shmget(IPC_PRIVATE, size, S_IRUSR | S_IWUSR);
Process wanting access to that shared memory must attach to it
shared_memory = (char *) shmat(segment_id, NULL, 0);
Now the process could write to the shared memory
sprintf(shared_memory, "Writing to shared memory");
When done a process can detach the shared memory from its address space
shmdt(shared_memory);
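
Put together, the snippets above form a complete, if simplified, program. In this sketch a single process creates the segment, writes to it, reads it back, detaches, and removes it; in practice the writer and reader would be separate processes sharing the segment id.

#include <stdio.h>
#include <sys/ipc.h>
#include <sys/shm.h>
#include <sys/stat.h>

int main(void)
{
    const int size = 4096;

    /* create the shared memory segment */
    int segment_id = shmget(IPC_PRIVATE, size, S_IRUSR | S_IWUSR);

    /* attach the segment to this process's address space */
    char *shared_memory = (char *) shmat(segment_id, NULL, 0);

    /* write to, then read from, the shared memory */
    sprintf(shared_memory, "Writing to shared memory");
    printf("*%s*\n", shared_memory);

    /* detach and remove the segment */
    shmdt(shared_memory);
    shmctl(segment_id, IPC_RMID, NULL);
    return 0;
}
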
Examples of IPC Systems - Mach
Mach communication is message based
Even system calls are messages
Each task gets two mailboxes at creation- Kernel and Notify
Only three system calls needed for message transfer
msg_send(), msg_receive(), msg_rpc()
Mailboxes needed for communication, created via
port_allocate()
Examples of IPC Systems – Windows XP
Message-passing centric via local procedure call (LPC) facility
Only works between processes on the same system
Uses ports (like mailboxes) to establish and maintain communication channels
Communication works as follows:
The client opens a handle to the subsystem’s connection port object
The client sends a connection request
The server creates two private communication ports and returns the handle to one of them to the client
The client and server use the corresponding port handle to send messages or callbacks and to listen for
replies

Local Procedure Calls in Windows XP

Communications in Client-Server Systems


Sockets
Remote Procedure Calls
Remote Method Invocation (Java)
Sockets
A socket is defined as an endpoint for communication
Concatenation of IP address and port
The socket 161.25.19.8:1625 refers to port 1625 on host 161.25.19.8
Communication takes place between a pair of sockets
Socket Communication
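
A minimal sketch of the client side in C: create a socket, fill in the server address, and connect. The address 161.25.19.8 and port 1625 are simply the example values quoted above, and error handling is minimal.

#include <arpa/inet.h>
#include <netinet/in.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void)
{
    int sock = socket(AF_INET, SOCK_STREAM, 0);      /* an endpoint for communication */

    struct sockaddr_in server;
    memset(&server, 0, sizeof server);
    server.sin_family = AF_INET;
    server.sin_port   = htons(1625);                 /* port from the example above */
    inet_pton(AF_INET, "161.25.19.8", &server.sin_addr);

    /* the pair (local socket, 161.25.19.8:1625) forms the connection */
    if (connect(sock, (struct sockaddr *)&server, sizeof server) < 0) {
        perror("connect");
        return 1;
    }

    char buf[256];
    ssize_t n = read(sock, buf, sizeof buf - 1);     /* read whatever the server sends */
    if (n > 0) { buf[n] = '\0'; printf("%s\n", buf); }

    close(sock);
    return 0;
}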

Remote Procedure Calls


Remote procedure call (RPC) abstracts procedure calls between processes on networked systems
Stubs – client-side proxy for the actual procedure on the server
The client-side stub locates the server and marshalls the parameters
The server-side stub receives this message, unpacks the marshalled parameters, and performs the procedure on
the server


Execution of RPC

Remote Method Invocation


Remote Method Invocation (RMI) is a Java mechanism similar to RPCs
RMI allows a Java program on one machine to invoke a method on a remote object

Marshalling Parameters

Threads
To introduce the notion of a thread — a fundamental unit of CPU utilization that forms the basis of
multithreaded computer systems
To discuss the APIs for the Pthreads, Win32, and Java thread libraries
To examine issues related to multithreaded programming
Single and Multithreaded Processes

Benefits
Responsiveness
Resource Sharing
Economy
Scalability
Multicore Programming
Multicore systems putting pressure on programmers, challenges include
Dividing activities
Balance
Data splitting
Data dependency
Testing and debugging
Multithreaded Server Architecture

Concurrent Execution on a Single-core System

Parallel Execution on a Multicore System

User Threads
Thread management done by user-level threads library
Three primary thread libraries:
POSIX Pthreads
Win32 threads
Java threads
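
Pthreads, listed above, is the POSIX standard API for threads. A minimal sketch that creates one thread and waits for it (compile with -pthread; the summation is just placeholder work):

#include <pthread.h>
#include <stdio.h>

static int sum;                        /* data shared with the thread */

/* the thread begins control in this function */
static void *runner(void *param)
{
    int upper = *(int *)param;
    for (int i = 1; i <= upper; i++)
        sum += i;
    pthread_exit(0);
}

int main(void)
{
    int upper = 10;                    /* placeholder work item */
    pthread_t tid;

    pthread_create(&tid, NULL, runner, &upper);   /* create the thread */
    pthread_join(tid, NULL);                      /* wait for it to finish */

    printf("sum = %d\n", sum);
    return 0;
}
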
Kernel Threads
Supported by the Kernel
Examples
Windows XP/2000
Solaris
Linux
Tru64 UNIX
Mac OS X
Multithreading Models
Many-to-One
One-to-One
Many-to-Many
Many-to-One
Many user-level threads mapped to single kernel thread
Examples:
Solaris Green Threads
GNU Portable Threads

One-to-One
Each user-level thread maps to kernel thread
Examples
Windows NT/XP/2000
Linux
