OSL Lab Manual
Mission of the Institute
To be a globally and socially conscious institute of research and innovation with excellence in professional education, and to take up the challenges of change for the benefit of society.
MIT COLLEGE OF ENGINEERING, KOTHRUD, PUNE
Program Educational Objectives (PEOs), Program Outcomes (POs), Course Outcomes (COs) and their Mapping
Department: Information Technology Academic Year: 2014-15
III. To inculcate knowledge application ability, oral & written communication skills and leadership qualities in graduates to create proficient professionals.
IV. To provide the graduates with a congenial learning environment along with awareness of ethics, and promote self-reliance to cultivate good citizenship.
B. Problem Analysis: An ability to logically analyze a problem, identify and define the computing requirements appropriate to its solution.
C. Design: An ability to design, implement and evaluate a computer-based system, process, component or program to meet desired needs with consideration to health care, safety, environment, etc.
D. Investigation of Complex Problems: An ability to apply research-based knowledge and interpretation of data to provide solutions to complex engineering problems.
E. Development Tools and Packages: An ability to select and use modern software development tools, packages and programming languages to develop new applications and services according to recent trends in Information Technology.
F. Engineering and Society: An ability to demonstrate professional, legal and social responsibilities relevant to engineering practice.
G. Ethics: An ability to apply ethical principles and norms in day-to-day engineering practices.
H. Environmental and Social Impact: An ability to analyze the local and global impact of Information and Communication Technology on individuals, organizations and society.
I. Individual and Teamwork: An ability to function efficiently as an individual as well as with teams to accomplish a common goal in a diverse environment.
J. Communication: An ability to acquire written and verbal skills to communicate effectively with society on complex engineering problems.
K. Project Management: An ability to visualize and work on a multidisciplinary project from inception to deployment as per client requirements.
L. Lifelong Learning: An ability to recognize the importance of professional development and engage in lifelong learning.
Mapping of Program Educational Objectives & Program Outcomes:
Programme Educational Objectives vs. Programme Outcomes (a, b, c, d, e, f, g, h, i, j, k, l):
I
II
III
IV
V
Course Outcomes
i. Describe OS support for processes and threads
ii. Recognize CPU Scheduling, synchronization, and deadlock.
iii. Use C / C++ and Unix commands, and develop various system programs under Linux to
make use of OS concepts related to process synchronization, shared memory, file systems,
etc.
The course structure includes the following:
Code: 314454 | Subject Name: Operating System Laboratory | Short Name: OSL
Weekly load (in Hrs): Practical 04, Tutorials --, PR/DRG --
Semester Examination Scheme of Marks: TW 50, PR 50, Max. Marks 100
General Instructions
All students are to attend laboratory sessions or classes with punctuality and preparation.
Requisite records should be brought to the classes and instructions should be followed
accordingly.
Students should safe-guard their personal belongings and valuables.
Students are encouraged to voice their opinions and engage each other in healthy technical
discussions and debates.
Students should keep their bags properly outside the lab in the predefined area.
Students should switch off the computers, fans, and such other appliances after their sessions.
MIT College of Engineering, Pune.
Information Technology Department
(Operating Systems Lab)
1. Process control system calls: The demonstration of fork, execve and wait system calls along with
zombie and orphan states.
a. Implement the C program in which main program accepts the integers to be sorted.
Main program uses the fork system call to create a new process called a child process.
Parent process sorts the integers using merge sort and waits for child process using wait
system call to sort the integers using quick sort. Also demonstrate zombie and orphan
states.
b. Implement the C program in which main program accepts an integer array. Main
program uses the fork system call to create a new process called a child process. Parent
process sorts an integer array and passes the sorted array to child process through the
command line arguments of execve system call. The child process uses execve system
call to load new program that uses this sorted array for performing the binary search to
search the particular item in the array.
2. Thread management using pthread library: Implement matrix multiplication using multithreading.
3. Thread synchronization using counting semaphores and mutual exclusion using mutex.
Application to demonstrate: producer-consumer problem with counting semaphores and mutex.
4. Deadlock Avoidance Using Semaphores: Implement a deadlock-free solution to the Dining Philosophers problem.
5. Inter process communication in Linux using pipes, FIFOs and signals:
a. Pipes : Full duplex communication between parent and child processes. Parent process writes a
pathname of a file (the contents of the file are desired) on one pipe to be read by child process and
child process writes the contents of the file on second pipe to be read by parent process and displays on
standard output.
b. FIFOs: Full duplex communication between two independent processes. First process accepts
sentences and writes on one pipe to be read by second process and second process counts number of
characters, number of words and number of lines in accepted sentences, writes this output in a text file
and writes the contents of the file on second pipe to be read by first process and displays on standard
output.
c. Signals : Detecting the termination of multiple child processes: Implement the C program to demonstrate the use of the SIGCHLD signal. A parent process creates multiple child processes (minimum three child processes). The parent process should sleep until it has created the required number of child processes. Child processes send the SIGCHLD signal to the parent process to interrupt it from its sleep and force the parent to call wait to collect the status of the terminated child processes.
6. Linux Kernel configuration, compilation and rebooting from the newly compiled kernel.
Requirements:
a. Get a Linux kernel source code from www.kernel.org
b. Menu based configuration of Linux kernel using menuconfig/xconfig/gconfig
c. Creating a monolithic compressed image of a kernel
d. Compilation of kernel modules
e. Installation of kernel modules
f. Finalize installation
7. Kernel space programming: Implement and add a loadable kernel module to Linux kernel,
demonstrate using insmod, lsmod and rmmod commands. A sample kernel space program should print
the "Hello World" while loading the kernel module and "Goodbye World" while unloading the kernel
module.
8. Implement a new system call, add this new system call in the Linux kernel (any kernel source,
any architecture and any Linux kernel distribution) and demonstrate the use of same.
Index
Sr. No. Title
1. Process control system calls
2. Thread management using pthread library
3. Thread synchronization
4. Deadlock Avoidance Using Semaphores
5. Inter process communication in Linux using pipes, FIFOs and signals
6. Linux Kernel configuration, compilation
7. Kernel space programming
8. Implement a new system call
MIT College of Engineering, Pune.
Information Technology Department
(Operating System Lab)
EXPERIMENT NO 01
TITLE OF THE EXPERIMENT : Process control system calls
AIM :
Process control system calls: The demonstration of fork, execve and wait system calls along with zombie and
orphan states.
a. Implement the C program in which main program accepts the integers to be sorted. Main program
uses the fork system call to create a new process called a child process. Parent process sorts the
integers using merge sort and waits for child process using wait system call to sort the integers using
quick sort. Also demonstrate zombie and orphan states.
b. Implement the C program in which main program accepts an integer array. Main program uses the
fork system call to create a new process called a child process. Parent process sorts an integer array
and passes the sorted array to child process through the command line arguments of execve system
call. The child process uses execve system call to load new program that uses this sorted array for
performing the binary search to search the particular item in the array.
OBJECTIVE:
This assignment covers the UNIX process control system calls commonly used for process creation, program execution and process termination. It also covers the process model, including process creation, process destruction and daemon processes.
THEORY:
Process in UNIX:
Process IDs
Each process in a Linux system is identified by its unique process ID, sometimes referred to as pid. Process IDs
are 16-bit numbers that are assigned sequentially by Linux as new processes are created.
When referring to process IDs in a C or C++ program, always use the pid_t typedef, which is defined in <sys/types.h>. A program can obtain the process ID of the process it is running in with the getpid() system call, and it can obtain the process ID of its parent process with the getppid() system call.
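A minimal sketch using these two calls (the printed text is only illustrative):
#include <stdio.h>
#include <unistd.h>
int main(void)
{
    /* print the PID of this process and of its parent */
    printf("The process ID is %d\n", (int)getpid());
    printf("The parent process ID is %d\n", (int)getppid());
    return 0;
}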
Creating Processes
1. Using system
The system function in the standard C library provides an easy way to execute a command from within a
program, much as if the command had been typed into a shell. In fact, system creates a subprocess running
the standard Bourne shell (/bin/sh) and hands the command to that shell for execution.
The system function returns the exit status of the shell command. If the shell itself cannot be run, system returns 127; if another error occurs, system returns -1.
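A minimal sketch of system() in use (the command string "ls -l" is an arbitrary choice):
#include <stdio.h>
#include <stdlib.h>
int main(void)
{
    /* run a shell command and report the raw status value returned by system() */
    int ret = system("ls -l");
    if (ret == -1)
        perror("system");
    else
        printf("system() returned status %d\n", ret);
    return 0;
}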
2. Using fork
A process can create a new process by calling fork. The calling process becomes the parent, and the
created process is called the child. The fork function copies the parent's memory image so that the
new process receives a copy of the address space of the parent. Both processes continue at the
instruction after the fork statement (executing in their respective memory images).
SYNOPSIS
#include <unistd.h>
pid_t fork(void);
The fork function returns 0 to the child and returns the child's process ID to the parent. When fork fails, it returns -1.
When a process creates a child, both parent and child proceed with execution from the point of the
fork. The parent can execute wait to block until the child finishes. The wait function causes the caller
to suspend execution until a child's status becomes available or until the caller receives a signal.
SYNOPSIS
#include <sys/wait.h>
pid_t wait(int *status);
If wait returns because the status of a child is reported, these functions return the process ID of that child. If an error occurs, these functions return -1.
Example:
pid_t childpid;
childpid = wait(NULL);
if (childpid != -1)
printf("Waited for child with pid %ld\n", (long)childpid);
Status values
The status argument of wait is a pointer to an integer variable. If it is not NULL, this function stores the
return status of the child in this location. The child returns its status by calling exit, _exit or return from
main.
A zero return value indicates EXIT_SUCCESS; any other value indicates EXIT_FAILURE.
POSIX specifies six macros for testing the child's return status. Each takes the status value returned by wait as a parameter. The following are two such macros:
SYNOPSIS
#include <sys/wait.h>
WIFEXITED(int stat_val)
WEXITSTATUS(int stat_val)
New program execution within the existing process (The exec Function)
The fork function creates a copy of the calling process, but many applications require the child
process to execute code that is different from that of the parent. The exec family of functions provides
a facility for overlaying the process image of the calling process with a new image. The traditional way to use the fork-exec combination is for the child to execute (with an exec function) the new program while the parent continues to execute the original code.
SYNOPSIS
#include <unistd.h>
1. int execl(const char *path, const char *arg0, ... /*, (char *)0 */);
2. int execle(const char *path, const char *arg0, ... /*, (char *)0, char *const envp[] */);
3. int execlp(const char *file, const char *arg0, ... /*, (char *)0 */);
4. int execv(const char *path, char *const argv[]);
5. int execve(const char *path, char *const argv[], char *const envp[]);
6. int execvp(const char *file, char *const argv[]);
All exec functions return -1 if unsuccessful. In case of success these functions never return to the calling function.
Process Termination
Normally, a process terminates in one of two ways. Either the executing program calls the exit() function, or the program's main function returns. Each process has an exit code: a number that the process returns to its parent. The exit code is the argument passed to the exit function, or the value returned from main.
Zombie Processes
If a child process terminates while its parent is calling a wait function, the child process vanishes and its
termination status is passed to its parent via the wait call. But what happens when a child process terminates
and the parent is not calling wait? Does it simply vanish? No, because then information about its termination (such as whether it exited normally and, if so, what its exit status is) would be lost. Instead, when a child process terminates, it becomes a zombie process.
A zombie process is a process that has terminated but has not been cleaned up yet. It is the responsibility of the parent process to clean up its zombie children. The wait functions do this, too, so it's not necessary to
track whether your child process is still executing before waiting for it. Suppose, for instance, that a program
forks a child
process, performs some other computations, and then calls wait. If the child process has not terminated at
that point, the parent process will block in the wait call until the child process finishes. If the child process
finishes before the parent process calls wait, the child process becomes a zombie. When the parent process
calls wait, the zombie child's termination status is extracted, the child process is deleted, and the wait call
returns immediately.
vfork:
1. Creates a new process without fully copying the address space of the parent.
2. vfork guarantees that the child runs first, until the child calls exec or exit.
3. When the child calls either of these two functions (exit, exec), the parent resumes.
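A minimal sketch of this behaviour (the printed messages are only illustrative):
#include <stdio.h>
#include <stdlib.h>
#include <sys/types.h>
#include <unistd.h>
int main(void)
{
    pid_t pid = vfork();
    if (pid == 0)
    {
        /* child runs first, borrowing the parent's address space */
        write(1, "Child runs first\n", 17);
        _exit(0);            /* the child must call _exit or an exec function */
    }
    /* the parent is suspended until the child has called _exit above */
    printf("Parent resumes after the child\n");
    return 0;
}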
INPUT:
OUTPUT:
Example 1
Example 2
Example 3
Example 4
#include <stdio.h>
#include <sys/types.h>
#include <sys/wait.h>
void show_return_status(void)
{
pid_t childpid;
int status;
childpid = wait(&status);
if (childpid == -1)
perror("Failed to wait for child");
else if (WIFEXITED(status))
printf("Child %ld terminated with return status %d\n",
(long)childpid, WEXITSTATUS(status));
}

#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <sys/wait.h>
int main(void)
{
pid_t childpid;
childpid = fork();
if (childpid == -1) {
perror("Failed to fork");
return 1;
}
if (childpid == 0) {
/* child code */
execl("/bin/ls", "ls", "-l", NULL);
perror("Child failed to exec ls");
return 1;
}
if (childpid != wait(NULL)) {
/* parent code */
perror("Parent failed to wait due to signal or error");
return 1;
}
return 0;
}
Example 7
Example 8
Example 9: Program where the parent process sorts array elements in ascending order and the child process sorts array elements in descending order.
Example 10: Count the number of vowels in a given sentence; implement the program using vfork().
Example: Executing new program
1. execv1.c
oneIntArg.c
2. array1.c
arrayDisp.c
3. array.c
asc.c
dsc.c
/* Zombie State */
#include<stdio.h>
#include<sys/types.h>
#include<sys/wait.h>
#include<unistd.h>
int main()
{
    int *status = NULL;
    pid_t cpid;
    cpid = fork();
    if( cpid == 0 )
    {
        /* child terminates immediately and remains a zombie until the parent reaps it */
        //sleep(5);
    }
    else
    {
        sleep(5);            /* parent sleeps without calling wait, so the child stays a zombie */
        //cpid = wait(status);
    }
    return 0;
}
/* Orphan State */
#include<stdio.h>
#include<sys/types.h>
#include<unistd.h>
int main()
{
    pid_t cpid;
    cpid = fork();
    if( cpid == 0 )
        sleep(20);   /* child outlives the parent and becomes an orphan, adopted by init */
    else
        sleep(2);    /* parent terminates first */
    return 0;
}
/* Implement the C program in which main program accepts the integers to be sorted.
Main program uses the fork system call to create a new process called a child process.
Parent process sorts the integers using merge sort and waits for child process using wait system call to sort the
integers using quick sort. */
#include<stdio.h>
#include<stdlib.h>
#include<unistd.h>
#include<sys/types.h>
#include<sys/wait.h>
#define MAX 20
/*
The <sys/types.h> header file includes definitions for system data types such as pid_t.
All the data types defined under sys/types.h are suffixed with _t.
*/
int n;
int arr[MAX];

void accept_nos(int arr[])
{
    int i;
    printf("\n\t Enter the number of elements: ");
    scanf("%d", &n);
    printf("\n\t Enter the elements:\n");
    for(i=0;i<n;i++)
        scanf("%d", &arr[i]);
}

void display(int arr[])
{
    int i;
    printf("\n");
    for(i=0;i<n;i++)
        printf("%d ", arr[i]);
    printf("\n");
}//end of display

//Quick sort
int partition(int a[], int low, int high)
{
    int piv,i,j,temp;
    piv=a[low];
    i=low+1;
    j=high;
    while(1)
    {
        while(i<=high && a[i]<=piv)
            i++;
        while(a[j]>piv)
            j--;
        if(i<j)
        {
            temp=a[i];
            a[i]=a[j];
            a[j]=temp;
        }
        else
            break;
    }//end of while
    temp=a[low];
    a[low]=a[j];
    a[j]=temp;
    return j;
}//end of partition

void rec_quick(int arr[], int low, int high)
{
    int j=0;
    if(low<high)
    {
        j = partition(arr,low,high);
        rec_quick(arr,low,j-1);   //first sub-array
        rec_quick(arr,j+1,high);  //second sub-array
    }//end of if
}

//Merge sort
void mergeSort(int arr[], int low, int mid, int high)
{
    int i,m,k,l,temp[MAX];
    l=low;
    i=low;
    m=mid+1;
    while((l<=mid)&&(m<=high)){
        if(arr[l]<=arr[m]){
            temp[i]=arr[l];
            l++;
        }
        else{
            temp[i]=arr[m];
            m++;
        }
        i++;
    }
    if(l>mid){
        for(k=m;k<=high;k++){
            temp[i]=arr[k];
            i++;
        }
    }
    else{
        for(k=l;k<=mid;k++){
            temp[i]=arr[k];
            i++;
        }
    }
    for(k=low;k<=high;k++){
        arr[k]=temp[k];
    }
}

void m_partition(int arr[], int low, int high)
{
    int mid;
    if(low<high){
        mid=(low+high)/2;
        m_partition(arr,low,mid);
        m_partition(arr,mid+1,high);
        mergeSort(arr,low,mid,high);
    }
}

int main()
{
    pid_t cpid;
    int *status=NULL;
    accept_nos(arr);
    display(arr);
    cpid = fork();
    if( cpid == 0 )
    {
        /* child process sorts the array using quick sort */
        rec_quick(arr,0,n-1);
        display(arr);
    }
    else
    {
        /* parent process waits for the child process and then sorts the array using merge sort */
        cpid = wait(status); //wait till the child process terminates
        printf("\n\n\t Parent process resumed after the execution of child process with PID %d", cpid);
        m_partition(arr,0,n-1);
        display(arr);
    }
    return 0;
}
/* Implement the C program in which main program accepts an integer array. Main program uses the fork
system call to create a new process called a child process. Parent process sorts an integer array and passes
the sorted array to child process through the command line arguments of execve system call. The child
process uses execve system call to load new program that uses this sorted array for performing the binary
search to search the particular item in the array. */
Part I
#include<stdio.h>
#include<stdlib.h>
#include<unistd.h>
#include<sys/types.h>
#include<sys/wait.h>
#define MAX 20

void sort_asc(int arr[], int n)
{
    int i, j, temp;
    for(i=0;i<n-1;i++)
    {
        for(j=i+1;j<n;j++)
        {
            if(arr[i]>arr[j])
            {
                temp=arr[i];
                arr[i]=arr[j];
                arr[j]=temp;
            }
        }
    }
    for(i=0;i<n;i++)
        printf("%d ",arr[i]);
}

int main()
{
    pid_t pid,cpid;
    int status;
    int i,n;
    int arr[MAX];
    char no[MAX][12];          /* string form of each sorted number */
    char *args[MAX+2];         /* argv for execve: program name + numbers + NULL */
    char *envp[] = { NULL };
    printf("Enter the number of elements: ");
    scanf("%d",&n);
    for(i=0;i<n;i++)
        scanf("%d",&arr[i]);
    sort_asc(arr,n);           /* parent sorts before creating the child */
    args[0] = "./b";
    for(i=0;i<n;i++)
    {
        sprintf(no[i],"%d",arr[i]);   /* convert each integer to a string */
        args[i+1] = no[i];
    }
    args[n+1] = NULL;
    pid = fork();
    if (pid == 0)
    {
        /* child: load the binary search program with the sorted numbers as arguments */
        execve("./b",args,envp);
        perror("execve failed");
        exit(1);
    }
    else
    {
        cpid = wait(&status);  /* parent waits for the child to finish */
    }
    return 0;
}
Part II
#include<stdio.h>
#include<stdlib.h>
#define MAX 10

int binary_search(int a[], int n, int key)
{
    int low = 0, high = n - 1, mid;
    while (low <= high)
    {
        mid = (low + high) / 2;
        if (a[mid] == key)
            return mid;
        else if (a[mid] < key)
            low = mid + 1;
        else
            high = mid - 1;
    }
    return -1;
}

int main(int argc, char *argv[])
{
    int r, key, a[MAX], n;
    n = argc - 1;                 /* number of array elements received on the command line */
    printf("\n\t Hello\n");
    for(r=0;r<n;r++)
    {
        a[r] = atoi(argv[r+1]);   /* each argument is one sorted number */
        printf("%d\n",a[r]);
    }
    printf("Enter key:");
    scanf("%d",&key);
    r = binary_search(a, n, key);
    printf("%d",r);
    return 0;
}
References:
William Stallings, Operating System: Internals and Design Principles, 8th Edition, Prentice Hall.
MIT College of Engineering, Pune.
Information Technology Department
(Operating System Lab)
EXPERIMENT NO - 02
TITLE OF THE EXPERIMENT : Thread management using pthread library
1. Aim/Title/Problem Statement:
Implement matrix multiplication using multithreading. The application should use pthread_create, pthread_join and pthread_exit. In the program, every thread must return a value that is collected with pthread_join in the main function. The final sum of the row-column multiplication must be done by the main thread (main function).
2. OBJECTIVE:
3. THEORY:
Thread Basics:
Threads, like processes, are a mechanism to allow a program to do more than one thing at a time. As with processes, threads appear to run concurrently. Conceptually, a thread exists within a process. A
thread is a unit of execution, associated with a process with its own ID, stack, stack pointer, PC, condition
codes and general purpose registers. Multiple threads associated with a process run concurrently in the
context of that process, sharing its code, data, heap, shared libraries and open files.
Modern view of a process:
Multiple threads can be associated with a process. Each thread has its own logical control flow (Sequence of
PC values). Each thread shares the same code, data and kernel context. Each thread has its own thread id (tid). A thread context switch is faster than a process context switch. Threads, unlike processes, are not organized in a rigid parent-child hierarchy. The threads associated with a process form a pool of peers, independent of which threads were created by which other threads. The main thread is distinguished from other threads only in the sense that it is always the first thread to run in the process. A thread can kill any of its peers, or wait for any of its peers to terminate. Each peer can read and write the same shared data.
Threads associated with a process:
Thread levels
User-Level Threads (ULT): The kernel is not aware of thread activity, but it is still managing process activity. When a thread makes a blocking system call, the whole process will be blocked, but for the thread library that thread is still in the running state. So thread states are independent of the process states.
Advantages of ULT:
Thread switching does not involve the kernel, so there is no mode switching. Scheduling can be application specific: choose the best algorithm. ULTs can run on any OS; only a thread library is needed.
Disadvantages of ULT:
Most system calls are blocking and the kernel blocks processes, so all threads within the process will be blocked. The kernel can only assign processes to processors. Two threads within the same process can't run simultaneously on two processors.
Kernel-Level Threads (KLT): All thread management is done by the kernel. There is no thread library, but an API (system calls) to the kernel thread facility exists. The kernel maintains context information for the process and the threads. Switching between threads requires the kernel. Scheduling is performed on a per-thread basis.
Advantages of KLT:
The kernel can simultaneously schedule many threads of the same process on many processors. Blocking is done at the thread level. Kernel routines can themselves be multithreaded.
Disadvantages of KLT:
Thread switching within the same process involves the kernel. For example, if we have 2 mode switches per thread switch, this results in a significant slowdown.
Combined approach: The idea is to combine the best of both approaches. Thread creation is done in user space. The bulk of scheduling and synchronization of threads is done in user space. The programmer may adjust the number of KLTs. Solaris is an example of an OS with this approach.
Creating Threads:
The pthread_create() function gives back a thread identifier that can be used in other calls:
int pthread_create(pthread_t *thread, const pthread_attr_t *attr, void *(*start_routine)(void *), void *arg);
The second parameter is a pointer to a thread attribute object that you can use to set the thread's attributes. A null pointer means to use the default attributes. The third parameter is a pointer to the function the thread is to execute. The final parameter is the argument to the function. By using void pointers here, any sort of data could potentially be sent to the thread function provided proper casts are applied.
Terminating threads
The thread terminates implicitly when its top level thread routine returns.
The thread terminates explicitly by calling the routine: pthread_exit();
Another thread terminates the current thread by calling the pthread_cancel function with the ID of the current
thread.
The pthread_join function blocks until thread tid terminates, assigns the (void *) pointer returned by the thread routine to the location pointed to by thread_return, and then reaps any memory resources held by the terminated thread.
If thread_return is not NULL, the return value of tid is stored in the location pointed to by thread_return. The
return value of tid is either the argument it gave to pthread_exit or PTHREAD_CANCELED if tid was canceled.
Returning Results from Threads
If the second argument we pass to pthread_join is non-null, the thread's return value will be placed in the location pointed to by that argument. The thread return value, like the thread argument, is of type void*. If we want to pass back a single int or other small number, we can do this easily by casting the value to void* and then casting back to the appropriate type after calling pthread_join.
Thread Synchronization
In order to effectively work together the threads in a program usually need to share information or coordinate
their activity. Many ways to do this have been devised and such techniques usually go under the name of
thread synchronization.
Mutual Exclusion
When writing multi-threaded programs it is frequently necessary to enforce mutually exclusive access to a
shared data object. This is done with mutex objects. The idea is to associate a mutex with each shared data
object and then require every thread that wishes to use the shared data object to first lock the mutex before
doing so.
1. Create and initialize a mutex of type pthread_mutex_t.
2. Call pthread_mutex_lock() to gain exclusive access to the shared data object.
3. Call pthread_mutex_unlock() to release the exclusive access and allow another thread to use the shared data object.
It is important to understand that if a thread attempts to lock the mutex while some other thread has it
locked, the second thread is blocked until the first releases the mutex with pthread_mutex_unlock(). Other
synchronization primitives used in POSIX threads are: semaphore and condition variables.
4. INPUT:
Two matrices as inputs
5. OUTPUT:
Resultant matrix
Example 1:
#include <stdio.h>
#include <stdlib.h>
#include <pthread.h>
void *thread(void *vargp)
{
    printf("Hello World!\n");
    return NULL;
}
int main()
{
    pthread_t tid;
    pthread_create(&tid,NULL,thread,NULL);
    exit(0);
}
Example 2:
#include <stdio.h>
#include <pthread.h>
void *thread(void *vargp)
{
    printf("Hello World!\n");
    return NULL;
}
int main()
{
    pthread_t tid;
    pthread_create(&tid,NULL,thread,NULL);
    pthread_join(tid,NULL);
    return 0;
}
Example 3:
#include <stdio.h>
#include <pthread.h>
typedef struct
{
    int num1;
    int num2;
} NUM;
void *sum_thread(void *arg)     /* thread routine: adds the two numbers */
{
    NUM *N2 = (NUM *)arg;
    int a = N2->num1;
    int b = N2->num2;
    int sum = a + b;
    return (void *)(long)sum;   /* pass the result back through pthread_join */
}
int main()
{
    pthread_t th1;
    NUM N1;
    int n1, n2;
    void *ret;
    printf("Enter num1\n");
    scanf("%d",&n1);
    printf("Enter num2\n");
    scanf("%d",&n2);
    N1.num1 = n1;
    N1.num2 = n2;
    pthread_create(&th1, NULL, sum_thread, &N1);
    pthread_join(th1, &ret);
    printf("sum = %d\n", (int)(long)ret);
    return 0;
}
Example 4: A shared variable protected by semaphores.
Write multithreaded matrix multiplication using POSIX threads
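The listing for this exercise is not reproduced in this copy of the manual. The following is only a rough sketch, not the official solution: it assumes fixed 2x2 matrices and uses one thread per result element, with each thread returning its value through pthread_exit so that main collects it in pthread_join.
#include <stdio.h>
#include <pthread.h>
#define N 2
int A[N][N] = {{1, 2}, {3, 4}};
int B[N][N] = {{5, 6}, {7, 8}};
int C[N][N];
/* each thread computes one element of the product and returns it */
void *multiply(void *arg)
{
    int idx = *(int *)arg;              /* encoded as row * N + column */
    int row = idx / N, col = idx % N, k, sum = 0;
    for (k = 0; k < N; k++)
        sum += A[row][k] * B[k][col];
    pthread_exit((void *)(long)sum);
}
int main(void)
{
    pthread_t tid[N * N];
    int idx[N * N];
    int i, j;
    void *ret;
    for (i = 0; i < N * N; i++)
    {
        idx[i] = i;
        pthread_create(&tid[i], NULL, multiply, &idx[i]);
    }
    for (i = 0; i < N; i++)
    {
        for (j = 0; j < N; j++)
        {
            /* main collects each thread's value in pthread_join and stores the result */
            pthread_join(tid[i * N + j], &ret);
            C[i][j] = (int)(long)ret;
            printf("%d ", C[i][j]);
        }
        printf("\n");
    }
    return 0;
}
Compile with gcc -pthread; larger or user-supplied matrices only change how A, B and the thread count are set up.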
7. Conclusion:
8. FAQs:
1. What is a thread?
2. Differentiate between a process and a thread.
3. What are the steps in thread creation?
4. Differentiate between ULT and KLT.
MIT College of Engineering, Pune.
Information Technology Department
(Operating System Lab)
EXPERIMENT NO - 03
TITLE OF THE EXPERIMENT : Thread synchronization
1. Aim/Title/Problem Statement:
Thread synchronization using counting semaphores and mutual exclusion using mutex.
2. OBJECTIVE:
3. THEORY:
Thread Synchronization
In order to effectively work together the threads in a program usually need to share information or coordinate
their activity. Many ways to do this have been devised and such techniques usually go under the name of
thread synchronization.
Mutual Exclusion
When writing multi-threaded programs it is frequently necessary to enforce mutually exclusive access to a
shared data object. This is done with mutex objects. The idea is to associate a mutex with each shared data
object and then require every thread that wishes to use the shared data object to first lock the mutex before
doing so.
Here are the particulars:
1. Create and initialize a mutex of type pthread_mutex_t.
2. Call pthread_mutex_lock() to gain exclusive access to the shared data object.
3. Call pthread_mutex_unlock() to release the exclusive access and allow another thread to use the shared data object.
It is important to understand that if a thread attempts to lock the mutex while some other thread has it
locked, the second thread is blocked until the first releases the mutex with pthread_mutex_unlock().
Other synchronization primitives used in POSIX threads are: semaphore and condition variables.
mutex functions:
pthread_mutex_init()
pthread_mutex_lock()
pthread_mutex_unlock()
Mutex Locks
A mutex is a special variable that can be either in the locked state or the unlocked state. If the mutex is
locked, it has a distinguished thread that holds or owns the mutex. If no thread holds the mutex, we say
the mutex is unlocked, free or available. When the mutex is free and a thread attempts to acquire the
mutex, that thread obtains the mutex and is not blocked. The mutex or mutex lock is the simplest and
most efficient thread synchronization mechanism. Programs use mutex locks to preserve critical
sections and to obtain exclusive access to resources. A mutex is meant to be held for short periods of
time. Mutex functions are not thread cancellation points and are not interrupted by signals.
A mutex with the default attributes can be initialized statically by assigning PTHREAD_MUTEX_INITIALIZER to the variable. For mutex variables that are dynamically allocated or that don't have the default mutex attributes, call pthread_mutex_init to perform initialization.
int pthread_mutex_init(
pthread_mutex_t *mutex,
const pthread_mutexattr_t *restrict attr
);
The mutex parameter of pthread_mutex_init is a pointer to the mutex to be initialized. Pass NULL for
the attr parameter of pthread_mutex_init to initialize a mutex with the default attributes.
POSIX has two functions, pthread_mutex_lock and pthread_mutex_trylock for acquiring a mutex.
The pthread_mutex_lock function blocks until the mutex is available, while the
pthread_mutex_trylock always returns immediately. The pthread_mutex_unlock function releases
the specified mutex. All three functions take a single parameter, mutex, a pointer to a mutex.
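As an illustrative sketch (the counter variable and thread names are assumptions, not part of the original listings), two threads can safely update shared data as follows:
#include <stdio.h>
#include <pthread.h>
int counter = 0;                                    /* shared data object */
pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;   /* mutex protecting it */
void *worker(void *arg)
{
    int i;
    for (i = 0; i < 100000; i++)
    {
        pthread_mutex_lock(&lock);     /* enter critical section */
        counter++;
        pthread_mutex_unlock(&lock);   /* leave critical section */
    }
    return NULL;
}
int main(void)
{
    pthread_t t1, t2;
    pthread_create(&t1, NULL, worker, NULL);
    pthread_create(&t2, NULL, worker, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    printf("counter = %d\n", counter);  /* always 200000 because the increments are serialized */
    return 0;
}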
A POSIX semaphore is a variable of type sem_t with associated atomic operations for initializing, incrementing
and decrementing its value.
The POSIX:SEM Semaphore Extension defines two types of semaphores, named and unnamed. The difference
between unnamed and named semaphores is analogous to the difference between ordinary pipes and named
pipes (FIFOs).
#include <semaphore.h>
sem_t sem;
The POSIX Extension does not specify the underlying type of sem_t. One possibility is that sem_t acts like a
file descriptor.
int sem_init(sem_t *sem, int pshared, unsigned value);
If successful, sem_init initializes sem. POSIX does not specify the return value on success, but the rationale mentions that sem_init may be required to return 0 in a future specification. If unsuccessful, sem_init returns -1.
The sem_init function initializes the unnamed semaphore referenced by sem to value. The value
parameter cannot be negative. Our examples use unnamed semaphores with pshared equal to 0, meaning
that the semaphore can be used only by threads of the process that initializes the semaphore. If pshared is
nonzero, any process that can access sem can use the semaphore.
The following code segment initializes an unnamed semaphore to be used by threads of the process.
sem_t semA;
if (sem_init(&semA, 0, 1) == -1)
perror("Failed to initialize semaphore semA");
The sem_post function implements classic semaphore signaling. If no threads are blocked on sem, then
sem_post increments the semaphore value. If at least one thread is blocked on sem, then the semaphore
value is zero. In this case, sem_post causes one of the threads blocked on sem to return from its sem_wait
function, and the semaphore value remains at zero.
The sem_wait function implements the classic semaphore wait operation. If the semaphore value is 0,
the calling thread blocks until it is unblocked by a corresponding call to sem_post.
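As an illustrative sketch of how these calls combine for the producer-consumer application named in the aim (buffer size, item count and the producer/consumer names are assumptions, not the manual's listing):
#include <stdio.h>
#include <pthread.h>
#include <semaphore.h>
#define SIZE 5
int buffer[SIZE], in = 0, out = 0;
sem_t empty, full;                               /* counting semaphores */
pthread_mutex_t m = PTHREAD_MUTEX_INITIALIZER;   /* mutex protecting the buffer */
void *producer(void *arg)
{
    int i;
    for (i = 1; i <= 10; i++)
    {
        sem_wait(&empty);              /* wait for a free slot */
        pthread_mutex_lock(&m);
        buffer[in] = i;
        in = (in + 1) % SIZE;
        pthread_mutex_unlock(&m);
        sem_post(&full);               /* signal one more item */
    }
    return NULL;
}
void *consumer(void *arg)
{
    int i, item;
    for (i = 1; i <= 10; i++)
    {
        sem_wait(&full);               /* wait for an item */
        pthread_mutex_lock(&m);
        item = buffer[out];
        out = (out + 1) % SIZE;
        pthread_mutex_unlock(&m);
        sem_post(&empty);              /* signal one more free slot */
        printf("Consumed %d\n", item);
    }
    return NULL;
}
int main(void)
{
    pthread_t p, c;
    sem_init(&empty, 0, SIZE);
    sem_init(&full, 0, 0);
    pthread_create(&p, NULL, producer, NULL);
    pthread_create(&c, NULL, consumer, NULL);
    pthread_join(p, NULL);
    pthread_join(c, NULL);
    return 0;
}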
Readers /Writers problem
There is a data area shared among a number of processes. The data area could be a file, a block of main
memory, or even a bank of processor registers. There are a number of processes that only read the data area
(readers) and a number that only write to the data area (writers).
Thus, readers are processes that are not required to exclude one another and writers are processes that are
required to exclude all other processes, readers and writers alike.
The Producer/Consumer problem: The general statement is this: there are one or more producers generating some type of data (records, characters) and placing these in a buffer. There is a single consumer that is taking items out of the buffer one at a time.
The system is to be constrained to prevent the overlap of buffer operations. That is, only one agent (producer or consumer) may access the buffer at any one time. The problem is to make sure that the producer won't try to add data into the buffer if it is full, and that the consumer won't try to remove data from an empty buffer.
4. INPUT:
Number of producers and consumers
5. OUTPUT:
A shared file is created and items are read by the reader process.
1. What is mutual exclusion and what are the requirements to enforce mutual exclusion?
2. What is meant by critical section?
3. Explain the concept of semaphore?
4. Explain wait and signal functions associated with semaphores.
5. What is meant by binary and counting semaphores?
7. PRACTICE ASSIGNMENTS / EXERCISE / MODIFICATIONS: (Max 5)
Example-2 Reader Writer using Semaphore
Example-3 Reader / Writer using Mutex
Example-4 Reader / Writer where shared data is a file
MIT College of Engineering, Pune.
Information Technology Department
(Operating System Lab)
EXPERIMENT NO - 04
TITLE OF THE EXPERIMENT : Deadlock Avoidance Using Semaphores
1. Aim: Implement the deadlock-free solution to Dining Philosophers problem to illustrate the
problem of deadlock and/or starvation that can occur when many synchronized threads are
competing for limited resources.
2. OBJECTIVE:
3. THEORY:
Deadlock
A set of processes is deadlocked if each process in the set is waiting for an event that only another
process in the set can cause. Because all the processes are waiting, none of them will ever cause
any of the events that could wake up any of the other members of the set, and all the processes
continue to wait forever. For this model, we assume that processes have only a single thread and
that there are no interrupts possible to wake up a blocked process. The no-interrupts condition is
needed to prevent an otherwise deadlocked process from being awakened.
Coffman et al. (1971) showed that four conditions must hold for there to be a deadlock:
1. Mutual exclusion condition. Each resource is either currently assigned to exactly one process or
is available.
2. Hold and wait condition. Processes currently holding resources granted earlier can request new
resources.
3. No preemption condition. Resources previously granted cannot be forcibly taken away from a
process. They must be explicitly released by the process holding them.
4. Circular wait condition. There must be a circular chain of two or more processes, each of which
is waiting for a resource held by the next member of the chain.
Dealing with deadlock
Prevention: if any of the four necessary conditions is denied, a deadlock cannot occur. So we look for restrictions that prevent these necessary conditions from ever holding in the system.
Deadlock avoidance: as with prevention, we look for strategies that ensure deadlock cannot occur in the system.
Recovery: detect when deadlock has occurred and recover from it.
Deadlock Avoidance
In most systems, resources are requested one at a time. The system must be able to decide whether
granting a resource is safe or not and only make the allocation when it is safe. Thus the question
arises: Is there an algorithm that can always avoid deadlock by making the right choice all the time?
The answer is a qualified yes: we can avoid deadlocks, but only if certain information is available in advance.
At any instant of time, there is a current state consisting of E, A, C, and R. A state is said to be safe if it
is not deadlocked and there is some scheduling order in which every process can run to completion
even if all of them suddenly request their maximum number of resources immediately. It is easiest to
illustrate this concept by an example using one resource.
Suppose we have a state in which A has 3 instances of the resource but may need as many as 9 eventually. B currently has 2 and may need 4 altogether, later. Similarly, C also has 2 but may need an additional 5. A total of 10 instances of the resource exist, so with 7 resources already allocated, there are 3 still free.
The state of Fig. (a) is safe because there exists a sequence of allocations that allows all processes to complete. Namely, the scheduler could simply run B exclusively, until it asked for and got two more instances of the resource, leading to the state of Fig. (b). When B completes, we get the state of Fig. (c). Then the scheduler can run C, leading eventually to Fig. (d). When C completes, we get Fig. (e). Now A can get the six instances of the resource it needs and also complete. Thus the state of Fig. (a) is safe because the system, by careful scheduling, can avoid deadlock. Now suppose we have the initial state shown in Fig. (a), but this time A requests and gets another resource, giving Fig. (b). Can we find a sequence that is guaranteed to work? Let us try. The scheduler could run B until it asked for all its resources, as shown in Fig. (c).
Eventually, B completes and we get the situation of Fig. (d). At this point we are stuck. We only have four instances of the resource free, and each of the active processes needs five. There is no sequence that guarantees completion. Thus the allocation decision that moved the system from Fig. (a) to Fig. (b) went from a safe state to an unsafe state. Running A or C next starting at Fig. (b) does not work either. In retrospect, A's request should not have been granted. It is worth noting that an unsafe state is not a deadlocked state. Starting at Fig. (b), the system can run for a while. In fact, one process can even complete. Furthermore, it is possible that A might release a resource before asking for any more, allowing C to complete and avoiding deadlock altogether. Thus the difference between a safe state and an unsafe state is that from a safe state the system can guarantee that all processes will finish; from an unsafe state, no such guarantee can be given.
The Dining Philosophers problem: Five philosophers sit around a table, which is set with 5 plates (one for each philosopher), 5
chopsticks, and a bowl of rice. Each philosopher alternately thinks and eats. To eat, she needs the two
chopsticks next to her plate. When finished eating, she puts the chopsticks back on the table, and
continues thinking. Philosophers are processes, and chopsticks are resources.
The system may deadlock: if all 5 philosophers take up their left chopstick simultaneously, the system
will halt (unless one of them puts one back). A philosopher may starve if her neighbors have
alternating eating patterns.
repeat
  wait(chopstick[i]);
  wait(chopstick[(i+1) mod 5]);
  ...
  eat
  ...
  signal(chopstick[i]);
  signal(chopstick[(i+1) mod 5]);
  ...
  think
  ...
until false;
This solution can suffer from deadlock (e.g., all philosophers decide to eat at the same time and all pick up their left chopstick first) and/or starvation. Some ways to avoid deadlock are:
1. Allow at most four philosophers to sit at the table at the same time.
2. Allow a philosopher to pick up her chopsticks only if both are available (this must be done in a critical section).
3. Use an asymmetric solution: an odd-numbered philosopher picks up the left chopstick first, an even-numbered philosopher picks up the right chopstick first.
4. INPUT
5. OUTPUT:
The program output shows the deadlock free solution of the dining philosopher problem.
#include<stdio.h>
#include<unistd.h>
#include<semaphore.h>
#include<pthread.h>
#define N 5
#define THINKING 0
#define HUNGRY 1
#define EATING 2
#define LEFT (ph_num+4)%N
#define RIGHT (ph_num+1)%N
sem_t mutex;
sem_t S[N];
int state[N];
int phil_num[N]={0,1,2,3,4};
void test(int ph_num)
{
    /* a philosopher may eat only if neither neighbour is eating */
    if(state[ph_num]==HUNGRY && state[LEFT]!=EATING && state[RIGHT]!=EATING)
    {
        state[ph_num]=EATING;
        printf("Philosopher %d takes fork %d and %d\n",ph_num+1,LEFT+1,ph_num+1);
        printf("Philosopher %d is Eating\n",ph_num+1);
        sem_post(&S[ph_num]);
    }
}
void take_fork(int ph_num)
{
    sem_wait(&mutex);
    state[ph_num]=HUNGRY;
    printf("Philosopher %d is Hungry\n",ph_num+1);
    test(ph_num);
    sem_post(&mutex);
    sem_wait(&S[ph_num]);   /* block until both forks are available */
    sleep(1);
}
void put_fork(int ph_num)
{
    sem_wait(&mutex);
    state[ph_num]=THINKING;
    printf("Philosopher %d putting fork %d and %d down\n",ph_num+1,LEFT+1,ph_num+1);
    printf("Philosopher %d is thinking\n",ph_num+1);
    test(LEFT);             /* let blocked neighbours eat if they now can */
    test(RIGHT);
    sem_post(&mutex);
}
void *philospher(void *num)
{
    int *i=num;
    while(1)
    {
        sleep(1);
        take_fork(*i);
        put_fork(*i);
    }
}
int main()
{
    int i;
    pthread_t thread_id[N];
    sem_init(&mutex,0,1);
    for(i=0;i<N;i++)
        sem_init(&S[i],0,0);
    for(i=0;i<N;i++)
    {
        pthread_create(&thread_id[i],NULL,philospher,&phil_num[i]);
        printf("Philosopher %d is thinking\n",i+1);
    }
    for(i=0;i<N;i++)
        pthread_join(thread_id[i],NULL);
}
MIT College of Engineering, Pune.
Information Technology Department
(Operating System Lab)
EXPERIMENT NO 05-1
TITLE OF THE EXPERIMENT : Inter process communication in Linux using pipe
Aim:
Pipes : Full duplex communication between parent and child processes. Parent process writes a
pathname of a file (the contents of the file are desired) on one pipe to be read by child process and
child process writes the contents of the file on second pipe to be read by parent process and displays on
standard output.
FIFOs: Full duplex communication between two independent processes. First process accepts
sentences and writes on one pipe to be read by second process and second process counts number of
characters, number of words and number of lines in accepted sentences, writes this output in a text file
and writes the contents of the file on second pipe to be read by first process and displays on standard
output.
Signals : Detecting the termination of multiple child processes. Implement the C program to demonstrate the use of the SIGCHLD signal. A parent process creates multiple child processes (minimum three child processes). The parent process should sleep until it has created the required number of child processes. Child processes send the SIGCHLD signal to the parent process to interrupt it from its sleep and force the parent to call wait to collect the status of the terminated child processes.
1.OBJECTIVE:
Learn Inter-process communication among processes using pipe, named pipe and signals under the
Linux environment.
2.THEORY:
a) Pipes
Pipes are the oldest form of UNIX System IPC and are provided by all UNIX systems. Pipes have two
limitations.
Historically, they have been half duplex (i.e., data flows in only one direction).
Pipes can be used only between processes that have a common ancestor. Normally, a pipe is
created by a process, that process calls fork, and the pipe is used between the parent and the
child.
SYNOPSIS
#include <unistd.h>
int pipe(int fildes[2]);
The pipe function creates a communication buffer that the caller can access through the file descriptors
fildes[0] and fildes[1]. The data written to fildes[1] can be read from fildes[0] on a first-in-first-out
basis.
A pipe has no external or permanent name, so a program can access it only through its two descriptors.
For this reason, a pipe can be used only by the process that created it and by descendants that inherit
the descriptors on fork.
When a process calls read on a pipe, the read returns immediately if the pipe is not empty. If the pipe is
empty, the read blocks until something is written to the pipe, as long as some process has the pipe open
for writing. On the other hand, if no process has the pipe open for writing, a read from an empty pipe
returns 0, indicating an end-of-file condition.
Example:
int filedes[2];
int rc;
rc = pipe(filedes);
if (rc == -1)
{
    perror("pipe failed");
    exit(1);
}
fork and pipe
A single process would not normally use a pipe alone. Pipes are used when two processes wish to communicate in a one-way fashion. A pipe opened before fork becomes shared between the two processes.
Kernel activity
Data flowing in a pipe is managed directly by the kernel; we can think of the data as flowing through the kernel. For this purpose the write() and read() system calls are used. The following file descriptors can be used for blocking I/O with the write() and read() system calls: fildes[0] for reading and fildes[1] for writing.
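As a small illustrative sketch (the message text and buffer size are assumptions), a parent can write into fildes[1] and its child can read from fildes[0]:
#include <stdio.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>
int main(void)
{
    int fd[2];
    char buf[64];
    pid_t pid;
    if (pipe(fd) == -1)
    {
        perror("pipe failed");
        return 1;
    }
    pid = fork();
    if (pid == 0)
    {
        /* child: read from fd[0] */
        close(fd[1]);
        int n = read(fd[0], buf, sizeof(buf) - 1);
        if (n > 0)
        {
            buf[n] = '\0';
            printf("Child read: %s\n", buf);
        }
    }
    else
    {
        /* parent: write into fd[1] */
        close(fd[0]);
        write(fd[1], "hello through the pipe", 22);
        close(fd[1]);          /* signals end-of-file to the reader */
        wait(NULL);
    }
    return 0;
}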
#include <unistd.h>
ssize_t read(int fd, void *buf, size_t count);
read() attempts to read up to count bytes from file descriptor fd into the buffer starting at buf. If count is zero, read returns 0 and has no other results.
Return value: on success, the number of bytes read is returned (zero indicates end of file); on error, -1 is returned.
#include <unistd.h>
ssize_t write(int fd, const void *buf, size_t count);
write() writes up to count bytes to the file referenced by the file descriptor fd from the buffer starting at buf. On success the number of bytes written is returned (zero indicates nothing was written); on error -1 is returned.
3.INPUT:
4.OUTPUT:
File contents are read by the child, sent to the parent, and displayed on standard output.
5.FAQS:
Example-1:
Example-2:
Example-3:
Example-4:
Example-5: Producer/Consumer using pipe Producer.c
Consumer.c
Prod1.c
Cons1.c
Example-7: Pass an integer array from child to parent using a pipe
Example-8: Pass an integer array from parent to child, then child adds all elements and result is
transferred to the parent using a pipe
MIT College of Engineering, Pune.
Information Technology Department
(Operating System Lab)
EXPERIMENT NO 05-2
TITLE OF THE EXPERIMENT : Inter process communication in Linux using named
pipe
Aim:
Full duplex communication between two independent processes. First process accepts sentences and
writes on one pipe to be read by second process and second process counts number of characters,
number of words and number of lines in accepted sentences, writes this output in a text file and writes
the contents of the file on second pipe to be read by first process and displays
on standard output.
1. OBJECTIVE:
Learn Inter-process communication (Named pipe) as a data transfer mechanism among processes.
2.THEORY:
Named pipes exist as a device special file in the file system. Processes of different ancestry can
share data through a named pipe. When all I/O is done by sharing processes, the named pipe
remains in the file system for later use.
FIFO
Two unrelated processes can communicate via a FIFO. A FIFO is also known as a named pipe. That is, it's like a pipe, except that it has a name. The name is that of a file that multiple processes can open, read and write.
Creating a FIFO
mkfifo()
mknod()
#include <sys/types.h>
#include <sys/stat.h>
#include <fcntl.h>
#include <unistd.h>
int mknod(const char *pathname, mode_t mode, dev_t dev);
The mknod system call creates the file referenced by pathname. The type of the file created and its
access permissions are determined by the mode value. Most often the mode for the file is created by
ORing a symbolic constant indicating the file type with the file access permissions.
S_IFIFO FIFO special file
S_IFDIR directory
The dev argument for mknod is used only when a character or block special file is specified. When
generating a FIFO, the dev argument should be left as 0.
#include <sys/stat.h>
S_IRWXU 00700 user (file owner) has read, write and execute permission
S_IRUSR 00400 user has read permission
S_IWUSR 00200 user has write permission
S_IXUSR 00100 user has execute permission
FIFO Operations
An open() system call or library function should be used to physically open up a channel to the pipe. Normally, blocking occurs on a FIFO: if the FIFO is opened for reading, the process will block until some other process opens it for writing. If this behavior is undesirable, the O_NONBLOCK flag can be used in an open() call to disable the default blocking action.
Reading from and writing to a named pipe are very similar to reading and writing from or to a normal file.
The standard C library function calls read( ) and write( ) can be used for reading from and writing to a
named pipe. These operations ( read and write ) are blocking, by default.
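A minimal sketch of creating and writing to a FIFO from C (the path /tmp/myfifo and the message are assumptions):
#include <stdio.h>
#include <sys/types.h>
#include <sys/stat.h>
#include <fcntl.h>
#include <unistd.h>
int main(void)
{
    const char *path = "/tmp/myfifo";
    mkfifo(path, 0666);                /* create the named pipe if it does not exist */
    int fd = open(path, O_WRONLY);     /* blocks until some process opens it for reading */
    write(fd, "hello fifo\n", 11);
    close(fd);
    return 0;
}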
Limitations
Named pipes can only be used for communication among processes on the same host machine.
Named pipes can be created only in the local file system of the host, that is, you cannot create a
named pipe on the NFS file system. Due to the basic blocking nature of pipes, careful programming is required for the client and server in order to avoid deadlocks.
3.INPUT:
4.OUTPUT:
File contents are read by the child, sent to the parent, and displayed on standard output.
5.FAQS:
fifoserver.c
#include <stdio.h>
#include <stdlib.h>
#include <sys/types.h>
#include <sys/stat.h>
#include <unistd.h>
#include <linux/stat.h>
#define FIFO_FILE "MYFIFO"
int main()
{
    FILE *fp;
    char readbuf[80];
    /* create the FIFO if it does not already exist */
    umask(0);
    mknod(FIFO_FILE, S_IFIFO | 0666, 0);
    while(1) {
        fp = fopen(FIFO_FILE, "r");
        fgets(readbuf, 80, fp);
        printf("Received string: %s\n", readbuf);
        fclose(fp);
    }
    return 0;
}
fifoclient.c
#include <stdio.h>
#include <stdlib.h>
#define FIFO_FILE "MYFIFO"
int main(int argc, char *argv[])
{
    FILE *fp;
    if ( argc != 2 ) {
        printf("USAGE: fifoclient [string]\n");
        exit(1);
    }
    if ((fp = fopen(FIFO_FILE, "w")) == NULL) {
        perror("fopen");
        exit(1);
    }
    fputs(argv[1], fp);
    fclose(fp);
    return(0);
}
MIT College of Engineering, Pune.
Information Technology Department
(Operating System Lab)
EXPERIMENT NO 05-3
TITLE OF THE EXPERIMENT : Inter process communication in Linux using signal
1. Aim:
Detecting the termination of multiple child processes: Implement the C program to demonstrate the use of the SIGCHLD signal. A parent process creates multiple child processes (minimum three child processes). The parent process should sleep until it has created the required number of child processes. Child processes send the SIGCHLD signal to the parent process to interrupt it from its sleep and force the parent to call wait to collect the status of the terminated child processes.
2. OBJECTIVE:
Learn Inter-process communication (UNIX signals) as a data transfer mechanism among processes.
3. THEORY:
Signal Basics
A signal is an event generated by the UNIX and Linux systems in response to some condition, upon
receipt of which a process may in turn take some action. We use the term raise to indicate the
generation of a signal, and the term catch to indicate the receipt of a signal. Signals are raised by
some error conditions, such as memory segment violations, floating-point processor errors, or illegal
instructions. They are generated by the shell and terminal handlers to cause interrupts and can also be
explicitly sent from one process to another as a way of passing information or modifying behavior. In
all these cases, the programming interface is the same. Signals can be raised, caught and acted upon, or
(for some at least) ignored.
Signal names are defined by including the header file signal.h. They all begin with SIG and include
those listed in the following table. Signals, to be short, are various notifications sent to a process in
order to notify it of various "important" events.
Signal Name Description
SIGHUP Hangup
SIGTERM Termination
If a process receives one of these signals without first arranging to catch it, the process will be
terminated immediately. Usually, a core dump file is created. This file, called core and placed in the
current directory, is an image of the process that can be useful in debugging.
Signals interrupt whatever the process is doing, and force it to handle them immediately. Each signal
has an integer number that represents it (1, 2 and so on). A symbolic name is associated with each
signal that is usually defined in the file /usr/include/signal.h or one of the files included by it directly
or indirectly (HUP, INT and so on).
1. Sending Signals Using The Keyboard: The most common way of sending signals to processes is
using the keyboard. There are certain key presses that are interpreted by the system as requests to
send signals to the process with which we are interacting:
Ctrl-C
Pressing this key causes the system to send an INT signal (SIGINT) to the running process. By default,
this signal causes the process to immediately terminate.
Ctrl-Z
Pressing this key causes the system to send a TSTP signal (SIGTSTP) to the running process. By
default, this signal causes the process to suspend execution.
Ctrl-\
Pressing this key causes the system to send a QUIT signal (SIGQUIT) to the running process. By default, this signal causes the process to immediately terminate and dump core.
2. Sending Signals from The Command Line: the kill command is used to send signals to the
processes running or currently active in the system.
kill
The parameters required to use the kill command are the signal name (or number) and a process ID, for example: kill -INT <pid>.
The above command has the same effect as pressing Ctrl-C in the shell that runs that process. If no
signal name or number is specified, the default is to send a TERM signal to the process, which
normally causes its termination, and hence the name of the kill command. On most shells, using the 'fg'
command will resume execution of the process (that was suspended with Ctrl-Z), by sending it a
CONT signal.
3. Sending Signals Using System Calls: A third way of sending signals to processes is by using the
kill system call. This is the normal way of sending a signal from one process to another. There are
two common functions: kill and raise
The signal is sent to the process with the given pid. If pid is 0, the signal is sent to all processes in the caller's process group, except system processes. It returns 0 on success and -1 on an error condition.
The raise(sig) function sends the signal sig to the executing program.
An application program can specify a function called a signal handler to be invoked when a specific
signal is received. When a signal handler is invoked on receipt of a signal, it is said to catch the signal.
A process can deal with a signal in one of the following ways: it can let the default action occur, it can ignore the signal, or it can catch the signal with a handler function of its own.
If we install no signal handlers of our own, the runtime environment sets up a set of default signal
handlers for our program. For example, the default signal handler for the TERM signal calls the exit()
system call. The default handler for the ABRT signal calls the abort() system call, which causes the
process's memory image to be dumped into a file named 'core' in the process's current directory, and
then exit.
The signal() System Call: The signal() system call is used to set a signal handler for a single
signal type. signal() accepts a signal number and a pointer to a signal handler function, and sets
that handler to accept the given signal.
#include <signal.h>
void (*signal(int sig, void (*func)(int)))(int);
The function signal() will call the func function if the process receives the signal sig.
Return values: signal() returns the previous value of the signal handler, or SIG_ERR on error.
Function func() can have three values: SIG_DFL, SIG_IGN or A function address
SIG_DFL: a pointer to a system default function SIG_DFL(), which will terminate the process upon
receipt of sig.
SIG_IGN: a pointer to system ignore function SIG_IGN() which will disregard the sig action.
For our convenience, there are two pre-defined signal handler functions that we can use, instead of
writing our own: SIG_IGN and SIG_DFL.
SIG_IGN:
Causes the process to ignore the specified signal. For example, in order to ignore Ctrl-C completely
(useful for programs that must NOT be interrupted in the middle, or in critical sections), write this:
signal(SIGINT, SIG_IGN);
SIG_DFL:
Causes the system to set the default signal handler for the given signal (i.e. the same handler the
system would have assigned for the signal when the process started running): signal(SIGTSTP,
SIG_DFL);
4. INPUT:
5. OUTPUT:
6. FAQS:
Example-1:
#include<stdio.h>
#include<stdlib.h>
#include<unistd.h>
#include<signal.h>
void sigproc(void);
void quitproc(void);
int main()
{
    signal(SIGINT,sigproc);
    signal(SIGQUIT,quitproc);
    printf("");
    for(;;)
        pause();
}
void sigproc(void)
{
signal(SIGINT,sigproc);
printf("You have pressed ctrl-c\n");
}
void quitproc(void)
{
printf("ctrl\\pressed to quit\n");
exit(0);
}
Example-2
#include<stdio.h>
#include<stdlib.h>
#include<signal.h>
#include<sys/types.h>
#include<unistd.h>
void sigint();
void sigquit();
int main()
{
    pid_t pid;
    pid=fork();   /* fork rather than vfork: the parent must keep running to send the signals */
    if(pid<0)
{
perror("fork error\n");
exit(1);
}
if(pid==0)
{
//child
signal(SIGINT,sigint);
signal(SIGQUIT,sigquit);
for(;;);
}
else
{
//parent
printf("\nPARENT: sending SIGINT\n\n");
kill(pid,SIGINT);
sleep(3);
printf("\nPARENT: sending SIGQUIT\n\n");
kill(pid,SIGQUIT);
sleep(3);
}
}
void sigint()
{
signal(SIGINT,sigint);//reset signal
printf("\nCHILD: I have received a SIGINT\n");
}
void sigquit()
{
printf("My DADDY has killed me\n");
exit(0);
}
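The assignment program itself (SIGCHLD from multiple children) is not listed above; the following is only a rough sketch of the idea, with three children, arbitrary sleep times, and waitpid with WNOHANG inside the SIGCHLD handler:
#include <stdio.h>
#include <stdlib.h>
#include <signal.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>
void sigchld_handler(int signo)
{
    pid_t pid;
    int status;
    /* reap every child that has terminated so far */
    while ((pid = waitpid(-1, &status, WNOHANG)) > 0)
        printf("Parent: collected child %d, exit status %d\n",
               (int)pid, WEXITSTATUS(status));
}
int main(void)
{
    int i;
    signal(SIGCHLD, sigchld_handler);
    for (i = 0; i < 3; i++)              /* minimum three child processes */
    {
        if (fork() == 0)
        {
            sleep(i + 1);                /* child works for a while */
            exit(i);                     /* terminating raises SIGCHLD in the parent */
        }
    }
    /* parent sleeps; each SIGCHLD interrupts the sleep and the handler calls wait */
    for (i = 0; i < 3; i++)
        sleep(10);
    return 0;
}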
MIT College of Engineering, Pune.
Information Technology Department
(Operating System Lab)
EXPERIMENT NO 06
TITLE OF THE EXPERIMENT : Kernel compilation and configuration
Aim:
Linux Kernel configuration, compilation and rebooting from the newly compiled kernel.
Requirements:
a. Get the Linux kernel source code from www.kernel.org
b. Menu-based configuration of the Linux kernel using menuconfig/xconfig/gconfig
c. Creating a monolithic compressed image of the kernel
d. Compilation of kernel modules
e. Installation of kernel modules
f. Finalize installation
1.OBJECTIVE:
2.THEORY:
We sometimes need to compile the kernel because certain software or modules expect particular kernel configuration flags to be set when the kernel is built. In such scenarios we rebuild the kernel with the required configuration flags enabled. When a new kernel is released, we can also compile the new kernel and install it on our system, although the generally recommended way is to use the distribution's (Ubuntu, Red Hat, etc.) kernel update mechanism.
In a Linux system, use uname -r or cat /proc/version to find the current kernel version.
Step 1 : Download the latest kernel
Go to the http://kernel.org/ website and get the latest version of the kernel source; here we use 3.3.3.
wget http://www.kernel.org/pub/linux/kernel/v3.0/linux-3.3.3.tar.bz2 and extract the source code:
tar xf linux-3.3.3.tar.bz2
cd linux-3.3.3
The linux-3.3.3 folder contains many files and folders; some of them serve the following purposes:
arch - architecture-specific code for each architecture to which Linux has been ported
crypto - contain cryptography implementations like AES, DES
drivers - all device drivers modules (IDE, SCSI, Ethernet device, Wireless)
fs - all file-system implementation code (ext4, FAT, NTFS)
net - all network protocol implementations (IPv4, IPv6, tcp, 802.11)
While compile the kernel source code, we need configuration file. that configuration file contain lots of
variable to help to understand what are the modules we need to compile. Using make command we can build
the configuration, but this command is interactive, its ask more than 1000 question about enable or disable
the particular module, like which file-system r u want and IP-Tables related modules like SIP components
(state-full inspection firewall). So the best way is copy the current Linux config file. its stored in /boot/config-
<version>.
cp /boot/config-2.6.35.22-generic .config
Now we got old configuration. now we change/add new configuration settings. For this purpose there is lots of
options are avialable
Among them, GUI tools are available: load the old config file, add or change the settings, then save the config file.
make gconfig ==> provides a GTK 2 based GUI configuration tool for GNOME
(for this we need to install libgtk2.0-dev and libglade2-dev through apt-get install)
make xconfig ==> provides a Qt based GUI configuration tool for KDE
make
This step takes 1 to 2 hours depending on system performance. It compiles the kernel and stores the
kernel image in binary form in the arch/x86/boot/bzImage file.
It then compiles the kernel modules (device drivers, file systems, network protocols, ...) and generates
.ko files; "ko" stands for kernel object. These modules are also called Loadable Kernel Modules (LKM).
make modules_install
This step copies all kernel modules (*.ko) to the /lib/modules/<version>/kernel/ folder.
make install
This step copies the kernel from arch/x86/boot/bzImage to the /boot folder, copies the .config file
to /boot/config-<latest-version> and generates the System.map file.
update-initramfs -c -k 3.3.3
Here 3.3.3 is the new kernel version; this creates the initial RAM filesystem image for it.
update-grub
This command automatically probes for kernels in the /boot folder and adds their entries to its
configuration file, grub.cfg.
Now restart the system; the new kernel appears in the boot loader entries. Choose the new kernel in the
boot loader.
After booting, open a terminal and issue the uname -r command; it shows the currently running kernel version.
On Debian-based systems (Ubuntu, Linux Mint, ...) the tool "make-kpkg" is available to automate the kernel
compilation process.
This one-line command compiles the kernel and its modules and packages the binaries as .deb files.
make-kpkg --initrd --append-to-version=-ramki kernel_image kernel_headers
This single line builds the kernel and its headers with the custom version suffix -ramki.
--initrd specifies that an initramfs image file should be created
kernel_image builds the kernel and its modules
kernel_headers builds all header files of the kernel (optional)
Installing the resulting packages copies the kernel to the /boot folder, installs the modules in the /lib/modules
folder, creates the initramfs file and updates GRUB just like the generic method; all of these operations
happen automatically.
3.INPUT:
4.OUTPUT:
MIT College of Engineering, Pune.
Information Technology Department
(Operating System Lab)
EXPERIMENT NO 07
TITLE OF THE EXPERIMENT : Kernel space programming
Aim:
Implement and add a loadable kernel module to Linux kernel, demonstrate using insmod, lsmod and
rmmod commands. A sample kernel space program should print the "Hello World" while loading the
kernel module and "Goodbye World" while unloading the kernel module.
1.OBJECTIVE:
2.THEORY:
While most small and medium-sized applications perform a single task from beginning to end, every
kernel module just registers itself in order to serve future requests, and its initialization function
terminates immediately. In other words, the task of the module's initialization function is to prepare
for later invocation of the module's functions; it's as though the module were saying, "Here I am, and
this is what I can do." The module's exit function (hello_exit in the example) gets invoked just before
the module is unloaded. It should tell the kernel, "I'm not there anymore; don't ask me to do anything
else." This kind of approach to programming is similar to event-driven programming, but while not all
applications are event-driven, each and every kernel module is. Another major difference between
event-driven applications and kernel code is in the exit function: whereas an application that terminates
can be lazy in releasing resources or avoids clean up altogether, the exit function of a module must
carefully undo everything the init function built up, or the pieces remain around until the system is
rebooted.
Incidentally, the ability to unload a module is one of the features of modularization that you'll most
appreciate, because it helps cut down development time; you can test successive versions of your new
driver without going through the lengthy shutdown/reboot cycle each time.
As a programmer, you know that an application can call functions it doesn't define: the linking stage
resolves external references using the appropriate library of functions. printf is one of those callable
functions and is defined in libc. A module, on the other hand, is linked only to the kernel, and the only
functions it can call are the ones exported by the kernel; there are no libraries to link to. The printk
function used in the hello.c example below is the version of printf defined within the kernel and
exported to modules. It behaves similarly to the original function, with a few minor differences, the
main one being lack of floating-point support.
Figure below shows how function calls and function pointers are used in a module to add new
functionality to a running kernel.
Because no library is linked to modules, module source files should never include the usual user-space
header files; <stdarg.h> and very special situations are the only exceptions. Only functions that are actually part
of the kernel itself may be used in kernel modules. Anything related to the kernel is declared in
headers found in the kernel source tree you have set up and configured; most of the relevant headers
live in include/linux and include/asm, but other subdirectories of include have been added to host
material associated with specific kernel subsystems.
Another important difference between kernel programming and application programming is in how
each environment handles faults: whereas a segmentation fault is harmless during application
development and a debugger can always be used to trace the error to the problem in the source code, a
kernel fault kills the current process at least, if not the whole system.
A module runs in kernel space, whereas applications run in user space. This concept is at the base of
operating systems theory. The role of the operating system, in practice, is to provide programs with a
consistent view of the computer's hardware. In addition, the operating system must account for
independent operation of programs and protection against unauthorized access to resources. This
nontrivial task is possible only if the CPU enforces protection of system software from the
applications.
Every modern processor is able to enforce this behavior. The chosen approach is to implement
different operating modalities (or levels) in the CPU itself. The levels have different roles, and some
operations are disallowed at the lower levels; program code can switch from one level to another only
through a limited number of gates. Unix systems are designed to take advantage of this hardware
feature, using two such levels. All current processors have at least two protection levels, and some, like
the x86 family, have more levels; when several levels exist, the highest and lowest levels are used.
Under Unix, the kernel executes in the highest level (also called supervisor mode), where everything is
allowed, whereas applications execute in the lowest level
(the so-called user mode), where the processor regulates direct access to hardware and unauthorized
access to memory.
We usually refer to the execution modes as kernel space and user space. These terms encompass not
only the different privilege levels inherent in the two modes, but also the fact that each mode can have
its own memory mapping, its own address space, as well.
Unix transfers execution from user space to kernel space whenever an application issues a system call
or is suspended by a hardware interrupt. Kernel code executing a system call is working in the context
of a process; it operates on behalf of the calling process and is able to access data in the process's
address space. Code that handles interrupts, on the other hand, is asynchronous with respect to
processes and is not related to any particular process.
The role of a module is to extend kernel functionality; modularized code runs in kernel space. Usually
a driver performs both the tasks outlined previously: some functions in the module are executed as part
of system calls, and some are in charge of interrupt handling.
Although kernel modules don't execute sequentially as applications do, most actions performed by the
kernel are done on behalf of a specific process. Kernel code can refer to the current process by
accessing the global item current, defined in <asm/current.h>, which yields a pointer to struct
task_struct, defined by <linux/sched.h>.
The current pointer refers to the process that is currently executing. During the execution of a system
call, such as open or read, the current process is the one that invoked the call. Kernel code can use
process-specific information by using current, if it needs to do so. Actually, current is not truly a global
variable. The need to support SMP systems forced the kernel developers to develop a mechanism that
finds the current process on the relevant CPU. This mechanism must also be fast, since references to
current happen frequently. The result is an architecture-dependent mechanism that, usually, hides a
pointer to the task_struct structure on the kernel stack. The details of the implementation remain
hidden to other kernel subsystems though, and a device driver can just include <linux/sched.h> and
refer to the current process. For example, the following statement prints the process ID and the
command name of the current process by accessing certain fields in struct task_struct:
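A minimal sketch of such a statement, as it might appear inside module code:
printk(KERN_INFO "The process is \"%s\" (pid %i)\n",
        current->comm, current->pid);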
Many programming books begin with a hello world example as a way of showing the simplest
possible program. Here we deal in kernel modules rather than application programs; so, for the
impatient reader, the following code is a complete hello world module:
#include <linux/init.h>
#include <linux/module.h>
MODULE_LICENSE("Dual BSD/GPL");
static int hello_init(void)
{
printk(KERN_ALERT "Hello, world\n");
return 0;
}
static void hello_exit(void)
{
printk(KERN_ALERT "Goodbye, cruel world\n");
}
module_init(hello_init);
module_exit(hello_exit);
This module defines two functions, one to be invoked when the module is loaded into the kernel
(hello_init) and one for when the module is removed (hello_exit). The module_init and module_exit
lines use special kernel macros to indicate the role of these two functions. Another special macro
(MODULE_LICENSE) is used to tell the kernel that this module bears a free license; without such a
declaration, the kernel complains when the module is loaded.
The printk function is defined in the Linux kernel and made available to modules; it behaves similarly
to the standard C library function printf. The kernel needs its own printing function because it runs by
itself, without the help of the C library. The module can call printk because, after insmod has loaded it,
the module is linked to the kernel and can access the kernel's public symbols (functions and variables,
as detailed later). The string KERN_ALERT is the priority of the message. We've
specified a high priority in this module, because a message with the default priority might not show up
anywhere useful, depending on the kernel version you are running, the version of the klogd daemon,
and your configuration.
You can test the module with the insmod and rmmod utilities, as shown below. Note that only the
superuser can load and unload a module.
% make
make[1]: Entering directory `/usr/src/linux-2.6.10'
CC [M] /home/ldd3/src/misc-modules/hello.o
Building modules, stage 2.
MODPOST
CC /home/ldd3/src/misc-modules/hello.mod.o
LD [M] /home/ldd3/src/misc-modules/hello.ko
make[1]: Leaving directory `/usr/src/linux-2.6.10'
% su
root# insmod ./hello.ko
Hello, world
root# rmmod hello
Goodbye cruel world
root#
According to the mechanism your system uses to deliver the message lines, your output may be
different. In particular, the previous screen dump was taken from a text console; if you are running
insmod and rmmod from a terminal emulator running under the window system, you won't see
anything on your screen. The message goes to one of the system log files, such as /var/log/messages
(the name of the actual file varies between Linux distributions).
Compiling Modules
As the first step, we need to look a bit at how modules must be built. The build process for modules
differs significantly from that used for user-space applications; the kernel is a large, standalone
program with detailed and explicit requirements on how its pieces are put together. The build process
also differs from how things were done with previous versions of the kernel; the new build system is
simpler to use and produces more correct results, but it looks very different from what came before.
The kernel build system is a complex beast, and we just look at a tiny piece of it. The files found in the
Documentation/kbuild directory in the kernel source are required reading for anybody wanting to
understand all that is really going on beneath the surface.
There are some prerequisites that you must get out of the way before you can build kernel modules.
The first is to ensure that you have sufficiently current versions of the compiler, module utilities, and
other necessary tools. The file Documentation/Changes in the kernel documentation directory always
lists the required tool versions; you should consult it before going any further. Trying to build a kernel
(and its modules) with the wrong tool versions can lead to no end of subtle, difficult problems. Note
that, occasionally, a version of the compiler that is too new can be just as problematic as one that is too
old; the kernel source makes a great many assumptions about the compiler, and new releases can
sometimes break things for a while.
If you still do not have a kernel tree handy, or have not yet configured and built that kernel, now is the
time to go do it. You cannot build loadable modules for a 2.6 kernel without this tree on your
filesystem. It is also helpful (though not required) to be actually running the kernel that you are
building for.
Once you have everything set up, creating a makefile for your module is straightforward.
In fact, for the hello world example shown earlier, a single line will suffice:
obj-m := hello.o
Readers who are familiar with make, but not with the 2.6 kernel build system, are likely to be
wondering how this makefile works. The above line is not how a traditional makefile looks, after all.
The answer, of course, is that the kernel build system handles the rest. The assignment above (which
takes advantage of the extended syntax provided by GNU make) states that there is one module to be
built from the object file hello.o. The resulting module is named hello.ko after being built from the
object file. If, instead, you have a module called module.ko that is generated from two source files
(called, say, file1.c and file2.c), the correct incantation would be:
obj-m := module.o
module-objs := file1.o file2.o
For a makefile like those shown above to work, it must be invoked within the context of the larger
kernel build system. If your kernel source tree is located in, say, your ~/kernel-2.6 directory, the make
command required to build your module (typed in the directory containing the module source and
makefile) would be:
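A sketch of that command, assuming the kernel tree really is at ~/kernel-2.6:
make -C ~/kernel-2.6 M=`pwd` modules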
This command starts by changing its directory to the one provided with the -C option (that is, your
kernel source directory). There it finds the kernel's top-level makefile. The M= option causes that
makefile to move back into your module source directory before trying to build the modules target.
This target, in turn, refers to the list of modules found in the obj-m variable, which we've set to
module.o in our examples. Typing the previous make command can get tiresome after a while, so the
kernel developers have developed a sort of makefile idiom, which makes life easier for those building
modules outside of the kernel tree. The trick is to write your makefile as follows:
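A minimal sketch of that makefile idiom, assuming the module is built from a single source file hello.c:
# If KERNELRELEASE is defined, we have been invoked from the
# kernel build system and can use its language.
ifneq ($(KERNELRELEASE),)
obj-m := hello.o
# Otherwise we were called directly from the command line;
# invoke the kernel build system.
else
KERNELDIR ?= /lib/modules/$(shell uname -r)/build
PWD := $(shell pwd)
default:
	$(MAKE) -C $(KERNELDIR) M=$(PWD) modules
endif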
Once again, we are seeing the extended GNU make syntax in action. This makefile is read twice on a
typical build. When the makefile is invoked from the command line, it notices that the
KERNELRELEASE variable has not been set. It locates the kernel source directory by taking
advantage of the fact that the symbolic link build in the installed modules directory points back at the
kernel build tree. If you are not actually running the kernel that you are building for, you can supply a
KERNELDIR= option on the command line, set the KERNELDIR environment variable, or rewrite
the line that sets KERNELDIR in the makefile. Once the kernel source tree has been found, the
makefile invokes the default: target, which runs a second make command (parameterized in the
makefile as $(MAKE)) to invoke the kernel build system as described previously. On the second
reading, the makefile sets obj-m, and the kernel makefiles take care of
actually building the module.
This mechanism for building modules may strike you as a bit unwieldy and obscure. Once you get
used to it, however, you will likely appreciate the capabilities that have been programmed into the
kernel build system. Do note that the above is not a complete makefile; a real makefile includes the
usual sort of targets for cleaning up unneeded files, installing modules, etc.
After the module is built, the next step is loading it into the kernel. As we've already pointed out,
insmod does the job for you. The program loads the module code and data into the kernel, which, in
turn, performs a function similar to that of ld, in that it links any unresolved symbol in the module to
the symbol table of the kernel.
Unlike the linker, however, the kernel doesn't modify the module's disk file, but rather an in-memory
copy. insmod accepts a number of command-line options (for details, see the manpage), and it can
assign values to parameters in your module before linking it to the current kernel. Thus, if a module is
correctly designed, it can be configured at load time; load-time configuration gives the user more
flexibility than compile-time configuration, which is still used sometimes. Interested readers may
want to look at how the kernel supports insmod: it relies on a
system call defined in kernel/module.c. The function sys_init_module allocates kernel memory to hold
a module (this memory is allocated with vmalloc; see the section vmalloc and Friends in Chapter 8);
it then copies the module text into that memory region, resolves kernel references in the module via the
kernel symbol table, and calls the module's initialization function to get everything going.
If you actually look in the kernel source, you'll find that the names of the system calls are prefixed
with sys_. This is true for all system calls and no other functions; it's useful to keep this in mind when
grepping for the system calls in the sources.
The modprobe utility is worth a quick mention. modprobe, like insmod, loads a module into the kernel.
It differs in that it will look at the module to be loaded to see whether it references any symbols that
are not currently defined in the kernel. If any such references are found, modprobe looks for other
modules in the current module search path that define the relevant symbols. When modprobe finds
those modules (which are needed by the module being loaded), it loads them into the kernel as well. If
you use insmod in this situation instead, the command fails with an unresolved symbols message left
in the system logfile. As mentioned before, modules may be removed from the kernel with the rmmod
utility. Note that module removal fails if the kernel believes that the module is still in use (e.g., a
program still has an open file for a device exported by the module), or if the kernel has been
configured to disallow module removal. It is possible to configure the kernel to allow forced
removal of modules, even when they appear to be busy. If you reach a point where you are considering
using this option, however, things are likely to have gone wrong badly enough that a reboot may well
be the better course of action.
The lsmod program produces a list of the modules currently loaded in the kernel. Some other
information, such as any other modules making use of a specific module, is also provided. lsmod
works by reading the /proc/modules virtual file. Information on currently loaded modules can also be
found in the sysfs virtual filesystem under /sys/module.
Every nontrivial module also requires a cleanup function, which unregisters interfaces and returns all
resources to the system before the module is removed. This function is defined as:
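A sketch of the typical form, using the illustrative name cleanup_function:
static void __exit cleanup_function(void)
{
        /* unregister interfaces and release resources here */
}
module_exit(cleanup_function);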
4.INPUT:
5.OUTPUT:
Aim: Add a new system call to the Linux kernel (any kernel source, any architecture and any Linux
kernel distribution) and demonstrate the use of the same.
1. OBJECTIVE:
2.THEORY:
Download a kernel source from the official web site and perform the following steps:
We will define a new system call named sys_hello(). Make a new directory called hello under the
kernel source directory: use the cd command to go to the directory obtained after extracting the
downloaded kernel archive with tar, then use mkdir to create the hello directory.
Now create the file hello.c in the hello directory. Open a file named hello.c using any editor of your
choice and write the program statements in it, along the lines of the sketch below.
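A minimal sketch of hello.c, assuming the system call simply logs a message (the exact listing may differ):
#include <linux/kernel.h>
#include <linux/linkage.h>

asmlinkage long sys_hello(void)
{
        printk(KERN_INFO "Hello world\n");
        return 0;
}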
Now create the Makefile in the hello directory so that hello.c is compiled and built along with the kernel.
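A minimal sketch of that Makefile, assuming hello.c is the only source file in the directory:
obj-y := hello.o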
Add the hello directory to the top-level kernel Makefile. Use the cd command to go back to the kernel
source directory where the main Makefile resides.
Open the kernel Makefile and add the hello/ directory to the core-y line, as sketched below.
$ nano Makefile
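A sketch of the change, assuming the usual core-y line of this kernel series (the exact list of directories
may differ in your version):
core-y += kernel/ mm/ fs/ ipc/ security/ crypto/ block/ hello/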
Add the new system call to the syscall table. Go to the syscalls directory under the architecture folder
(arch/x86/syscalls/ in this kernel series) from the kernel source directory using the cd command.
Now list the contents of the syscalls directory. Two files with the .tbl extension are listed there: one for
32-bit kernels and another for 64-bit kernels. Choose the file according to your requirements; here we
use the 32-bit table.
Go to the end of this file and add the entry for your system call, along the lines of the sketch below.
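A sketch of the table entry, assuming 357 is the next free syscall number in the 32-bit table (the number
varies with the kernel version and must match the number used by the test program later):
357	i386	hello	sys_hello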
Go to the linux directory under include (include/linux) and open the header file syscalls.h
$ nano syscalls.h
Open the syscalls.h header file using any editor of your choice and add the declaration for your system
call, as sketched below.
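A sketch of the declaration to add (near the end of the file, before the closing #endif):
asmlinkage long sys_hello(void);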
Add the syscall number as a macro definition (the exact number varies with the kernel version). Open
the file unistd_32.h located in the asm directory, arch/x86/include/asm/unistd_32.h, using an editor of
your choice.
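A sketch of the entry, assuming 357 is the number used in the syscall table above and assuming a kernel
where this header is edited by hand (in newer kernels it is generated automatically from the .tbl file):
#define __NR_hello 357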
Compile and install the updated kernel, reboot into it, and run the test program (use dmesg to check the kernel log).
run the testing file
systemCallHello.c
#include <stdio.h>
#include <linux/kernel.h>
#include <sys/syscall.h>
#include <unistd.h>
int main()
{
    long int ret_val = syscall(357);   /* 357 = __NR_hello added above */
    printf("System call sys_hello returned %ld\n", ret_val);
    return 0;
}