Single and Multithreaded Processes

A thread is a fundamental unit of CPU utilization that forms the basis of multithreaded computer systems.
Advantages of Threads
A process with multiple threads makes a great server, for example a printer server. Because threads can share common data, they do not need to use interprocess communication. By their very nature, threads can take advantage of multiprocessors. Threads are economical in the sense that they only need a stack and storage for registers; therefore, threads are cheap to create.
Threads use very few resources of the operating system in which they run; in particular, threads do not need a new address space. Context switching is fast when working with threads, because only the PC, SP, and registers have to be saved and/or restored. But this cheapness does not come free: the biggest drawback is that there is no protection between threads.
Kernel-Level Threads

In this method, the kernel knows about and manages the threads. Advantages: Because the kernel has full knowledge of all threads, the scheduler may decide to give more time to a process with a large number of threads than to a process with a small number of threads. Kernel-level threads are especially good for applications that frequently block. Disadvantages: Kernel-level threads are slow and inefficient; for instance, thread operations can be hundreds of times slower than those of user-level threads.
Advantages of Threads over Multiple Processes

Context Switching: Threads are very inexpensive to create and destroy. For example, they require space to store the PC, the SP, and the general-purpose registers, but they do not require space for memory-management information, information about open files or I/O devices in use, etc. With so little context, it is much faster to switch between threads; in other words, a context switch using threads is relatively cheap.

Sharing: Threads allow the sharing of many resources that cannot be shared between processes, for example the code section, the data section, and operating system resources such as open files.

Disadvantages of Threads over Multiple Processes

Blocking: The major disadvantage is that if the kernel is single-threaded, a system call by one thread will block the whole process, and the CPU may be idle during the blocking period.

Security: Since there is extensive sharing among threads, there is a potential security problem: one thread may overwrite the stack of another thread (or damage shared data), although this is unlikely since threads are meant to cooperate on a single task.
Synchronization
Background
Concurrent access to shared data may result in data inconsistency (e.g., due to race conditions). Maintaining data consistency requires mechanisms to ensure the orderly execution of cooperating processes. Suppose that we wanted to provide a solution to the producer-consumer problem that fills all the buffers.
We can do so by having an integer count that keeps track of the number of full buffers. Initially, count is set to 0. It is incremented by the producer after producing a new buffer and decremented by the consumer after consuming a buffer.
A race condition occurs when multiple processes access and manipulate the same data concurrently, and the outcome of the execution depends on the particular order in which the accesses take place.
Critical Sections
A section of code, common to n cooperating processes, in which the processes may be accessing common variables.
A Critical Section Environment contains:
Entry Section: Code requesting entry into the critical section.
Critical Section: Code in which only one process can execute at any one time.
Exit Section: The end of the critical section, releasing the section or allowing others in.
Remainder Section: The rest of the code after the critical section.
Monitors: a programming-language technique. Key idea: only one process may be active within the monitor at a time. Hardware Test-and-Set: an atomic machine-level instruction.
In computer science, the test-and-set instruction is an instruction that writes to a memory location and returns its old value as a single atomic (i.e., non-interruptible) operation. If multiple processes may access the same memory, and a process is currently performing a test-and-set, no other process may begin another test-and-set until the first process is done.
Peterson's Algorithm: a simple algorithm that can be run by two processes to ensure mutual exclusion for one resource (say, one variable or data structure). It does not require any special hardware, and it uses busy waiting (a spinlock).

Semaphore: a variable used for signalling between processes. The two main operations on a semaphore are:
Wait for next process (or acquire)
Signal to other process (or release)
A resource such as a shared data structure is protected by a semaphore. You must acquire the semaphore before using the resource.
Starvation: indefinite blocking. A process may never be removed from the semaphore queue in which it is suspended.
Readers-Writers Problem
Allow multiple readers to read at the same time. Only one writer may access the shared data at a time.
Dining-Philosophers Problem
The dining philosophers problem is summarized as five philosophers sitting at a table doing one of two things: eating or thinking. While eating, they are not thinking, and while thinking, they are not eating. The five philosophers sit at a circular table with a large bowl of spaghetti in the center. A fork is placed between each pair of adjacent philosophers, and as such, each philosopher has one fork to his left and one fork to his right. As spaghetti is difficult to serve and eat with a single fork, it is assumed that a philosopher must eat with two forks. Each philosopher can only use the forks on his immediate left and immediate right.
Shared data: a bowl of rice (the data set) and Semaphore chopstick[5], each element initialized to 1.