Lecture: Threads
Benefits of Multithreading
• Responsiveness
• Resource Sharing
• Economy
• Scalability
Multicore Programming
• Multicore systems put pressure on programmers; the challenges include:
• Dividing activities
• Balance
• Data splitting
• Data dependency
• Testing and debugging
Multithreaded Server Architecture
Concurrent Execution on a Single-core System
Parallel vs. Concurrent Execution on a Multicore System
A system is parallel if it can perform more than one task simultaneously: T1, T2, T3, T4.
In contrast, a concurrent system supports more than one task by allowing the tasks to
make progress: T1 & T2 then T3 & T4, ….
Amdahl’s Law
• S is the portion of the application that must be performed serially on a system with N
processing cores.
• For example, consider an application that is 75 percent parallel and 25 percent serial. If we run this
application on a system with two processing cores, we can get a speedup of 1.6 times. If we add two
additional cores (for a total of four), the speedup rises only to 2.28 times (the formula after this list
shows the arithmetic).
• This is the fundamental principle behind Amdahl’s Law: the serial portion of an application
can have a disproportionate effect on the performance we gain by adding additional
computing cores.
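In formula form (the standard statement of Amdahl's Law; the arithmetic below simply re-derives the 75%/25% example above):

\[ \text{speedup} \le \frac{1}{S + \frac{1 - S}{N}} \]

With S = 0.25 (25 percent serial):

\[ N = 2:\ \frac{1}{0.25 + 0.75/2} = \frac{1}{0.625} = 1.6 \qquad N = 4:\ \frac{1}{0.25 + 0.75/4} = \frac{1}{0.4375} \approx 2.28 \]

As N grows without bound, the speedup approaches 1/S = 4: no matter how many cores are added, the serial 25 percent caps the achievable gain.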
Programming Challenges
• Identifying tasks.
• Balance.
• Data splitting.
• Data dependency: task execution must be synchronized
• Testing and debugging: testing concurrent programs is inherently more
difficult
• Data parallelism versus task parallelism (see the sketch after this list)
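As a concrete illustration of data parallelism, here is a minimal POSIX-threads sketch (names such as worker and struct range are made up for this example): two threads each sum half of the same shared array, and the main thread combines the partial results. Task parallelism would instead give each thread a different operation to perform. Compile with -pthread.

#include <pthread.h>
#include <stdio.h>

#define N 1000

static int data[N];

struct range { int lo, hi; long sum; };

static void *worker(void *arg)
{
    struct range *r = arg;
    r->sum = 0;
    for (int i = r->lo; i < r->hi; i++)
        r->sum += data[i];              /* each thread works on a disjoint slice */
    return NULL;
}

int main(void)
{
    pthread_t tid[2];
    struct range halves[2] = { { 0, N / 2, 0 }, { N / 2, N, 0 } };

    for (int i = 0; i < N; i++)
        data[i] = 1;                    /* fill the shared array */

    for (int t = 0; t < 2; t++)
        pthread_create(&tid[t], NULL, worker, &halves[t]);
    for (int t = 0; t < 2; t++)
        pthread_join(tid[t], NULL);     /* synchronize before combining results */

    printf("total = %ld\n", halves[0].sum + halves[1].sum);   /* prints 1000 */
    return 0;
}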
Before Threads…
• Recall that a process consists of:
• program(s)
• data
• stack
• PCB
• all stored in the process image
• Process (context) switch is pure overhead
Process Characterization
• Resource ownership
• address space to hold process image
• I/O devices, files, etc.
• Execution
• a single execution path (thread of control)
• execution state, PC & registers, stack
Refining Terminology
• Distinguish the two characteristics
• Process: resource ownership
• Thread: unit of execution (dispatching) - AKA lightweight process (LWP)
• Multi-threading: support multiple threads of execution within a single
process
• Process, as we have known it thus far, is a single-threaded process
Threads and Processes
• Decouple the resource allocation aspect from the control aspect
• Thread abstraction - defines a single sequential instruction stream (PC,
stack, register values)
• Process - the resource context serving as a “container” for one or more
threads (shared address space)
Threads
• A sequential execution stream within a process (also called lightweight
process)
• Threads in a process share the same address space
• Easier to program I/O overlap with threads than with signals (see the sketch after this list)
• Responsive user interface
• Run some program activities “in the background”
• Multiple CPUs sharing the same memory
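A small sketch of the I/O-overlap point above, assuming POSIX threads: a background thread blocks in read() on standard input while the main thread keeps doing its own (simulated) work. The thread function name and the work loop are illustrative only; compile with -pthread.

#include <pthread.h>
#include <stdio.h>
#include <unistd.h>

static void *reader(void *arg)
{
    char buf[128];
    (void)arg;
    ssize_t n = read(STDIN_FILENO, buf, sizeof buf - 1);   /* may block; only this thread waits */
    if (n > 0) {
        buf[n] = '\0';
        printf("reader thread got: %s", buf);
    }
    return NULL;
}

int main(void)
{
    pthread_t tid;
    pthread_create(&tid, NULL, reader, NULL);

    for (int i = 0; i < 3; i++) {        /* main thread stays busy and responsive */
        printf("main thread still working (%d)\n", i);
        sleep(1);
    }

    pthread_join(tid, NULL);             /* wait for the background I/O to finish */
    return 0;
}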
Process vs. Threads
• Processes do not usually share memory
• Process context switch changes page table and other memory
mechanisms
• Threads in a process share the entire address space
• Processes have their own privileges (file accesses, e.g.)
• Threads in a process share all privileges
• Each thread has its own stack and register set (see the sketch below)
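A minimal sketch contrasting the two cases listed above, assuming POSIX threads and fork(): a peer thread's write to a global variable is visible to main because they share the address space, while a fork()ed child updates only its own copy. Names are illustrative; compile with -pthread.

#include <pthread.h>
#include <stdio.h>
#include <sys/wait.h>
#include <unistd.h>

static int counter = 0;                  /* global: lives in the process's address space */

static void *bump(void *arg)
{
    (void)arg;
    counter++;                           /* same address space, so main sees this write */
    return NULL;
}

int main(void)
{
    pthread_t tid;
    pthread_create(&tid, NULL, bump, NULL);
    pthread_join(tid, NULL);
    printf("after thread:     counter = %d\n", counter);   /* 1 */

    if (fork() == 0) {                   /* child gets its own copy of the address space */
        counter++;                       /* modifies the copy only */
        _exit(0);
    }
    wait(NULL);
    printf("after fork child: counter = %d\n", counter);   /* still 1 in the parent */
    return 0;
}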
Example
Thread Control Block (TCB)
• Examples
• Windows XP/2000
• Solaris
• Linux
• Tru64 UNIX
• Mac OS X
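The slides do not list the TCB's fields, but based on the per-thread state they mention elsewhere (PC, registers, stack, scheduling state), a TCB might record roughly the following. This is an illustrative sketch only, not any real kernel's definition.

#include <stddef.h>
#include <stdint.h>

struct pcb;                              /* the owning process's control block, defined elsewhere */

enum thread_state { THREAD_READY, THREAD_RUNNING, THREAD_BLOCKED };

struct tcb {
    int               tid;               /* thread identifier                      */
    enum thread_state state;             /* scheduling state                       */
    uintptr_t         pc;                /* saved program counter                  */
    uintptr_t         regs[16];          /* saved general-purpose registers        */
    void             *stack_base;        /* this thread's private stack            */
    size_t            stack_size;
    struct pcb       *owner;             /* shared resources live with the process */
};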
User-Level Threads Advantages
Multithreading Models
• Many-to-One
• One-to-One
• Many-to-Many
Many-to-One
• Many user-level threads mapped to single kernel thread
• Examples:
• Solaris Green Threads
• GNU Portable Threads
Many-to-One Model
One-to-One
• Each user-level thread maps to kernel thread
• Examples
• Windows NT/XP/2000
• Linux
• Solaris 9 and later
One-to-One Model
Many-to-Many Model