Memory Management
GROUP 6
BACKGROUND (CHAPTER 8)
MEMORY
- Central to the operation of a modern computer system
- Consists of a large array of words or bytes, each with its own address
The CPU fetches instructions from memory according to the value of the
program counter.
These instructions may cause additional loading from and storing to
specific memory addresses.
The memory unit sees only a stream of memory addresses; it does not
know how they are generated or what they are for (instructions or data).
SWAPPING
A process must be in memory to be executed. A
process, however, can be swapped temporarily out of
memory to a backing store and then brought back into
memory for continued execution.
For example, assume a multiprogramming environment
with a round-robin CPU-scheduling algorithm.
When a quantum expires, the memory manager will
start to swap out the process that just finished and to
swap another process into the memory space that has
been freed
A variant of this swapping policy is used for priority-
based scheduling algorithms. If a higher-priority process
arrives and wants service, the memory manager can
swap out the lower-priority process and then load and
execute the higher-priority process.
Swapping requires a backing store. The backing store is
commonly a fast disk. It must be large enough to
accommodate copies of all memory images for all
users, and it must provide direct access to these
memory images.
CONTIGUOUS MEMORY ALLOCATION
The memory is usually divided into two partitions:
one for the resident operating system and
one for the user processes
Fixed-sized partitioning – Equal sized
Each partition may contain exactly one process
Issues of equal-sized fixed partitioning:
Internal fragmentation
External fragmentation
Limit on process size
Fixed degree of multiprogramming
Variable-sized partitioning
The operating system keeps a table indicating which parts of
memory are available and which are occupied.
The first-fit, best-fit, and worst-fit strategies are the ones
most commonly used to select a free hole from the set
of available holes.
First-fit – Allocate the first hole that is big enough.
Best-fit – Allocate the smallest hole that is big enough.
Worst-fit – Allocate the largest hole.
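The three strategies can be sketched over a list of free-hole sizes. This is a minimal illustration, not production allocator code; the hole sizes in the example are hypothetical:

```python
def first_fit(holes, request):
    """Return the index of the first hole big enough, or None."""
    for i, size in enumerate(holes):
        if size >= request:
            return i
    return None

def best_fit(holes, request):
    """Return the index of the smallest hole that is big enough, or None."""
    candidates = [(size, i) for i, size in enumerate(holes) if size >= request]
    return min(candidates)[1] if candidates else None

def worst_fit(holes, request):
    """Return the index of the largest hole (if big enough), or None."""
    size, i = max((size, i) for i, size in enumerate(holes))
    return i if size >= request else None

holes = [100, 500, 200, 300, 600]  # hypothetical free-hole sizes in KB
print(first_fit(holes, 212))  # index 1: first hole >= 212 KB (500 KB)
print(best_fit(holes, 212))   # index 3: smallest adequate hole (300 KB)
print(worst_fit(holes, 212))  # index 4: largest hole (600 KB)
```

Note how the same request lands in a different hole under each policy, which is exactly why the strategies differ in how much external fragmentation they leave behind.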
PAGING
- A memory-management technique in which
memory is divided into fixed-size pages
Basic method
- Breaks down physical memory into fixed-size blocks called frames
- Breaks down logical memory into blocks of the same size called pages
Paging hardware
- Every address generated by the CPU is divided into a page number and a page offset
: Page number – used as an index into a page table
: Page offset – combined with the frame base address to define the physical memory
address
- The size of a page is defined by the hardware
: Typically a power of 2, varying between 512 bytes and 16 MB per page
Reason: if the size of the logical address space is 2^m and the page size is 2^n, then the high-
order m−n bits of the logical address designate the page number
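The split and the table lookup can be sketched directly. The page size and the page-to-frame mapping below are illustrative, not from the text:

```python
PAGE_SIZE = 4096  # 2**12, a common page size used here as an example

def split_address(logical_addr, page_size=PAGE_SIZE):
    """Split a logical address into (page number, page offset)."""
    return logical_addr // page_size, logical_addr % page_size

def translate(logical_addr, page_table, page_size=PAGE_SIZE):
    """Map a logical address to a physical address via the page table."""
    page, offset = split_address(logical_addr, page_size)
    frame = page_table[page]           # page number indexes the page table
    return frame * page_size + offset  # frame base address + page offset

page_table = {0: 5, 1: 2, 2: 7}  # hypothetical page -> frame mapping
print(split_address(8195))           # (2, 3): page number 2, offset 3
print(translate(8195, page_table))   # frame 7 * 4096 + 3 = 28675
```

Because the page size is a power of 2, the division and modulo are, in hardware, just a split of the address bits: the high-order bits are the page number and the low-order bits are the offset.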
Shared Pages
Share common code, especially in
time-sharing environments
Structure of the Page Table
Hierarchical Paging
- A two-level paging algorithm in which the page table is
also paged.
Hashed Page Table
-common approach in handling address spaces larger
than 32 bits
Inverted Page Tables
-A solution when tables consume a lot of memory
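For hierarchical paging, the page number itself is split into an outer index and an inner index. A sketch for a 32-bit address with a 12-bit offset and a 10/10 split of the page number (a common textbook layout, assumed here for illustration):

```python
def two_level_indices(logical_addr, offset_bits=12, inner_bits=10):
    """Split a 32-bit logical address for two-level paging into
    (p1, p2, d): outer-table index, inner-table index, page offset."""
    offset = logical_addr & ((1 << offset_bits) - 1)  # low-order d bits
    page_number = logical_addr >> offset_bits
    p2 = page_number & ((1 << inner_bits) - 1)  # index into an inner page table
    p1 = page_number >> inner_bits              # index into the outer page table
    return p1, p2, offset

# 0x00403005 -> outer index 1, inner index 3, offset 5
print(two_level_indices(0x00403005))
```

p1 selects one of the inner page tables; p2 selects the frame within it, so the outer table needs only one entry per inner table rather than one entry per page.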
Segmentation
An important aspect of memory management that
became unavoidable with paging is the separation of
the user's view of memory from the actual physical
memory.
As we have already seen, the user's view of memory is
not the same as the actual physical memory. The user's
view is mapped onto physical memory.
This mapping allows differentiation between logical
memory and physical memory.
Basic Method
Each of these segments is of variable length; the length
is intrinsically defined by the purpose of the segment in
the program. Elements within a segment are identified
by their offset from the beginning of the segment: the
first statement of the program, the seventh stack frame
entry in the stack, the fifth instruction of Sqrt(), and
so on.
Normally, the user program is compiled, and the compiler automatically
constructs segments reflecting the input program. A C compiler might create
separate segments for the following:
The code
Global variables
The heap, from which memory is allocated
The stacks used by each thread
The standard C library
Segmentation Hardware
Although the user can now refer to objects in the program by a
two-dimensional address, the actual physical memory is still, of
course, a one-dimensional sequence of bytes.
Thus, we must define an implementation to map two-dimensional
user-defined addresses into one-dimensional physical addresses.
Each entry in the segment table has a segment base and a
segment limit. The segment base contains the starting physical
address where the segment resides in memory, and the segment
limit specifies the length of the segment.
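The mapping from a two-dimensional (segment, offset) address to a one-dimensional physical address can be sketched as a table lookup plus a limit check. The base/limit values below are hypothetical:

```python
def translate_segment(segment, offset, segment_table):
    """Map a (segment, offset) pair to a physical address.
    Traps (raises) if the offset is beyond the segment limit."""
    base, limit = segment_table[segment]
    if offset >= limit:
        raise MemoryError("trap: offset beyond segment limit")
    return base + offset

# hypothetical segment table: segment number -> (base, limit)
segment_table = {0: (1400, 1000), 1: (6300, 400)}
print(translate_segment(1, 53, segment_table))  # 6300 + 53 = 6353
```

The limit check is what makes segmentation a protection mechanism as well as an addressing one: an out-of-range offset never reaches physical memory.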
BACKGROUND (CHAPTER 9)
Virtual memory – separation of user logical memory from physical
memory.
Only part of the program needs to be in memory for execution
Logical address space can therefore be much larger than
physical address space
Allows address spaces to be shared by several processes
Allows for more efficient process creation
Virtual memory can be implemented via:
Demand paging
Demand segmentation
Virtual Memory That is Larger Than
Physical Memory
Virtual-address Space
Shared Library Using Virtual Memory
Demand Paging
Pages brought into memory only as needed
Removes restriction: entire program in memory
Requires high-speed page access
Exploits programming techniques
Modules written sequentially
All pages not necessarily needed simultaneously
Examples
User-written error handling modules
Mutually exclusive modules
Certain program options: mutually exclusive or not accessible
Tables given fixed amount of space: fraction used
Made virtual memory feasible:
Provides the appearance of almost infinite physical
memory
Jobs run with less main memory than required in paged memory
allocation scheme
Requires high-speed direct access storage device
Works directly with CPU
Swapping: how and when pages passed in memory
Depends on predefined policies
The Memory Manager requires three tables:
Job Table
Page Map Table: has three new fields
Status: If requested page is already in memory
(If a page is already in memory then it saves the time required for retrieving a page
from disk)
Modified: If page contents have been modified
(This also saves time because if the page has not been modified, then page doesn’t
have to be rewritten to disk. The original page, already in the disk, is correct.)
Referenced: If page has been referenced recently
(Determines which page remains in main memory and which is swapped out
because it determines which pages show the most processing activity and which are
relatively inactive.)
Memory Map Table
Swapping Process
Exchanges resident memory page with secondary storage
page
Involves
Copying resident page to disk (if it was modified)
Writing new page into the empty page frame
Requires close interaction between:
Hardware components
Software algorithms
Policy schemes
Hardware instruction processing
Page fault: failure to find page in memory
Page fault handler
Part of operating system
Determines if empty page frames in memory
Yes: requested page copied from secondary storage
No: swapping occurs
Deciding which page frame to swap out if all are busy
Directly dependent on the predefined page-replacement policy
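The yes/no decision made by the page fault handler can be sketched as follows. The function and policy names are illustrative; a real handler runs inside the kernel and also writes the victim back to disk if its modified bit is set:

```python
def handle_page_fault(page, memory, capacity, choose_victim):
    """Minimal page-fault handler sketch.
    memory: set of resident pages; choose_victim: replacement policy."""
    if page in memory:
        return "no fault"                 # page already resident
    if len(memory) < capacity:            # any empty page frame?
        memory.add(page)                  # yes: copy page in from backing store
        return "loaded into free frame"
    victim = choose_victim(memory)        # no: policy picks a page to evict
    memory.discard(victim)                # swap victim out (write back if modified)
    memory.add(page)                      # swap requested page in
    return f"swapped out page {victim}"

memory = {1, 2, 3}
print(handle_page_fault(2, memory, 3, min))  # resident -> "no fault"
print(handle_page_fault(9, memory, 3, min))  # frames full -> evicts page 1
```

Here `min` stands in for the replacement policy; substituting FIFO, LRU, or another policy only changes which victim is chosen, not the handler's structure.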
Thrashing
An excessive amount of page swapping between main
memory and secondary storage
Due to pages being removed from main memory and called back shortly
thereafter
Produces inefficient operation
Occurs across jobs (global replacement of pages)
Large number of jobs competing for a relatively small number of free
pages
Occurs within a job (local replacement of pages)
In loops crossing page boundaries
Thrashing
Generally, thrashing is caused by processes not having enough
pages in memory
Using global replacement, thrashing can occur when processes steal frames
from each other
But it can even happen with local replacement
Thrashing processes lead to low CPU utilization
The OS (long-term scheduler) thinks it needs to increase the degree
of multiprogramming
More processes are added to the system (taking frames from
existing processes)
Thrashing gets worse
Demand Paging
Advantages
Job no longer constrained by the size of physical
memory (concept of virtual memory)
Utilizes memory more efficiently than previous
schemes (sections of jobs that were seldom or never
used, such as error routines, weren't loaded into
memory unless they were specifically requested)
Disadvantages
Increased overhead caused by tables and page
interrupts
Copy-on-Write
Copy-on-write finds its main use in sharing the
virtual memory of operating-system processes, for
example after a fork(). Typically, the new process
does not modify any memory and immediately executes
a new program, replacing the address space entirely.
Copy-on-write can be implemented efficiently using the
page table by marking certain pages of memory as
read-only and keeping a count of the number of
references to the page.
When data is written to these pages, the kernel
intercepts the write attempt and allocates a new
physical page, initialized with the copy-on-write data,
although the allocation can be skipped if there is only
one reference.
The kernel then updates the page table with the new
(writable) page, decrements the number of references,
and performs the write.
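The bookkeeping described above (read-only marking, reference count, copy on first write) can be sketched for a single page. The class and function names are illustrative, not kernel APIs:

```python
class CowPage:
    """Sketch of copy-on-write bookkeeping for one shared page."""
    def __init__(self, data):
        self.data = data
        self.refs = 1   # reference count; page stays read-only while shared

def cow_write(page, value):
    """Intercept a write: copy the page only if it is still shared."""
    if page.refs > 1:
        page.refs -= 1                    # old page loses one reference
        page = CowPage(list(page.data))   # allocate a new private copy
    page.data[0] = value                  # perform the write on a writable page
    return page

shared = CowPage([0, 0, 0])
shared.refs = 2                  # two processes map the same frame
private = cow_write(shared, 42)  # first write triggers the copy
print(private.data, shared.data) # [42, 0, 0] [0, 0, 0]
print(private is shared)         # False: the writer got its own page
```

When `refs` is already 1, no copy is made and the write goes straight to the existing page, which is the "allocation can be skipped if there is only one reference" case in the text.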
Allocating Kernel Memory
When a process running in user mode requests
additional memory, pages are allocated from the list of
free page frames maintained by the kernel. This list is
typically populated using a page-replacement
algorithm such as those discussed in Section 9.4 and
most likely contains free pages scattered throughout
physical memory, as explained earlier. Remember, too,
that if a user process requests a single byte of memory,
internal fragmentation will result, as the process will be
granted an entire page frame.
Kernel memory, however, is often allocated from a free-memory pool
different from the list used to satisfy ordinary user-mode processes. There
are two primary reasons for this:
1. The kernel requests memory for data structures of varying sizes, some of
which are less than a page in size. As a result, the kernel must use
memory conservatively and attempt to minimize waste due to
fragmentation. This is especially important because many operating
systems do not subject kernel code or data to the paging system.
2. Pages allocated to user-mode processes do not necessarily have to be
in contiguous physical memory. However, certain hardware devices
interact directly with physical memory (without the benefit of a virtual
memory interface) and consequently may require memory residing in
physically contiguous pages.
The buddy system
The buddy system allocates memory from a fixed-size
segment consisting of physically contiguous pages.
Memory is allocated from this segment using a power-
of-2 allocator, which satisfies requests in units sized as a
power of 2 (4 KB, 8 KB, 16 KB, and so forth). A request in
units not appropriately sized is rounded up to the next
highest power of 2. For example, if a request for 11 KB is
made, it is satisfied with a 16-KB segment.
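The rounding rule and the repeated halving into buddies can be sketched briefly. The segment sizes are illustrative:

```python
def next_power_of_two(request_kb):
    """Round a request up to the next power of 2, as a buddy allocator does."""
    size = 1
    while size < request_kb:
        size *= 2
    return size

def split_chain(segment_kb, request_kb):
    """List the segment sizes produced by repeatedly splitting a segment
    into two buddies until one half fits the (rounded-up) request."""
    need = next_power_of_two(request_kb)
    chain = [segment_kb]
    while segment_kb // 2 >= need:
        segment_kb //= 2              # split the segment into two buddies
        chain.append(segment_kb)      # continue with one of the halves
    return chain

print(next_power_of_two(11))   # 11-KB request -> 16-KB segment
print(split_chain(256, 11))    # 256 -> 128 -> 64 -> 32 -> 16
```

An advantage of this scheme is coalescing: when a segment is freed, it can be merged with its buddy of the same size to rebuild larger free segments.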
Slab Allocation
A second strategy for allocating kernel memory
is known as slab allocation. A slab is made up of
one or more physically contiguous pages. A
cache consists of one or more slabs
The slab-allocation algorithm uses caches to store
kernel objects. When a cache is created, a number of
objects-which are initially marked as free-are allocated
to the cache. The number of objects in the cache
depends on the size of the associated slab. For
example, a 12-KB slab (made up of three contiguous
4-KB pages) could store six 2-KB objects. Initially, all
objects in the cache are marked as free. When a new
object for a kernel data structure is needed, the
allocator can assign any free object from the cache to
satisfy the request. The object assigned from the cache
is marked as used.
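The free/used bookkeeping for one cache can be sketched as follows. The class is a toy model of the algorithm, not the Linux slab implementation:

```python
class SlabCache:
    """Sketch of a slab cache: objects preallocated in a slab,
    each marked free or used."""
    def __init__(self, slab_kb, object_kb):
        # e.g. a 12-KB slab with 2-KB objects holds 6 objects, all free
        self.free = list(range(slab_kb // object_kb))
        self.used = []

    def alloc(self):
        obj = self.free.pop()      # any free object satisfies the request
        self.used.append(obj)      # mark the object as used
        return obj

    def release(self, obj):
        self.used.remove(obj)      # return the object to the free list
        self.free.append(obj)

cache = SlabCache(slab_kb=12, object_kb=2)  # the 12-KB slab from the text
print(len(cache.free))   # 6 objects, all initially free
obj = cache.alloc()
print(len(cache.used))   # 1 object now marked used
```

Because objects are preallocated at their exact size, allocation and release are constant-time list operations with no fragmentation inside the slab.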