6. Security & Protection: Ensures system protection against unauthorized access and other
security threats through authentication, authorization, and encryption.
Early computers were not interactive devices; the user used to prepare a job consisting of three parts:
1. Program
2. Control information
3. Input data
2. Only one job was given as input at a time; as there was no memory, the computer would take the input, process it, and then generate the output.
3. Common input/output devices were punch cards or tape drives. These devices were very slow, so the processor remained idle most of the time.
The processor will not wait for anyone.
• Uniprogrammed: The CPU sits idle while waiting for a job to complete.
• Multiprogrammed: The OS switches to and executes another job if the current job needs to wait, utilizing the CPU effectively.
• Conclusion: the show must go on.
• Efficient Utilization: Ensures that the
CPU is never idle as long as at least one
job needs to execute, leading to better
utilization of resources.
2. In modern operating systems, we are able to play MP3 music, edit documents in Microsoft Word, and surf the web in Google Chrome, all running at the same time (by context switching, the illusion of parallelism is achieved).
3. For multitasking to take place, firstly there should be multiprogramming, i.e. the presence of multiple programs ready for execution, and secondly the concept of time sharing.
A Multiprocessor Operating System refers to the use of two or more central processing units (CPUs) within a single computer system. These multiple CPUs share the system bus, memory, and other peripheral devices.
2. Multiple concurrent processes can each run on a separate CPU; here we achieve true parallel execution of processes.
3. It becomes most important in computer systems where the complexity of the job is high and the CPUs can divide and conquer the work. It is generally used in fields like artificial intelligence and expert systems, image processing, weather forecasting, etc.
[Comparison table: Asymmetric vs. Symmetric Processing, with rows for Definition, Task Allocation, Complexity, Scalability, Performance]
[Comparison table on Multi-Processing: Definition, Concurrency, Complexity and Coordination]
3. For example: a petroleum refinery, an airline reservation system, an air traffic control system, systems that provide up-to-the-minute information on stock prices, and defence application systems such as RADAR.
[Comparison table: Response Time, Applications, Reliability]
• The MINIX 3 microkernel, for example, has only approximately 12,000 lines of code.
[MINIX fact box: Developer Andrew S. Tanenbaum; Operating System]
• On systems with multiple command interpreters to choose from, the interpreters are known
as
shells. For example, on UNIX and Linux systems, a user may choose among several different
shells, including the Bourne shell, C shell, Bourne-Again shell, Korn shell, and others.
• Indeed, on some systems, only a subset of system functions is available via the GUI, leaving
the less
common tasks to those who are command-line knowledgeable.
Program vs. Process (rows: Resources, Independence, Interaction):
• Resources: A program does not require system resources when not running; a process requires CPU time, memory, and other resources during execution.
• Independence: A program exists independently and is not executing.
2. Program counter: The counter indicates the address of the next instruction
to be executed for this process.
3. CPU registers: The registers vary in number and type, depending on the
computer architecture. They include accumulators, index registers, stack
pointers, and general-purpose registers, plus any condition-code information.
Along with the program counter, this state information must be saved when
an interrupt occurs, to allow the process to be continued correctly afterward.
The operating system must select, for scheduling purposes, processes from these queues in
some fashion. The selection process is carried out by the appropriate scheduler.
• Types of Schedulers
• Long Term Scheduler (LTS)/Spooler: Long-term schedulers determine which processes
enter the ready queue from the job pool. Operating less frequently than short-term
schedulers, they focus on long-term system goals such as maximizing throughput.
• Medium-term scheduler: The medium-term scheduler swaps processes in and out of
memory to optimize CPU usage and manage memory allocation. By doing so, it adjusts the
degree of multiprogramming and frees up memory as needed. Swapping allows the
system to pause and later resume a process, improving overall system efficiency.
• Short Term Scheduler (STS): The short-term scheduler, or CPU scheduler, selects from
among the processes that are ready to execute and allocates the CPU to one of them.
Scheduler comparison (Function, Frequency, Responsibility):
• Long-Term Scheduler: executes infrequently, as it deals with the admission of new processes.
• Short-Term Scheduler: executes frequently, to rapidly switch between processes.
• Medium-Term Scheduler: executes at an intermediate frequency, balancing long-term and short-term needs.
• Process execution begins with a CPU burst, which may be followed by an I/O burst, then another CPU burst and I/O burst, and so on; eventually execution ends with a final CPU burst. Thus a process keeps switching between the CPU and I/O during execution.
• I/O-Bound Processes: An I/O-bound process is one that spends more of its time doing I/O than it spends doing computations.
• CPU-Bound Processes: A CPU-bound process generates I/O requests infrequently, using more of its time doing computations.
It is important that the long-term scheduler select a good process mix of I/O-bound and CPU-
bound processes. If
all processes are I/O bound, the ready queue will almost always be empty, and the short-term
scheduler will
have little to do. Similarly, if all processes are CPU bound, the I/O waiting queue will almost
always be empty,
devices will go unused, and again the system will be unbalanced.
• Non-pre-emptive Scheduling: Once the CPU has been allocated to a process, the process keeps the CPU until it releases the CPU willingly.
• A process will leave the CPU only:
1. When the process completes its execution (termination state)
2. When the process wants to perform some I/O operation (blocked state)
[Comparison table: Non-pre-emptive vs. Pre-emptive Scheduling, with rows for CPU Allocation, Response Time, Complexity (non-pre-emptive is simpler to implement), Resource Utilization, Suitable Applications]
FCFS is the simplest scheduling algorithm; as the name suggests, the process that requests the CPU first is allocated the CPU first.
FCFS example:

P. No   AT   BT   Completion Time (CT)   TAT = CT - AT   WT = TAT - BT
P0      2    4
P1      1    2
P2      0    3
P3      4    2
P4      3    1
Average
• Easy to understand, and can easily be implemented using Queue data structure.
• Can be used for Background processes where execution is not urgent.
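As a quick check of the formulas TAT = CT - AT and WT = TAT - BT, here is a minimal C sketch that computes them under FCFS for the example above (assuming the AT/BT values as reconstructed in the table):

```c
/* Minimal FCFS sketch: computes CT, TAT = CT - AT, WT = TAT - BT
   for the AT/BT values reconstructed from the table above. */
#include <stdio.h>

int main(void)
{
    int at[] = {2, 1, 0, 4, 3};               /* arrival times */
    int bt[] = {4, 2, 3, 2, 1};               /* burst times   */
    int n = 5, done[5] = {0}, time = 0;
    double total_tat = 0, total_wt = 0;

    for (int k = 0; k < n; k++) {
        int pick = -1;
        /* FCFS: among unfinished jobs, pick the earliest arrival */
        for (int i = 0; i < n; i++)
            if (!done[i] && (pick == -1 || at[i] < at[pick]))
                pick = i;
        if (time < at[pick]) time = at[pick]; /* CPU idles until arrival */
        time += bt[pick];                     /* run to completion       */
        int tat = time - at[pick], wt = tat - bt[pick];
        printf("P%d: CT=%d TAT=%d WT=%d\n", pick, time, tat, wt);
        total_tat += tat; total_wt += wt;
        done[pick] = 1;
    }
    printf("Avg TAT=%.2f  Avg WT=%.2f\n", total_tat / n, total_wt / n);
    return 0;
}
```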
[Convoy-effect example: FCFS run twice with a long P0 (BT = 100) and a short P1, comparing TAT = CT - AT, WT = TAT - BT and the averages for the two orderings]
• Solution: smaller processes have to be executed before longer processes to achieve a lower average waiting time.
• The FCFS algorithm is thus particularly troublesome for time-sharing systems (due to its
non-pre-emptive nature), where it is important that each user get a share of the CPU at
regular intervals.
• Higher average waiting time and TAT compared to other algorithms.
[SJF example table: Arrival Time (AT), Burst Time (BT), Completion Time (CT), Waiting Time (WT) = TAT - BT, Average]
Advantages
• The pre-emptive version guarantees minimal average waiting time, so it is sometimes also referred to as an optimal algorithm; it provides a standard for other algorithms in terms of average waiting time.
• Provides better average response time compared to FCFS.
Disadvantages
• Here a process with a longer CPU burst requirement may go into starvation and suffer a poor response time.
• This algorithm cannot be implemented directly, as there is no way to know the length of the next CPU burst. Since exact SJF is not implementable, we use a technique where we try to predict the CPU burst of the next coming process (see the sketch below).
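The usual prediction technique behind this remark is exponential averaging, tau(n+1) = alpha * t(n) + (1 - alpha) * tau(n), where t(n) is the most recent actual burst and tau(n) the previous prediction. A small sketch (alpha = 0.5, the initial guess, and the sample bursts are assumed values for illustration):

```c
/* Exponential averaging for next-CPU-burst prediction:
   tau_next = alpha * t + (1 - alpha) * tau. */
#include <stdio.h>

int main(void)
{
    double alpha = 0.5, tau = 10.0;          /* initial guess tau0  */
    double bursts[] = {6, 4, 6, 4, 13, 13};  /* observed CPU bursts */

    for (int i = 0; i < 6; i++) {
        printf("predicted=%.2f  actual=%.0f\n", tau, bursts[i]);
        tau = alpha * bursts[i] + (1 - alpha) * tau;
    }
    return 0;
}
```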
• A tie is broken using FCFS order; seniority or burst time is given no importance. It supports both non-pre-emptive and pre-emptive versions.
• In Priority (non-pre-emptive), once a decision is made and, among the available processes, the process with the highest priority is scheduled on the CPU, it cannot be pre-empted, even if a new process with priority higher than that of the running process enters the system.
• In Priority (pre-emptive), once a decision is made, the process with the highest priority among the available processes is scheduled on the CPU. If a new process with priority higher than that of the running process enters the system, then we do a context switch and the processor is given to the new process with the higher priority.
• There is no general agreement on whether 0 is the highest or lowest priority; it can vary from system to system.
[Priority-scheduling example table: processes P0-P5 with BT, Priority (8 = highest), CT, TAT = CT - AT, WT = TAT - BT, Average]
• Gives a facility especially to system processes.
• Allows us to run an important process even if it is a user process.
• Disadvantages
• Here a process with a smaller priority may starve for the CPU.
• No idea of response time or waiting time.
• Note: it is especially used to support system processes or important user processes.
• Ageing: a technique of gradually increasing the priority of processes that wait in the system for a long time, e.g. priority will increase after every 10 minutes (a sketch follows).
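A minimal sketch of how ageing might be applied (assumptions for illustration: a larger number means higher priority, and a waiting process gains one priority unit per full ageing interval):

```c
/* Ageing sketch: boost the priority of processes by how long
   they have waited, one unit per ageing interval. */
#include <stdio.h>

struct proc { int pid; int priority; int waited; /* minutes waiting */ };

void age(struct proc *p, int n, int interval)
{
    for (int i = 0; i < n; i++)
        p[i].priority += p[i].waited / interval; /* +1 per interval */
}

int main(void)
{
    struct proc q[] = {{1, 2, 35}, {2, 5, 5}};
    age(q, 2, 10);                               /* every 10 minutes */
    for (int i = 0; i < 2; i++)
        printf("P%d priority=%d\n", q[i].pid, q[i].priority);
    return 0;
}
```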
• The CPU scheduler goes around the ready queue, allocating the CPU to each process for a maximum of one time quantum, say q, up to which a process can hold the CPU in one go. Within this quantum, either the process terminates (if its remaining CPU burst is less than the time quantum) or a context switch is executed: the process must release the CPU voluntarily, re-enter the ready queue, and wait for its next chance.
• If there are n processes in the ready queue and the time quantum is q, then each process gets 1/n of the CPU time in chunks of at most q time units. Each process must wait no longer than (n - 1) x q time units until its next time quantum.
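A small C sketch of round robin (assuming, for simplicity, that all processes arrive at t = 0, so a circular scan over the processes behaves like the ready queue; the burst times are assumed values):

```c
/* Round-robin sketch with quantum q: each process runs at most
   one quantum per pass until its remaining burst reaches zero. */
#include <stdio.h>

int main(void)
{
    int bt[]  = {5, 4, 2};           /* assumed burst times      */
    int rem[] = {5, 4, 2};           /* remaining burst per proc */
    int n = 3, q = 2, time = 0, left = n;

    while (left > 0) {
        for (int i = 0; i < n; i++) {
            if (rem[i] == 0) continue;
            int run = rem[i] < q ? rem[i] : q; /* at most one quantum */
            time += run;
            rem[i] -= run;
            if (rem[i] == 0) {                 /* process finished    */
                printf("P%d: CT=%d TAT=%d WT=%d\n",
                       i, time, time, time - bt[i]);   /* AT = 0 */
                left--;
            }
        }
    }
    return 0;
}
```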
[Round-robin example table: processes P0-P5 with Burst Time (BT) and Completion Time (CT)]
• In multilevel queue scheduling, the ready queue is partitioned into several separate queues, each with its own scheduling algorithm, plus a scheduling algorithm between the queues; once a process enters a specific queue, it cannot change its queue after that.
• The multilevel feedback queue scheduling algorithm, in contrast, allows a process to move between queues.
P()
{
    read(i);
    i = i + 1;
    write(i);
}
• Race condition: a situation in which the output of a process depends on the execution sequence of processes, i.e. if we change the order of execution of different processes with respect to each other, the output may change.
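The P() above can be turned into a demonstration: two threads running the read-increment-write sequence on a shared i lose updates when their steps interleave. A minimal pthreads sketch (the iteration count is an arbitrary choice):

```c
/* Race demonstration: with no synchronization, "read, increment,
   write" interleaves between threads and updates are lost, so the
   final value of i varies from run to run. */
#include <stdio.h>
#include <pthread.h>

int i = 0;                       /* shared variable */

void *P(void *arg)
{
    (void)arg;
    for (int k = 0; k < 100000; k++) {
        int tmp = i;             /* read(i)     */
        tmp = tmp + 1;           /* i = i + 1   */
        i = tmp;                 /* write(i)    */
    }
    return NULL;
}

int main(void)
{
    pthread_t a, b;
    pthread_create(&a, NULL, P, NULL);
    pthread_create(&b, NULL, P, NULL);
    pthread_join(a, NULL);
    pthread_join(b, NULL);
    printf("i = %d (200000 expected; usually less)\n", i);
    return 0;
}
```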
P()
{
    while (T)
    {
        Initial Section
        Entry Section
        Critical Section
        Exit Section
        Remainder Section
    }
}
• Bounded Waiting: There exists a bound or a limit on the number of times a process is
allowed to enter its critical section and no process should wait indefinitely to enter the
CS.
3. Hardware Solution
1. Test and Set Lock
2. Disable interrupt
P0
while (1)
{
    while (turn != 0);
    Critical Section
    turn = 1;
    Remainder section
}

P1
while (1)
{
    while (turn != 1);
    Critical Section
    turn = 0;
    Remainder Section
}
P0
while (1)
{
    flag[0] = T;
    while (flag[1]);
    Critical Section
    flag[0] = F;
    Remainder section
}

P1
while (1)
{
    flag[1] = T;
    while (flag[0]);
    Critical Section
    flag[1] = F;
    Remainder Section
}
Pi
do
{
    flag[i] = true;
    while (flag[j])
    {
        if (turn == j)
        {
            flag[i] = false;
            while (turn == j);
            flag[i] = true;
        }
    }
    /* critical section */
    turn = j;
    flag[i] = false;
    /* remainder section */
} while (true);

Pj
do
{
    flag[j] = true;
    while (flag[i])
    {
        if (turn == i)
        {
            flag[j] = false;
            while (turn == i);
            flag[j] = true;
        }
    }
    /* critical section */
    turn = i;
    flag[j] = false;
    /* remainder section */
} while (true);
P0
while (1)
{
    flag[0] = T;
    turn = 1;
    while (turn == 1 && flag[1] == T);
    Critical Section
    flag[0] = F;
    Remainder section
}

P1
while (1)
{
    flag[1] = T;
    turn = 0;
    while (turn == 0 && flag[0] == T);
    Critical Section
    flag[1] = F;
    Remainder Section
}
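The pseudocode above can be made runnable. Below is a minimal C11 sketch of Peterson's algorithm for two threads; the plain variables of the pseudocode are replaced by sequentially consistent atomics, since (as noted later) plain loads and stores are not reliable on modern hardware. The loop counts and variable names are illustrative assumptions.

```c
/* Peterson's algorithm, C11 sketch: seq_cst atomics stand in for
   the plain flag/turn variables of the pseudocode. */
#include <stdatomic.h>
#include <stdio.h>
#include <pthread.h>

atomic_bool flag[2];
atomic_int  turn;
int counter = 0;                 /* shared data protected by the lock */

void *worker(void *arg)
{
    int i = *(int *)arg, j = 1 - i;
    for (int k = 0; k < 100000; k++) {
        atomic_store(&flag[i], true);          /* entry section */
        atomic_store(&turn, j);
        while (atomic_load(&flag[j]) && atomic_load(&turn) == j)
            ;                                   /* busy wait */
        counter++;                              /* critical section */
        atomic_store(&flag[i], false);          /* exit section */
    }
    return NULL;
}

int main(void)
{
    pthread_t t0, t1;
    int id0 = 0, id1 = 1;
    pthread_create(&t0, NULL, worker, &id0);
    pthread_create(&t1, NULL, worker, &id1);
    pthread_join(t0, NULL);
    pthread_join(t1, NULL);
    printf("counter = %d (expected 200000)\n", counter);
    return 0;
}
```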
wait(S)
{
    while (S <= 0);
    S--;
}

signal(S)
{
    S++;
}
Pi()
{
    while (T)
    {
        Initial Section
        wait(S)
        Critical Section
        signal(S)
        Remainder Section
    }
}
• Similarly, a consumer needs to check for an underflow before accessing the buffer and then consume an item.
• Also, the producer and consumer must be synchronized, so that once a producer or a consumer is accessing the buffer, the other must wait.
Semaphore S = 1   // mutual exclusion on the buffer
Semaphore E = n   // number of empty slots
Semaphore F = 0   // number of full slots

Producer()
{
    while (T)
    {
        // Produce an item
        wait(E)
        wait(S)
        // Add item to buffer
        signal(S)
        signal(F)
    }
}

Consumer()
{
    while (T)
    {
        wait(F)
        wait(S)
        // Pick item from buffer
        signal(S)
        signal(E)
        // Consume item
    }
}
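For reference, the same structure runs directly with POSIX semaphores. This is a sketch under assumed parameters (a buffer of N = 5 slots, one producer, one consumer, 20 items); S, E, and F map to sem_t objects initialized to 1, N, and 0.

```c
/* Bounded-buffer producer/consumer with POSIX semaphores. */
#include <stdio.h>
#include <pthread.h>
#include <semaphore.h>

#define N 5
int buffer[N], in = 0, out = 0;
sem_t S, E, F;

void *producer(void *arg)
{
    (void)arg;
    for (int item = 1; item <= 20; item++) {
        sem_wait(&E);                 /* wait for an empty slot */
        sem_wait(&S);                 /* lock the buffer        */
        buffer[in] = item;
        in = (in + 1) % N;
        sem_post(&S);
        sem_post(&F);                 /* one more full slot     */
    }
    return NULL;
}

void *consumer(void *arg)
{
    (void)arg;
    for (int k = 0; k < 20; k++) {
        sem_wait(&F);                 /* wait for a full slot   */
        sem_wait(&S);
        int item = buffer[out];
        out = (out + 1) % N;
        sem_post(&S);
        sem_post(&E);                 /* one more empty slot    */
        printf("consumed %d\n", item);
    }
    return NULL;
}

int main(void)
{
    sem_init(&S, 0, 1);
    sem_init(&E, 0, N);
    sem_init(&F, 0, 0);
    pthread_t p, c;
    pthread_create(&p, NULL, producer, NULL);
    pthread_create(&c, NULL, consumer, NULL);
    pthread_join(p, NULL);
    pthread_join(c, NULL);
    return 0;
}
```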
Semaphore wrt = 1    // exclusion between writers (and the first/last reader)
Semaphore mutex = 1  // protects readcount
int readcount = 0

Writer()
{
    wait(wrt)
    CS //Write
    signal(wrt)
}

Reader()
{
    wait(mutex)
    readcount++
    if (readcount == 1)
        wait(wrt)    // first reader locks out writers
    signal(mutex)
    CS //Read
    wait(mutex)
    readcount--
    if (readcount == 0)
        signal(wrt)  // last reader lets writers back in
    signal(mutex)
}
// Shared: Semaphore customer = 0, barber = 0, mutex = 1; int waiting = 0

Barber
while (true)
{
    wait(customer);        // sleep until a customer arrives
    wait(mutex);
    waiting = waiting - 1;
    signal(barber);
    signal(mutex);
    // Cut hair
}

Customer
wait(mutex);
if (waiting < n)
{
    waiting = waiting + 1;
    signal(customer);
    signal(mutex);
    wait(barber);
    // Get hair cut
}
else
{
    signal(mutex);         // shop full, customer leaves
}
• Software-based solutions such as Peterson's are not guaranteed to work on modern computer architectures. In the following discussions, we explore several more solutions to the critical-section problem, using techniques ranging from hardware to software; all these solutions are based on the premise of locking, that is, protecting critical regions through the use of locks.
• The critical-section problem could be solved simply in a single-processor environment if we
could prevent interrupts from occurring while a shared variable was being modified.
while (1)
{
    while (test_and_set(&lock));
    /* critical section */
    lock = false;
    /* remainder section */
}
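The semantics of test_and_set can be written out as ordinary C, with the crucial caveat that the hardware executes the whole body atomically; this sketch only defines the meaning and is not itself atomic (a real implementation would use something like C11's atomic_flag_test_and_set).

```c
/* Meaning of test_and_set: return the old value and set the lock.
   On real hardware these two steps happen as ONE atomic instruction. */
#include <stdbool.h>

bool test_and_set(bool *target)
{
    bool rv = *target;   /* remember the old value                  */
    *target = true;      /* set the lock unconditionally            */
    return rv;           /* false means the caller acquired the lock */
}
```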
A process requests resources; if the resources are not available at that time, the process
enters a waiting state. Sometimes, a waiting process is never again able to change state,
because the resources it has requested are held by other waiting processes. This situation is
called a deadlock.
A set of processes is in a deadlocked state when every process in the set is waiting for an
event that can be caused only by another process in the set.
[Deadlock illustration: processes P1, P2 and resources R1, R2]
• Mutual exclusion
• Hold and wait
• No pre-emption
• Circular wait
4. Ignorance: - We can ignore the problem altogether and pretend that deadlocks never occur
in the system.
• If a process requests some resources:
• We first check whether they are available. If they are, we allocate them.
• If they are not, we check whether they are allocated to some other process that is waiting for additional resources. If so, we pre-empt the desired resources from the waiting process and allocate them to the requesting process (considering priority).
• If the resources are neither available nor held by a waiting process, the requesting process must wait, or it may be allowed to pre-empt resources of a running process, considering priority.
Max Need
      E  F  G
P0    4  3  1
P1    2  1  4
P2    1  3  3
P3    5  4  1

System Max
E  F  G
8  4  6

Allocation
      E  F  G
P0    1  0  1
P1    1  1  2
P2    1  0  3
P3    2  0  0

Available (= System Max - total Allocation)
E  F  G
3  3  0

Current Need (= Max Need - Allocation)
      E  F  G
P0    3  3  0
P1    1  0  2
P2    0  3  0
P3    3  4  1
• Safe sequence: an ordering of processes in which the system can satisfy the demand of every process without going into deadlock; if such an ordering exists, it is called a safe sequence.
• Safe state: there exists at least one possible safe sequence.
• Unsafe state: there exists no possible safe sequence.
• Allocation: An n x m matrix defining the number of resources of each type currently allocated to each process. If Allocation[i][j] equals k, then process Pi is currently allocated k instances of resource type Rj.
We can now present the algorithm for finding out whether or not a system is in a safe state. This algorithm can be described as follows:
1. Let Work and Finish be vectors of length m and n, respectively. Initialize Work = Available and Finish[i] = false for i = 0, 1, ..., n - 1.
2. Find an index i such that both
   Finish[i] == false
   Needi <= Work.
   If no such i exists, go to step 4.
3. Work = Work + Allocationi
   Finish[i] = true
   Go to step 2.
4. If Finish[i] == true for all i, then the system is in a safe state.

Worked on the example: Need is as above (P0: 3 3 0, P1: 1 0 2, P2: 0 3 0, P3: 3 4 1), Work starts at Available = (3, 3, 0), and Finish[i] = false for every i.

This algorithm may require an order of m x n^2 operations to determine whether a state is safe.
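The safety check is mechanical enough to run directly. Below is a minimal C sketch of the algorithm on the example above (the Need, Allocation, and Available values are the ones reconstructed from the tables); it prints a safe sequence if one exists.

```c
/* Banker's safety algorithm on the example state above. */
#include <stdio.h>
#include <stdbool.h>

#define N 4  /* processes      */
#define M 3  /* resource types E, F, G */

int main(void)
{
    int need[N][M]  = {{3,3,0},{1,0,2},{0,3,0},{3,4,1}};
    int alloc[N][M] = {{1,0,1},{1,1,2},{1,0,3},{2,0,0}};
    int work[M]     = {3,3,0};            /* = Available        */
    bool finish[N]  = {false, false, false, false};

    for (int done = 0; done < N; done++) {
        int i, j;
        for (i = 0; i < N; i++) {          /* step 2: find i with */
            if (finish[i]) continue;       /* Finish[i] == false  */
            for (j = 0; j < M; j++)        /* and Need_i <= Work  */
                if (need[i][j] > work[j]) break;
            if (j == M) break;
        }
        if (i == N) { printf("\nunsafe state\n"); return 0; }
        for (j = 0; j < M; j++)            /* step 3: release its */
            work[j] += alloc[i][j];        /* allocation          */
        finish[i] = true;
        printf("P%d ", i);
    }
    printf("<- safe sequence\n");          /* step 4 */
    return 0;
}
```

On this state it finds the safe sequence P0, P2, P1, P3.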
• Deadlock can also be described in terms of a directed graph called a system resource-allocation graph. This graph consists of a set of vertices V and a set of edges E.
• The set of vertices V is partitioned into two different types of nodes: P = {P1, P2, ..., Pn},
the set
consisting of all the active processes in the system, and R = {R1, R2, ..., Rm}, the set
consisting of
all resource types in the system.
• Process Termination
• Abort all deadlocked processes
• Abort one process at a time until the deadlock is removed
• Resource pre-emption
• Selecting a victim
• Partial or Complete Rollback
4. Deadlocks are often rare, so the trade-off may seem justified. Manual restarts
may be required when a deadlock occurs.
• Multi-threaded applications have multiple threads within a single process, each having their
own program counter, stack and set of registers, but sharing common code, data, and certain
structures such as open files.
There are two types of threads to be managed in a modern system: User threads and kernel
threads.
User threads are supported above the kernel, without kernel support. These are the threads
that application programmers would put into their programs.
Kernel threads are supported within the kernel of the OS itself. All modern OS support kernel
level threads, allowing the kernel to perform multiple simultaneous tasks and/or to service
multiple kernel system calls simultaneously
• The one-to-one model maps each user thread to a kernel thread, and so overcomes the problems listed above involving blocking system calls and the splitting of processes across multiple CPUs.
• However, the overhead of managing the one-to-one model is more significant: creating a kernel thread for every user thread slows down the system. Most implementations of this model therefore place a limit on how many threads can be created.
• Linux and Windows from 95 to XP implement the one-to-one model for threads.
[Memory-hierarchy analogy slide: cache (4 MB), main memory (32 GB), disk (8 TB)]
The references to memory at any given interval of time tend to be confined within a few localized areas in memory. This phenomenon is known as the property of locality of reference. There are two types of locality of reference.
• Temporal Locality: the reuse of specific data or resources within a relatively small time duration, i.e. most recently used.
3. The page-table base register (PTBR) provides the base address of the page table, and the corresponding page-table entry is then accessed using the page number p.
4. Here we find the corresponding frame number (the base address of the frame in main memory in which the page is stored).
5. The frame number is combined with the instruction offset to get the physical address, which is used to access main memory.
4. The size of each page-table entry is the same; it holds the corresponding frame number.
5. The page table is a data structure which is itself stored in main memory.
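Numerically, the translation is just a divide and a modulo. A sketch with an assumed 4096-byte page size and a made-up page table:

```c
/* Logical-to-physical translation sketch:
   p = LA / page_size, d = LA % page_size, PA = frame * page_size + d. */
#include <stdio.h>

#define PAGE_SIZE 4096

int main(void)
{
    int page_table[4] = {7, 2, 9, 5};   /* page -> frame (assumed)   */
    unsigned la = 5000;                 /* example logical address   */
    unsigned p  = la / PAGE_SIZE;       /* page number               */
    unsigned d  = la % PAGE_SIZE;       /* offset within the page    */
    unsigned pa = page_table[p] * PAGE_SIZE + d;  /* physical address */
    printf("LA=%u -> p=%u d=%u -> PA=%u\n", la, p, d, pa);
    return 0;
}
```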
Decimal units:
1 Thousand = 10^3 = 1 Kilo
1 Million = 10^6 = 1 Mega
1 Billion = 10^9 = 1 Giga
1 Trillion = 10^12 = 1 Tera
10^15 = 1 Peta
10^18 = 1 Exa
10^21 = 1 Zetta
10^24 = 1 Yotta
Binary units:
2^10 = 1 Kilo
2^20 = 1 Mega
2^30 = 1 Giga
2^40 = 1 Tera
2^50 = 1 Peta
2^60 = 1 Exa
2^70 = 1 Zetta
2^80 = 1 Yotta
Number of locations vs. address bits:
• n locations require upper-bound(log2 n) address bits.
• With n address bits, 2^n locations can be addressed.
[Practice table: byte-addressable memories, relating logical address (LA) bits, main memory (MM) size, physical address (PA) bits, and page sizes (1 B, 512 B, 1 KB, 4096 B) for sizes such as 128 MB, 32 GB, 128 GB, 512 GB]
• Paging requires two memory accesses (one for the page table and the other for the actual access).
• To solve the problems in paging we take the help of the TLB. The TLB is an associative, high-speed memory; its effectiveness is measured by the TLB hit ratio.
• Conventional page tables can consume large amounts of physical memory just to keep track of how other physical memory is being used. To solve this problem, we can use an Inverted Page Table.
● An inverted page table has one entry for each real page (or frame) of memory. Each entry
consists of the virtual
address of the page stored in that real memory location, with information about the process
that owns the page.
Thus, only one page table is in the system, and it has only one entry for each page of physical
memory.
● Thus the number of entries in the page table is equal to the number of frames in the physical memory.
Disadvantages
• Virtual memory is not easy to implement.
• It may substantially decrease performance if it is used carelessly (Thrashing)
2. We find a free frame; if one is available, we can bring in the desired page, but if not, we have to select a page as a victim, swap it out from main memory to secondary memory, and then swap in the desired page (this situation effectively doubles the page-fault service time).
3.2. If the modify bit is not set: It means the page has not been modified since it was read
into the main memory. We need not write the memory page to the disk: it is already there.
• When no frame is free, we use a page-replacement algorithm. Page replacement will decide which page to replace next.
• LRU replacement looks backward in time, rather than forward: replace the page that has not been used for the longest period of time.
• LRU is much better than FIFO replacement in terms of page faults. The LRU policy is often used as a page-replacement algorithm and is considered to be good (a sketch follows).
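A minimal sketch of LRU page replacement that counts page faults (the reference string and frame count are assumed values; "least recently used" is tracked with last-use timestamps):

```c
/* LRU page-fault counter: on a miss, evict the frame whose page
   was used longest ago (empty frames are filled first). */
#include <stdio.h>

int main(void)
{
    int ref[] = {7,0,1,2,0,3,0,4,2,3}, n = 10;
    int frames[3] = {-1,-1,-1}, last[3] = {0,0,0}, faults = 0;

    for (int t = 0; t < n; t++) {
        int hit = -1, victim = 0;
        for (int f = 0; f < 3; f++)
            if (frames[f] == ref[t]) hit = f;
        if (hit >= 0) { last[hit] = t; continue; }  /* page in memory */
        faults++;
        for (int f = 0; f < 3; f++) {               /* pick a victim  */
            if (frames[f] == -1) { victim = f; break; }
            if (last[f] < last[victim]) victim = f; /* least recent   */
        }
        frames[victim] = ref[t];
        last[victim] = t;
    }
    printf("page faults = %d\n", faults);
    return 0;
}
```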
• If a page is in active use, it will be in the working set. If it is no longer being used, it will drop from the working set.
• The working set is an approximation of the program's locality. The accuracy of the working set depends on the selection of Δ: if Δ is too small, it will not encompass the entire locality; if Δ is too large, it may overlap several localities.
[Disk structure diagram: spindle, platter, arm, read-write head, cylinder c, sector s]
• Seek Time: the time taken by the read/write head to reach the correct track (always given in the question).
• Rotational Latency: the time the read/write head spends waiting for the correct sector. In general it is a random value, so for average analysis we consider the time taken by the disk to complete half a rotation.
• Transfer Time: the time taken by the read/write head to read or write the data. In general, we assume that in one complete rotation the head can read/write an entire track, so
• total transfer time = (File Size / Track Size) x time taken to complete one revolution.
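Putting the three components together, here is a sketch with assumed values (8 ms seek, 7200 RPM, a 1 MB file on 512 KB tracks):

```c
/* Disk service time sketch:
   total = seek + average rotational latency + transfer time. */
#include <stdio.h>

int main(void)
{
    double seek_ms     = 8.0;
    double rotation_ms = 60000.0 / 7200;     /* one full revolution */
    double latency_ms  = rotation_ms / 2;    /* average: half turn  */
    double file_kb = 1024, track_kb = 512;
    double transfer_ms = (file_kb / track_kb) * rotation_ms;

    printf("total = %.2f ms\n", seek_ms + latency_ms + transfer_ms);
    return 0;
}
```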
3. If the desired disk drive and controller are available, the request can be serviced
immediately. If the drive or controller is busy,
any new requests for service will be placed in the queue of pending requests for that drive.
4. When one request is completed, the operating system chooses which pending request to
service next. How does the
operating system make this choice? Any one of several disk-scheduling algorithms can be
used.
Advantages:
• Seek movement decreases.
• Throughput increases.
Disadvantages:
• Overhead to calculate the closest request.
• Can cause starvation for a request which is far from the current location of the head.
• High variance of response time and waiting time, as SSTF favors only the closest requests.
• Circular SCAN (C-SCAN) is a variant of SCAN designed to provide a more uniform wait time. Like SCAN, C-SCAN moves the head from one end of the disk to the other, servicing requests along the way. When the head reaches the other end, however, it immediately returns to the beginning of the disk without servicing any requests on the return trip.
Advantages:
• Provides more uniform wait time compared to SCAN.
• Better response time compared to SCAN.
Disadvantage:
• More seek movement in order to reach the starting position (see the sketch below).
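A sketch computing total head movement under C-SCAN (the request queue, start position, and 200-cylinder disk are assumed values, and the full return sweep is counted as movement):

```c
/* C-SCAN head movement: service requests >= head while sweeping up,
   jump from the last cylinder back to 0, then service the rest. */
#include <stdio.h>
#include <stdlib.h>

int cmp(const void *a, const void *b) { return *(int*)a - *(int*)b; }

int main(void)
{
    int req[] = {98, 183, 37, 122, 14, 124, 65, 67};
    int n = 8, head = 53, max_cyl = 199, movement = 0, pos = head;

    qsort(req, n, sizeof(int), cmp);
    for (int i = 0; i < n; i++)             /* requests at/above head */
        if (req[i] >= head) { movement += req[i] - pos; pos = req[i]; }
    movement += (max_cyl - pos) + max_cyl;  /* to the end, jump to 0  */
    pos = 0;
    for (int i = 0; i < n; i++)             /* remaining (below head) */
        if (req[i] < head) { movement += req[i] - pos; pos = req[i]; }
    printf("total head movement = %d cylinders\n", movement);
    return 0;
}
```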
• Contiguous
• Linked
• Indexed
Each method has advantages and disadvantages. Although some systems support all three, it
is
more common for a system to use one method for all files.
• Contiguous allocation requires that each file occupy a set of contiguous blocks on the disk.
• In the directory we usually store three columns: file name, starting disk block address, and length of the file in number of blocks.
• Disadvantages
• Suffers from a huge amount of external fragmentation.
• Another problem with contiguous allocation is file modification.
• To access a block, the operating system uses the first-level index to find a second-level
index
block and then uses that block to find the desired data block. This approach could be
continued to a third or fourth level, depending on the desired maximum file size.
• Disadvantage
• Indexed allocation does suffer from wasted space. The pointer overhead of the index block
is generally greater than the pointer overhead of linked allocation.
• To create a file, we search the free-space list for the required amount of space and allocate
that space to the new
file. This space is then removed from the free-space list. When a file is deleted, its disk space
is added to the
free-space list.
• For example, consider a disk where blocks 2, 3, 4, 5, 8, 9, 10, 11, 12, 13, 17, 18, 25, 26, and
27 are free
and the rest of the blocks are allocated. The free-space bit map would be
001111001111110001100000011100000 ...
• The main advantage of this approach is its relative simplicity and its efficiency in finding
the first free
block or n consecutive free blocks on the disk.
• A 1.3-GB disk with 512-byte blocks would need a bit map of over 332 KB to track its free blocks.
• A 1-TB disk with 4-KB blocks has 2^28 blocks, so its bit map requires 2^28 bits = 32 MB. Given that disk sizes constantly increase, the problem with bit vectors will continue to escalate as well.
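A sketch of how the first free block is found in the bit vector (using the example bit map above, where 1 marks a free block):

```c
/* First-free-block search over the example bit vector:
   '1' = free block, '0' = allocated block. */
#include <stdio.h>
#include <string.h>

int main(void)
{
    const char *bitmap = "001111001111110001100000011100000";
    for (size_t i = 0; i < strlen(bitmap); i++) {
        if (bitmap[i] == '1') {
            printf("first free block = %zu\n", i);  /* expect block 2 */
            break;
        }
    }
    return 0;
}
```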
File organization refers to the way data is stored in a file. File organization is very important
because it determines the methods of access, efficiency, flexibility and storage devices to use.
Four methods of organizing files:
• 1. Sequential file organization:
• a. Records are stored and accessed in a particular sorted order using a key field.
• b. Retrieval requires searching sequentially through the entire file record by record to
the end.
• 2. Random or direct file organization:
• a. Records are stored randomly but accessed directly.
• b. To access a file which is stored randomly, a record key is used to determine where a
record is stored on the storage media.
• c. Magnetic and optical disks allow data to be stored and accessed randomly.
Single-Level vs. Two-Level Directory (rows: User Isolation, Organization, Search Efficiency, Access Control, Complexity):
• Complexity: a single-level directory is simpler to implement but can become cluttered and difficult to manage with many files; a two-level directory is slightly more complex due to the need for user management, but offers better organization.
Features of Directories
• Metadata: Directories also store metadata about the files and subdirectories
they contain, such as permissions, ownership, and timestamps.
• Dynamic Nature: As files are added or removed, the directory dynamically
updates its list of contents.
• Links and Shortcuts: Some systems support the creation of pointers or links
within directories to other files or directories.
[Comparison table on Indexed File organization: Access Method, Speed of Access, Storage Efficiency, Update Complexity, Use Case]
Access matrix:
          File A    File B    File C
User 1    r-w       r
User 2    r         w         r-w
User 3              r
• Here, 'r' indicates read permission, 'w' indicates write permission, and
'-' indicates no permission.
• File A:
• User 1: r-w
• User 2: r
• File B:
• User 1: r
• User 2: w
• User 3: r
• User 1:
• File A: r-w
• File B: r
• User 2:
• File A: r
• File B: w
• File C: r-w
1. Global Table: A global table is essentially the raw access matrix itself, where each cell denotes the permissions a subject has on an object. While straightforward, this method is not practical for large systems due to the sparsity of the matrix and the associated storage overhead.
2. Access Lists for Objects: Here, the focus is on objects like files or directories. Each object maintains an Access Control List (ACL) that records which operations are permissible by which subjects. ACLs are object-centric and make it easy to determine all access rights to a particular object. However, this approach makes it cumbersome to list all capabilities of a particular subject across multiple objects.
3. Capability Lists for Domains: In this subject-centric approach, each subject or domain maintains a list of objects along with the operations it can perform on them, known as a Capability List. This makes it straightforward to manage and review the permissions granted to each subject. On the downside, revoking or changing permissions across all subjects for a specific object can be more challenging.
4. Lock-Key Mechanism: In a lock-key mechanism, each object is assigned a unique "lock," and subjects are granted "keys" to unlock these locks. When a subject attempts to access an object, the system matches the key with the lock to determine if the operation is permissible. This approach can be seen as an abstraction over the access matrix and can be used to dynamically change permissions with minimal overhead.