CAP-790
Unit - III
ACCESS CONTROL MATRIX
In computer science, an Access Control
Matrix or Access Matrix is an abstract, formal
security model of the protection state of a
computer system that characterizes the
rights of each subject with respect to every
object in the system. It was first introduced
by Butler W. Lampson in 1971.
An access matrix can be envisioned as a
rectangular array of cells, with one row per
subject and one column per object. The entry
in a cell – that is, the entry for a particular
subject-object pair – indicates the access
mode that the subject is permitted to
exercise on the object. Each column is
equivalent to an access control list for the
object; and each row is equivalent to an
access profile for the subject.
DEFINITION
According to the model, the protection state
of a computer system can be abstracted as a
set of objects O, that is, the set of entities
that need to be protected (e.g. processes,
files, memory pages), and a set of subjects S,
which consists of all active entities (e.g.
users, processes).
Further, there exists a set of rights R of the
form r(s, o). A right thereby specifies the kind
of access a subject s is allowed to perform on
an object o.
         Asset 1                     Asset 2                     File   Device
Role 1   read, write, execute, own   execute                     read   write
Role 2   read                        read, write, execute, own   -      -
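To make the model concrete, here is a minimal Python sketch (all names are illustrative) that stores the example matrix above as a dictionary of dictionaries. Reading out one column gives an object's access control list; reading out one row gives a subject's access profile:

```python
# The example access matrix above, as subject -> object -> set of rights.
acm = {
    "Role 1": {"Asset 1": {"read", "write", "execute", "own"},
               "Asset 2": {"execute"},
               "File":    {"read"},
               "Device":  {"write"}},
    "Role 2": {"Asset 1": {"read"},
               "Asset 2": {"read", "write", "execute", "own"}},
}

def allowed(subject, obj, right):
    """Check a single right r(s, o) against the matrix."""
    return right in acm.get(subject, {}).get(obj, set())

def acl(obj):
    """One column of the matrix: the access control list for an object."""
    return {s: rights[obj] for s, rights in acm.items() if obj in rights}

def access_profile(subject):
    """One row of the matrix: the access profile of a subject."""
    return dict(acm.get(subject, {}))

print(allowed("Role 1", "File", "read"))   # True
print(allowed("Role 2", "File", "read"))   # False
print(acl("Asset 1"))                      # per-subject rights on Asset 1
```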
UTILITY
Because it does not define the granularity of
protection mechanisms, the Access Control
Matrix can be used as a model of the static
access permissions in any type of access
control system.
It does not model the rules by which
permissions can change in any particular
system, and therefore only gives an
incomplete description of the system's
access control security policy.
An Access Control Matrix should be thought
of only as an abstract model of permissions
at a given point in time; a literal
implementation of it as a two-dimensional
array would have excessive memory
requirements.
Capability-based security and access control
lists are categories of concrete access control
mechanisms whose static permissions can be
modeled using Access Control Matrices.
Although these two mechanisms have
sometimes been presented (for example in
Butler Lampson's Protection paper) as simply
row-based and column-based
implementations of the Access Control
Matrix, this view has been criticized as
drawing a misleading equivalence between
the two kinds of systems, since it does not
take their dynamic behaviour into account.
WHAT IS AUTHENTICATION?
Authentication is the process of identifying
users that request access to a system,
network, or device.
Authentication typically verifies a user's
identity using credentials such as a username
and password.
Other authentication technologies like
biometrics and authentication apps are also
used to authenticate user identity.
WHY IS USER AUTHENTICATION
IMPORTANT?
User authentication is a method that keeps
unauthorized users from accessing sensitive
information. For example, User A only has
access to relevant information and cannot
see the sensitive information of User B.
Cybercriminals can gain access to a system
and steal information when user
authentication is not secure. The data
breaches companies like Adobe, Equifax, and
Yahoo faced are examples of what happens
when organizations fail to secure their user
authentication.
Hackers gained access to Yahoo user
accounts to steal contacts, calendars and
private emails between 2012 and 2016. The
Equifax data breach in 2017 exposed credit
card data of more than 147 million
consumers. Without a secure authentication
process, any organization could be at risk.
5 COMMON AUTHENTICATION TYPES
Cybercriminals always improve their attacks.
As a result, security teams are facing plenty
of authentication-related challenges.
This is why companies are starting to
implement more sophisticated incident
response strategies, including authentication
as part of the process.
The list below reviews some common
authentication methods used to secure
modern systems.
1. PASSWORD-BASED
AUTHENTICATION
Passwords are the most common method of
authentication. A password can be a string of
letters, numbers, or special characters. To
protect yourself, create strong passwords
that combine all of these character types.
However, passwords are prone to phishing
attacks, and poor password hygiene weakens
their effectiveness. The average person has
about 25 different online accounts, yet only
54% of users use different passwords across
their accounts.
The truth is that there are a lot of passwords
to remember. As a result, many people
choose convenience over security. Most
people use simple passwords instead of
creating reliable passwords because they are
easier to remember.
The bottom line is that passwords have many
weaknesses and are not sufficient on their
own to protect online information. Hackers
can guess user credentials by brute force,
running through possible combinations until
they find a match.
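Because of these weaknesses, systems should never store passwords in plain text. The sketch below, assuming only Python's standard library, shows the usual mitigation: keep a per-user random salt plus a slow PBKDF2 hash, and compare digests in constant time:

```python
import hashlib
import hmac
import os

def hash_password(password, salt=None):
    """Derive a slow, salted hash; return (salt, digest) for storage."""
    salt = salt or os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 600_000)
    return salt, digest

def verify_password(password, salt, stored):
    """Recompute the digest and compare in constant time."""
    _, digest = hash_password(password, salt)
    return hmac.compare_digest(digest, stored)

salt, stored = hash_password("correct horse battery staple")
print(verify_password("correct horse battery staple", salt, stored))  # True
print(verify_password("letmein", salt, stored))                       # False
```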
2. MULTI-FACTOR AUTHENTICATION
Multi-factor Authentication is an authentication
method that requires two or more independent
ways to identify a user. Examples include
codes generated from the user’s smartphone,
Captcha tests, fingerprints, voice biometrics or
facial recognition.
MFA authentication methods and technologies
increase the confidence of users by adding
multiple layers of security. MFA may be a good
defense against most account hacks, but it has
its own pitfalls. People may lose their phones
or SIM cards and not be able to generate an
authentication code.
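The one-time codes generated by authenticator apps are typically time-based (TOTP). A minimal RFC 6238-style sketch, assuming an illustrative base32 shared secret and only Python's standard library:

```python
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32, interval=30, digits=6):
    """Derive the current time-based one-time code from a shared secret."""
    key = base64.b32decode(secret_b32, casefold=True)
    counter = struct.pack(">Q", int(time.time()) // interval)  # time step
    digest = hmac.new(key, counter, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                                 # dynamic truncation
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# Server and phone share the secret once; both can then derive the code.
print(totp("JBSWY3DPEHPK3PXP"))  # a 6-digit code that changes every 30 seconds
```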
3. CERTIFICATE-BASED
AUTHENTICATION
Certificate-based authentication technologies identify
users, machines or devices by using digital certificates.
A digital certificate is an electronic document based on
the idea of a driver’s license or a passport.
The certificate contains the digital identity of a user
including a public key, and the digital signature of a
certification authority. Digital certificates prove the
ownership of a public key and are issued only by a
certification authority.
Users provide their digital certificates when they sign
in to a server. The server verifies the credibility of the
digital signature and the certificate authority. The
server then uses cryptography to confirm that the user
has a correct private key associated with the
certificate.
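In TLS, this flow corresponds to mutual (client-certificate) authentication. A minimal configuration sketch using Python's ssl module; the file paths are illustrative placeholders for the server's credentials and the trusted certification authority:

```python
import ssl

# Server-side TLS context that demands and verifies a client certificate.
context = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
context.load_cert_chain(certfile="server.crt", keyfile="server.key")
context.verify_mode = ssl.CERT_REQUIRED        # reject clients without a certificate
context.load_verify_locations(cafile="trusted_ca.pem")

# During the handshake, the TLS layer checks the CA's signature on the
# client certificate and challenges the client to prove possession of the
# matching private key; use context.wrap_socket() on a listening socket.
```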
4. BIOMETRIC AUTHENTICATION
Biometrics authentication is a security
process that relies on the unique biological
characteristics of an individual. Here are key
advantages of using biometric authentication
technologies:
Biological characteristics can be easily compared
to authorized features saved in a database.
Biometric authentication can control physical
access when installed on gates and doors.
You can add biometrics into your multi-factor
authentication process.
5. TOKEN-BASED AUTHENTICATION
Token-based authentication technologies
enable users to enter their credentials once
and receive a unique encrypted string of
random characters in exchange.
You can then use the token to access
protected systems instead of entering your
credentials all over again.
The digital token proves that you already
have access permission. Use cases of token-
based authentication include RESTful APIs
that are used by multiple frameworks and
clients.
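A minimal sketch of the idea, assuming an HMAC-signed bearer token carrying a username and an expiry timestamp (all names are illustrative; real deployments typically use a standard format such as JWT):

```python
import hashlib
import hmac
import secrets
import time

SECRET_KEY = secrets.token_bytes(32)  # server-side signing key

def issue_token(user, ttl=3600):
    """Sign 'user:expiry' so the server can later recognize its own token."""
    payload = f"{user}:{int(time.time()) + ttl}"
    sig = hmac.new(SECRET_KEY, payload.encode(), hashlib.sha256).hexdigest()
    return f"{payload}:{sig}"

def verify_token(token):
    """Return the user if the signature is valid and the token unexpired."""
    payload, _, sig = token.rpartition(":")
    expected = hmac.new(SECRET_KEY, payload.encode(), hashlib.sha256).hexdigest()
    user, _, expiry = payload.rpartition(":")
    if hmac.compare_digest(sig, expected) and time.time() < int(expiry):
        return user
    return None

token = issue_token("alice")   # credentials were checked once, before this
print(verify_token(token))     # "alice" - no password needed on later requests
```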
USER AUTHENTICATION ELEMENTS
The following element types are used for
configuring user authentication and directory
services.
Authentication Methods: Configured
authentication methods for end-user and
administrator authentication. Used in rules that
require end-user authentication.
Servers: Active Directory Servers, LDAP Servers,
RADIUS Authentication Servers, and TACACS+
Authentication Servers for end-user and
administrator authentication and directory services.
Users: End users stored in the internal LDAP
database or an external LDAP database. Used in
rules that require end-user authentication.
OTHER ELEMENTS
SMTP Servers: SMTP servers that send
email or SMS messages about changes to
user accounts to end users. The same SMTP
Servers can also be used to send Alerts to
Administrators.
Certificates: Trusted Certificate
Authorities that issue certificates presented
by service providers, Pending Certificate
Requests, and Server Credentials.
CONCLUSION
Authentication technology is always
changing. Businesses have to move beyond
passwords and think of authentication as a
means of enhancing user experience.
Authentication methods like biometrics
eliminate the need to remember long and
complex passwords.
Enhanced authentication methods and
technologies make passwords far harder for
attackers to exploit and greatly reduce the
likelihood of a data breach.
SECURITY MODELS
A computer security model is a scheme for
specifying and enforcing security policies.
A security model may be founded upon a
formal model of access rights, a model of
computation, a model of distributed computing,
or no particular theoretical grounding at all.
A computer security model is implemented
through a computer security policy.
These models are used for maintaining goals of
security, i.e. Confidentiality, Integrity, and
Availability. In simple words, it deals with CIA
Triad maintenance. There are 3 main types of
Classic Security Models.
1. BELL-LAPADULA
This Model was invented by the scientists
David Elliot Bell and Leonard J. LaPadula;
thus it is called the Bell-LaPadula Model. It is
used to maintain Confidentiality.
Here, the classification of Subjects (Users)
and Objects (Files) is organized in a non-
discretionary fashion, with respect to
different layers of secrecy.
IT HAS MAINLY 3 RULES:
SIMPLE CONFIDENTIALITY RULE: The Subject
can only Read files on the Same Layer of
Secrecy and the Lower Layers of Secrecy, but
not the Upper Layers of Secrecy; this is why
the rule is called NO READ-UP.
STAR CONFIDENTIALITY RULE: The Subject
can only Write files on the Same Layer of
Secrecy and the Upper Layers of Secrecy, but
not the Lower Layers of Secrecy; this is why
the rule is called NO WRITE-DOWN.
STRONG STAR CONFIDENTIALITY RULE: The
Strong Star Confidentiality Rule is the
strictest: the Subject can Read and Write
files on the Same Layer of Secrecy only,
neither the Upper nor the Lower Layers; this
is why the rule is called NO READ-WRITE
UP-DOWN.
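A minimal sketch of the three checks, assuming secrecy layers ranked by integers (the labels are illustrative):

```python
LEVELS = {"Unclassified": 0, "Confidential": 1, "Secret": 2, "Top Secret": 3}

def blp_can_read(subject, obj):
    """Simple Confidentiality Rule: NO READ-UP."""
    return LEVELS[subject] >= LEVELS[obj]

def blp_can_write(subject, obj):
    """Star Confidentiality Rule: NO WRITE-DOWN."""
    return LEVELS[subject] <= LEVELS[obj]

def blp_strong_star(subject, obj):
    """Strong Star Rule: read and write only at the subject's own layer."""
    return LEVELS[subject] == LEVELS[obj]

print(blp_can_read("Secret", "Confidential"))   # True: reading down is allowed
print(blp_can_write("Secret", "Confidential"))  # False: no write-down
```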
2. BIBA
This Model was invented by the scientist
Kenneth J. Biba; thus it is called the Biba
Model. It is used to maintain Integrity.
Here, the classification of Subjects (Users)
and Objects (Files) is organized in a non-
discretionary fashion, with respect to
different layers of integrity.
Its rules work as the exact reverse of the
Bell-LaPadula Model's.
IT HAS MAINLY 3 RULES:
SIMPLE INTEGRITY RULE: The Subject can
only Read files on the Same Layer of
Integrity and the Upper Layers of Integrity,
but not the Lower Layers of Integrity; this is
why the rule is called NO READ-DOWN.
STAR INTEGRITY RULE: The Subject can only
Write files on the Same Layer of Integrity
and the Lower Layers of Integrity, but not
the Upper Layers of Integrity; this is why the
rule is called NO WRITE-UP.
STRONG STAR INTEGRITY RULE: The Strong
Star Integrity Rule states that the Subject
can Read and Write files on the Same Layer
of Integrity only.
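The Biba checks are the mirror image of the Bell-LaPadula sketch above; the same kind of numeric ranking is read here as integrity rather than secrecy:

```python
LEVELS = {"Low": 0, "Medium": 1, "High": 2}  # integrity layers (illustrative)

def biba_can_read(subject, obj):
    """Simple Integrity Rule: NO READ-DOWN."""
    return LEVELS[subject] <= LEVELS[obj]

def biba_can_write(subject, obj):
    """Star Integrity Rule: NO WRITE-UP."""
    return LEVELS[subject] >= LEVELS[obj]

print(biba_can_read("Medium", "Low"))   # False: reading down risks corruption
print(biba_can_write("Medium", "Low"))  # True: writing down is allowed
```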
3. CLARK-WILSON SECURITY MODEL
This Model is a highly secured model. It has
the following entities:
SUBJECT: Any user who requests access to Data
Items.
CONSTRAINED DATA ITEMS: These cannot be
accessed directly by the Subject; they must be
accessed via the components of the Clark-Wilson
model.
UNCONSTRAINED DATA ITEMS: These can be
accessed directly by the Subject.
THE COMPONENTS OF THE CLARK-WILSON
SECURITY MODEL
TRANSFORMATION PROCESS: The Subject's
request to access the Constrained Data Items
is handled by the Transformation Process,
which converts it into permissions and
forwards it to the Integrity Verification
Process.
INTEGRITY VERIFICATION PROCESS: The
Integrity Verification Process performs
Authentication and Authorization. If both
succeed, the Subject is granted access to the
Constrained Data Items.
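A minimal sketch of this flow (all names are illustrative): the constrained data item changes only through a transformation process, which first runs the integrity verification step:

```python
# (subject, procedure) pairs certified in advance.
AUTHORIZED = {("alice", "update_balance")}

cdi = {"balance": 100}          # Constrained Data Item: no direct access
udi = {"note": "scratchpad"}    # Unconstrained Data Item: direct access is fine

def integrity_verification(subject, procedure):
    """Authenticate and authorize the subject for this procedure."""
    return (subject, procedure) in AUTHORIZED

def transformation(subject, procedure, amount):
    """The only path to the CDI; refuses uncertified requests."""
    if not integrity_verification(subject, procedure):
        raise PermissionError("subject not certified for this procedure")
    cdi["balance"] += amount

transformation("alice", "update_balance", 50)      # allowed: balance becomes 150
# transformation("mallory", "update_balance", 50)  # would raise PermissionError
```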
DISASTER RECOVERY
Disaster Recovery involves a set of policies,
tools and procedures to enable the recovery
or continuation of vital technology
infrastructure and systems following a
natural or human-induced disaster.
Disaster recovery focuses on the IT or
technology systems supporting critical
business functions, as opposed to business
continuity, which involves keeping all
essential aspects of a business functioning
despite significant disruptive events. Disaster
recovery can therefore be considered a
subset of business continuity.
Disaster Recovery assumes that the primary
site is not recoverable (at least for some
time) and represents the process of restoring
data and services to a secondary surviving
site, as opposed to restoring them to their
original place.
HOW DOES DISASTER RECOVERY
WORK?
Disaster recovery relies upon the replication
of data and computer processing in an off-
premises location not affected by the
disaster.
When servers go down because of a natural
disaster, equipment failure or cyber attack, a
business needs to recover lost data from a
second location where the data is backed up.
Ideally, an organization can transfer its
computer processing to that remote location
as well in order to continue operations.
IT SERVICE CONTINUITY
IT Service Continuity (ITSC) is a subset of
business continuity planning (BCP) and
encompasses IT disaster recovery planning
and wider IT resilience planning. It also
incorporates those elements of IT
infrastructure and services which relate to
communications such as (voice) telephony
and data communications.
The ITSC Plan reflects the Recovery Point
Objective (RPO, how recent the last
recoverable transactions must be) and the
Recovery Time Objective (RTO, the time
interval within which service must be
restored).
RECOVERY TIME OBJECTIVE
The Recovery Time Objective (RTO) is the
targeted duration of time and a service level
within which a business process must be restored
after a disaster (or disruption) in order to avoid
unacceptable consequences associated with a
break in business continuity.
[Figure: schematic representation of the terms
RPO and RTO; in the depicted example, the
agreed values of RPO and RTO are not fulfilled.]
In accepted business continuity planning
methodology, the RTO is established during the
Business Impact Analysis (BIA) by the owner of a
process, including identifying time frames for
alternate or manual workarounds.
RECOVERY POINT OBJECTIVE
A Recovery Point Objective (RPO) is defined
by business continuity planning. It is the
maximum targeted period in which data
(transactions) might be lost from an IT
service due to a major incident.
If RPO is measured in minutes (or even a few
hours), then in practice, off-site mirrored
backups must be continuously maintained; a
daily off-site backup on tape will not suffice.
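A short worked example (the times are illustrative) showing how an actual incident is checked against agreed RPO and RTO values:

```python
from datetime import datetime, timedelta

rpo = timedelta(hours=1)   # at most one hour of transactions may be lost
rto = timedelta(hours=4)   # service must be restored within four hours

last_backup = datetime(2024, 1, 1, 9, 0)    # most recent recoverable point
incident    = datetime(2024, 1, 1, 9, 45)   # disaster strikes
restored    = datetime(2024, 1, 1, 15, 0)   # service back online

data_lost = incident - last_backup   # 45 minutes of transactions
downtime  = restored - incident      # 5 hours 15 minutes

print(data_lost <= rpo)  # True: within the Recovery Point Objective
print(downtime <= rto)   # False: the Recovery Time Objective was missed
```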
5 TOP ELEMENTS OF AN EFFECTIVE
DISASTER RECOVERY PLAN
Disaster recovery team: This assigned group of
specialists will be responsible for creating,
implementing and managing the disaster recovery
plan. This plan should define each team member’s
role and responsibilities. In the event of a disaster,
the recovery team should know how to
communicate with each other, employees, vendors,
and customers.
Risk evaluation: Assess potential hazards that put
your organization at risk. Depending on the type of
event, strategize what measures and resources will
be needed to resume business. For example, in the
event of a cyber attack, what data protection
measures will the recovery team have in place to
respond?
Business-critical asset identification: A good
disaster recovery plan includes documentation
of which systems, applications, data, and
other resources are most critical for business
continuity, as well as the necessary steps to
recover data.
Backups: Determine what needs backup (or to
be relocated), who should perform backups,
and how backups will be implemented. Include
a recovery point objective (RPO) that states
the frequency of backups and a recovery time
objective (RTO) that defines the maximum
amount of downtime allowable after a disaster.
Testing and optimization: The recovery team
should continually test and update its strategy to
address ever-evolving threats and business
needs.
By continually ensuring that a company is ready
to face the worst-case scenarios in disaster
situations, it can successfully navigate such
challenges.
In planning how to respond to a cyber attack, for
example, it’s important that organizations
continually test and optimize their security and
data protection strategies and have protective
measures in place to detect potential security
breaches.
HOW TO BUILD A DISASTER
RECOVERY TEAM?
Whether creating a disaster recovery
strategy from scratch or improving an
existing plan, assembling the right
collaborative team of experts is a critical first
step. It starts with tapping IT specialists and
other key individuals to provide leadership
over the following key areas in the event of a
disaster:
Crisis management: This leadership role initiates the
recovery plan, coordinates efforts throughout the
recovery process, and resolves problems or delays that
emerge.
Business continuity: The expert overseeing this ensures
that the recovery plan aligns with the company’s
business needs, based on the business impact analysis.
Impact assessment and recovery: The team responsible
for this area of recovery has technical expertise in IT
infrastructure including servers, storage, databases and
networks.
IT applications: This role monitors which application
activities should be implemented based on a restorative
plan. Tasks include application integrations, application
settings and configuration, and data consistency.
TRUSTED COMPUTING AND
MULTILEVEL SECURITY
Multilevel security or multiple levels of
security (MLS) is the application of a
computer system to process information with
incompatible classifications (i.e., at different
security levels), permit access by users with
different security clearances and needs-to-
know, and prevent users from obtaining
access to information for which they lack
authorization.
TWO CONTEXTS FOR THE USE OF
MULTILEVEL SECURITY
One is to refer to a system that is adequate
to protect itself from subversion and has
robust mechanisms to separate information
domains, that is, trustworthy.
Another context is to refer to an application
of a computer that will require the computer
to be strong enough to protect itself from
subversion and possess adequate
mechanisms to separate information
domains, that is, a system we must trust.
This distinction is important because systems
that need to be trusted are not necessarily
trustworthy.
TRUSTED OPERATING SYSTEMS
An MLS operating environment often requires
a highly trustworthy information processing
system, typically built on an MLS operating
system (OS), but not necessarily.
Most MLS functionality can be supported by a
system composed entirely of untrusted
computers, although it requires multiple
independent computers linked by hardware
security-compliant channels.
An example of hardware enforced MLS is
asymmetric isolation. If one computer is
being used in MLS mode, then that computer
must use a trusted operating system (OS).
Because all information in an MLS
environment is physically accessible by the
OS, strong logical controls must exist to
ensure that access to information is strictly
controlled.
Typically this involves mandatory access
control that uses security labels, like the Bell–
LaPadula model.
Freely available operating systems with some
features that support MLS include Linux with
the Security-Enhanced Linux feature enabled
and FreeBSD. Security evaluation was once
thought to be a problem for these free MLS
implementations for three reasons:
It is always very difficult to implement a kernel
self-protection strategy with the precision
needed for MLS trust, and these examples were
not designed or certified to an MLS protection
profile, so they may not offer the self-protection
needed to support MLS.
Common Criteria lacks an inventory of
appropriate high assurance protection profiles
that specify the robustness needed to operate in
MLS mode.
Even if (1) and (2) were met, the evaluation
process is very costly and imposes special
restrictions on configuration control of the
evaluated software.
PROBLEM AREAS
Sanitization is a problem area for MLS
systems. Systems that implement MLS
restrictions, like those defined by Bell–
LaPadula model, only allow sharing when it
does not obviously violate security restrictions.
Users with lower clearances can easily share
their work with users holding higher
clearances, but not vice versa.
There is no efficient, reliable mechanism by
which a Top Secret user can edit a Top Secret
file, remove all Top Secret information, and
then deliver it to users with Secret or lower
clearances.
Covert channels pose another problem for MLS
systems. For an MLS system to keep secrets
perfectly, there must be no possible way for a Top
Secret process to transmit signals of any kind to a
Secret or lower process.
This includes side effects such as changes in
available memory or disk space, or changes in
process timing. When a process exploits such a
side effect to transmit data, it is exploiting a
covert channel.
It is extremely difficult to close all covert channels
in a practical computing system, and it may be
impossible in practice. The process of identifying
all covert channels is a challenging one by itself.
Bypass is problematic when introduced as a
means to treat a system high object as if it
were MLS trusted. A common example is to
extract data from a secret system high object
to be sent to an unclassified destination,
citing some property of the data as trusted
evidence that it is 'really' unclassified (e.g.
'strict' format).
A system high system cannot be trusted to
preserve any trusted evidence, and the result
is that an overt data path is opened with no
logical way to securely mediate it.
MILS ARCHITECTURE
Multiple Independent Levels of Security
(MILS) is an architecture that addresses the
domain separation component of MLS.
Security models such as the Biba model (for
integrity) and the Bell–LaPadula model (for
confidentiality) allow one-way flow between
certain security domains that are otherwise
assumed to be isolated.
MILS addresses the isolation underlying MLS
without addressing the controlled interaction
between the domains addressed by the
above models.
Trusted security-compliant channels mentioned
above can link MILS domains to support more
MLS functionality.
The MILS approach pursues a strategy
characterized by an older term, MSL (multiple
single level), that isolates each level of
information within its own single-level
environment (System High).
The rigid process communication and isolation
offered by MILS may be more useful to ultra high
reliability software applications than MLS. MILS
notably does not address the hierarchical
structure that is embodied by the notion of
security levels.
MSL SYSTEMS
There is another way of solving such
problems known as multiple single-level.
Each security level is isolated in a separate
untrusted domain.
The absence of a communication medium
between the domains ensures that no
interaction is possible.
The mechanism for this isolation is usually
physical separation in separate computers.
This is often used to support applications or
operating systems which have no possibility
of supporting MLS such as Microsoft
Windows.