
Created by Biswadip Goswami

New Topics Covered for 2024 April examination CISSP® Rev 1.1

Domain 3:

**************************************
Manage the information system lifecycle
**************************************

The "Manage the information system lifecycle" is a new topic introduced in the CISSP
(Certified Information Systems Security Professional) certification exam, according to the
2024 exam update. This section encompasses the entire stages of system management,
ranging from identifying stakeholder needs to the retirement or disposal of systems. It
emphasizes the importance of incorporating security considerations at every stage of a
system's lifecycle to ensure comprehensive protection and resilience against potential
cybersecurity threats. This addition reflects the evolving landscape of cybersecurity and
the need for security professionals to be adept at managing information systems
throughout their entire lifecycle, ensuring that security is an integral part of the process
from inception to decommissioning.

Managing the information system lifecycle involves a comprehensive process that
encompasses several stages.

1. Initiation and Concept Development: This initial stage involves identifying the need
for a new system to meet business objectives or improve existing processes.
Stakeholder needs are gathered, and preliminary concepts are developed.

2. System Planning and Requirements Definition: In this phase, detailed planning and
definition of system requirements take place. It involves analyzing stakeholder
needs in-depth to develop a clear set of system requirements, which will guide the
development process.

3. System/Architectural Design: Based on the requirements defined in the previous
stage, the system's architecture and design are developed. This includes the
selection of technology, system modeling, and the definition of system workflows
and interfaces.

4. Development and Implementation: During this phase, the system is built, coded, or
configured according to the specifications outlined in the design phase. This
includes the development of software, installation and configuration of hardware,
and integration of system components.

5. Integration and Testing: This crucial stage involves integrating the system
components and conducting thorough testing to identify and fix any issues. Testing
may include unit testing, integration testing, system testing, and acceptance testing
to ensure the system meets all requirements and functions as intended.

6. Deployment: Once the system has been tested and approved, it is deployed into the
production environment. This stage may involve data migration, system
configuration, and user training to ensure a smooth transition to the new system.

7. Operations and Maintenance: After deployment, the system enters the operations
and maintenance phase, where it is actively used and maintained. This includes
ongoing support, system monitoring, performance tuning, and periodic updates to
address new requirements or security threats.

8. Retirement or Disposal: Eventually, the system may become obsolete or no longer
meet the organization’s needs. In this final stage, the system is retired or disposed of
in a secure manner. This involves ensuring that all sensitive data is properly erased
or transferred, decommissioning the system components, and possibly
transitioning to a new system.

Throughout each stage of the system lifecycle, it’s important to integrate security
considerations, risk management, and compliance requirements to protect the system
against cybersecurity threats and ensure it meets all relevant regulations and standards.

********************************
SASE: Secure access service edge
********************************

Secure Access Service Edge (SASE, pronounced "sassy") is an emerging cybersecurity
concept that integrates network security functions with wide-area networking (WAN)
capabilities to support the dynamic, secure access needs of organizations' mobile
workforces and cloud applications. The term was first introduced by Gartner in 2019,
reflecting a shift towards a more integrated approach to securing access in a cloud-centric
world.

Key Components of SASE

SASE combines various networking and security functions into a unified, cloud-native
service platform, offering both flexibility and scalability. Key components typically include:

Ø Software-Defined Wide Area Networking (SD-WAN): Enhances network connectivity
and performance by dynamically routing traffic across the WAN based on current
network conditions and user requirements.
Ø Firewall as a Service (FWaaS): Provides firewall capabilities delivered from the
cloud, allowing for inspection and filtering of internet-bound traffic directly from
remote devices.


Ø Zero Trust Network Access (ZTNA): Implements the principle of "never trust, always
verify" by granting access to applications and services based on the identity of the
user and device, context, and policy compliance, rather than on the user's network
location (a minimal policy sketch follows this list).
Ø Cloud Access Security Broker (CASB): Acts as an intermediary between users and
cloud service providers to enforce security policies, compliance, and governance
for cloud applications.
Ø Secure Web Gateway (SWG): Provides protection against web-based threats and
enforcement of company policies, filtering unwanted software/malware from user-
initiated web/internet traffic.
Ø Data Loss Prevention (DLP): Monitors, detects, and prevents data breaches/data ex-
filtration transmissions by monitoring data in use, data in motion, and data at rest.
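As referenced in the ZTNA item above, the following is a minimal Python sketch of a
ZTNA-style access decision. The policy table, attribute names, users, and applications are
illustrative assumptions, not any SASE product's API; real platforms evaluate far richer
signals (device posture feeds, risk scores, geolocation).

# Minimal sketch of a ZTNA-style access decision with an in-memory policy.
from dataclasses import dataclass

@dataclass
class AccessRequest:
    user: str
    device_compliant: bool   # e.g., disk encryption and patch level verified
    mfa_passed: bool
    application: str

# Hypothetical policy: which users may reach which applications.
POLICY = {
    "finance-app": {"alice", "bob"},
    "hr-portal": {"carol"},
}

def ztna_decision(req: AccessRequest) -> bool:
    """Never trust, always verify: identity, device posture, and MFA are
    checked on every request, regardless of network location."""
    if not (req.device_compliant and req.mfa_passed):
        return False
    return req.user in POLICY.get(req.application, set())

print(ztna_decision(AccessRequest("alice", True, True, "finance-app")))   # True
print(ztna_decision(AccessRequest("alice", False, True, "finance-app")))  # False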

Benefits of SASE
i. Improved Security Posture: Integrates multiple security services to provide
comprehensive threat prevention, detection, and response capabilities.
ii. Reduced Complexity: Consolidates networking and security functions, reducing the
need for multiple standalone products and simplifying management.
iii. Enhanced Performance: Optimizes network performance by routing traffic through
the most efficient paths and enabling secure, direct internet access for cloud
applications.
iv. Scalability and Flexibility: Easily scales to accommodate growing numbers of
remote users and cloud services, adapting to the evolving needs of businesses.
v. Cost Efficiency: Potentially lowers overall costs by consolidating vendors and
services, and by optimizing network performance and cloud resource usage.

For CISSP professionals, understanding SASE is important for several reasons:

Security and Risk Management: SASE's integrated approach aligns with the CISSP's focus
on comprehensive risk management strategies, providing a framework for securing
distributed environments.
Architecting Secure Solutions: CISSP professionals involved in designing secure network
architectures will need to consider how SASE can be leveraged to meet their organization's
security, performance, and operational objectives.
Emerging Technologies: Keeping abreast of emerging technologies like SASE is critical for
CISSPs, as these innovations can significantly impact security strategies and architecture
decisions.
SASE represents a shift towards a more holistic, agile approach to network security,
particularly in environments characterized by widespread cloud adoption and remote
workforces. For CISSP professionals, incorporating SASE principles into security planning
and architecture can enhance organizational agility, improve security outcomes, and
support the seamless operation of modern, distributed enterprises.


************************
Quantum Key Distribution
************************

Quantum Key Distribution (QKD) is an advanced cryptographic system that uses the
principles of quantum mechanics to secure communication channels. Unlike traditional
cryptographic methods that rely on mathematical complexity for security, QKD leverages
the physical properties of quantum particles (such as photons) to generate and share
encryption keys between parties in a way that is theoretically immune to eavesdropping.
This is because any attempt to measure or intercept the quantum states of the particles
used in the key distribution process will inevitably alter those states, thus alerting the
communicating parties to the presence of an eavesdropper.

Principles of QKD
i. Quantum Entanglement: A pair of quantum particles, such as photons, can be
entangled in such a way that the state of one (its polarization, for example)
instantaneously determines the state of the other, regardless of the distance
between them. This property is used in some QKD systems to transmit keys
securely.
ii. Heisenberg's Uncertainty Principle: In the quantum world, the act of measuring a
particle's state (e.g., the polarization of a photon) changes that state. QKD uses this
principle to ensure that any attempt by an eavesdropper to measure the quantum
bits (qubits) being used to establish a key would be detected by the legitimate
communicating parties.

QKD in Practice
A. Key Distribution: QKD is used primarily for the secure distribution of encryption
keys. The actual encryption and decryption of messages still rely on traditional
cryptographic algorithms.

B. Tamper Detection: The quantum properties ensure that any interception or
measurement of the qubits by an unauthorized party can be detected. If
interference is detected, the key is discarded, and a new key is generated.

C. Distance Limitations: Early implementations of QKD were limited by the distance
over which qubits could be reliably transmitted without significant loss. Recent
advancements, including the use of quantum repeaters and satellite-based QKD,
aim to overcome these limitations.
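To make the key-distribution step concrete, here is a toy Python simulation of BB84-style
key sifting over an ideal, eavesdropper-free channel. Real QKD encodes bits on photon
polarizations in hardware, so this is purely an illustration of the protocol logic under
those simplifying assumptions.

# Toy simulation of BB84-style key sifting (ideal channel, no eavesdropper).
import secrets

def random_bits(n):
    return [secrets.randbelow(2) for _ in range(n)]

n = 16
alice_bits = random_bits(n)
alice_bases = random_bits(n)   # 0 = rectilinear basis, 1 = diagonal basis
bob_bases = random_bits(n)

# Bob's measurement: correct when bases match, random otherwise.
bob_bits = [a if ab == bb else secrets.randbelow(2)
            for a, ab, bb in zip(alice_bits, alice_bases, bob_bases)]

# Sifting: both parties publicly compare bases and keep matching positions.
sifted_key = [a for a, ab, bb in zip(alice_bits, alice_bases, bob_bases)
              if ab == bb]
print("shared key bits:", sifted_key)
# An eavesdropper measuring in the wrong basis would disturb roughly 25% of
# the sifted bits, which the parties detect by sacrificing a sample to compare.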

CISSP Considerations:
Ø Future-Proof Security: As quantum computing advances, current encryption
algorithms may become vulnerable. QKD presents a method for secure
communication that could resist quantum computing attacks.


Ø Implementation and Integration: As QKD technology matures, security
professionals will need to consider how it can be implemented within existing
security infrastructures and what changes would be necessary for integration.

Ø Regulatory and Standards Compliance: The development of international standards
and regulations for QKD and quantum cryptography will be an important area for
CISSPs to monitor, ensuring compliance in secure communication practices.

Ø Cost and Accessibility: Current QKD systems are expensive and complex to deploy.
CISSPs will need to weigh the costs against the security benefits, particularly for
critical infrastructure and highly sensitive communication.

QKD represents a cutting-edge approach to secure communications, offering potential
security solutions in the quantum computing era. Understanding QKD and quantum
cryptography is essential for future-proofing security strategies and for keeping pace with
the evolving landscape of cryptographic security.

Domain 1:

********************************
Five Pillars of Information Security
********************************

Confidentiality
Confidentiality ensures that sensitive information is accessed only by authorized
individuals and systems. It's about keeping data private through encryption, access
controls, and secure channels of communication to protect against unauthorized access
and disclosures.

Integrity
Integrity ensures the accuracy and trustworthiness of data. It verifies that information has
not been tampered with, altered, or corrupted. This is achieved through hashing,
checksums, digital signatures, and rigorous authentication processes.

Availability
Availability ensures that data and resources are accessible to authorized users whenever
needed. This involves redundancy, failover systems, regular maintenance, and robust
disaster recovery plans to combat downtime and ensure service continuity.

Authenticity (Authentication)
Authenticity confirms the identity of a person or entity interacting with the system. It’s
crucial for enforcing access controls and verifying that users are who they claim to be. This
can be accomplished through passwords, biometrics, security tokens, and multi-factor
authentication mechanisms.


Non-Repudiation
Non-Repudiation prevents individuals or entities from denying the authenticity of their
signatures on a document or the sending of a message that they originated. This is crucial
in legal, financial, and contractual environments. Digital signatures and comprehensive
audit trails are key technologies for ensuring non-repudiation.

EXAMPLE for each pillar:


Confidentiality
Imagine a diary with a lock on it. The owner of the diary has the key and can decide who
else, if anyone, gets a key. In the digital world, the diary is sensitive information stored on a
computer or transmitted over the internet. Encryption is the digital lock that ensures only
authorized parties with the correct 'key' (such as a password or decryption key) can access
the information. For example, an encrypted email can only be read by the recipient who has
the key.

Integrity
Consider a sealed envelope sent through the mail. If the envelope arrives with the seal
broken, the recipient knows the contents may have been tampered with. In cybersecurity,
digital signatures and hashing functions act as the seal, ensuring that any tampering with
the data (like changing bank account numbers in a transaction) is detectable because it
breaks the 'seal,' indicating the data's integrity has been compromised.
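A short Python illustration of the "seal" idea using the standard library's SHA-256: even
a one-character change to the message yields a completely different digest, making
tampering evident. The message text is, of course, just an example.

import hashlib

message = b"Transfer $100 to account 12345"
digest = hashlib.sha256(message).hexdigest()

# Any tampering, however small, produces a completely different digest.
tampered = b"Transfer $100 to account 12346"
assert hashlib.sha256(tampered).hexdigest() != digest
print("original :", digest)
print("tampered :", hashlib.sha256(tampered).hexdigest())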

Availability
Think of a library that's open 24/7, ensuring that books are always available whenever you
need them. Similarly, in cybersecurity, ensuring availability means making sure that data
and systems are accessible to authorized users whenever they need them. This involves
strategies like redundant systems, backups, and disaster recovery plans to protect against
DDoS attacks or hardware failures that could make data temporarily inaccessible.

Authenticity (Authentication)
Imagine going to a club where a bouncer checks your ID before letting you in. Your ID
verifies that you are who you claim to be. In cybersecurity, authentication mechanisms like
passwords, biometric scans, or security tokens serve the same purpose. They verify the
identity of a user or device trying to access a system to ensure that they're authorized to do
so.

Non-Repudiation
Consider sending a registered letter via postal mail. The sender receives a receipt, and the
recipient signs for the letter upon delivery. This system ensures that the sender cannot
deny sending the letter, and the recipient cannot deny receiving it. In the digital world,
digital signatures and transaction logs serve a similar purpose. They provide irrefutable
evidence of the origin and receipt of messages or transactions, ensuring that parties
cannot deny their actions.
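As a sketch of how digital signatures provide non-repudiation, the following uses the
Ed25519 API of the third-party cryptography package (a real API; the message content is
illustrative). Only the private-key holder can produce the signature, while anyone with
the public key can verify it.

# Requires the 'cryptography' package (pip install cryptography).
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

private_key = Ed25519PrivateKey.generate()
public_key = private_key.public_key()

message = b"I authorize payment of $500 on 2024-04-01"
signature = private_key.sign(message)

# Anyone holding the public key can verify; only the private-key holder could
# have produced the signature, so the signer cannot later deny it.
try:
    public_key.verify(signature, message)
    print("signature valid - origin is non-repudiable")
except InvalidSignature:
    print("signature invalid")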

******************************************************
SABSA - Sherwood Applied Business Security Architecture
******************************************************

SABSA (Sherwood Applied Business Security Architecture) is a framework and
methodology for enterprise security architecture and service management. Developed by
John Sherwood, Andrew Clark, and David Lynas, SABSA is designed to integrate seamlessly
with business and IT management frameworks. Its primary goal is to ensure that security
services are designed, delivered, and managed in alignment with enterprise business goals
and objectives.

Relevance to Risk Frameworks

Ø Risk Management: SABSA’s approach to security architecture is inherently risk-
driven, focusing on identifying and managing risks that could impact business
objectives. This aligns with the CISSP emphasis on understanding and applying risk
management concepts, tools, and techniques to ensure the confidentiality,
integrity, and availability of information.

Ø Security Architecture and Design: SABSA provides a comprehensive model for
designing security architectures that are both robust and flexible. This includes
defining security domains, identifying critical assets, and specifying security
controls.

Ø Business Continuity and Disaster Recovery: SABSA’s focus on aligning security
practices with business objectives supports the development of effective business
continuity and disaster recovery strategies. This ensures that security measures
contribute to the resilience of business operations.

Ø Governance and Policy: SABSA encourages the establishment of security policies
and governance structures that reflect the organization's business goals and risk
tolerance levels. This is directly relevant to the Security and Risk Management
domain, which covers governance principles, compliance, and legal issues.

Ø Integration with Other Frameworks: SABSA is designed to be compatible with other
frameworks, such as ITIL, COBIT, and ISO/IEC 27001. Knowledge of how to integrate
various frameworks is beneficial in managing and securing information systems
within an organizational context.


*********************************
Cryptocurrency, AI and BlockChain
*********************************

These technologies not only represent new opportunities but also introduce unique risks
and challenges that security professionals must be prepared to address.

Cryptocurrency:

Security and Risk Management: Cryptocurrencies introduce new forms of financial
transactions that bypass traditional banking systems. Security professionals need to
understand the security risks associated with cryptocurrency transactions, including wallet
security, transaction privacy, and the potential for theft or loss.

Asset Security: Cryptocurrencies themselves can be considered digital assets that require
protection. Securing cryptographic keys, managing wallets, and understanding the
implications of blockchain technology for asset security are critical.

Artificial Intelligence (AI)

Security Architecture and Design: AI systems must be designed with security in mind to
prevent manipulation and ensure that AI behaves predictably in diverse scenarios.
Understanding the architecture of AI and machine learning systems, including data inputs,
processing, and decision-making, is important for security.

Identity and Access Management (IAM): AI can enhance IAM through behavior analysis,
anomaly detection, and automated decision-making for access requests. However, it also
raises concerns about privacy and the potential for AI-driven decisions to be manipulated.

Security Assessment and Testing: AI and machine learning models themselves need to be
securely developed, tested, and deployed, ensuring they are free from biases and cannot
be easily tricked or bypassed.

Blockchain

Security and Risk Management: Blockchain technology offers a decentralized and tamper-
evident ledger for transactions, which can enhance the integrity and non-repudiation of
digital transactions.

Software Development Security: Blockchain applications, such as smart contracts, require
secure development practices to prevent exploits that could lead to financial loss or
compromised data integrity. Understanding the principles of secure coding and the unique
aspects of blockchain application development is important.


Asset Security: Blockchain technology has implications for the management and security
of digital assets, including the use of cryptographic keys for accessing and transacting
digital currencies or tokens.

Security Controls around these technologies:

Implementing robust security measures is crucial for protecting against fraud, theft, and
manipulation, and for ensuring the integrity and confidentiality of data. Here are some of
the relevant security controls:

Blockchain technology underpins cryptocurrencies and has various applications across
industries, offering benefits like tamper-resistance and decentralized verification. However,
its security depends on both the underlying technology and the application's
implementation.

Ø Smart Contract Auditing: Regularly audit and test smart contracts for vulnerabilities.
Automated tools and manual code review can identify flaws that could be exploited.
Ø Access Controls: Implement strict access controls for blockchain nodes and
administrative functions to prevent unauthorized access and actions.
Ø Network Monitoring and Anomaly Detection: Monitor blockchain networks for
unusual activity that could indicate a security issue, such as attempts to gain
control over the network for a 51% attack.
Ø Encryption and Privacy Enhancements: Use encryption for data stored on the
blockchain and consider privacy-focused technologies like zero-knowledge proofs
to enhance user privacy without compromising security.
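The tamper-evidence property these controls build on can be shown in a few lines of
Python: each block commits to the hash of its predecessor, so altering any historical
record breaks every later link. This is a minimal sketch of the ledger structure, not of any
particular blockchain implementation.

import hashlib, json

def block_hash(block):
    return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

# Build a tiny chain: each block commits to the hash of its predecessor.
chain = []
prev = "0" * 64
for tx in ["alice->bob:5", "bob->carol:2"]:
    block = {"tx": tx, "prev": prev}
    prev = block_hash(block)
    chain.append(block)

def verify(chain):
    prev = "0" * 64
    for block in chain:
        if block["prev"] != prev:
            return False
        prev = block_hash(block)
    return True

print(verify(chain))               # True
chain[0]["tx"] = "alice->bob:500"  # tamper with history
print(verify(chain))               # False - every later link breaks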

Cryptocurrency Security Controls

Cryptocurrencies are a prime target for theft due to their high value and the irreversible
nature of transactions.

Ø Multi-Factor Authentication (MFA): Require MFA for accessing cryptocurrency
wallets and exchange accounts to add an extra layer of security beyond just a
password.
Ø Cold Storage: Keep the majority of cryptocurrency holdings in cold storage (offline
wallets) to protect them from online hacking attempts.
Ø Transaction Monitoring: Implement monitoring systems to detect and alert on
suspicious transactions that could indicate fraud or theft.
Ø Educate Users: Provide users with information on securing their wallets and
recognizing phishing attempts and other scams.

As AI systems are increasingly integrated into various applications, ensuring their security
and the integrity of their operations is paramount.


Ø Data Integrity and Privacy: Ensure the integrity and privacy of the data used to train
and operate AI models. This includes using encrypted data storage and secure data
processing techniques.
Ø Robustness and Adversarial Testing: Regularly test AI models against adversarial
inputs and scenarios to identify potential weaknesses or biases in the models.
Ø Secure Model Serving: Protect AI models from unauthorized access and tampering
when deployed, using techniques like model encryption and secure enclaves.
Ø Transparency and Explainability: Implement measures to make AI decisions
transparent and explainable, which can help in identifying when an AI system is
acting on biased, manipulated, or erroneous data.

Several standard security controls are relevant across Blockchain, Cryptocurrency, and AI:

Ø Regular Security Audits and Penetration Testing: Conduct comprehensive audits
and penetration testing to identify and mitigate vulnerabilities.
Ø Continuous Monitoring and Incident Response: Implement systems for continuous
monitoring of security events and establish an effective incident response plan.
Ø User Education and Awareness: Continuously educate users and stakeholders
about potential security risks and best practices for mitigation.

*********************************************************
Silicon Root of Trust, Physically Unclonable Function, SBOM
*********************************************************
Silicon Root of Trust
A Silicon Root of Trust (RoT) is a security mechanism embedded directly into the hardware
of a device. It serves as a foundational layer of trust, ensuring that the device boots using
only firmware that is known to be good. This is achieved by securely storing cryptographic
keys and other sensitive information within the silicon itself, which can then be used to
verify the integrity and authenticity of the firmware and software during the boot process.
The Silicon RoT helps protect against firmware attacks and ensures that the initial code
executed by the device is not tampered with, providing a secure foundation for all
subsequent layers of device security.

Relevance in Cybersecurity: Silicon RoT is crucial for preventing root-level attacks that seek
to compromise a device's firmware or boot process. By establishing a trust anchor at the
hardware level, it provides a robust defense mechanism that is difficult for attackers to
bypass, enhancing the overall security posture of the device.
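A conceptual Python sketch of the verified-boot check a Silicon RoT performs before
handing control to mutable firmware. The key, firmware image, and use of HMAC are
illustrative assumptions chosen for brevity; real roots of trust hold keys in fused
hardware and typically verify asymmetric signatures.

import hashlib, hmac

# Stand-in for a key fused into the chip at manufacture (assumption).
SILICON_KEY = b"key-fused-into-hardware"

def sign_firmware(image: bytes) -> bytes:
    # Vendor signs the firmware image (HMAC here for brevity; production
    # systems use asymmetric signatures).
    return hmac.new(SILICON_KEY, image, hashlib.sha256).digest()

def boot(image: bytes, signature: bytes) -> None:
    expected = hmac.new(SILICON_KEY, image, hashlib.sha256).digest()
    if not hmac.compare_digest(expected, signature):
        raise RuntimeError("firmware verification failed - halting boot")
    print("firmware verified - continuing boot")

fw = b"bootloader v1.2"
sig = sign_firmware(fw)
boot(fw, sig)                          # verified, boot continues
try:
    boot(fw + b" (tampered)", sig)     # modified image fails verification
except RuntimeError as e:
    print(e)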

Physically Unclonable Function (PUF)


Physically Unclonable Function (PUF) is a technology used to generate a unique hardware
fingerprint for semiconductor devices. It exploits the minute physical variations that occur
naturally during the manufacturing process, which are unpredictable and virtually
impossible to replicate. When a specific challenge (input) is applied to a PUF circuit, it
produces a unique response (output) based on these physical characteristics. This
response can be used for a variety of purposes, such as generating cryptographic keys,
device authentication, or ensuring hardware integrity.

Relevance in Cybersecurity: PUF technology offers a highly secure method for generating
and storing cryptographic keys directly in hardware, without the need for external memory.
The unique nature of each PUF ensures that keys cannot be duplicated or extracted from
the device, providing a strong defense against cloning and physical tampering attempts.
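The challenge-response flow can be sketched in Python by modeling the device's
physical variation as a hidden per-device secret. This is only a software stand-in (real
PUFs derive responses from silicon and store no key at all), and all names here are
illustrative.

import hashlib, hmac, secrets

class SimulatedPUF:
    def __init__(self):
        # Models uncontrollable manufacturing variation (never readable).
        self._variation = secrets.token_bytes(32)

    def response(self, challenge: bytes) -> bytes:
        return hmac.new(self._variation, challenge, hashlib.sha256).digest()

device = SimulatedPUF()
challenge = b"auth-challenge-0001"

# Enrollment: the verifier records challenge/response pairs once.
enrolled = device.response(challenge)

# Authentication: the same device reproduces the response; a clone cannot.
assert device.response(challenge) == enrolled
assert SimulatedPUF().response(challenge) != enrolled
print("device authenticated via challenge-response")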

Software Bill of Materials (SBOM)

A Software Bill of Materials (SBOM) is essentially a detailed inventory of all components,
libraries, and modules that make up a piece of software, along with their corresponding
versions and dependencies. It includes information about open source and proprietary
components, making it an essential tool for managing software supply chain risks. SBOMs
help in understanding the composition of software, identifying known vulnerabilities,
complying with licensing requirements, and conducting security audits.

Relevance in Cybersecurity: With the increasing complexity of software and the
widespread use of open-source components, SBOMs have become critical for
cybersecurity. They enable organizations to quickly identify and remediate vulnerabilities
within their software ecosystem, monitor compliance with security policies, and respond
effectively to new threats. SBOMs also play a vital role in supply chain security by providing
transparency into the software components used in applications and ensuring that
insecure or outdated components can be identified and updated.
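A minimal Python sketch of the core SBOM use case: matching a CycloneDX-style
component list against an advisory feed. The inline advisory dictionary is a stand-in
assumption; real scanners query vulnerability databases and match on richer identifiers
such as purls or CPEs. (Log4j 2.14.1 / CVE-2021-44228 is a real historical example.)

# Sketch: scan an SBOM's components against known-vulnerable versions.
sbom = {
    "bomFormat": "CycloneDX",
    "components": [
        {"name": "openssl", "version": "1.1.1k"},
        {"name": "log4j-core", "version": "2.14.1"},
    ],
}

# Hypothetical advisory feed keyed by (component, version).
advisories = {("log4j-core", "2.14.1"): "CVE-2021-44228"}

for comp in sbom["components"]:
    cve = advisories.get((comp["name"], comp["version"]))
    if cve:
        print(f"VULNERABLE: {comp['name']} {comp['version']} -> {cve}")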

**************************************************
Supply Chain Risk Management – Additional Content
**************************************************

The acquisition of products and services from suppliers and providers carries various risks
that can significantly impact an organization's security posture and operational integrity.
These risks can range from product tampering and counterfeits to the introduction of
unauthorized implants and malicious software. Identifying, assessing, and mitigating these
risks is a critical part of supply chain security and risk management strategies. Here’s a
breakdown of the key risks and strategies for mitigation:

Product Tampering
Ø Risk: Product tampering involves unauthorized modifications made to a product’s
design or functionality, potentially introducing vulnerabilities or backdoors. This can
occur at any point in the supply chain, from manufacturing to delivery.


Mitigation Strategies:

Ø Secure Transportation and Storage: Implement secure logistics practices, including
tamper-evident packaging and secure storage facilities, to detect and prevent
tampering.
Ø Supplier Audits and Certifications: Conduct regular audits of suppliers and require
certifications that attest to the security and integrity of their manufacturing and
distribution processes.

Counterfeits
Ø Risk: Counterfeit products are unauthorized reproductions that may be of inferior
quality, fail to meet safety standards, or contain malicious components. They can
enter the supply chain through unverified suppliers or gray market purchasing.

Mitigation Strategies:

Ø Procurement Controls: Establish strict procurement policies that require
purchasing from authorized or certified suppliers.
Ø Verification and Testing: Implement processes to verify the authenticity of received
products, including physical inspections and functionality testing.

Unauthorized Implants and Malicious Software

Ø Risk: Unauthorized implants refer to hardware or software modifications made to a
product to enable unauthorized access or functionality. Malicious software can be
pre-installed on devices to compromise information security.

Mitigation Strategies:

Ø Secure Development Lifecycle (SDLC): For custom-developed software, ensure a
secure SDLC process that includes thorough testing and code reviews to detect
unauthorized code.
Ø Device Integrity Checks: Perform integrity checks and security scans on hardware
and software upon receipt and before integration into the operational environment.
Ø Network Segmentation and Monitoring: Use network segmentation to limit the
potential impact of compromised devices and implement robust network
monitoring to detect anomalous behavior.

General Mitigation Strategies

Ø Comprehensive Risk Assessments: Conduct thorough risk assessments focusing on
supply chain vulnerabilities, including the potential impact of tampered,
counterfeit, or compromised products and services.


Ø Supplier Relationship Management: Develop strong relationships with suppliers to
improve communication on security matters and ensure compliance with security
requirements.
Ø Incident Response Planning: Prepare an incident response plan that includes
scenarios related to supply chain compromises, enabling quick action to mitigate
impacts.
Ø Continuous Monitoring and Improvement: Establish processes for continuously
monitoring supply chain risks and improving security controls based on evolving
threats.

Domain 4:

******************
Planes of Operation
******************

The network architecture is also divided into different planes of operation, each
responsible for specific functions:

Data Plane (Forwarding Plane)

Ø Function: Responsible for the actual movement of data packets from one point to
another within the network. It deals with the processing of packets based on
forwarding information, such as routing tables, determined by the control plane.
Ø Example: When you stream a video, the data plane handles the transmission of
video packets from the server to your device, following the paths set by the routing
protocols. The switch or router examines the packet's destination address and
decides through which output port to send the packet towards its destination.

Control Plane
Ø Function: Manages the routing of packets by making decisions on how data should
be sent from source to destination. It involves protocols that determine the best
path for data across the network and builds routing tables used by the data plane.
Ø Example: If a router is part of a network using the OSPF (Open Shortest Path First)
routing protocol, the control plane uses OSPF messages to discover the network
topology and calculate the best paths between networks. It updates the routing
table accordingly, which the data plane uses to forward packets.

Management Plane
Ø Function: Involves the administrative tasks needed to keep the network running
smoothly. This includes configuration, monitoring, and management of the network
and its devices. The management plane allows network administrators to perform
tasks necessary for network maintenance and policy enforcement.


Ø Example: Using SNMP (Simple Network Management Protocol), a network
administrator can query a router for traffic statistics, modify its configuration, or
receive alerts. The management plane enables these tasks without directly affecting
how packets are routed or forwarded.

********************
Switching techniques
********************

Cut-Through Switching
Ø Function: This method allows the switch to begin forwarding a packet to its
destination as soon as the switch has received the destination address—without
waiting for the entire packet to arrive. It reduces latency because the switch doesn't
examine the entire packet, making it faster than Store-and-Forward.
Ø Example: Consider a scenario in a high-frequency trading (HFT) firm where every
millisecond of latency can impact trading outcomes. In such an environment,
switches would employ Cut-Through Switching to ensure that trading orders are
transmitted through the network as quickly as possible, minimizing delay from the
time an order is placed to when it is executed.

Store-and-Forward Switching
Ø Function: In this method, the switch waits for the entire packet to be received before
forwarding it. During this time, it checks the packet for errors or corruption by
verifying the packet's checksum or CRC (Cyclic Redundancy Check). If the packet is
found to be error-free, it is forwarded; otherwise, it is discarded. This method
ensures that only valid packets are propagated through the network, reducing the
risk of transmitting erroneous data.
Ø Example: In a corporate network environment where data integrity is crucial (such
as financial transactions or sensitive data exchange), switches would use Store-
and-Forward Switching to ensure that all transmitted data packets are error-free,
thus maintaining high data integrity across the network.
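To illustrate the integrity check described above, here is a small Python sketch of
store-and-forward behavior, with zlib.crc32 standing in for the Ethernet CRC-32 that
switch hardware computes. The frame layout is simplified for illustration.

# Sketch: receive the whole frame, recompute the CRC, drop on mismatch.
import zlib

def make_frame(payload: bytes) -> bytes:
    crc = zlib.crc32(payload).to_bytes(4, "big")
    return payload + crc

def store_and_forward(frame: bytes):
    payload, received_crc = frame[:-4], frame[-4:]
    if zlib.crc32(payload).to_bytes(4, "big") != received_crc:
        print("CRC mismatch - frame discarded")
        return None
    print("frame verified - forwarding")
    return payload

frame = make_frame(b"order: BUY 100 ACME")
store_and_forward(frame)         # forwarded
corrupted = b"x" + frame[1:]     # single corrupted byte
store_and_forward(corrupted)     # discarded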

A Comparison of both:

Ø Speed: Cut-Through Switching offers lower latency because it begins forwarding
packets as soon as it knows the destination, without waiting for the entire packet.
This is advantageous in environments where speed is critical.
Ø Data Integrity: Store-and-Forward Switching provides better error checking by
examining the entire packet before forwarding, which is beneficial in networks
where data integrity and error-free transmission are priorities.


*************************
VPC - Virtual Private Cloud
*************************

Virtual Private Cloud (VPC) Overview

A Virtual Private Cloud (VPC) is a secure, isolated private cloud hosted within a public
cloud environment. This allows organizations to run their cloud computing resources within
a virtual network they have defined, giving them control over their virtual networking
environment. This includes the selection of IP address ranges, creation of subnets, and
configuration of route tables and network gateways.

Ø Security and Isolation: VPCs provide an isolated environment for sensitive
applications and data in the cloud, which is crucial for designing secure cloud
architectures.

Ø Networking Controls: Configuring VPCs, including setting up subnets, security
groups, network ACLs (Access Control Lists), and VPN connections, to ensure
secure and efficient network traffic flow.

Ø Compliance and Regulations: Knowledge about implementing VPCs can also help
in ensuring that cloud deployments comply with industry regulations and standards
by maintaining data sovereignty and restricting data flows as necessary.

Ø Hybrid Cloud Environments: VPCs play a key role in hybrid cloud environments by
securely connecting cloud resources with on-premises data centers, which is a
common scenario in many organizations' IT landscapes.

Ø Disaster Recovery and Business Continuity: VPCs can be configured across
multiple availability zones and regions for high availability and disaster recovery
purposes, aligning with the CISSP focus on business continuity planning and
disaster recovery strategies.

Virtual Private Cloud (VPC) plays a significant role in enhancing cybersecurity through its
inherent design, offering isolation, control, and customization of cloud resources. Its
benefits and the cybersecurity controls it enables are central to securing cloud-based
environments.


Network Segmentation
Ø Network Isolation: A VPC provides an isolated section of the cloud where you can
launch resources in a virtual network that you define. This isolation helps protect
your resources from unwanted access or attacks from other parts of the cloud.

Ø Subnetting and Segmentation: Within a VPC, you can create multiple subnets,
which can be public or private, allowing for detailed network segmentation. This
enables you to separate resources that need to be publicly accessible from those
that do not, reducing the attack surface.

Access Controls
Ø Security Groups: These act as virtual firewalls for your instances to control inbound
and outbound traffic at the instance level. By specifying allowed services, ports, and
source IP ranges, you can limit access to only necessary communication, following
the principle of least privilege.
Ø Network Access Control Lists (NACLs): These provide a layer of security at the
subnet level, offering another layer of filtering to control access to and from the
subnets within your VPC.
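As a concrete example of least-privilege instance-level filtering, the following Python
sketch uses boto3 (the AWS SDK) to add a single HTTPS ingress rule to a security group.
The group ID and CIDR are placeholders, and the call assumes AWS credentials and a
region are already configured.

import boto3

ec2 = boto3.client("ec2")

# Least privilege: allow only HTTPS, and only from the corporate range.
ec2.authorize_security_group_ingress(
    GroupId="sg-0123456789abcdef0",   # placeholder security group ID
    IpPermissions=[{
        "IpProtocol": "tcp",
        "FromPort": 443,
        "ToPort": 443,
        "IpRanges": [{"CidrIp": "203.0.113.0/24",
                      "Description": "corporate egress range"}],
    }],
)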

Data Protection
Ø Encryption: VPCs support the implementation of encryption-in-transit and at-rest
strategies, ensuring data is encrypted as it moves between your VPC and other
locations, as well as when it is stored.
Ø Private Connectivity Options: Services like AWS Direct Connect or Azure
ExpressRoute allow for the establishment of private connections between the VPC
and your on-premise network, bypassing the public internet to enhance security and
reduce exposure.

Traffic Monitoring and Visibility

Ø Flow Logs: VPC flow logs capture information about the IP traffic going to and from
network interfaces in your VPC. This data is crucial for security monitoring, allowing
for the detection of anomalous traffic patterns or identifying unauthorized access
attempts.
Ø Network Monitoring Tools: Integration with various network monitoring and
management tools helps in continuous monitoring of network traffic, enabling
timely detection and response to potential threats.
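A small Python sketch of the kind of analysis flow logs enable: counting rejected flows
per source address to surface possible probing. The sample records follow the default
version-2 space-separated layout (the action field is second to last); the addresses and
alert threshold are illustrative.

from collections import Counter

sample_logs = [
    "2 123456789012 eni-0a1 10.0.1.5 10.0.2.9 443 49152 6 10 840 1600000000 1600000060 ACCEPT OK",
    "2 123456789012 eni-0a1 198.51.100.7 10.0.2.9 22 49153 6 1 60 1600000000 1600000060 REJECT OK",
    "2 123456789012 eni-0a1 198.51.100.7 10.0.2.9 23 49154 6 1 60 1600000000 1600000060 REJECT OK",
]

rejects = Counter()
for line in sample_logs:
    fields = line.split()
    srcaddr, action = fields[3], fields[-2]
    if action == "REJECT":
        rejects[srcaddr] += 1

for src, count in rejects.items():
    if count >= 2:   # illustrative threshold
        print(f"possible probe from {src}: {count} rejected flows")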

Custom Route Tables

Ø Routing Control: Custom route tables in a VPC allow for precise control over where
network traffic is directed, ensuring that it is always routed in a manner that aligns
with security policies.


Compliance and Regulatory Requirements

Ø Regulatory Compliance: VPCs can be configured to comply with various regulatory
requirements by ensuring that data resides in specific geographic locations and by
implementing controls that are mandated by compliance frameworks.

Disaster Recovery (DR)
Ø Multi-Availability Zones and Regions: VPCs can span across multiple Availability
Zones and can be connected across regions, supporting disaster recovery and
business continuity by distributing resources geographically to mitigate the impact
of regional failures.

Implementing these cybersecurity controls within a VPC environment requires careful
planning and continuous management to ensure that security policies are correctly
applied and updated in response to evolving threats.

******************************
InfiniBand over Ethernet (RoCE)
*******************************

InfiniBand over Ethernet, more precisely RDMA over Converged Ethernet (RoCE), is a
network technology that enables InfiniBand's high-performance computing features to be
transported over Ethernet networks. RoCE allows data centers to leverage the high
throughput and low latency benefits of InfiniBand while using the more common Ethernet
infrastructure. This technology is particularly useful in environments where high data
transfer rates and efficiency are crucial, such as in cloud computing, big data analytics,
and storage networks.

Key Features and Security Considerations:

Ø High Performance: RoCE provides high bandwidth and low latency communication
by bypassing the operating system's network stack, directly accessing the Ethernet
network adapter's hardware.
Ø Compatibility: It enables the convergence of storage, computing, and network traffic
over a single Ethernet network, reducing the complexity and cost associated with
managing separate networks.
Ø Security Implications: While RoCE itself focuses on performance, it operates within
the broader network security environment of Ethernet. Standard network security
practices, such as securing network access, implementing VLANs for traffic
segmentation, and monitoring for unusual network patterns, remain relevant.


****************************
Compute Express Link (CXL)
****************************

Compute Express Link (CXL) is an open standard interconnect technology designed to
facilitate high-speed, efficient communication between the CPU (central processing unit)
and devices like accelerators, memory buffers, and smart I/O devices. It is aimed at data-
intensive applications such as Artificial Intelligence (AI), Machine Learning (ML), High-
Performance Computing (HPC), and more, addressing the need for high-bandwidth, low-
latency connectivity in heterogeneous computing environments.

Key Features and Security Considerations:

Ø Heterogeneous Computing Support: CXL is designed to support a wide range of
computing environments and workloads, facilitating the integration of different
types of compute resources.
Ø Memory Coherency and Sharing: One of the key features of CXL is its support for
memory coherency, which allows the CPU and accelerators to share memory
resources seamlessly, improving performance for memory-intensive applications.
Ø Security Implications: With the increased complexity of computing environments
supported by CXL, ensuring the security of data as it moves between components
and enforcing access controls becomes critical. Security mechanisms must
address the potential for unauthorized access and data leakage, particularly in
shared resource environments.

Understanding the implications of implementing technologies like RoCE and CXL is
important from both performance and security perspectives. While these technologies
drive efficiency and enable new capabilities in data centers and computing environments,
they also introduce considerations that must be addressed through comprehensive
security strategies. This includes:

Ø Risk Assessment: Evaluating the security risks associated with deploying these
technologies, particularly in terms of data privacy, integrity, and availability.
Ø Access Control: Implementing robust access control measures to secure
communication channels and protect sensitive data from unauthorized access.
Ø Monitoring and Response: Establishing monitoring mechanisms to detect and
respond to security incidents promptly, ensuring the integrity of high-speed data
transfers and shared resources.
Ø Incorporating RoCE and CXL into an organization’s IT infrastructure requires CISSP
professionals to balance the benefits of high-performance computing with the need
to maintain a secure and compliant environment. This involves staying informed
about the latest developments in these technologies and applying best practices in
network and system security to mitigate potential risks.


**********************************
Traffic Flow: East-West, North-South
**********************************

In the context of networking and cybersecurity, understanding traffic flows—specifically,
north-south and east-west traffic—is crucial for designing secure and efficient networks.
These terms describe the general patterns of data movement within IT environments,
including data centers and cloud deployments. Understanding these traffic flows aids in
applying appropriate security controls and optimizing network performance.

North-South Traffic
Ø Definition: North-south traffic refers to the flows of data that enter and exit the data
center or network perimeter. This includes all communications between the data
center and external locations, such as users accessing cloud services from the
internet or data being transferred to and from external partners.

Security Considerations:

Ø Perimeter Security: Traditional network security has focused heavily on securing
north-south traffic, employing firewalls, intrusion detection/prevention systems
(IDS/IPS), and other perimeter security measures to protect against external threats.
Ø Encryption: Encrypting north-south traffic is essential for protecting sensitive data in
transit and complying with data protection regulations.
Ø Monitoring and Analysis: Monitoring north-south traffic helps detect potential
threats, unauthorized access attempts, and other security incidents. It's critical for
maintaining visibility into data entering and leaving the network.

East-West Traffic
Ø Definition: East-west traffic describes the movement of data within the data center
or cloud environment. This includes communications between servers, storage
systems, and applications that are hosted in the same data center or cloud
provider's network. With the rise of cloud computing and microservices
architectures, east-west traffic volume has significantly increased.

Security Considerations:

Ø Internal Segmentation: Implementing segmentation and micro-segmentation within
the network is crucial for controlling and securing east-west traffic. This helps limit
the spread of malware and contain breaches within smaller segments of the
network.
Ø Lateral Movement Detection: Security measures must focus on detecting and
preventing lateral movement within the network, as attackers often exploit east-
west traffic to move stealthily after breaching the perimeter.


Ø Application-Level Security: With the increase in east-west traffic, securing
applications and their interactions becomes more critical. Implementing
application firewalls, secure APIs, and rigorous access controls are key strategies.

Designing networks and security architectures requires a deep understanding of both
north-south and east-west traffic patterns. This knowledge enables the implementation of
comprehensive security measures that address both external threats and internal
vulnerabilities.

Ø Hybrid Approaches: Employing a combination of perimeter-based security controls
for north-south traffic and advanced internal defenses for east-west traffic.
Ø Zero Trust Models: Adopting a zero trust security model, which assumes no implicit
trust for any entity, regardless of whether it's inside or outside the network
perimeter, effectively addressing security for both traffic flows.
Ø Continuous Monitoring and Response: Establishing continuous monitoring and
automated response mechanisms to quickly identify and mitigate threats affecting
both types of traffic.

These approaches help secure modern IT environments, where the distinction between
internal and external networks is increasingly blurred, and where data often moves
dynamically across traditional boundaries.

***************************
Logical segmentation
***************************

Logical segmentation in networking allows organizations to divide and manage their
network infrastructure more efficiently and securely without requiring physical separation
of resources. This approach underpins the design and implementation of secure network
architectures. Key technologies used in logical segmentation include Virtual Local Area
Networks (VLANs), Virtual Private Networks (VPNs), Virtual Routing and Forwarding (VRF),
and virtual domains. Each offers a method for isolating network traffic and resources,
enhancing security, and improving network management.

Virtual Local Area Networks (VLANs)

Ø Definition: VLANs allow for the logical segmentation of a physical network into
multiple, distinct broadcast domains. Each VLAN operates as if it were a separate
physical network, even though all VLANs can exist on the same physical switch or
infrastructure.

Security Implications:

Ø Isolation: VLANs can isolate sensitive traffic from the rest of the network, reducing
the risk of unauthorized access and limiting the spread of broadcast traffic.


Ø Access Control: Implementing VLANs enables more granular access control
policies, as traffic can be controlled based on the VLAN ID.

Virtual Private Networks (VPNs)

Ø Definition: VPNs create a secure, encrypted tunnel over the internet or another
public network, allowing for secure communication between remote locations or
remote users and the corporate network.

Security Implications:

Ø Encryption: VPNs protect data in transit from eavesdropping or interception, crucial
for remote access over insecure networks like the internet.
Ø Remote Access: VPNs are essential for securely connecting remote workers to
internal resources, ensuring that remote access does not become a vulnerability.

Virtual Routing and Forwarding (VRF)

Ø Definition: VRF technology allows for the creation of multiple, independent routing
tables within the same router or Layer 3 switch. This enables the segmentation of
network traffic without requiring multiple physical routers.

Security Implications:

Ø Traffic Segregation: By maintaining separate routing instances, VRF allows for traffic
segregation at the routing level, enhancing security and reducing the risk of route
leaks between segments.
Ø Multi-tenancy: VRF is particularly useful in multi-tenant environments, such as
cloud services or shared infrastructure, where it's necessary to keep each tenant's
traffic completely isolated.

Virtual Domains
Ø Definition: Virtual domains (also known as VDOMs) are used in firewalls and similar
security appliances to segment the device into multiple, independent domains.
Each domain can have its own security policies, interfaces, and routing.

Security Implications:

Ø Policy Enforcement: Virtual domains enable the application of distinct security
policies for different segments of the network, improving overall security posture.
Ø Scalability and Management: VDOMs simplify the management of large or complex
environments by allowing administrators to apply changes to specific domains
without impacting others.


Security architecture considerations for networks include:

Ø Design and Planning: Careful planning is required to ensure that logical
segmentation aligns with organizational security policies and compliance
requirements.
Ø Implementation and Maintenance: Ongoing management of segmented networks is
crucial, including regular reviews of segmentation rules and access controls to
adapt to changing security needs.
Ø Integration with Security Practices: Logical segmentation should be integrated with
other security practices, such as intrusion detection/prevention, to enhance overall
network security.

Logical segmentation technologies like VLANs, VPNs, VRF, and virtual domains provide
powerful tools for enhancing network security, improving performance, and ensuring
compliance.

**************************************************
Micro-segmentation - network overlays/encapsulation
**************************************************

Micro-segmentation is a network security technique that enables fine-grained security
policies to be assigned to data center and cloud applications, down to the workload level.
This approach divides the data center into distinct security segments down to the
individual workload or application, allowing for more granular control over the flow of traffic
between resources. It's particularly effective in environments that demand high levels of
security, such as multi-tenant cloud services, data centers, and networks with critical or
sensitive applications.

Key Concepts in Micro-Segmentation

Ø Network Overlays/Encapsulation: Micro-segmentation often relies on network
overlays and encapsulation technologies to create a virtual network that is
abstracted and decoupled from the physical network infrastructure. This
virtualization allows for the creation of isolated paths for communication between
workloads without changing the underlying network fabric. Common technologies
include:

Ø VXLAN (Virtual Extensible LAN): A network virtualization technology that uses
encapsulation to create a Layer 2 network on top of a Layer 3 infrastructure,
enabling the stretching of a LAN over a longer distance and across multiple
networks.


Ø NVGRE (Network Virtualization using Generic Routing Encapsulation): Similar to
VXLAN, NVGRE is used to create virtualized networks using GRE encapsulation,
allowing for the creation of isolated tenant spaces over shared network
infrastructure.

Security Implications and Advantages

Ø Granular Security Policies: Micro-segmentation allows for the creation of precise
security policies at the workload level, significantly enhancing the ability to limit
attackers' lateral movement within networks.
Ø Isolation of Sensitive Data: By segmenting the network into micro-segments,
sensitive data can be isolated from other parts of the network, reducing the risk of
unauthorized access and data breaches.
Ø Compliance: Micro-segmentation can help organizations meet compliance
requirements by providing evidence of adequate safeguards for protecting sensitive
data, as it allows for the isolation of regulated data environments from other
network traffic.
Ø Reduced Attack Surface: Implementing micro-segmentation reduces the overall
attack surface by limiting communication paths and access to applications and
data to only those entities that require it.
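The workload-level policy model behind micro-segmentation can be sketched in a few
lines of Python as a default-deny allow-list of flows; the tier labels and ports are
illustrative assumptions, not any vendor's policy schema.

# Sketch: default-deny allow-list of workload-to-workload flows.
ALLOWED_FLOWS = {
    # (source workload, destination workload, port)
    ("web-tier", "app-tier", 8443),
    ("app-tier", "db-tier", 5432),
}

def permit(src: str, dst: str, port: int) -> bool:
    """Default deny: only explicitly allowed flows pass."""
    return (src, dst, port) in ALLOWED_FLOWS

print(permit("web-tier", "app-tier", 8443))  # True
print(permit("web-tier", "db-tier", 5432))   # False - lateral move blocked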

Familiarity with the principles and best practices of micro-segmentation is crucial for
designing and implementing secure network architectures, especially in environments
where data sensitivity and compliance are significant concerns.
Key considerations include:

Ø Policy Definition and Management: Defining and managing micro-segmentation
policies requires a thorough understanding of the applications, workloads, and data
flows within an organization to ensure that security policies do not impede
necessary business functions.
Ø Integration with Existing Security Practices: Micro-segmentation should be
integrated with other security controls and practices, such as intrusion detection
systems (IDS), firewalls, and security information and event management (SIEM)
systems, for a layered security approach.
Ø Continuous Monitoring and Adjustment: The dynamic nature of modern IT
environments means that micro-segmentation policies may need to be adjusted as
applications and workloads change. Continuous monitoring is essential for
maintaining the effectiveness of micro-segmentation strategies.

Micro-segmentation represents a shift from traditional, perimeter-focused security models
to more flexible, adaptive approaches suited to modern data centers and cloud
environments. Leveraging micro-segmentation effectively can significantly enhance an
organization's security posture by providing more granular control over internal network
traffic and reducing the risk of lateral movement by attackers within the network.


*****************************************
Edge networks (e.g., ingress/egress, peering)
*****************************************

Edge networks play a crucial role in modern networking architectures, especially with the
increasing demand for low-latency computing and the proliferation of IoT (Internet of
Things) devices. These networks are designed to bring processing closer to the data
source—the "edge" of the network—thereby reducing latency, improving speed, and
optimizing bandwidth usage. Understanding the concepts of ingress/egress and peering
within the context of edge networks is essential for CISSP professionals, as these aspects
significantly influence network design, security, and performance.

Edge Networks
Ø Definition: Edge networks refer to computing and network resources positioned at
or near the data source. Unlike traditional centralized networks, where data is
processed in a central data center, edge networks rely on a decentralized approach.
This setup is particularly beneficial for applications requiring real-time processing.

Ingress/Egress in Edge Networks


Ø Ingress: Refers to the entry point of data into a network. In the context of edge
networks, ingress involves the collection and initial processing of data from IoT
devices or local users before it is sent to a central processing location or processed
locally. Effective management of ingress points is crucial for ensuring data integrity
and security.
Ø Egress: Refers to data exiting a network. In edge networks, egress can involve
sending processed data from the edge back to central servers or to other edge
nodes. It can also include communication from the edge network to the internet or
other external networks. Egress traffic must be carefully managed and secured to
prevent data leakage and ensure compliance with data privacy regulations (a
minimal egress allow-list sketch follows below).
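The sketch below illustrates the egress allow-list idea in Python. The approved networks are made-up documentation ranges; real egress control is enforced at firewalls, proxies, or SD-WAN policy, not in application code.

# Minimal sketch of an egress allow-list check for an edge node, assuming
# a hypothetical set of approved destination networks (default deny).
from ipaddress import ip_address, ip_network

APPROVED_EGRESS = [ip_network("203.0.113.0/24"),   # example: central servers
                   ip_network("198.51.100.0/24")]  # example: peer edge nodes

def egress_permitted(dst: str) -> bool:
    """Allow outbound traffic only to pre-approved networks."""
    return any(ip_address(dst) in net for net in APPROVED_EGRESS)

print(egress_permitted("203.0.113.10"))  # True: approved central server
print(egress_permitted("192.0.2.50"))    # False: blocked, candidate data-leak path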

Peering in Edge Networks


Ø Definition: Peering involves the direct interconnection of separate networks for the
purpose of exchanging traffic. In the context of edge networks, peering can take
place between different edge nodes to share data and resources or between edge
networks and larger, central networks to ensure seamless data flow.
Ø Benefits: Peering reduces the need for data to traverse through multiple
intermediary networks, reducing latency and potentially lowering the costs
associated with data transfer. It also increases redundancy and resilience by
diversifying the paths data can take.


The shift towards edge computing and the complexities of managing ingress/egress and
peering relationships introduce several security and operational challenges:

Ø Security at the Edge: The distributed nature of edge networks requires robust
security measures at each node. This includes securing physical devices,
implementing strong authentication and access controls, and ensuring data is
encrypted both in transit and at rest.
Ø Data Privacy and Compliance: Managing data privacy becomes more complex as
data is processed and stored across multiple locations. Compliance with
regulations such as GDPR or HIPAA must be ensured at every point of data ingress
and egress.
Ø Network Design and Resilience: Designing networks that can efficiently handle
ingress and egress traffic while maintaining high availability and resilience is critical.
This may involve using technologies like SD-WAN (Software-Defined Wide Area
Network) for optimized routing.
Ø Monitoring and Management: Continuous monitoring of network performance and
security is essential, especially to detect and respond to threats in real-time. This
requires tools that can provide visibility across the distributed network.

Edge networks represent a significant evolution in how data is processed and managed,
offering opportunities for enhanced performance and user experience.

Domain 5:

**************************
Access Policy Enforcement
**************************

Access policy enforcement, incorporating both Policy Decision Points and Policy
Enforcement Points, is a fundamental concept in designing secure systems. The PDP and
PEP work together to ensure that only authorized users can access certain information or
systems, based on predefined security policies and rules. This setup not only helps in
securing sensitive information from unauthorized access but also in ensuring compliance
with regulatory requirements. Access policy enforcement is a critical concept that involves
defining and implementing rules and policies that control access to resources within an
organization. This is typically framed within the context of a Policy Decision Point (PDP) and
a Policy Enforcement Point (PEP).

Policy Decision Point (PDP)


The Policy Decision Point is the component of the system responsible for making decisions
about whether access requests should be allowed or denied, based on the security
policies in place. The PDP evaluates the credentials and context of an access request
against the set policies and rules to make an authorization decision.


Ø Example: Consider a scenario where an employee tries to access a confidential
project management tool. The PDP checks the employee's role, the time of the
request, and the device being used against the organization's access policies. If the
policies allow for this access under the given conditions, the PDP decides to grant
access.

Policy Enforcement Point (PEP)


The Policy Enforcement Point is the mechanism that enforces the decisions made by the
PDP. It is the gatekeeper that allows or denies access to resources based on the PDP's
decision. The PEP intercepts access requests, forwards them to the PDP for decision, and
enforces the decision by allowing or blocking access.

Ø Example: Continuing with the previous example, once the PDP decides that the
employee is allowed to access the project management tool, the PEP then
facilitates access to the tool. If the PDP had decided to deny access, the PEP would
block the employee from accessing the tool.
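The following Python sketch illustrates the division of labor between PDP and PEP using the example above. The rule (role, managed device, business hours) and the attribute names are assumptions made for illustration; production systems typically use a dedicated policy engine (e.g., XACML- or OPA-based) rather than hand-written checks.

# Illustrative PDP/PEP sketch; the policy rule and request attributes are hypothetical.
def pdp_decide(request: dict) -> bool:
    """Policy Decision Point: evaluate the request against the access policy."""
    return (request["role"] == "project_manager"
            and request["device_managed"]
            and 8 <= request["hour"] <= 18)   # business hours only

def pep_handle(request: dict) -> str:
    """Policy Enforcement Point: intercept the request and enforce the PDP's decision."""
    return "access granted" if pdp_decide(request) else "access denied"

print(pep_handle({"role": "project_manager", "device_managed": True, "hour": 10}))
print(pep_handle({"role": "contractor", "device_managed": True, "hour": 10}))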

*********************************************************
Security considerations and impacts of SaaS, IaaS, and PaaS
*********************************************************

The security impact of acquired cloud services—Software as a Service (SaaS),
Infrastructure as a Service (IaaS), and Platform as a Service (PaaS)—is a significant
concern for organizations moving their operations to the cloud. Each service model offers
different levels of control over the infrastructure and application stack, which in turn
affects the security responsibilities of the cloud customer and the cloud service provider
(CSP).

Software as a Service (SaaS) Security Impact:

Ø Data Security: With SaaS, the CSP controls the entire stack, including applications
where organizational data is stored. Concerns include data breaches, data loss, and
privacy issues.
Ø Identity and Access Management (IAM): Organizations must rely on the CSP for IAM
capabilities. Ensuring robust authentication and authorization mechanisms are in
place and configured correctly is crucial.
Ø Compliance: Ensuring the SaaS application complies with relevant regulations and
standards (e.g., GDPR, HIPAA) is essential, as the data processed or stored by the
application may be subject to these regulations.

Infrastructure as a Service (IaaS) Security Impact:


Ø Network Security: Organizations are responsible for securing the network within
their IaaS environment, including firewall configurations, network access controls,
and intrusion detection/prevention systems.
Ø Virtual Machine Security: Customers must secure their virtual machines (VMs),
including the operating systems and applications running on them. This includes
patch management, antivirus/antimalware protection, and system hardening.
Ø Data Encryption: Responsibility for encrypting data in transit and at rest typically
falls on the customer. Implementing and managing encryption keys is a critical task.
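As a minimal illustration of customer-managed encryption at rest, the Python sketch below uses the third-party cryptography package (an assumption; any vetted library or a native cloud KMS could serve instead). In production the key would be generated and held in a KMS or HSM, never in application memory.

# Minimal sketch of customer-managed encryption at rest in an IaaS setting.
# Requires the third-party 'cryptography' package (pip install cryptography).
from cryptography.fernet import Fernet

key = Fernet.generate_key()        # in production: generated and held by a KMS/HSM
fernet = Fernet(key)

ciphertext = fernet.encrypt(b"customer PII stored on an IaaS volume")
print(fernet.decrypt(ciphertext))  # b'customer PII stored on an IaaS volume'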

Platform as a Service (PaaS) Security Impact:

Ø Application Security: While the CSP manages the underlying infrastructure,
organizations are responsible for securing the applications they develop and deploy
on PaaS platforms. This includes application-level firewalls, secure coding
practices, and application vulnerability scanning.
Ø Configuration Management: Misconfigurations at the platform level can expose
applications to risks. Organizations need to ensure secure configuration settings for
the development, testing, and deployment environments.
Ø Dependency Management: Applications developed on PaaS platforms may rely on
third-party libraries and services. Managing the security of these dependencies is
crucial to prevent vulnerabilities.

General Considerations Across Cloud Models


Ø Shared Responsibility Model: The security responsibility between the CSP and the
customer varies depending on the service model. Understanding this shared
responsibility model is crucial for effective cloud security management.
Ø Visibility and Monitoring: Gaining visibility into cloud environments and monitoring
for security threats can be challenging. Organizations must implement appropriate
security monitoring, logging, and alerting systems.
Ø Vendor Lock-in: Relying on proprietary features offered by a CSP can impact the
organization’s ability to switch providers or deploy solutions across multiple clouds,
potentially affecting security posture and flexibility.

*****************************************************
Security Assessment of Orphaned Software and Systems
*****************************************************

Ø Identify Orphaned Software and Systems: The first step is to inventory all software
and systems within the organization to identify which ones are orphaned. This can
involve reviewing version numbers, support end dates, and vendor announcements.

Ø Risk Assessment: Once orphaned software and systems are identified, conduct a
risk assessment to understand the potential security vulnerabilities and threats
associated with them. This includes evaluating the likelihood of exploitation and the
impact on confidentiality, integrity, and availability.

Ø Mitigation Strategies: Depending on the risk assessment, various mitigation
strategies can be considered:

Ø Patching: In some cases, third-party patches or security fixes might be available
even if the original vendor no longer supports the product.

Ø Compensating Controls: Implement additional security controls to mitigate the risk.
This could include more stringent access controls, additional monitoring, or
network segmentation to isolate the orphaned systems.

Ø Replacement or Upgrade: Whenever possible, replace or upgrade orphaned
software and systems with supported alternatives. This is often the most
straightforward way to mitigate security risks but may involve significant cost and
effort.

Ø Monitoring and Response: For orphaned systems that cannot be immediately
replaced or removed, continuous monitoring for suspicious activity is essential.
Have an incident response plan that includes specific procedures for dealing with
security incidents involving orphaned software.
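As a minimal illustration of the identification step above, the Python sketch below flags inventory entries whose support end date has passed. The inventory records and dates are invented; in practice this data would come from a CMDB combined with vendor lifecycle announcements.

# Hypothetical inventory check: flag software past its support end date.
from datetime import date

inventory = [
    {"name": "LegacyApp", "version": "4.2", "support_ends": date(2020, 6, 30)},
    {"name": "ModernApp", "version": "11.0", "support_ends": date(2027, 1, 15)},
]

orphaned = [s for s in inventory if s["support_ends"] < date.today()]
for s in orphaned:
    # Each hit feeds the risk assessment and mitigation steps described above.
    print(f"ORPHANED: {s['name']} {s['version']} (support ended {s['support_ends']})")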

**************************************
Two-Factor Authentication (2FA) fatigue
**************************************

Two-Factor Authentication (2FA) fatigue refers to the scenario where users become
overwhelmed or frustrated with the repeated prompts for secondary verification when
accessing services or systems. This security measure, while significantly enhancing
account security by requiring something you know (like a password) and something you
have (like a one-time code sent to a phone), can lead to diminished user experience or
even security lapses if not implemented thoughtfully. Here's how 2FA fatigue can impact
security and ways to manage it:

Impact on Security
Ø Reduced Compliance: Users tired of constant 2FA prompts might seek ways to
bypass these security measures, such as choosing simpler, less secure methods or
reusing passwords.
Ø Increased Risk of Social Engineering: Attackers might exploit 2FA fatigue by crafting
phishing schemes where users, accustomed to frequent authentication requests,
unwittingly provide their credentials and 2FA codes.


Ø Security vs. Usability Trade-off: Excessive security measures can lead to poor user
experience, leading users to take less secure paths for convenience, potentially
compromising the system's overall security posture.

Managing 2FA Fatigue


Ø Adaptive Authentication: Implementing risk-based or adaptive authentication can
help reduce 2FA prompts. Users are only challenged for the second factor under
specific conditions, such as logging in from a new device or location, thereby
balancing security and usability (a minimal scoring sketch follows this list).
Ø User Education: Educating users on the importance of 2FA and the potential risks of
disabling it can help mitigate fatigue. Understanding the role of 2FA in protecting
their personal and corporate data can increase their tolerance for these measures.
Ø Alternative Authentication Methods: Employing alternative authentication methods,
such as biometrics (fingerprint or facial recognition), can make the 2FA process
quicker and less intrusive while maintaining security.
Ø Single Sign-On (SSO): Integrating SSO with 2FA can reduce the number of times
users need to perform the second factor by allowing them to access multiple
services or applications after a single authentication process.
Ø User Feedback: Collecting and acting on user feedback regarding the 2FA process
can help identify pain points and opportunities to streamline authentication without
compromising security.
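The Python sketch below illustrates one possible risk-scoring approach to adaptive authentication, as referenced in the first item above. The factors, weights, and threshold are illustrative assumptions, not a production scoring model.

# Sketch of a risk-based (adaptive) 2FA trigger with invented weights.
def requires_second_factor(ctx: dict) -> bool:
    score = 0
    if ctx["new_device"]:        score += 40
    if ctx["unusual_location"]:  score += 40
    if ctx["off_hours"]:         score += 20
    return score >= 40   # challenge only when risk is elevated

# Routine login: no prompt, reducing fatigue. Risky login: 2FA challenge.
print(requires_second_factor({"new_device": False, "unusual_location": False, "off_hours": True}))   # False
print(requires_second_factor({"new_device": True,  "unusual_location": False, "off_hours": False}))  # True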

For security professionals, understanding the balance between security measures like 2FA
and user experience is critical. They must design and implement security protocols that
protect sensitive information and systems without causing undue burden on the users.
Recognizing symptoms of 2FA fatigue and addressing them through user education,
adaptive authentication, and streamlined processes can help maintain high security
standards while ensuring user compliance and satisfaction.
Incorporating these strategies requires a deep understanding of both the technical aspects
of 2FA and the human factors involved in security systems.

*********************************
AAA - password-less authentication
*********************************

Authentication, Authorization, and Accounting (AAA) form the cornerstone of network
security and access control. These three processes ensure that only authenticated users
can access specific resources, their activities are authorized based on permissions, and
their actions are recorded for auditing purposes.

Here’s a breakdown of each component and how emerging technologies like Multi-Factor
Authentication (MFA) and password-less authentication play into these paradigms.

Authentication


Ø Authentication verifies a user's identity before allowing access to protected
resources. It's the process of confirming that someone is who they claim to be,
typically through one or more of the following factors:

A. Something You Know: A password or PIN.
B. Something You Have: A smart card or a mobile device (used in MFA).
C. Something You Are: Biometric data like fingerprints or facial recognition.

Emerging Trends:

Ø Multi-Factor Authentication (MFA): Enhances security by requiring two or more
verification factors. MFA significantly reduces the risk of unauthorized access since
compromising one factor alone is insufficient to gain access.
Ø Password-less Authentication: Seeks to improve both security and user experience
by eliminating passwords as a factor, instead using biometrics, security keys, or
mobile device verification methods. This method can reduce the risk of phishing
and password-related breaches.

Authorization
Once authenticated, the authorization process determines what resources and services
the user can access within the system. Authorization is policy-driven, defining the
permissions assigned to a user or group of users, and it’s crucial for enforcing the principle
of least privilege.

Key Aspects:

Role-based access control (RBAC): Access rights are assigned based on roles within an
organization.
Attribute-based access control (ABAC): Access rights are granted based on attributes
(characteristics, environment, etc.) of the user, resource, or environment.
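The short Python sketch below contrasts an RBAC check with an ABAC check. The roles, permissions, and attributes are invented for illustration.

# RBAC: decision depends only on the user's role.
ROLE_PERMISSIONS = {"hr_manager": {"read_payroll"}, "engineer": {"read_code"}}

def rbac_allows(role: str, permission: str) -> bool:
    return permission in ROLE_PERMISSIONS.get(role, set())

# ABAC: decision combines user, resource, and environment attributes.
def abac_allows(user: dict, resource: dict, env: dict) -> bool:
    return (user["department"] == resource["owner_dept"]
            and user["clearance"] >= resource["sensitivity"]
            and env["network"] == "corporate")

print(rbac_allows("hr_manager", "read_payroll"))  # True
print(abac_allows({"department": "HR", "clearance": 3},
                  {"owner_dept": "HR", "sensitivity": 2},
                  {"network": "corporate"}))      # True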
Privacy-Based Access Control:
Ø Privacy-Based Access Control (PBAC) represents a paradigm in access control
models that specifically addresses the need to protect individuals' privacy when
accessing, processing, or sharing personal data. In contrast to traditional access
control models that focus primarily on securing resources from unauthorized
access based on the roles or attributes of users (e.g., Role-Based Access Control or
RBAC, and Attribute-Based Access Control or ABAC), PBAC integrates privacy
requirements directly into the access control mechanism. This approach is
particularly important in scenarios involving sensitive personal information, where
adherence to privacy regulations and principles is mandatory.

Key Features of PBAC


Ø Privacy Policies: PBAC systems are designed around privacy policies that specify
how personal data can be accessed and used, based on regulations (like GDPR or
HIPAA), user consent, and organizational privacy requirements.
Ø Contextual Access Control: Decisions in PBAC can be context-dependent,
considering factors such as the purpose of data access, the location of the user, the
time of access, and the data subject's consent.
Ø Data Minimization: PBAC encourages data minimization, allowing access only to the
amount of data necessary for the specific purpose for which access was granted,
thereby adhering to privacy-by-design principles.
Ø Dynamic Consent Management: PBAC systems often include mechanisms for
managing and enforcing user consents dynamically, allowing data subjects to
control who can access their data and for what purposes.
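A minimal PBAC sketch in Python, assuming a hypothetical consent register and purpose-to-field mapping: access requires a valid, unexpired consent for the stated purpose, and only the minimum necessary fields are released (data minimization).

# Hypothetical PBAC check; the consent register and field mapping are invented.
from datetime import date

consents = {("alice", "marketing"): date(2026, 1, 1)}   # (subject, purpose) -> consent expiry
record = {"name": "Alice", "email": "a@example.com", "salary": 90000}
PURPOSE_FIELDS = {"marketing": ["email"]}               # data minimization per purpose

def pbac_access(subject: str, purpose: str) -> dict:
    expiry = consents.get((subject, purpose))
    if not expiry or expiry < date.today():
        raise PermissionError("no valid consent for this purpose")
    return {k: record[k] for k in PURPOSE_FIELDS.get(purpose, [])}

print(pbac_access("alice", "marketing"))  # {'email': 'a@example.com'}; salary is never released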

Implementation Considerations
1. PBAC implementations must consider local and international privacy regulations to
ensure compliance. This includes mechanisms for tracking and enforcing consent,
as well as handling data subject access requests and the right to be forgotten.
2. Implementing PBAC involves integrating with existing IT systems and databases to
enforce privacy policies across all data processing activities. This may require
significant changes to how data is accessed and processed.
3. For PBAC to be effective, users (both end-users and administrators) need clear
interfaces for setting, viewing, and managing privacy preferences and access
controls.

Accounting
Ø Accounting is the process of tracking user activities and recording the actions they
have performed. It involves collecting data on resource usage for analysis, billing,
monitoring, and potentially for forensic purposes.

Implementations:

Ø Log Management: Collecting and managing logs that record user actions within the
system.
Ø Audit Trails: Creating records of chronological activities that provide documentary
evidence of the sequence of activities.

For security professionals, key AAA responsibilities include:

Ø Designing robust authentication mechanisms that balance security needs with user
convenience, including the implementation of MFA and exploring password-less
options.
Ø Implementing fine-grained authorization controls that adhere to the principle of
least privilege, ensuring users have access only to the resources necessary for their
roles.


Ø Ensuring comprehensive accounting practices are in place for monitoring, auditing,
and compliance purposes, with secure log management and analysis capabilities.

Emerging authentication technologies like MFA and password-less authentication
represent significant advances in securing systems against unauthorized access.

Domain 8:

******************************************
Interactive Application Security Testing (IAST)
******************************************

Interactive Application Security Testing (IAST) is an approach to application security testing
that combines elements of Static Application Security Testing (SAST) and Dynamic
Application Security Testing (DAST) methodologies. IAST tools are designed to detect and
report security vulnerabilities in real-time as the application is running. This is achieved by
instrumenting the application or its runtime environment to monitor the application's
behavior and identify security issues.

Ø Security Architecture and Design: IAST provides insights into how security
vulnerabilities can manifest at runtime, which is critical for designing secure
applications. Security professionals must be aware of the various testing
methodologies available and choose the appropriate ones based on the application
architecture and the specific security requirements.

Ø Software Development Security: The software development security domain covers
the importance of integrating security practices throughout the software development
lifecycle (SDLC). IAST fits into this domain by providing a tool that can be integrated
into the development and testing phases, offering immediate feedback on security
issues to developers.

Ø Risk Management: Identifying vulnerabilities early in the development process
reduces the risk of security breaches in production environments. IAST tools help in
managing these risks by detecting both known and unknown vulnerabilities in real-
time, thereby allowing for immediate remediation.

Key Features of IAST


Ø Real-time Feedback: IAST tools work by analyzing code behavior in real-time during
testing or QA sessions, offering immediate feedback on vulnerabilities.


Ø High Accuracy: By running within the application context, IAST tools can provide
highly accurate findings with fewer false positives compared to other testing
methodologies.
Ø Coverage: IAST can identify a wide range of vulnerabilities, from injection flaws to
complex authorization issues, by understanding the application's logic and data
flow.
Ø Integration: IAST solutions can be integrated into the CI/CD pipeline, making
security testing a part of the continuous integration and deployment process.
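The toy Python sketch below conveys the runtime-instrumentation idea behind IAST: a monitored "sink" reports suspicious input reaching it while tests exercise the application. Commercial IAST products instrument the runtime or bytecode itself; this decorator and its taint heuristic are invented analogies, not a real IAST engine.

# Toy analogy for IAST-style runtime monitoring of a sensitive sink.
import functools

TAINT_MARKER = "'"   # crude stand-in: treat quotes in input as attacker-controlled

def iast_monitor(sink):
    @functools.wraps(sink)
    def wrapper(query: str):
        if TAINT_MARKER in query:
            print(f"[IAST] potential injection reaching sink {sink.__name__!r}: {query!r}")
        return sink(query)
    return wrapper

@iast_monitor
def run_sql(query: str):
    pass  # stand-in for a real database call

run_sql("SELECT * FROM users WHERE id = '1' OR '1'='1'")  # flagged during testing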

Implementation Considerations
Ø Tool Selection: Choosing the right IAST tool requires understanding the application's
technology stack and the specific security requirements. Some tools may offer
better support for certain programming languages or frameworks.
Ø Developer Training: Developers should be trained to understand and remediate the
vulnerabilities reported by IAST tools effectively.
Ø Privacy and Compliance: When implementing IAST in applications that process
sensitive data, considerations around data privacy and compliance with regulations
such as GDPR or HIPAA must be taken into account.

Domain 6:
*****************************************************
Audit Strategy: Location (e.g., on-premise, cloud, hybrid)
*****************************************************

Auditing information systems based on their location—whether on-premise, in the cloud,
or in a hybrid environment—presents unique challenges and considerations. Each
environment requires a tailored approach to assess the effectiveness of security controls,
data protection mechanisms, and compliance with relevant standards and regulations.

On-Premise Audits
Key Considerations:

Ø Physical Security: Audits must assess physical access controls to data centers and
server rooms, including surveillance, entry logs, and environmental controls.
Ø Network Security: Evaluating internal network security controls, including firewalls,
intrusion detection systems, and network segmentation.


Ø Data Security: Ensuring that data stored on-premises is encrypted, properly backed
up, and recoverable in case of disasters.
Ø Access Controls: Verifying that principles of least privilege and role-based access
controls are implemented and effective.

Cloud Audits
Key Considerations:

Ø Cloud Service Provider (CSP) Compliance: Assessing the CSP’s compliance with
industry standards (e.g., ISO 27001, SOC 2) and their security practices.
Ø Shared Responsibility Model: Understanding the delineation of security
responsibilities between the organization and the CSP to ensure all aspects of
security are covered.
Ø Data Sovereignty and Privacy: Ensuring data stored in the cloud complies with data
protection regulations relevant to the jurisdictions in which the data is stored and
processed.
Ø Configuration and Change Management: Evaluating the security of cloud
configurations, including the management of access keys, security groups, and
storage buckets.

Hybrid Environment Audits


Key Considerations:

Ø Integration Points: Assessing the security of connections between on-premise
systems and cloud services, such as VPNs and API gateways.
Ø Consistent Security Policies: Ensuring that security policies are uniformly applied
across both on-premise and cloud components to avoid gaps in protection.
Ø Data Flow Analysis: Understanding how data moves between on-premise and cloud
environments to secure data in transit and at rest.
Ø Identity and Access Management (IAM): Verifying that IAM policies and practices are
effective across the hybrid environment, particularly for users accessing both on-
premise and cloud resources.

General Audit Practices Across Environments


Ø Documentation Review: Examining security policies, procedures, and records of
security incidents and their resolutions.
Ø Interviews: Conducting interviews with IT staff, security personnel, and end-users to
understand security practices and challenges.
Ø Automated Tools: Utilizing scanning tools and software to identify vulnerabilities,
misconfigurations, and non-compliance with security policies (a minimal cloud-audit
sketch follows this list).
Ø Compliance Checking: Verifying adherence to legal, regulatory, and contractual
obligations specific to the organization’s industry and location.
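As an illustration of the automated-tools item above, the hedged Python sketch below lists S3 buckets and reports any without a complete public-access block. It assumes the boto3 library and credentials with the relevant read permissions; a real cloud audit would cover many more configuration checks than this one.

# Hedged sketch of one automated cloud-audit check (requires boto3 and
# AWS credentials with s3:ListAllMyBuckets / s3:GetBucketPublicAccessBlock).
import boto3
from botocore.exceptions import ClientError

s3 = boto3.client("s3")
for bucket in s3.list_buckets()["Buckets"]:
    name = bucket["Name"]
    try:
        cfg = s3.get_public_access_block(Bucket=name)["PublicAccessBlockConfiguration"]
        if not all(cfg.values()):
            print(f"AUDIT FINDING: {name} public-access block incomplete: {cfg}")
    except ClientError:
        print(f"AUDIT FINDING: {name} has no public-access block configured")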

For security professionals, adapting audit practices to suit the specific characteristics of
on-premise, cloud, and hybrid environments is essential for effective security governance.


They must stay informed about the latest security standards and technological
advancements to address the evolving risks and challenges associated with each type of
environment. This comprehensive approach ensures that audits can effectively identify
vulnerabilities, assess the effectiveness of security controls, and recommend
improvements to strengthen the overall security posture of the organization.

