Unit 5 - CC
Data integrity
- Data integrity ensures that data remains consistent and correct during operations like
transfer, storage, or retrieval.
- It means that data changes only in response to authorized transactions.
- Fail-over technology is essential for cloud security but is often overlooked.
- Mission-critical applications require robust fail-over mechanisms to ensure business
continuity.
- Security measures need to extend to the data level to guarantee protection regardless of its
location.
- Sensitive data belongs to the enterprise, emphasizing the importance of data-level security.
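The integrity guarantees described above are commonly enforced with cryptographic hashes: record a digest before a transfer or storage operation, then recompute and compare it afterwards. A minimal sketch using Python's standard library (the file content and the choice of SHA-256 are illustrative assumptions):

```python
import hashlib

def sha256_digest(data: bytes) -> str:
    """Return the SHA-256 hex digest of a byte string."""
    return hashlib.sha256(data).hexdigest()

def verify_integrity(data: bytes, expected_digest: str) -> bool:
    """Recompute the digest after transfer/storage and compare it
    against the digest recorded before the operation."""
    return sha256_digest(data) == expected_digest

# Record a digest before uploading; verify after download.
original = b"quarterly-financials.csv contents"
digest = sha256_digest(original)
print(verify_integrity(original, digest))     # unchanged data passes
print(verify_integrity(b"tampered", digest))  # any change is detected
```

Any unauthorized change to the data, however small, produces a different digest and fails the check.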
Compliance
- Cloud computing presents challenges for achieving compliance with existing standards.
- Many compliance regulations do not consider the nuances of cloud computing.
- Translating existing IT security and compliance standards to the cloud environment is
necessary over time.
- SaaS complicates compliance as customers may struggle to determine the location of their
data within the provider's network.
- Compliance issues related to data privacy, segregation, and security arise due to data
intermixing on shared servers or databases.
- Some countries have strict regulations on data storage, including limits on data about citizens
and requirements for financial data to remain within the country.
Government policy
- Government policy must adapt to address both the opportunities and threats associated with
cloud computing.
- Policy changes are likely to focus on issues such as off-shoring of personal data and privacy
protection.
- Attention will be drawn to data control by third parties and data off-shored to other countries.
- Transitioning to a virtualized environment may lead to a decrease in security effectiveness for
traditional controls like VLANs and firewalls.
- Security managers should prioritize protecting systems containing critical data, such as
corporate financial information or source code, during the shift to server virtualization in
production environments.
Outsourcing
- Outsourcing involves relinquishing significant control over data, posing security concerns
despite the benefits in business ease and financial savings.
- Security managers should collaborate with legal teams to establish appropriate contract terms
safeguarding corporate data and defining acceptable service-level agreements.
Virtualization efficiencies
- Virtualization efficiencies in the cloud require co-location of virtual machines from multiple
organizations on the same physical resources.
- Traditional data center security principles apply in the cloud, but physical segregation and
hardware-based security measures are ineffective against attacks between virtual machines on
the same server.
- Administrative access is typically through the Internet rather than controlled, restricted direct
connections as in traditional data centers, increasing risk and exposure.
- Stringent monitoring for changes in system control and access restrictions is essential in cloud
environments.
- The dynamic nature of virtual machines makes maintaining security consistency and ensuring
auditability challenging.
- Cloning and distribution of virtual machines between physical servers can lead to the spread
of configuration errors and vulnerabilities.
- Identifying the security state of a system and locating insecure virtual machines becomes
challenging.
- Intrusion detection and prevention systems must detect malicious activity at the virtual
machine level, regardless of their location within the virtual environment.
- Co-locating multiple virtual machines increases the attack surface and the risk of virtual
machine-to-virtual machine compromise.
- Virtual machines and physical servers in cloud environments often use the same operating
systems and applications, raising the risk of remote exploitation of vulnerabilities by attackers
or malware.
- Virtual machines that move between private and public clouds are exposed while in transit and to the differing security postures of each environment.
- Shared cloud environments, either fully or partially, have a larger attack surface and are at
greater risk compared to dedicated resource environments.
1. Privileged user access—Inquire about who has specialized access to data, and about the
hiring and management of such administrators.
2. Regulatory compliance—Make sure that the vendor is willing to undergo external audits
and/or security certifications.
3. Data location—Does the provider allow for any control over the location of data?
4. Data segregation—Make sure that encryption is available at all stages, and that these
encryption schemes were designed and tested by experienced professionals.
5. Recovery—Find out what will happen to data in the case of a disaster. Do they offer
complete restoration? If so, how long would that take?
6. Investigative support—Does the vendor have the ability to investigate any inappropriate or
illegal activity?
7. Long-term viability—What will happen to data if the company goes out of business? How
will data be returned, and in what format?
To address the security issues listed above along with others mentioned earlier in the topic,
SaaS providers will need to incorporate and enhance security practices used by the managed
service providers and develop new ones as the cloud computing environment evolves.
The baseline security practices for the SaaS environment, as currently formulated, are discussed in the following sections.
Security Management
It refers to the comprehensive process of developing, implementing, and
overseeing policies, procedures, and practices to protect an organization's
information assets and ensure the confidentiality, integrity, and availability
of data.
It involves:
Clearly define roles and responsibilities.
Establish agreements on expectations.
Prevent loss and confusion within the security team.
Leverage team members' skills and experience.
Set clear performance goals.
Maintain high team morale and pride.
Ensure overall security effectiveness.
Risk/Vulnerability Assessment
It is the process of identifying, evaluating, and prioritizing potential threats and weaknesses in
an organization's information systems and infrastructure.
It involves:
Conduct security risk assessments to balance business utility and asset protection.
Avoid an increase in information security audit findings.
Prevent jeopardizing certification goals (and not merely for the sake of certification).
Select effective security controls.
Proactively assess and manage information security risks periodically or as needed.
Use formal risk management processes for informed decision-making.
Risk Management
It refers to the systematic process of identifying, assessing, and prioritizing risks to an
organization's information assets, and implementing measures to mitigate or manage these
risks.
It involves:
Identify technology assets.
Identify data and its links to business processes, applications, and data stores.
Assign ownership and custodial responsibilities (duties and obligations assigned to
individuals or teams responsible for maintaining and protecting information assets ).
Maintain a repository of information assets.
Owners hold authority and accountability for information assets.
Implement protection requirements for confidentiality, integrity, availability, and
privacy.
Create a formal risk assessment process.
Allocate security resources linked to business continuity.
These risk management processes should be integrated with network and other systems monitoring processes (e.g., security information management (SIM), security event management (SEM), security information and event management (SIEM), and security operations centers that use these systems for dedicated 24/7/365 monitoring).
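A formal risk assessment process typically scores each asset by likelihood and impact and remediates the highest scores first. A minimal sketch of such a risk register (the asset names and the 1-to-5 scales are illustrative assumptions, not from the source):

```python
# Minimal risk register: risk score = likelihood x impact (1-5 scales assumed).
assets = [
    {"asset": "customer database", "likelihood": 4, "impact": 5},
    {"asset": "build server",      "likelihood": 2, "impact": 3},
    {"asset": "public website",    "likelihood": 5, "impact": 2},
]

for a in assets:
    a["risk"] = a["likelihood"] * a["impact"]

# Highest-risk assets get security resources first.
for a in sorted(assets, key=lambda a: a["risk"], reverse=True):
    print(f'{a["asset"]}: {a["risk"]}')
```

Tying scores to named owners (per the ownership bullets above) makes the resulting priority list actionable.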
Incident Response:
Incident response is an organized approach to addressing and managing the aftermath of a
security breach or attack (also known as an incident).
The goal is to handle the situation in a way that limits damage and reduces recovery time and
costs.
An incident response plan includes a policy that defines, in specific terms, what constitutes an
incident and provides a step-by-step process that should be followed when an incident occurs.
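A policy that defines incidents and their step-by-step handling can be encoded so that responders follow the same process every time. A minimal sketch (the incident categories and response steps are illustrative assumptions):

```python
# Assumed mapping from incident category to its ordered response runbook.
RUNBOOKS = {
    "data breach": ["contain affected systems", "notify security lead",
                    "preserve evidence", "assess data exposure"],
    "dos attack":  ["enable rate limiting", "contact upstream provider",
                    "monitor traffic levels"],
}

def respond(category: str) -> list[str]:
    """Return the ordered response steps; unknown incidents escalate."""
    return RUNBOOKS.get(category, ["escalate to security manager"])

print(respond("data breach")[0])  # contain affected systems
print(respond("unknown event"))   # ['escalate to security manager']
```

Keeping the runbooks in one place limits damage and recovery time because no one improvises the first steps under pressure.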
A security architecture document should be developed that defines security and privacy
principles to meet business objectives.
Documentation is required for management controls and metrics specific to asset classification
and control, physical security, system access controls, network and computer management,
application development and maintenance, business continuity, and compliance.
The creation of a secure architecture provides the engineers, data center operations personnel,
and network operations personnel a common blueprint to design, build, and test the security of
the applications and systems.
Design reviews of new changes can be better assessed against this architecture to assure that
they conform to the principles described in the architecture, allowing for more consistent and
effective design reviews.
Vulnerability Assessment
Vulnerability assessment classifies network assets to more efficiently prioritize vulnerability-mitigation programs, such as patching and system upgrading.
It measures the effectiveness of risk mitigation by setting goals of reduced vulnerability
exposure and faster mitigation.
Vulnerability management should be integrated with discovery, patch management, and
upgrade management processes to close vulnerabilities before they can be exploited.
Vulnerability assessment in the cloud should be done on a periodic basis under a predefined service-level agreement.
Customers should be allowed to test the cloud infrastructure both before and after they outsource their infrastructure to the cloud.
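Prioritizing mitigation and tracking service-level deadlines, as described above, can be sketched directly. In this sketch the severity bands follow the CVSS v3 qualitative scale, but the SLA windows are illustrative assumptions, not a standard:

```python
from datetime import date, timedelta

# Assumed remediation SLAs by severity (days allowed to mitigate).
SLA_DAYS = {"critical": 7, "high": 30, "medium": 90, "low": 180}

def severity(cvss: float) -> str:
    """Map a CVSS base score to a severity band (CVSS v3-style bands)."""
    if cvss >= 9.0:
        return "critical"
    if cvss >= 7.0:
        return "high"
    if cvss >= 4.0:
        return "medium"
    return "low"

def patch_deadline(found_on: date, cvss: float) -> date:
    """Date by which the vulnerability must be mitigated under the SLA."""
    return found_on + timedelta(days=SLA_DAYS[severity(cvss)])

# A 9.8 finding discovered on Jan 1 must be fixed within 7 days.
print(patch_deadline(date(2024, 1, 1), 9.8))  # 2024-01-08
```

Measuring how often deadlines are met gives the "faster mitigation" metric mentioned above.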
This shift in control is the primary reason new approaches and techniques are required to ensure organizations can maintain data security. When an outside party owns, controls, and manages infrastructure and computational resources, how can you be assured that business or regulatory data remains private and secure, that your organization is protected from damaging data breaches, and that you can still satisfy the full range of reporting, compliance, and regulatory requirements?
Some of the points to keep data private and secure in cloud infrastructure are as below:
Application Security
Application security is one of the critical success factors for a SaaS company.
This is where the security features and requirements are defined and application security test
results are reviewed.
Application security processes, secure coding guidelines, training, and testing scripts and tools
are typically a collaborative effort between the security and the development team.
Although product engineers will likely focus on the application layer, the security design of the application itself, and the infrastructure layers interacting with the application, the security team should provide the security requirements for the product development engineers to implement.
External penetration testers are used for application source code reviews, and attack and
penetration tests provide an objective review of the security of the application as well as
assurance to customers that attack and penetration tests are performed regularly.
Fragmented and undefined collaboration on application security can result in lower-quality
design, coding efforts, and testing results.
Some of the things we should consider while moving to a cloud application are:
a. Risks associated with cloud application
b. The fact that someone is managing and controlling your critical application
c. The perimeter of cloud is different and multitenant
d. Application should be protected with industry standard firewall and security products
e. Insecure interfaces and application programming interfaces (APIs)
f. Denial-of-service (DoS) attacks
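One concrete instance of the secure-coding guidelines discussed above is using parameterized queries, so that an application-layer interface cannot be abused for SQL injection. A minimal sketch using Python's built-in sqlite3 module (the table and column names are illustrative):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'admin')")

def find_user(name: str):
    # Parameterized query: user input is bound as data, never
    # concatenated into the SQL string, so injection attempts fail.
    return conn.execute(
        "SELECT name, role FROM users WHERE name = ?", (name,)
    ).fetchall()

print(find_user("alice"))             # [('alice', 'admin')]
print(find_user("alice' OR '1'='1"))  # [] -- injection treated as literal text
```

Guidelines like this are exactly what application security testing scripts and external penetration tests check for.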
Virtual Machine Security
Virtual Machine Isolation: VMs are isolated by design and have the following
features
Containers for Applications: Virtual machines serve as
containers for running applications and guest operating
systems.
Isolation: All VMware virtual machines are isolated from each
other by design.
Secure Resource Sharing: Isolation allows multiple virtual
machines to securely share hardware resources without
interruption.
Defense Mechanisms
Firewall Protection: Deploy bidirectional stateful firewalls on
virtual machines to ensure isolation and location awareness.
Integrity Monitoring: Apply integrity monitoring and log
inspection software at the virtual machine level for enhanced
security.
Network Attack Concerns: Monitor network traffic between
collocated VMs to detect and prevent potential attacks.
Disaster Recovery
A Disaster Recovery Plan (DRP) is a business plan that describes how work can be resumed
quickly and effectively after a disaster.
Disaster recovery planning is just part of business continuity planning and applied to aspects of
an organization that rely on an IT infrastructure to function.
The overall idea is to develop a plan that will allow the IT department to recover enough
data and system functionality to allow a business or organization to operate - even possibly
at a minimal level.
A disaster recovery plan (DRP) documents policies, procedures and actions to limit the
disruption to an organization in the wake of a disaster.
Just as a disaster is an event that makes the continuation of normal functions impossible, a
disaster recovery plan consists of actions intended to minimize the negative effects of a disaster
and allow the organization to maintain or quickly resume mission-critical functions.
To better understand and evaluate disaster recovery strategies, it is important to define two
terms: recovery time objective (RTO) and recovery point objective (RPO).
RTO
The recovery time objective (RTO) is the maximum amount of time allocated for restoring
application functionality.
This is based on business requirements and is related to the importance of the application.
Critical business applications require a low RTO.
RPO
The recovery point objective (RPO) is the acceptable time window of lost data due to the
recovery process. For example, if the RPO is one hour, you must completely back up or
replicate the data at least every hour. Once you bring up the application in an alternate
datacenter, the backup data may be missing up to an hour of data. Like RTO, critical
applications target a much smaller RPO.
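The RTO/RPO definitions above can be applied as a direct check on a recovery design: restoration must fit inside the RTO, and the gap between successive backups can never exceed the RPO. A small sketch (the example objectives are illustrative assumptions):

```python
from datetime import timedelta

def meets_objectives(restore_time: timedelta, backup_interval: timedelta,
                     rto: timedelta, rpo: timedelta) -> bool:
    """A recovery design meets its objectives when restoration fits
    inside the RTO and backups run at least as often as the RPO."""
    return restore_time <= rto and backup_interval <= rpo

# Critical app: RTO 2h, RPO 1h. Half-hourly backups pass;
# nightly backups violate the RPO.
rto, rpo = timedelta(hours=2), timedelta(hours=1)
print(meets_objectives(timedelta(minutes=45), timedelta(minutes=30), rto, rpo))  # True
print(meets_objectives(timedelta(minutes=45), timedelta(hours=24), rto, rpo))    # False
```

The worst case is a disaster striking just before the next backup: everything since the last backup is lost, which is why the backup interval must not exceed the RPO.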
Difference
- RTO: the targeted duration within which a business process or system must be restored after a disruption to avoid significant consequences.
- RPO: the acceptable amount of data loss an organization can tolerate during a disruption to its operations.
• Count the costs. Although data center downtime is harmful to any company that relies on
its IT services, it costs some companies more than others. Your disaster recovery plan
should enable a fast return to service, but it shouldn’t cost you more than you are losing
in downtime costs.
• Evaluate the types of threats you face and how extensively they can affect your facility.
Malicious attacks can occur anywhere, but you may also face threats peculiar to your
location, such as weather events (tornadoes, hurricanes, floods and so on), earthquakes
or other dangers. Part of preparing for a disaster is to know what is likely to occur and
how those threats could affect your systems. Evaluating these situations beforehand
allows you to better take appropriate action should one of these events occur.
• Know what you have and how critical it is to operations. Responding to a disaster in your
data center is similar to doing so in medicine: you need to treat the more serious
problems first, then the more minor ones. By determining which systems are most critical
to your data center, you enable your IT staff to prioritize and make the best use of the
precious minutes and hours immediately following an outage. Not every system need be
functional immediately following a disaster.
• Identify critical personnel and gather their contact information. Who do you most want
to be present in the data center following an outage? Who has the most expertise in a
given area and the greatest ability to oversee some part of the recovery effort? Being able
to get in touch with these people is crucial to a fast recovery. Collect their contact
information and, just as importantly, keep it up to date. If it’s been a year or more since
you last checked, some of that contact information is likely out of date. Every minute you
spend trying to find important personnel is time not spent on recovery.
• Ensure that everyone knows the disaster recovery plan and understands his or her role.
Announcing the plan and assigning roles is not something you should do after a disaster
strikes; it should be done well in advance, leaving time for personnel to learn their roles
and to practice them. Almost nothing about a disaster event should be new (aside from
some contingencies of the moment, perhaps): the IT staff should implement disaster
recovery as a periodic task (almost) like any other.
• Practice (drills). Needless to say, this is perhaps the most critical part of preparation for a
downtime event. The difference between knowing your role and being able to execute it
well is simply practice. You may not be able to shut down your data center to simulate
precisely all of the conditions you will face in an outage, but you can go through many of
the procedures nevertheless. Some recommendations prescribe semiannual drills, at a
minimum, to practice implementing the disaster recovery plan. If there’s one thing you
take from this article, it’s that you should practice your disaster recovery plan—don’t
expect it to unfold smoothly when you need it (regardless of how well laid-out a plan it is)
if you haven’t given it a trial run or two.
• Automate where possible. Your staff is limited, so it can only do so much. The more that
your systems can do on their own in a recovery situation, the faster the recovery will
generally be. This also leaves less room for human error—particularly in the kind of
stressful atmosphere that exists following a disaster.
• Follow up after a disaster. When a downtime event does occur, evaluate the
performance of the personnel and the plan to determine if any improvements can be
made. Update your plan accordingly to enable a better response in the future.
Furthermore, investigate the cause of the outage. If it’s an internal problem, take
necessary measures to correct equipment issues to avoid the same problem occurring
again.
Key Issues
Security
- Data Risks: Threats of data loss, theft, and hacking.
- Unauthorized Access: Risk of database administrators inadvertently granting access to
unauthorized users.
- Remote Storage Risks: Potential security threats when data is stored and accessed remotely.
- Encryption Challenges: Even with secure encryption, there is always a risk of decryption by
knowledgeable hackers.
- Multi-Tenancy Vulnerabilities: Unique security challenges, such as data access by
unauthorized tenants or resource sharing issues.
Performance
- Response Time: SaaS applications can have slower response times than locally hosted server applications.
- Efficiency Impact: Slow performance affects overall system efficiency and competitiveness of
cloud providers.
- Enhancement Needs: Crucial for providers to continuously enhance performance.
Feature Limitations
- Lack of Essential Features: Many cloud services lack critical features despite modern
interfaces.
- User Experience: Can be frustrating for clients without necessary features.
Interoperability
- Provider Restrictions: Users are limited by their cloud service providers' constraints.
- Integration Challenges: Difficulty in integrating with other vendors, service providers, and local
applications.
- System Optimization: Limits on optimizing systems through integration with existing on-
premise data centers.
Monitoring
- Continuous Monitoring: Essential for identifying and resolving issues in multi-tenancy systems.
- Complexity: Monitoring is challenging due to shared resources and complexity in identifying
flaws.
Capacity Optimization
- Tenant Placement: Critical for database administrators to allocate tenants appropriately on
networks.
- Modern Tools: Necessary to use advanced tools for correct allocation and capacity generation.
- Cost Management: Insufficient capacity management can lead to increased costs.
- Adaptability: Systems must upgrade continuously to meet changing demands.