
Cloud Computing

All Unit Notes – Kritik Bansal


UNIT – 1

Cloud Definition: The cloud in cloud computing provides the means through which everything, from
computing power and infrastructure to applications and software, can be delivered to a user as a
service wherever and whenever the user needs it. The cloud itself is a set of hardware, networks,
storage, services, and interfaces that enable the delivery of computing as a service. Cloud services
include the delivery of software, infrastructure, and storage over the Internet (either as separate
components or a complete platform) based on user demand.

The characteristics of the cloud are:

❖ Elasticity and the ability to scale up and down

❖ Self-service provisioning and automatic de-provisioning

❖ Application programming interfaces (APIs)

❖ Billing and metering of service usage in a pay-as-you-go model

❖ Security

Elasticity and scalability


The service provider can't predict how customers will use the service. One customer might use the
service three times a year during peak selling seasons, whereas another might use it as a primary
development platform for all of its applications. Therefore, the service needs to be available all the
time (7 days a week, 24 hours a day) and it has to be designed to scale upward for high periods of
demand and downward for lighter ones. Scalability also means that an application can scale when
additional users are added and when the application requirements change. This ability to scale is
achieved by providing elasticity.

Self-service provisioning
Customers can easily get cloud services without going through a lengthy process. The customer simply
requests a required amount of computing, storage, software, process, or other resources from the
service provider. While the on-demand provisioning capabilities of cloud services eliminate many time
delays, an organization still needs to do its homework. These services aren't free; needs and
requirements must be determined before capability is automatically provisioned.

Application programming interfaces (APIs)


Cloud services need to have standardized APIs. These interfaces provide the instructions and
constraints on how two applications or data sources can communicate with each other. A standardized
interface lets the customer link to a cloud service more easily, without complex programming.
Billing and metering of services
A cloud environment needs a built-in service that bills customers. And, of course, to calculate that bill,
usage has to be metered (tracked). Even free cloud services (such as Google's Gmail or Zoho's
Internet-based office applications) are metered.
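As a toy illustration of pay-as-you-go billing, the sketch below multiplies metered usage by per-unit rates. The item names and rate values are invented for the example and are not any provider's actual pricing.

```python
# Hypothetical unit rates for a pay-as-you-go bill (illustrative only;
# real providers publish their own pricing).
RATES = {"compute_hours": 0.05, "storage_gb_month": 0.02, "egress_gb": 0.09}

def monthly_bill(usage):
    """Sum each metered quantity multiplied by its unit rate."""
    return sum(RATES[item] * quantity for item, quantity in usage.items())

print(monthly_bill({"compute_hours": 720, "storage_gb_month": 100, "egress_gb": 50}))
# 720*0.05 + 100*0.02 + 50*0.09 = 42.5
```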

Security
Many customers must take a leap of faith to trust that the cloud service is safe. Turning over critical
data or application infrastructure to a cloud-based service provider requires making sure that the
information can't be accidentally accessed by another company (or maliciously accessed by a hacker).
Many companies have compliance requirements for securing both internal and external information.
Without the right level of security, one might not be able to use a provider's offerings.

Cloud computing has mainly five characteristics:

1. On-demand self-service- the services are available on demand; the user can get the services at any
time, and all it takes is an Internet connection.

2. Broad network access- the cloud is accessed remotely over the network, typically the Internet,
which means its computing capabilities, software, and hardware are accessible from anywhere.

3. Resource pooling- the provider's resources are pooled in a location-independent manner to serve a
large number of users with all their different devices and their required resources.

4. Rapid elasticity- dealing with the cloud is very easy; the user can simply reduce or increase the
capacity, and it is faster than regular computing types.

5. Measured service- cloud systems control and optimize resource use through measurement
capabilities appropriate to the type of service; usage is metered and billed accordingly.

Applications: IBM Cloud, Google Cloud, Microsoft Azure, AWS, R cloud, Gmail, Yahoo, Dropbox,
Hangouts

Advantages of Cloud Computing


• Lower computer costs
• Scalable
• Instant software updates
• Unlimited storage capacity
• Increased data reliability
• Universal document access
• Device independence
• Lowers the outlay expense for start-up companies
• Easier group collaboration

Disadvantages of Cloud Computing


• Requires a constant Internet connection
• Does not work well with low-speed connections
• Governance and Regulatory compliance
– Not all service providers have well-defined service-level agreements.
• Stored data might not be secure
• Technical issue and downtime
• Efficient resource management
• Data theft
• Lack of control

What is the difference between Internet and Cloud Computing?

The Internet is a network of networks, which provides the software/hardware infrastructure to establish and
maintain connectivity of computers around the world, while cloud computing is a newer technology
that delivers many types of resources over the Internet. Cloud computing can therefore be
identified as a technology that uses the Internet as the communication medium to deliver its services.
Cloud services can be offered within enterprises through LANs, but in reality, cloud computing cannot
operate globally without the Internet.

Scalability vs. Elasticity

The purpose of elasticity is to match the resources allocated with the actual amount of resources
needed at any given point in time. Scalability handles the changing needs of an application within
the confines of the infrastructure by statically adding or removing resources to meet application
demands as needed. In most cases, this is handled by adding resources to existing instances (called
scaling up or vertical scaling) and/or adding more copies of existing instances (called scaling out or
horizontal scaling). In addition, scalability can be more granular and targeted in nature than elasticity
when it comes to sizing.
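A minimal sketch of horizontal elasticity, assuming an invented target-utilization policy: the controller computes how many instances would bring utilization back toward the target, clamped to fixed bounds. The thresholds are illustrative, not from any real autoscaler.

```python
# Illustrative policy values; real autoscalers expose these as configuration.
TARGET_UTIL, MIN_INSTANCES, MAX_INSTANCES = 0.60, 1, 10

def desired_instances(current, utilization):
    """Scale out/in so average utilization moves toward the target."""
    desired = round(current * utilization / TARGET_UTIL)
    return max(MIN_INSTANCES, min(MAX_INSTANCES, desired))

print(desired_instances(current=4, utilization=0.90))  # high load -> scale out to 6
print(desired_instances(current=4, utilization=0.30))  # light load -> scale in to 2
```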

Cloud Ecosystem

A cloud ecosystem is a complex system of interdependent components that all work together to
enable cloud services. In cloud computing, the ecosystem consists of hardware and software as
well as cloud customers, cloud engineers, consultants, integrators and partners.

Evolution of Cloud Computing


Cluster Computing
Cluster computing is a group of computers connected to each other that work together as a single
computer. These computers are often linked through a LAN.

A cluster is a tightly coupled system, and one of its characteristics is a centralized job
management and scheduling system.

All the computers in the cluster use the same hardware and operating system, are in the same
physical location, and are connected with a very high-speed connection to perform as a
single computer.

The resources of the cluster are managed by a centralized resource manager.


Architecture: The architecture of cluster computing contains some main components and they are:
1. Multiple standalone computers.

2. Operating system.

3. High performance interconnects.

4. Communication software.

5. Different applications

Advantages: software is automatically installed and configured, and the nodes of the cluster can be
added and managed easily, so it is very easy to deploy; it is an open system and very cost-effective to
acquire and manage; clusters have many sources of support and supply; it is fast and very flexible;
the system is optimized for performance as well as simplicity, and software configurations can be
changed at any time; it also saves the time spent searching the net for the latest drivers; the cluster
system is very supportive, as it includes software updates.
Disadvantages: it is hard to manage without experience; when the size of the cluster is large,
it is difficult to find out what has failed; and the programming environment is hard to
improve when the software on one node differs from the software on another.

Grid Computing
Grid computing is a combination of resources from multiple administrative domains working toward a
common target. This group of computers can be distributed across several locations, and each group
of grids can be connected to the others.

The computers in the grid are not required to be in the same physical location and can be operated
independently, so each computer on the grid is considered a distinct computer.

The computers in the grid are not tied to only one operating system and can run different OSs on
different hardware. When it comes to a large project, the grid divides it among multiple computers to
easily use their resources.

Advantages: One advantage of grid computing is that you don't need to buy large servers for
applications that can be split up and farmed out to smaller, commodity-type servers. Secondly, it is
more efficient in its use of resources. Grid environments are also much more modular and have
fewer points of failure. Policies in the grid can be managed by the grid software,
upgrades can be done without scheduling downtime, and jobs can be executed in
parallel, speeding performance.

Disadvantages: It needs a fast interconnect between computing resources; some applications may
need to be adapted to take full advantage of the new model; licensing across many servers may make
it prohibitive for some applications; and grid environments include many smaller servers across
various administrative domains, so there are political challenges associated with sharing resources,
especially across different administrative domains.
Utility Computing
Utility Computing refers to a type of computing technologies and business models which provide
services and computing resources to the customers, such as storage, applications and computing
power.

This repackaging of computing services is the foundation of the shift to on-demand computing,
software as a service and cloud computing models, which later developed the idea of computing,
applications and networks as a service.

Utility computing is a kind of virtualization, meaning the total web storage space and computing
power made available to users is much larger than that of a single time-sharing computer.

Multiple backend web servers are used to make this kind of web service possible.

Utility computing is similar to cloud computing and it often requires a cloud-like infrastructure.

Advantages: The client doesn't have to buy all the hardware, software and licenses needed to do
business. Instead, the client relies on another party to provide these services. It also gives companies
the option to subscribe to a single service and use the same suite of software throughout the entire
client organization, and it offers compatibility across all the computers in large companies.
Disadvantages: The service could be stopped by the utility computing company for any reason,
such as financial trouble or equipment problems. Utility computing systems can also be
attractive targets for hackers, and much of the responsibility for keeping the system safe falls to the
provider.

Cloud Computing
Cloud computing is the term used when the hard work of running an application is done not by the
local device but by devices running remotely on a network owned by another company, which
provides all the possible services, from e-mail to complex data-analysis programs.

This method decreases the user's need for powerful software and hardware at the physical location.

The only thing the user needs is to run the cloud computing system's software on any device that
can access the Internet.

Difference between cloud computing and utility computing

Let us understand the difference between utility computing vs cloud computing. Utility computing is
a precursor to cloud computing. Cloud computing does everything that utility computing does and
also offers much more than that. Cloud computing is not restricted to any specific network but it is
accessible through the internet. The resource virtualization and its scalability advantage and
reliability are more pronounced in the case of cloud computing.

Utility computing can be implemented without cloud computing. Utility computing can be
understood through the example of a supercomputer that rents out processing time to multiple
clients; the users pay for the resources that they use.
Utility computing is more like a business model than a particular technology. Cloud computing does
support utility computing but not every utility computing will be based on the cloud.

Cloud Service model architecture


Cloud Deployment model architecture

Underlying principles of parallel & distributed Computing

Distributed Computing:

A distributed system is a network of autonomous computers that communicate with each other to
achieve a goal. The computers in a distributed system are independent and don't physically share
memory or processors; they communicate with each other via message passing.
A computer in a distributed system can have different roles, based on the goals of the system and the
computer's own hardware and software properties.
A distributed system can be divided into two kinds:

Client-server:
• Centralized
• Single server provides services to many clients

Peer to peer:

• Decentralized
• All are equally responsible; no main server
• All contribute some processing power and memory

Features of Distributed computing

1. Modularity- The two architectures, peer-to-peer and client-server, are designed to enforce
modularity: the idea that the components of the system should be black boxes to each
other. A component does not need to worry about how another is implemented; it interacts
only through an interface that produces outputs from inputs.

2. Message passing- In a distributed system, each system communicates with the others via
message passing.
A message consists of three essential parts (a small sketch follows below):
-sender
-recipient
-content
Message protocols are sets of encoding and decoding rules; messages have a particular format.
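To make the sender/recipient/content structure concrete, here is a small sketch in which two nodes share nothing but message queues. The node names and message content are invented for illustration.

```python
from dataclasses import dataclass
from queue import Queue

@dataclass
class Message:          # the three essential parts named above
    sender: str
    recipient: str
    content: str

# Each node owns only its inbox; nodes interact purely by passing messages.
inboxes = {"node_a": Queue(), "node_b": Queue()}

def send(msg):
    inboxes[msg.recipient].put(msg)

send(Message(sender="node_a", recipient="node_b", content="ping"))
print(inboxes["node_b"].get())  # node_b receives the message from node_a
```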

Parallel computing
• Usage of multiple processors.
• If two or more processors are available, many tasks can be done more quickly.
• While one processor is doing one aspect of some computation, another is doing some other
aspect of the computation.
• In order to work together, multiple processors need to share information with each other;
this is done using a shared memory environment (a short example follows this list).
• Variables, data structures and objects in that environment are accessible by all the processors.
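A minimal shared-memory sketch using Python's multiprocessing module: four worker processes increment one counter that lives in memory shared by all of them, with a lock preventing lost updates.

```python
from multiprocessing import Process, Value

def worker(counter, n):
    for _ in range(n):
        with counter.get_lock():   # serialize access to the shared variable
            counter.value += 1

if __name__ == "__main__":
    counter = Value("i", 0)        # an integer in shared memory, visible to all processes
    procs = [Process(target=worker, args=(counter, 1000)) for _ in range(4)]
    for p in procs:
        p.start()
    for p in procs:
        p.join()
    print(counter.value)           # 4000: every increment from every process is counted
```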
UNIT – 2
Service-Oriented Architecture
Service-Oriented Architecture (SOA) is an architectural approach in which applications make use of
services available in the network. In this architecture, services are provided to form applications,
through a communication call over the internet. It defines a way to make software components
reusable using the interfaces.

• SOA allows users to combine a large number of facilities from existing services to form
applications.
• SOA encompasses a set of design principles that structure system development and provide
means for integrating components into a coherent and decentralized system.
• SOA based computing packages functionalities into a set of interoperable services, which can
be integrated into different software systems belonging to separate business domains.
There are two major roles within Service-oriented Architecture:

1. Service provider: The service provider is the maintainer of the service and the organization
that makes available one or more services for others to use. To advertise services, the
provider can publish them in a registry, together with a service contract and service
description document that specifies the nature of the service, how to use it, the
requirements for the service, and the fees charged.
2. Service consumer: The service consumer can locate the service metadata in the registry and
develop the required client components to bind and use the service.

Services might aggregate information and data retrieved from other services or create workflows of
services to satisfy the request of a given service consumer. This practice is known as service orchestration.
Another important interaction pattern is service choreography, which is the coordinated interaction of
services without a single point of control.
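A toy in-memory registry can illustrate the provider/consumer interaction described above. This is only a sketch; the service name, endpoint and contract text are invented, and a real registry (e.g., UDDI) holds far richer metadata.

```python
# Toy service registry, only to illustrate publish/discover; entries are invented.
registry = {}

def publish(name, endpoint, contract):
    """Service provider advertises a service together with its contract."""
    registry[name] = {"endpoint": endpoint, "contract": contract}

def discover(name):
    """Service consumer locates the metadata needed to bind to the service."""
    return registry[name]

publish("quote-service", "https://services.example/quotes", "GET returns JSON quotes")
print(discover("quote-service")["endpoint"])
```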

Guiding Principles of SOA:


1. Standardized service contract: Services adhere to a communication agreement,
specified through one or more service description documents.
2. Loose coupling: Services are designed as self-contained components, maintain
relationships that minimize dependencies on other services.
3. Abstraction: A service is completely defined by service contracts and description
documents. They hide their logic, which is encapsulated within their implementation.
4. Reusability: Designed as components, services can be reused more effectively, thus
reducing development time and the associated costs.
5. Autonomy: Services have control over the logic they encapsulate and, from a service
consumer point of view, there is no need to know about their implementation.
6. Discoverability: Services are defined by description documents that constitute
supplemental metadata through which they can be effectively discovered. Service
discovery provides an effective means for utilizing third-party resources.
7. Composability: Using services as building blocks, sophisticated and complex
operations can be implemented. Service orchestration and choreography provide a
solid support for composing services and achieving business goals.
Advantages of SOA:
• Service reusability: In SOA, applications are made from existing services. Thus,
services can be reused to make many applications.
• Easy maintenance: As services are independent of each other they can be updated
and modified easily without affecting other services.
• Platform independent: SOA allows making a complex application by combining
services picked from different sources, independent of the platform.
• Availability: SOA facilities are easily available to anyone on request.
• Reliability: SOA applications are more reliable because it is easy to debug small
services rather than huge codes
• Scalability: Services can run on different servers within an environment, this
increases scalability
Disadvantages of SOA:
• High overhead: A validation of input parameters of services is done whenever
services interact this decreases performance as it increases load and response time.
• High investment: A huge initial investment is required for SOA.
• Complex service management: When services interact, they exchange messages to
accomplish tasks; the number of messages may run into millions. It becomes a
cumbersome task to handle such a large number of messages.
Practical applications of SOA: SOA is used in many ways around us whether it is mentioned
or not.
1. SOA infrastructure is used by many armies and air forces to deploy situational
awareness systems.
2. SOA is used to improve healthcare delivery.
3. Nowadays many apps are games and they use inbuilt functions to run. For example,
an app might need GPS, so it uses the inbuilt GPS functions of the device. This is SOA in
mobile solutions.
4. SOA helps museums maintain a virtualized storage pool for their information and
content.
REST API (Introduction)
Representational State Transfer (REST), or RESTful API, is an architectural style that defines a
set of constraints to be used for creating web services. A REST API is a way of accessing
web services in a simple and flexible way without heavy processing.
REST technology is generally preferred to the more robust Simple Object Access Protocol
(SOAP) technology because REST uses less bandwidth and is simpler and more flexible, making it
more suitable for internet usage. It is used to fetch information from, or send information to, a web
service. All communication via a REST API uses only HTTP requests.

Working
A request is sent from client to server in the form of a web URL as an HTTP GET, POST, PUT
or DELETE. After that, a response comes back from the server in the form of a resource, which can
be anything: HTML, XML, image or JSON. JSON is now the most popular format being
used in web services.

In HTTP there are five methods commonly used in a REST-based architecture: POST, GET,
PUT, PATCH, and DELETE. These correspond to create, read, update (full or partial), and delete
(CRUD) operations respectively. There are other, less frequently used methods such as
OPTIONS and HEAD.

Using HTTP Methods for RESTful Services

REST APIs communicate via HTTP requests to perform standard database functions like
creating, reading, updating, and deleting records (also known as CRUD) within a resource.
For example, a REST API would use a GET request to retrieve a record, a POST request to
create one, a PUT request to update a record, and a DELETE request to delete one. All HTTP
methods can be used in API calls. A well-designed REST API is similar to a website running in
a web browser with built-in HTTP functionality.
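As a sketch of this CRUD mapping, assuming a hypothetical https://api.example.com/users resource and the third-party requests library:

```python
import requests

BASE = "https://api.example.com/users"   # hypothetical resource URL

created = requests.post(BASE, json={"name": "Ada"})             # Create a record
user = requests.get(f"{BASE}/1")                                # Read a record
updated = requests.put(f"{BASE}/1", json={"name": "Ada L."})    # Update a record
deleted = requests.delete(f"{BASE}/1")                          # Delete a record

print(user.status_code, user.json())  # e.g. 200 and a JSON representation
```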

The state of a resource at any particular instant, or timestamp, is known as the resource
representation. This information can be delivered to a client in virtually any format including
JavaScript Object Notation (JSON), HTML, XLT, Python, PHP, or plain text. JSON is popular
because it’s readable by both humans and machines—and it is programming language-
agnostic.

Request headers and parameters are also important in REST API calls because they include
important identifier information such as metadata, authorizations, uniform resource
identifiers (URIs), caching, cookies and more. Request headers and response headers, along
with conventional HTTP status codes, are used within well-designed REST APIs.

NOTE- If an API is RESTful, that simply means that the API adheres to the REST architecture.
RESTful refers to an API adhering to those constraints.

Advantages of REST API

• Scalability. This architectural style stands out due to its scalability. Thanks to the separation
between client and server, a product may be scaled by a development team without
much difficulty.

• Flexibility and portability. As long as the data for each request is properly sent, it is
possible to perform a migration from one server to another or carry out changes on the
database at any time. Front end and back end can therefore be hosted on different servers,
which is a significant management advantage.

• Independence. With the separation between client and server, the style makes it
easy for development across a project to take place independently. In addition,
the REST API adapts at all times to the working syntax and platform. This offers the
opportunity to use multiple environments while developing.

Restless Webservice: It is a web service that does not obey the REST architecture. It uses an
XML document to send and receive messages. It also uses SOAP, which stands for Simple
Object Access Protocol. This service is useful for applications that need to be made secure. It
is easy to build a Restless web service.

Difference between Restless and Restful webservices:

• Architecture: A Restless web service is not based on REST architecture and is integrable with other computer systems on the network; a Restful web service is based on the principles of REST.

• REST principles: A Restless service does not use REST principles; a Restful service uses REST principles.

• Communication: A Restless service uses SOAP services; a Restful service uses REST services.

• Data formats: A Restless service supports only the XML format; a Restful service supports JSON, HTML, etc.

• Functions: Restless services use a service interface to expose business logic; Restful services use URLs to expose business logic.

• Usability and flexibility: Restless services are less usable and flexible for users; Restful services are more usable and flexible.

• Security: Restless services are more secure, as they design their own security layer; Restful services are less secure, as they rely on the security layers of the underlying communication protocols.

• Bandwidth: Restless services use more bandwidth, since their XML/SOAP messages are heavier; Restful services use less bandwidth.

WEB SERVICE
• A web service is any piece of software that makes itself available over the internet
and uses a standardized XML messaging system. XML is used to encode all
communications to a web service. For example, a client invokes a web service by
sending an XML message, then waits for a corresponding XML response. As all
communication is in XML, web services are not tied to any one operating system or
programming language—Java can talk with Perl; Windows applications can talk with
Unix applications.
• Web services are XML-based information exchange systems that use the Internet for
direct application-to-application interaction. These systems can include programs,
objects, messages, or documents.
• A web service is a collection of open protocols and standards used for exchanging
data between applications or systems. Software applications written in various
programming languages and running on various platforms can use web services to
exchange data over computer networks like the Internet in a manner similar to inter-
process communication on a single computer. This interoperability (e.g., between
Java and Python, or Windows and Linux applications) is due to the use of open
standards.
To summarize, a complete web service is, therefore, any service that −
• Is available over the Internet or private (intranet) networks
• Uses a standardized XML messaging system
• Is not tied to any one operating system or programming language
• Is self-describing via a common XML grammar
• Is discoverable via a simple find mechanism

Components of Web Services

The basic web services platform is XML + HTTP. All the standard web services work using the
following components −
• SOAP (Simple Object Access Protocol)
• UDDI (Universal Description, Discovery and Integration)
• WSDL (Web Services Description Language)

How Does a Web Service Work?

A web service enables communication among various applications by using open standards
such as HTML, XML, WSDL, and SOAP. A web service takes the help of −
• XML to tag the data
• SOAP to transfer a message
• WSDL to describe the availability of service.
You can build a Java-based web service on Solaris that is accessible from your Visual Basic
program that runs on Windows.
You can also use C# to build new web services on Windows that can be invoked from your
web application that is based on Java Server Pages (JSP) and runs on Linux.

Example

Consider a simple account-management and order-processing system. The accounting
personnel use a client application built with Visual Basic or JSP to create new accounts and
enter new customer orders.
The processing logic for this system is written in Java and resides on a Solaris machine, which
also interacts with a database to store information.
The steps to perform this operation are as follows −
• The client program bundles the account registration information into a SOAP
message.
• This SOAP message is sent to the web service as the body of an HTTP POST request.
• The web service unpacks the SOAP request and converts it into a command that the
application can understand.
• The application processes the information as required and responds with a new
unique account number for that customer.
• Next, the web service packages the response into another SOAP message, which it
sends back to the client program in response to its HTTP request.
• The client program unpacks the SOAP message to obtain the results of the account
registration process.
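The sketch below shows roughly what the first two steps might look like from the client side: a SOAP envelope posted as the body of an HTTP POST. The envelope structure is simplified, and the body namespace and endpoint URL are invented; real clients usually generate all of this from the service's WSDL.

```python
import requests

# A minimal SOAP 1.1 envelope; the CreateAccount element and its
# namespace are hypothetical, for illustration only.
ENVELOPE = """<?xml version="1.0"?>
<soap:Envelope xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/">
  <soap:Body>
    <CreateAccount xmlns="http://example.com/accounts">
      <CustomerName>Jane Doe</CustomerName>
    </CreateAccount>
  </soap:Body>
</soap:Envelope>"""

# The SOAP message travels as the body of an HTTP POST request (step 2 above).
response = requests.post(
    "https://example.com/accountService",        # hypothetical endpoint
    data=ENVELOPE.encode("utf-8"),
    headers={"Content-Type": "text/xml; charset=utf-8"},
)
print(response.status_code)  # the reply body would carry the response SOAP envelope
```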

Advantages of web service

Exposing the Existing Function on the network

A web service is a unit of managed code that can be remotely invoked using HTTP. That is, it
can be activated using HTTP requests. Web services allow you to expose the functionality of
your existing code over the network. Once it is exposed on the network, other applications
can use the functionality of your program.

Interoperability

Web services allow various applications to talk to each other and share data and services
among themselves. Other applications can also use the web services. For example, a VB or
.NET application can talk to Java web services and vice versa. Web services are used to make
the application platform and technology independent.

Standardized Protocol

Web services use standardized, industry-standard protocols for communication. All
four layers (Service Transport, XML Messaging, Service Description, and Service Discovery)
use well-defined protocols in the web services protocol stack. This standardization of the
protocol stack gives businesses many advantages, such as a wide range of choices, cost
reduction due to competition, and an increase in quality.

Low-Cost Communication

Web services use SOAP over HTTP protocol, so you can use your existing low-cost internet
for implementing web services. This solution is much less costly compared to proprietary
solutions like EDI/B2B. Besides SOAP over HTTP, web services can also be implemented on
other reliable transport mechanisms like FTP.

SOAP
SOAP stands for Simple Object Access Protocol. It is an XML-based protocol for accessing web
services. It is platform independent and language independent. By using SOAP, you will be
able to interact with other programming language applications.
SOAP vs REST

1. Implementation: A REST API has no official standard at all, because REST is an architectural style; a SOAP API has an official standard, because SOAP is a protocol.

2. Internal communication: REST APIs use multiple standards like HTTP, JSON, URL and XML for data communication and transfer; SOAP APIs are largely based on, and use only, HTTP and XML.

3. Description: A REST API uses the Web Application Description Language for describing the functionalities being offered by web services; a SOAP API uses the Web Services Description Language for the same.

4. Security: REST has SSL and HTTPS for security; SOAP has SSL (Secure Socket Layer) and WS-Security, due to which, in cases like bank account passwords, card numbers, etc., SOAP is preferred over REST.

5. Abbreviation: REST stands for Representational State Transfer; SOAP stands for Simple Object Access Protocol.

6. Interchange: REST can make use of SOAP as the underlying protocol for web services, because in the end it is just an architectural pattern; SOAP cannot make use of REST, since SOAP is a protocol and REST is an architectural pattern.

Web services vs API


• Definition: Web services are a type of API which must be accessed through a network connection; a web API is an application interface, implying that one application can communicate with another application in a standardized manner.

• Communication: A web service uses REST, SOAP or XML-RPC for communication; an API can be used for any style of communication.

• Relationship: All web services are APIs, but not all APIs are web services.

• Design: A web service does not have a lightweight design and needs a SOAP convention to send or receive data over the network; a web API has a lightweight architecture, useful for devices with constrained transmission capacity, such as smartphones.

• Protocols: A web service provides support only for the HTTP protocol; a web API provides support for the HTTP/HTTPS protocol (URL request/response headers, and so on).

• Data formats: A web service supports only XML; a web API supports XML and JSON.

• Hosting: Web services can be hosted on IIS; a web API can be hosted on IIS or be self-hosted.

Request Response Model

Request–response, or request–reply, is one of the basic methods computers use to
communicate with each other, in which the first computer sends a request for some data and
the second responds to the request. Usually, there is a series of such interchanges until the
complete message is sent; browsing a web page is an example of request–response
communication. Request–response can be seen as a telephone call, in which someone is
called and they answer the call.
Request–response is a message exchange pattern in which a requestor sends a request
message to a replier system, which receives and processes the request, ultimately returning a
message in response. This is a simple but powerful messaging pattern which allows two
applications to have a two-way conversation with one another over a channel. This pattern is
especially common in client–server architectures.
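A bare-bones sketch of the pattern using a connected socket pair, with both ends in one process purely to show the exchange; the request and response strings are invented.

```python
import socket

client, server = socket.socketpair()  # two connected endpoints

client.sendall(b"GET /time")          # the requestor sends a request message
request = server.recv(1024)           # the replier receives and processes it...
server.sendall(b"200 OK 12:00:00")    # ...and ultimately returns a response
reply = client.recv(1024)

print(request.decode(), "->", reply.decode())
```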
Pub/Sub Model

Definition- Pub/sub is shorthand for publish/subscribe messaging, an asynchronous
communication method in which messages are exchanged between applications without
knowing the identity of the sender or recipient.

Terminology used in Pub/Sub

• Topic – An intermediary channel that maintains a list of subscribers to which it relays
messages received from publishers
• Message – Serialized messages sent to a topic by a publisher, which has no knowledge
of the subscribers
• Publisher – The application that publishes a message to a topic
• Subscriber – An application that registers itself with the desired topic in order to
receive the appropriate messages

Working of Pub/Sub model

• A publish/subscribe architecture is a messaging pattern where the publishers
broadcast messages with no knowledge of the subscribers.

• Similarly, the subscribers 'listen' for messages on the topics/categories that
they are interested in, without any knowledge of who the publishers are.

• The event bus transfers the messages from the publishers to the subscribers.

• Each subscriber receives only a subset of the messages that have been sent by
the publisher: only the message topics or categories it has
subscribed to.
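A minimal in-memory sketch of the pattern described above; the topic names and handlers are invented. Publishers only name a topic, and the bus relays each message to whoever subscribed.

```python
from collections import defaultdict

class EventBus:
    """Toy pub/sub event bus: maps each topic to its subscriber callbacks."""
    def __init__(self):
        self._subscribers = defaultdict(list)

    def subscribe(self, topic, callback):
        self._subscribers[topic].append(callback)

    def publish(self, topic, message):
        # The publisher knows nothing about the receivers, only the topic name.
        for callback in self._subscribers[topic]:
            callback(message)

bus = EventBus()
bus.subscribe("orders", lambda m: print("billing saw:", m))
bus.subscribe("orders", lambda m: print("shipping saw:", m))
bus.publish("orders", {"order_id": 42})   # both subscribers receive it
bus.publish("audit", {"event": "noop"})   # no subscribers: message is simply dropped
```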
Virtualization
Virtualization is a technique for separating a service from the underlying physical
delivery of that service. It is the process of creating a virtual version of something, such as
computer hardware.
Or

Virtualization is the "creation of a virtual (rather than actual) version of something, such as a
server, a desktop, a storage device, an operating system or network resources".

In other words, virtualization is a technique which allows a single physical instance
of a resource or an application to be shared among multiple customers and organizations. It does
so by assigning a logical name to a physical resource and providing a pointer to that physical
resource when demanded.

What is the concept behind the Virtualization?

The creation of a virtual machine over an existing operating system and hardware is known as
hardware virtualization. A virtual machine provides an environment that is logically
separated from the underlying hardware.

The machine on which the virtual machine is to be created is known as the host machine, and
that virtual machine is referred to as a guest machine.

BENEFITS OF VIRTUALIZATION
1. More flexible and efficient allocation of resources.
2. Enhance development productivity.
3. It lowers the cost of IT infrastructure.
4. Remote access and rapid scalability.
5. High availability and disaster recovery.
6. Pay-per-use of the IT infrastructure on demand.
7. Enables running multiple operating systems.

Types of Virtualizations
1. Application Virtualization:
Application virtualization helps a user to have remote access of an application from a
server. The server stores all personal information and other characteristics of the
application but can still run on a local workstation through the internet. Example of this
would be a user who needs to run two different versions of the same software.
Technologies that use application virtualization are hosted applications and packaged
applications.
2. Network Virtualization:
Network Virtualization (NV) refers to abstracting network resources that were traditionally
delivered in hardware to software. NV can combine multiple physical networks to one
virtual, software-based network, or it can divide one physical network into separate,
independent virtual networks. E.g.-logical switches, routers, firewalls, load balancer,
Virtual Private Network (VPN).
3. Desktop Virtualization:
Desktop virtualization allows the users’ OS to be remotely stored on a server in the data
centre. It allows the user to access their desktop virtually, from any location by a different
machine. Users who want specific operating systems other than Windows Server will need
to have a virtual desktop. Main benefits of desktop virtualization are user mobility,
portability, easy management of software installation, updates, and patches.
4. Storage Virtualization:
Storage virtualization is an array of servers that are managed by a virtual storage system.
The servers aren't aware of exactly where their data is stored, and instead function more
like worker bees in a hive. It allows storage from multiple sources to be
managed and utilized as a single repository. Storage virtualization software maintains
smooth operations, consistent performance and a continuous suite of advanced functions
despite changes, breakdowns and differences in the underlying equipment.
5. Server Virtualization:
This is a kind of virtualization in which masking of server resources takes place. Here, the
central server (physical server) is divided into multiple different virtual servers by
changing the identity numbers and processors, so each virtual server can run its own
operating system in an isolated manner, while each sub-server knows the identity of the
central server. It increases performance and reduces operating cost by dividing the main
server's resources into sub-server resources. It is beneficial for virtual
migration, reducing energy consumption, reducing infrastructure cost, etc.
6. Data virtualization:
This is the kind of virtualization in which data is collected from various sources and
managed in a single place, without the interested people, stakeholders and users needing to
know technical details such as how the data is collected, stored and formatted. The data is
arranged logically so that its virtual view can be accessed remotely through various cloud
services. Many big companies provide such services, like Oracle, IBM, AtScale, CData, etc.

Structure of virtualization
Virtualization is achieved through software known as a virtual machine monitor, or
hypervisor. The software is used in two ways, thus forming two different structures of
virtualization, namely bare-metal virtualization and hosted virtualization.

Bare-metal virtualization hypervisors: (TYPE I HYPERVISOR)


• Is deployed as a bare-metal installation (the first thing to be installed on a server as the
operating system will be the hypervisor).
• The hypervisor will communicate directly with the underlying physical server hardware,
manages all hardware resources and support execution of VMs.
• Hardware support is typically more limited, because the hypervisor usually has limited
device drivers built into it.
• Well suited for enterprise data centers, because it usually comes with advanced features
for resource management, high availability and security.
• Bare-metal virtualization hypervisors examples: VMware ESX and ESXi, Microsoft Hyper-V,
Citrix Systems XenServer.

Hosted virtualization hypervisors: (TYPE II HYPERVISOR)


• The software is not installed onto the bare-metal, but instead is loaded on top of an
already live operating system, so it requires you to first install an OS (Host OS).
• The Host OS integrates a hypervisor that is responsible for providing the virtual machines
(VMs) with their virtual platform interface and for managing all context switching
scheduling, etc.
• The hypervisor will invoke drivers or another component of the Host OS as needed.
• On the Host OS you may run Guest VMs, but you can also run native applications
• This approach provides better hardware compatibility than bare-metal virtualization,
because the OS is responsible for the hardware drivers instead of the hypervisor.
• A hosted virtualization hypervisor does not have direct access to hardware and must go
through the OS, which increases resource overhead and can degrade virtual machine (VM)
performance.
• Even so, the latency is minimal, and with today's modern software enhancements the
hypervisor can still perform well.
• Common for desktops, because they allow you to run multiple OSes. These virtualization
hypervisor types are also popular for developers, to maintain application compatibility on
modern OSes.
• Because there are typically many services and applications running on the host OS, the
hypervisor often steals resources from the VMs running on it
• The most popular hosted virtualization hypervisors are: VMware Workstation, Server,
Player and Fusion; Oracle VM VirtualBox; Microsoft Virtual PC; Parallels Desktop.
Xen is a Type 1 or bare-metal hypervisor that allows you to run multiple operating systems
on host computers to enable virtualization. Xen lets you share hardware resources of the
same computer among multiple operating systems concurrently. The multiple operating
systems can be run on the same hardware resources.
A hypervisor, also known as a virtual machine monitor or VMM, is software that creates and
runs virtual machines (VMs). A hypervisor allows one host computer to support multiple
guest VMs by virtually sharing its resources, such as memory and processing.

Full Virtualization
This process was introduced by IBM in the year 1966. It is considered to be the first software
solution for server virtualization. It uses binary translation and a direct approach method.

• In this, the guest OS is fully isolated using the virtual machine from the virtualization
layer and hardware.
• Examples of full virtualization include Microsoft and Parallels systems.
• The virtual machine permits the execution of the instructions in addition to running
the unmodified OS in a completely isolated method.
• It is considered to be less secure in comparison to paravirtualization.
• It uses binary translation as the operational technique.
• It is slower in comparison to paravirtualization in terms of operation.
• It is considered to be portable and compatible in comparison to paravirtualization.
Paravirtualization
It belongs to the section of CPU virtualization that uses hypercalls for operations in order to
handle instructions at compile time.

• Here, the guest OS isn't isolated fully, but is partially isolated from the virtualization
layer and hardware with the help of the virtual machine.
• Examples of paravirtualization include VMware and Xen.
• The virtual machine doesn't implement full isolation of OS.
• It just provides a different API that can be utilized when the OS is subjected to
changes.
• It is considered to be more secure in comparison to full virtualization.
• It uses hypercalls at compile time for operational purposes.
• It is quicker in terms of operation in comparison to full virtualization.
• It is considered comparatively less portable and compatible.
• I/O virtualization (IOV), or input/output virtualization, is technology that uses
software to abstract upper-layer protocols from physical connections or physical
transports. This technique takes a single physical component and presents it to
devices as multiple components.
• CPU virtualization involves a single CPU acting as if it were multiple separate CPUs.
The most common reason for doing this is to run multiple different operating
systems on one machine. CPU virtualization emphasizes performance and runs
directly on the available CPUs whenever possible.

• Memory virtualization allows networked, and therefore distributed, servers to share


a pool of memory to overcome physical memory limitations

Disaster recovery is how a company goes about accessing applications, data, and the
hardware that might be affected during a disaster. Virtualization provides hardware
independence, which means the disaster recovery site does not have to have exactly the same
equipment as the equipment in production. Server provisioning is relevant when a server is
built for the first time. Although data centers do have backup generators, no single data
center can guarantee that it will never be without power, which is why the entire data-center
design must account for disaster recovery.
UNIT- 3
Layered architecture cloud

1. Application layer:

• This layer consists of different cloud services which are used by cloud users.
• These applications provide services to the end user as per their requirements.

2. Platform layer:

• This layer consists of application software and operating system.


• The objective of this layer is to deploy applications directly on the virtual machines.

3. Infrastructure layer:

• It is a virtualization layer where physical resources are partitioned into set of virtual
resources through different virtualization technologies such as Xen, KVM and
VMware.
• This layer is the core of the cloud environment where cloud resources are
dynamically provisioned using different virtualization technologies.

4. Datacenter layer:

• This layer is accountable for managing physical resources such as servers, switches,
routers, power supply, and cooling system etc., in the datacenter of the cloud
environment.
• All the resources are available and managed in datacenters to provide services to the
end user.
• The datacenter consists of physical servers, connected through high-speed devices
such as router and switches.
NIST CLOUD REFERENCE ARCHITECTURE:

Cloud consumer:

• A cloud consumer is the end user who browses or utilizes the services provided by
Cloud Service Providers (CSPs) and sets up service contracts with the cloud provider.
• Cloud consumers use Service-Level Agreements (SLAs) to specify the technical
performance requirements to be fulfilled by a cloud provider.
• SLAs can cover terms concerning the quality of service, security, and remedies for
performance failures.

Cloud auditor:

• A cloud auditor is an entity that can conduct independent assessments of cloud
services, security, performance and information system operations of cloud
implementations.
• The services provided by Cloud Service Providers (CSPs) can be evaluated by
service auditors in terms of privacy impact, security controls, performance, etc.
• There are three major roles of cloud auditor:
1. Security audit
2. Privacy impact audit
3. Performance audit
Cloud service providers:

• A cloud service provider is a group or enterprise that delivers cloud services to cloud
consumers or end users.
• It offers consumers a growing variety of cloud services to purchase.
• There are various categories of cloud-based services:

IaaS providers: In this model, the cloud service provider offers infrastructure
components that would otherwise exist in an on-premises datacenter. These components
consist of servers, networking and storage, as well as the virtualization layer.

SaaS providers: In Software-as-a-Service (SaaS), vendors provide a wide range of
applications and software, such as Human Resources Management (HRM)
software and Customer Relationship Management (CRM) software, all of which the SaaS
vendor hosts and provides as services through the internet.

PaaS providers: In Platform-as-a-Service (PaaS), vendors offer cloud infrastructure
and services that users can access to perform many functions. In PaaS, services and
products are mostly utilized in software development. PaaS providers offer more
services than IaaS providers: they provide the operating system and
middleware, along with the application stack, on top of the underlying infrastructure.

Cloud broker:

• A cloud broker is an organization or unit that manages the performance, use and delivery
of cloud services by enhancing specific capabilities, and offers value-added services to cloud
consumers.
• It combines and integrates various services into one or more new services.
• It provides service arbitrage, which allows flexibility and opportunistic choices.
• There are three major services offered by a cloud broker:
1. Service intermediation
2. Service aggregation
3. Service arbitrage

Cloud carrier:

• The cloud carrier is the intermediary that offers connectivity and transport of cloud
services between cloud service providers and cloud consumers.
• It allows access to the services of the cloud through the Internet, telecommunication
networks, and other access devices.

PUBLIC PRIVATE HYBRID CLOUD


Tenancy:
• Private – single tenancy: only the data of a single organization is stored in the cloud.
• Public – multi-tenancy: the data of multiple organizations is stored in a shared environment.
• Hybrid – the data stored in the public cloud is usually multi-tenant, which means data from multiple organizations is stored in a shared environment, while the data stored in the private cloud is kept private by the organization.

Exposed to the public:
• Private – no: only the organization itself can use the private cloud services.
• Public – yes: anyone can use the public cloud services.
• Hybrid – the services running on the private cloud can be accessed only by the organization's users, while the services running on the public cloud can be accessed by anyone.

Data center location:
• Private – inside the organization's network.
• Public – anywhere on the Internet where the cloud service provider's services are located.
• Hybrid – inside the organization's network for private cloud services, as well as anywhere on the Internet for public cloud services.

Cloud service management:
• Private – the organization must have its own administrators managing its private cloud services.
• Public – the cloud service provider manages the services; the organization merely uses them.
• Hybrid – the organization itself must manage the private cloud, while the public cloud is managed by the CSP.

Hardware components:
• Private – must be provided by the organization itself, which has to buy physical servers to build the private cloud on.
• Public – the CSP provides all the hardware and ensures it is working at all times.
• Hybrid – the organization must provide hardware for the private cloud, while the hardware of the CSP is used for public cloud services.

Expenses:
• Private – can be quite expensive, since the hardware, applications and network have to be provided and managed by the organization itself.
• Public – the CSP has to provide the hardware, set up the applications and provide network accessibility according to the SLA.
• Hybrid – the private cloud services must be provided by the organization, including the hardware, applications and network, while the CSP manages the public cloud services.

What is Public Cloud Computing?


A cloud platform based on the standard cloud computing model, in which a service
provider offers resources, applications and storage to customers over the internet, is
called public cloud computing. The hardware resources in a public cloud are shared
among similar users and accessible over a public network such as the internet. Most
of the applications offered over the internet, such as Software as a Service
(SaaS) offerings like cloud storage and online applications, use the public cloud
computing platform. Budget-conscious startups and SMEs that are not keen on high-level
security features and are looking to save money can opt for public cloud computing.

Advantage of Public Cloud Computing


1. It offers greater scalability
2. Its cost effectiveness helps you save money.
3. It offers reliability which means no single point of failure will interrupt your
service.
4. Services like SaaS, PaaS and IaaS are easily available on the public cloud platform, as it
can be accessed from anywhere through any Internet-enabled device.
5. It is location independent – the services are available wherever the client is
located.

Disadvantage of Public Cloud Computing


1. No control over privacy or security
2. Cannot be used for use of sensitive applications
3. Lacks complete flexibility as the platform depends on the platform provider
4. No stringent protocols regarding data management

What is Private Cloud Computing?


A cloud platform in which a secure cloud-based environment with dedicated storage
and hardware resources is provided to a single organization is called private cloud
computing. The private cloud can be either hosted within the company or
outsourced to a trusted and reliable third-party vendor. It offers the company greater
control over privacy and data security. The resources in the case of a private cloud are not
shared with others, and hence it offers better performance compared to the public
cloud. The additional layers of security allow the company to process confidential data
and sensitive work in the private cloud environment.

Advantage of Private Cloud Computing


1. Offers greater Security and Privacy
2. Offers more control over system configuration as per the company’s need
3. Greater reliability when it comes to performance
4. Enhances the quality of service offered to clients

Disadvantage of Private Cloud


1. Expensive when compared to public cloud
2. Requires IT Expertise

What is Hybrid Cloud Computing?


Hybrid cloud computing allows you to use a combination of both public and private
cloud. This helps companies maximize their efficiency and deliver better
performance to clients. In this model, companies can use the public cloud for transferring
non-confidential data and switch to the private cloud for sensitive data
transfer or hosting of critical applications.

Advantage of Hybrid Cloud Computing


1. It is scalable
2. It is cost efficient
3. Offers better security
4. Offers greater flexibility
Disadvantage of Hybrid Cloud Computing
1. Infrastructure Dependency
2. Possibility of security breach through public cloud

IAAS, PAAS & SAAS:

IaaS: Infrastructure as a Service

This is a virtual equivalent of a traditional data center. Cloud infrastructure
providers use virtualization technology to deliver scalable compute resources such
as servers, networks and storage to their clients. This is beneficial for the clients, as
they don't have to buy personal hardware and manage its components. Instead,
they can deploy their platforms and applications within the provider's virtual
machines, which offer the same technologies and capabilities as a physical data center.

An IaaS provider is responsible for the entire infrastructure, but users have total
control over it. In turn, users are responsible for installing and maintaining apps and
operating systems, as well as for security, runtime, middleware and data.

IaaS users can compare the cost and performance of different providers in order to
choose the best option, as they can access them through a single API.

IaaS Key Features

• Highly scalable resources


• Enterprise-grade infrastructure
• Cost depends on consumption
• Multitenant architecture, i.e., a single piece of hardware serves many users
• The client gets complete control over the infrastructure

IaaS Advantages

• The most flexible and dynamic model


• Cost-effective due to pay-as-you-go pricing
• Easy to use due to the automated deployment of hardware
• Management tasks are virtualized, so employees have more free time for
other tasks

IaaS Disadvantages

• Data security issues due to multitenant architecture


• Vendor outages make customers unable to access their data for a while
• The need for team training to learn how to manage new infrastructure
When to Use IaaS

IaaS can be especially advantageous in some situations:

• If you are a small company or a start-up that has no budget for creating your
own infrastructure
• If you are a rapidly growing company and your demands are unstable and
changeable
• If you are a large company that wants to have effective control over
infrastructure but pay only for the resources you actually use

Examples of IaaS

The best-known IaaS solutions vendors are Microsoft Azure, Google Compute
Engine (GCE), Amazon Web Services (AWS), Cisco Metapod, DigitalOcean, Linode
and Rackspace.

PaaS: Platform as a Service

PaaS in cloud computing is a framework for software creation delivered over the
internet. It offers a platform with built-in software components and
tools, with which developers can create, customize, test and launch applications.
PaaS vendors manage servers, operating system updates, security patches and
backups. Clients focus on app development and data without worrying about
infrastructure, middleware and OS maintenance.

The main difference between IaaS and PaaS lies in the degree of control given to
users.

PaaS Key Features

• Allows for developing, testing and hosting apps in the same environment
• Resources can be scaled up and down depending on business needs
• Multiple users can access the same app in development
• The user doesn’t have complete control over the infrastructure
• Web services and databases are integrated
• Remote teams can collaborate easily

PaaS Advantages

• PaaS-built software is highly scalable, available and multi-tenant, as it is cloud-based
• The development process is quickened and simplified
• Reduced expenses for creating, testing and launching apps
• Company policy enforcement can be automated
• Reduced amount of coding required
• Allows for easy migrating to the hybrid cloud
PaaS Disadvantages

• Data security issues


• Compatibility of existing infrastructure (not every element can be cloud-
enabled)
• Dependency on vendor’s speed, reliability and support

When to Use PaaS

Such solutions are especially profitable to developers who want to spend more time
coding, testing and deploying their applications. Utilizing PaaS is beneficial when:

• Multiple developers work on one project


• Other vendors must be included
• You want to create your own customized apps

Examples of PaaS

The best-known PaaS solutions vendors are Google App Engine, Amazon AWS,
Windows Azure Cloud Services, Heroku, AWS Elastic Beanstalk, Apache Stratos and
OpenShift.

SaaS: Software as a Service

With this offering, users get access to the vendor’s cloud-based software. Users
don’t have to download and install SaaS applications on local devices, but
sometimes they may need plugins.
SaaS software resides on a remote cloud network and can be accessed through the
web or APIs. Using such apps, customers can collaborate on projects, as well as
store and analyze data.

SaaS is the most common category of cloud computing. The SaaS provider manages
everything from hardware stability to app functioning. Clients are not responsible
for anything in this model; they only use programs to complete their tasks.

SaaS Key Features

• Subscription-based model of use


• No need to download, install or upgrade software
• Resources can be scaled depending on requirements
• Apps are accessible from any connected device
• The provider is responsible for everything
SaaS Advantages

• No hardware costs
• No initial setup costs
• Automated upgrades
• Accessible from any location
• Pay-as-you-go model
• Scalability
• Easy customization
SaaS Disadvantages

• Loss of control
• Limited range of solutions
• Connectivity is a must

When to Use SaaS

Utilizing SaaS is most beneficial in the following situations:

• If your company needs to launch ready-made software quickly


• For short-term projects that require collaboration
• If you use applications on a temporary basis
• For applications that need both web and mobile access

Examples of SaaS

The best-known SaaS solutions vendors are Google Apps, Dropbox, Gmail, Salesforce, Cisco
WebEx, Concur, GoToMeeting, Office365.

ARCHITECTURAL DESIGN CHALLENGES:


Here are six common challenges you must consider before implementing cloud
computing technology.

1. Cost
Cloud computing itself is affordable, but tuning the platform according to the
company’s needs can be expensive. Furthermore, the expense of transferring the
data to public clouds can prove to be a problem for short-lived and small-scale
projects.
Companies can save some money on system maintenance, management, and
acquisitions. But they also have to invest in additional bandwidth, and the absence
of routine control in an infinitely scalable computing platform can increase costs.
2. Service Provider Reliability
The capacity and capability of a technical service provider are as important as price.
The service provider must be available when you need them. The main concern
should be the service provider’s sustainability and reputation. Make sure you
understand how a provider monitors its services and backs up its
reliability claims.

3. Downtime
Downtime is a significant shortcoming of cloud technology. No vendor can promise a
platform that is free of possible downtime. Cloud technology makes small
companies reliant on their connectivity, so companies with an unreliable
internet connection should think twice before adopting cloud computing.

4. Password Security
Diligent password management plays a vital role in cloud security. However, the
more people you have accessing your cloud account, the less secure it is. Anybody
aware of your passwords will be able to access the information you store there.
Businesses should employ multi-factor authentication and make sure that
passwords are protected and changed regularly, particularly when staff members
leave. Access rights related to passwords and usernames should only be allocated to
those who require them.

5. Data privacy
Sensitive and personal information that is kept in the cloud should be defined as
being for internal use only, not to be shared with third parties. Businesses must
have a plan to securely and efficiently manage the data they gather.

6. Vendor lock-in
Entering a cloud computing agreement is easier than leaving it. “Vendor lock-in”
happens when changing providers is either excessively expensive or simply not possible.
It could be that the service is nonstandard or that there is no viable vendor
substitute.
It comes down to buyer diligence. Ensure that the services you adopt are standard
and portable to other providers, and above all, understand the contract requirements.
Cloud computing is a good solution for many businesses, but it’s important to know what
you’re getting into.

CLOUD STORAGE
Cloud storage is a cloud computing model that stores data on the Internet through a cloud
computing provider who manages and operates data storage as a service. It’s delivered on
demand with just-in-time capacity and costs, and eliminates buying and managing your own
data storage infrastructure. This gives you agility, global scale and durability, with “anytime,
anywhere” data access.
How Does Cloud Storage Work?

Cloud storage is purchased from a third-party cloud vendor who owns and operates data
storage capacity and delivers it over the Internet in a pay-as-you-go model. These cloud
storage vendors manage capacity, security and durability to make data accessible to your
applications all around the world.
Applications access cloud storage through traditional storage protocols or directly via an
API. Many vendors offer complementary services designed to help collect, manage, secure
and analyze data at massive scale.
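As a hedged sketch of such API-based access, the snippet below stores and retrieves an object with the AWS SDK for Python (boto3); the bucket and key names are placeholders:

    import boto3

    s3 = boto3.client("s3")

    # Upload ("put") an object into cloud storage over the Internet.
    s3.put_object(
        Bucket="example-bucket",
        Key="backups/report.txt",
        Body=b"quarterly backup data",
    )

    # Retrieve ("get") it back on demand, from anywhere.
    obj = s3.get_object(Bucket="example-bucket", Key="backups/report.txt")
    print(obj["Body"].read())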

Storage as a service
Storage as a service (STaaS) is a cloud business model in which a company leases or rents
its storage infrastructure to another company or to individuals to store data.
Small companies and individuals often find this to be a convenient methodology for
managing backups, and providing cost savings in personnel, hardware and physical
space.

Advantages of Storage as a Service

1. Cost – factually speaking, backing up data isn’t always cheap, especially when you
take the cost of equipment into account. Additionally, there is the cost of the
time it takes to manually complete routine backups. Storage as a service
reduces much of the cost associated with traditional backup methods,
providing ample storage space in the cloud for a low monthly fee.
2. Invisibility – Storage as a service is invisible, as no physical presence of it is
seen in its deployment and so it doesn’t take up valuable office space.
3. Security – In this service type, data is encrypted both during transmission and
while at rest, ensuring no unauthorized user access to files.
4. Automation – Storage as a service makes the tedious process of backing up
easy to accomplish through automation. Users can simply select what and
when they want to backup, and the service does all the rest.
5. Accessibility – By going for storage as a service, users can access data from
smart phones, netbooks to desktops and so on.
6. Syncing – Syncing ensures your files are automatically updated across all of
your devices. This way, the latest version of a file a user saved on their
desktop is available on your smart phone.
7. Sharing – Online storage services allow the users to easily share data with just
a few clicks
8. Collaboration – Cloud storage services are also ideal for collaboration
purposes. They allow multiple people to edit and collaborate on a single file or
document. Thus, with this feature users need not worry about tracking the
latest version or who has made what changes.
9. Data Protection – By storing data on cloud storage services, data is well
protected against all kinds of catastrophes, such as floods, earthquakes and human
error.
10. Disaster Recovery – as said earlier, data stored in the cloud is not only protected
from catastrophes by having copies kept at several places, but those copies can also
support disaster recovery to ensure business continuity.

Disadvantages of Cloud Storage

1. Internet Connection
Cloud based storage is dependent on having an internet connection. If you are on a
slow network, you may have issues accessing your storage. In the event you find
yourself somewhere without internet, you won't be able to access your files.
2. Costs
There are additional costs for uploading and downloading files from the cloud. These
can quickly add up if you are trying to access lots of files often.
3. Hard Drives
Cloud storage is supposed to eliminate our dependency on hard drives, right? Well,
some business cloud storage providers require physical hard drives as well.
4. Support
Support for cloud storage isn't the best, especially if you are using a free version of a
cloud provider. Many providers refer you to a knowledge base or FAQs.
5. Privacy
When you use a cloud provider, your data is no longer on your physical storage. So,
who is responsible for making sure that data is secure? That's a gray area that is still
being figured out.

There are three types of cloud data storage:


Object Storage - Object storage, also known as object-based storage, is a flat structure in
which files are broken into pieces and spread out among hardware. In object storage, the
data is broken into discrete units called objects and is kept in a single repository, instead of
being kept as files in folders or as blocks on servers.

File Storage - File storage, also called file-level or file-based storage, is exactly what you
think it might be: Data is stored as a single piece of information inside a folder, just like
you’d organize pieces of paper inside a manila folder. When you need to access that piece of
data, your computer needs to know the path to find it.

Block Storage - Block storage chops data into blocks—get it? —and stores them as separate
pieces. Each block of data is given a unique identifier, which allows a storage system to
place the smaller pieces of data wherever is most convenient. That means that some data
can be stored in a Linux environment and some can be stored in a Windows unit.
There are four deployment types of cloud storage:
Private Cloud Storage - Just as the name suggests, this type of cloud storage means that
the infrastructure is used by a single person or company, hence a high level of privacy
and security.

Public Cloud Storage - In public cloud storage, the data and files of an individual are located
within the premises of the company offering the cloud storage services, which means that as
a client, you have no control over the nature of the storage infrastructure. Also, the
infrastructure is shared, and so are the resources used, which means that data
hijacking and other attacks may be experienced.

Hybrid Storage - In hybrid cloud storage, both private and public cloud
storage infrastructures are combined, with each storing different data. For
instance, banks could keep customer-facing data in the public cloud, where clients
hold accounts that let them interact with banking services, while
confidential account details are stored in the private cloud.

Community cloud storage - This is a cloud infrastructure whereby data can be
accessed by different departments or organizations, such as different branches of a
company located in different towns.
UNIT – 4

Inter cloud
Intercloud, or the ‘cloud of clouds’, is an interconnected network of clouds. It is a
theoretical model that combines individual clouds into a seamless mass network.
Intercloud enables a cloud to take advantage of pre-existing resources available from other
cloud providers.
It simply ensures that a cloud can draw on resources beyond its own limits, for example for
fast inter-cloud file transfer.

TYPES OF INTER CLOUD RESOURCE MANAGEMENT

Federation Clouds: A Federation cloud is an Inter-Cloud where a set of cloud providers
willingly interconnect their cloud infrastructures in order to share resources among each
other. The cloud providers in the federation voluntarily collaborate to exchange
resources. This type of Inter-Cloud is suitable for collaboration between governmental clouds
(clouds owned and utilized by non-profit institutions or governments) or private cloud
portfolios (where each cloud is part of a portfolio of clouds belonging to the same
organization).
Types of federation clouds are Peer to Peer and Centralized clouds.
• Centralised – In every architecture in this group, there is a central entity
that manages resource allocation. Usually, this central entity acts as a
repository where available cloud resources are registered, but it may also have other
responsibilities, such as acting as a marketplace for resources.

• Peer-to-Peer – In the architectures from this group, clouds communicate and
negotiate directly with each other without mediators.

Multi-Cloud: In a Multi-Cloud, a client or service uses multiple independent clouds. A
multi-cloud environment has no voluntary interconnection and sharing of the cloud service
providers’ infrastructures. Managing resource provisioning and scheduling is the
responsibility of the client or their representatives. This approach is used to utilize resources
from both governmental clouds and private cloud portfolios.
Types of Multi-cloud are Services and Libraries.
• Services – application provisioning is carried out by a service that can be hosted
either externally or in-house by the cloud clients. Most such services include broker
components in themselves.

• Libraries – Inter-Cloud libraries that facilitate the usage of multiple clouds in a
uniform way.
Resource finding, allocation and provisioning in a distributed environment is one of the
major challenges faced by Cloud Federation architectures. From the user’s point of view,
network monitoring, security and privacy are important issues.

RESOURCE PROVISIONING
Resource Provisioning means the selection, deployment, and run-time management of
software (e.g., database server management systems, load balancers) and hardware
resources (e.g., CPU, storage, and network) for ensuring guaranteed performance for
applications.
This resource provisioning takes Service Level Agreement (SLA) into consideration for
providing service to the cloud users. This is an initial agreement between the cloud users
and cloud service providers which ensures Quality of Service (QoS) parameters like
performance, availability, reliability, response time, etc. Based on the application's needs,
static or dynamic provisioning and static or dynamic allocation of resources
have to be chosen in order to make efficient use of the resources without violating the SLA
while meeting these QoS parameters.

RESOURCE PROVISIONING TYPES

Static Provisioning: For applications that have predictable and generally unchanging
demands/workloads, it is possible to use “static provisioning” effectively. With static
(advance) provisioning, the customer contracts with the provider for services and the
provider prepares the appropriate resources in advance of the start of service. The customer
is charged a flat fee or is billed on a monthly basis.

Dynamic Provisioning: In cases where demand by applications may change or vary,
“dynamic provisioning” techniques have been suggested. With dynamic provisioning,
the provider allocates more resources as they are needed and removes them when they
are not. The customer is billed on a pay-per-use basis.
When dynamic provisioning is used to create a hybrid cloud, it is sometimes referred to
as cloud bursting.
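The decision logic behind dynamic provisioning can be sketched as a simple threshold loop. Everything below (the thresholds, the monitoring call and the provider handle) is a hypothetical illustration, not a real SDK:

    import time

    SCALE_UP_THRESHOLD = 0.80    # add a VM above 80% average CPU
    SCALE_DOWN_THRESHOLD = 0.30  # release a VM below 30% average CPU

    def autoscale(cloud, check_interval=60):
        # 'cloud' is a hypothetical provider handle; real providers expose
        # equivalent monitoring and provisioning calls in their SDKs.
        while True:
            load = cloud.average_cpu_utilization()
            if load > SCALE_UP_THRESHOLD:
                cloud.provision_vm()      # allocate more resources as needed
            elif load < SCALE_DOWN_THRESHOLD and cloud.vm_count() > 1:
                cloud.release_vm()        # remove them when they are not
            time.sleep(check_interval)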

User Self-provisioning: With user self- provisioning (also known as cloud self- service),
the customer purchases resources from the cloud provider through a web form, creating
a customer account and paying for resources with a credit card. The provider's
resources are available for customer use within hours, if not minutes.

Parameters for Resource Provisioning

Response time: The resource provisioning algorithm designed must take minimal
time to respond when executing the task.
Minimize Cost: From the Cloud user point of view cost should be minimized.
Revenue Maximization: This is to be achieved from the Cloud Service Provider’s
view.
Fault tolerant: The algorithm should continue to provide service in spite of failure of
nodes.
Reduced SLA Violation: The algorithm designed must be able to reduce SLA violation.
Reduced Power Consumption: VM placement & migration techniques must lower
power consumption.

Security Risk in cloud computing

Here are some of the most common cloud computing security risks

Distributed-Denial-of-Service Attacks

When cloud computing first became popular, Distributed Denial-of-Service (DDoS) attacks
against cloud platforms were largely unthinkable. A DDoS attack is a malicious attempt to
disrupt the normal traffic of a targeted cloud service by flooding the target or its
surrounding infrastructure with Internet traffic, so that the service either goes down
entirely or experiences difficulties.

Shared Cloud Computing Services

Many cloud solutions do not provide the necessary security between clients, leading to shared
resources, applications, and systems. In this situation, threats can originate from other clients
with the cloud computing service, and threats targeting one client could also have an impact
on other clients.

Employee Negligence
Employee negligence and employee mistakes remain one of the biggest security issues for all
systems, but the threat is particularly dangerous with cloud solutions. Modern employees
may log into cloud solutions from their mobile phones, home tablets, and home desktop PCs,
potentially leaving the system vulnerable to many outside threats.

Data Loss and Inadequate Data Backups

Inadequate data backups and improper data syncing is what has made many businesses
vulnerable to ransomware, a specific type of cloud security threat. Ransomware "locks" away
a company's data in encrypted files, only allowing them to access the data once a ransom has
been paid. With appropriate data backup solutions, companies need no longer fall prey to
these threats.

Phishing and Social Engineering Attacks

Due to the openness of a cloud computing system, phishing and social engineering attacks
have become particularly common. Once login information or other confidential information
is acquired, a malicious user can potentially break into a system easily -- as the system itself
is available from anywhere. Employees must be knowledgeable about phishing and social
engineering enough to avoid these types of attacks.

System Vulnerabilities

Cloud computing systems can still contain system vulnerabilities, especially in networks that
have complex infrastructures and multiple third-party platforms. Once a vulnerability
becomes known with a popular third-party system, this vulnerability can be easily used
against organizations. Proper patching and upgrade protocols -- in addition to network
monitoring solutions -- are critical for fighting this threat.

Account Hijacking
With the increase in adoption of cloud services, organizations have reported an increased
occurrence of account hijacking. Such attacks involve using employee’s login information to
access sensitive information. Attackers can also modify, insert false information and
manipulate the data present in the cloud. They also use scripting bugs or reused passwords
to steal credentials without being detected.

Account hijacking could have a detrimental effect at the enterprise level, undermining the
firm’s integrity and reputation. This could also have legal implications in industries such as
healthcare where patients’ personal medical records are compromised. A robust IAM
(Identity Access Management) system can prevent unauthorized access and damage to the
organization’s data assets.

Insider Threat
An Insider threat is the misuse of information through hostile intent, malware, and even
accidents. Insider threats originate from employees or system administrators, who can
access confidential information they can also access even more critical systems and
eventually data.

When the relationship between the employer and system administrator turns sour, they
may resort to leaking privileged information.
There can be several instances of insider threat such as a Salesperson who jumps ship or a
rogue admin. In scenarios where the cloud service provider is responsible for security, the
risk from insider threat is often greater.

(Figure: schematic diagram of various cloud security challenges)

Various security challenges related to these deployment models are discussed below:

Cloning and Resource Pooling: Cloning deals with replicating or duplicating data.
Cloning can lead to data leakage problems that compromise a machine’s authenticity.
Resource pooling is a service offered by the provider that lets users share various resources
according to their application demand. Resource pooling carries the risk of unauthorized
access, because resources are shared over the same network.
Motility of Data and Data Residuals: For the best use of resources, data often has to be
moved to cloud infrastructure. As a result, the enterprise may not know the exact location
where its data is kept in the cloud. This is especially true of the public cloud. When data is
moved, residuals of data may be left behind, which can be accessed by unauthorized users.
Elastic Perimeter: A cloud infrastructure, particularly one comprising a private cloud, creates
an elastic perimeter. Various departments and users throughout the organization share
different resources to increase ease of access, but this can unfortunately lead to data breach
problems.
Shared Multi-tenant Environment: Multi-tenancy is one of the vital attributes of cloud
computing, allowing multiple users to run their distinct applications concurrently on the same
physical infrastructure while hiding each user's data from the others. However, the shared
multi-tenant character of the public cloud adds security risks, such as illegal access to data by
another tenant using the same hardware.
Unencrypted Data: Data encryption is a process that helps address various external and
malicious threats. Unencrypted data is vulnerable because no security mechanism protects
it; it can easily be accessed by unauthorized users.
Authentication and Identity Management: With the help of the cloud, a user can access
private data and make it available to various services across the network. Identity
management helps authenticate users through their credentials.

Various security challenges with the service models are discussed below:
Data Leakage and Consequent Problems: Data deletion or alteration without backup leads to
drastic data-related problems such as breaches of security, integrity, locality and segregation.
This can lead to sensitive data being accessed by unauthorized users.

Malicious Attacks: The threat of malicious attackers is heightened for customers of cloud
services by the lack of transparency in the service providers' procedures and processes.
Malicious users may gain access to confidential data, leading to data breaches.

Backup and Storage: The cloud vendor must ensure that regular backups of data are
implemented with all security measures in place. However, backup data is generally stored in
unencrypted form, which can lead to misuse of the data by unauthorized parties. Thus, data
backups introduce various security threats.

Shared Technological Issues: IaaS vendors deliver their services in a scalable way by sharing
infrastructure. However, this structure does not offer strong isolation properties for a
multi-tenant architecture. To address this gap, a virtualization hypervisor mediates access
between guest operating systems and the physical compute resources.
Service Hijacking: Service hijacking means gaining illegal control over certain authorized
services, and is carried out by unauthorized users. It involves techniques such as phishing,
software exploitation and fraud. It is considered one of the top threats.

VM Hopping: In VM hopping, an attacker on one VM gains the right to use another, victim
VM. The attacker can monitor the victim VM’s resource usage, alter its configuration and
even delete stored data, putting the VM’s confidentiality, integrity and availability in danger.
A requirement for this attack is that the two VMs run on the same host and that the attacker
knows the victim VM’s IP address. PaaS and IaaS users, who have only partial authority over
the underlying host, are particularly exposed to this attack.

Network issues

Browser Security: Every client uses a browser to send information over the network. The
browser uses SSL technology to encrypt the user’s identity and credentials. However, hackers
on an intermediary host may acquire these credentials by using packet-sniffing tools installed
on that host.

SQL Injection Attack: These attacks are a malicious act against cloud computing in which
spiteful code is inserted into standard SQL code. This allows the invader to gain unauthorized
access to a database and, eventually, to other confidential information.
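A small sketch of the vulnerability and its standard fix, using Python's built-in sqlite3 module (the table and data are hypothetical):

    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
    conn.execute("INSERT INTO users VALUES ('alice', 'admin')")

    user_input = "' OR '1'='1"  # attacker-controlled value

    # VULNERABLE: user input is concatenated into the SQL string, so the
    # injected condition matches every row in the table.
    rows = conn.execute(
        "SELECT * FROM users WHERE name = '" + user_input + "'"
    ).fetchall()
    print(len(rows))  # 1 row leaked despite no matching name

    # SAFE: a parameterized query treats the input as data, not as SQL.
    rows = conn.execute(
        "SELECT * FROM users WHERE name = ?", (user_input,)
    ).fetchall()
    print(len(rows))  # 0 rows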

Flooding Attacks: In this attack the invader rapidly sends requests for resources to the cloud,
so that the cloud gets flooded with requests.

XML Signature Element Wrapping: This is a well-known web service attack. An XML signature
protects the identity value and host name from an illegitimate party, but it cannot protect the
position of the signed element in the document. The attacker targets the host computer by
sending SOAP messages containing scrambled data that the user of the host computer cannot
understand.

Incomplete Data Deletion: Incomplete data deletion is treated as hazardous in cloud
computing. When data is deleted, the replicated data placed on dedicated backup servers is
not removed. The operating system of such a server will not delete the data unless it is
specifically commanded to by the network service provider. Complete data deletion is largely
impossible because copies of the data are kept in replicas that are not available for use.

Lock-in: Lock-in prevents the customer from moving from one cloud provider to another or
from transferring services back to the in-house IT environment.

Cloud security governance


Cloud security governance refers to the management model that facilitates effective and
efficient security management and operations in the cloud environment so that an
enterprise’s business targets are achieved. This model contains a hierarchy of executive
mandates, performance expectations, operational practices, structures, and metrics that,
when implemented, result in the optimization of business value for an enterprise.

Key Objectives for Cloud Security Governance

1. Strategic Alignment
Enterprises should mandate that security investments, services, and projects in the
cloud are executed to achieve established business goals (e.g., market
competitiveness, financial, or operational performance).

2. Value Delivery
Enterprises should define, operationalize, and maintain an appropriate security
function/organization with appropriate strategic and tactical representation, and
charged with the responsibility to maximize the business value.

3. Risk Mitigation
Security initiatives in the cloud should be subject to measurements that gauge
effectiveness in mitigating risk to the enterprise (Key Risk Indicators). These
initiatives should also yield results that progressively demonstrate a reduction in
these risks over time.

4. Effective Use of Resources
It is important for enterprises to establish a practical operating model for managing
and performing security operations in the cloud, including the proper definition and
operationalization of due processes, the institution of appropriate roles and
responsibilities, and use of relevant tools for overall efficiency and effectiveness.

5. Sustained Performance
Security initiatives in the cloud should be measurable in terms of performance, value
and risk to the enterprise (Key Performance Indicators, Key Risk Indicators), and yield
results that demonstrate attainment of desired targets (Key Goal Indicators) over
time.

In a nutshell, cloud governance is a carefully designed set of rules and protocols put in place
by businesses that operate in a cloud environment to enhance data security, manage risks,
and keep things running smoothly.

IAM (Identity and access management)


Identity and access management (IAM) in the enterprise is about defining and managing the
roles and access privileges of individual network users and the circumstances in which users
are granted (or denied) those privileges. Those users might be customers (customer identity
management) or employees (employee identity management). The core objective of IAM
systems is one digital identity per individual. Once that digital identity has been established,
it must be maintained, modified and monitored throughout each user’s “access lifecycle.”
Thus, the goal of identity management is to “grant access to the right enterprise assets to
the right users in the right context, from a user’s system onboarding to permission
authorizations to the offboarding of that user as needed in a timely fashion.”
IAM systems provide administrators with the tools and technologies to change a user’s role,
track user activities and create reports on those activities.
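As one concrete example, AWS exposes IAM's access-lifecycle operations through its SDK; in this sketch the user name is a placeholder and AWS's managed ReadOnlyAccess policy stands in for a least-privilege policy:

    import boto3

    iam = boto3.client("iam")

    # Onboarding: create a digital identity for a new employee.
    iam.create_user(UserName="jane.doe")

    # Grant access by attaching a managed policy.
    iam.attach_user_policy(
        UserName="jane.doe",
        PolicyArn="arn:aws:iam::aws:policy/ReadOnlyAccess",
    )

    # Offboarding: detach the policy and delete the user when access ends.
    iam.detach_user_policy(
        UserName="jane.doe",
        PolicyArn="arn:aws:iam::aws:policy/ReadOnlyAccess",
    )
    iam.delete_user(UserName="jane.doe")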
Benefits of IAM

• Access privileges are granted according to policy, and all individuals and services are
properly authenticated, authorized and audited.
• Companies that properly manage identities have greater control of user access,
which reduces the risk of internal and external data breaches.
• Automating IAM systems allows businesses to operate more efficiently by decreasing
the effort, time and money that would be required to manually manage access to
their networks.
• In terms of security, the use of an IAM framework can make it easier to enforce
policies around user authentication, validation and privileges, and address issues
regarding privilege creep.
• IAM systems help companies better comply with government regulations by allowing
them to show corporate information is not being misused.

Challenges of IAM

1. Password management burden
2. Increased complexity
3. Distributed applications and environments

Cloud security standards govern cloud usage in accordance with industry guidelines and
local, national, and international laws.

Three parameters of security:


• Confidentiality.
• Access controllability.
• Integrity.
UNIT – 5

Hadoop
Hadoop is an open-source framework from Apache used to store, process and analyze data
that is very huge in volume. Hadoop is written in Java and is not OLAP (online analytical
processing); it is used for batch/offline processing. It is used by Facebook, Yahoo, Google,
Twitter, LinkedIn and many more. Moreover, it can be scaled up just by adding nodes to the
cluster.

Why is Hadoop important? (FEATURES OF HADOOP / ADVANTAGES OF HADOOP)

Ability to store and process huge amounts of any kind of data, quickly. With
data volumes and varieties constantly increasing, especially from sources such as
social media and the Internet of Things, this is a key consideration.
the more processing power you have.
Fault tolerance. Data and application processing are protected against hardware
failure. If a node goes down, jobs are automatically redirected to other nodes to
make sure the distributed computing does not fail. Multiple copies of all data are
stored automatically.
Flexibility. Unlike traditional relational databases, you don’t have to pre-process
data before storing it. You can store as much data as you want and decide how
to use it later. That includes unstructured data like text, images and videos.
Low cost. The open-source framework is free and uses commodity hardware to
store large quantities of data.
Scalability. You can easily grow your system to handle more data simply by adding nodes.
Little administration is required.

Modules of Hadoop

1. HDFS: Hadoop Distributed File System. Files are broken into blocks and stored on
nodes across the distributed architecture.
2. Yarn: Yet Another Resource Negotiator is used for job scheduling and managing the
cluster.
3. Map Reduce: This is a framework which helps Java programs to do parallel
computation on data using key-value pairs. The Map task takes input data and converts
it into a data set which can be computed over as key-value pairs. The output of the Map
task is consumed by the Reduce task, and the output of the reducer gives the desired result.
4. Hadoop Common: These Java libraries are used to start Hadoop and are used by other
Hadoop modules.

Hadoop Architecture

The Hadoop architecture is a package of the file system, MapReduce engine and the HDFS
(Hadoop Distributed File System). The MapReduce engine can be MapReduce/MR1 or
YARN/MR2.

A Hadoop cluster consists of a single master and multiple slave nodes. The master node
includes the NameNode and JobTracker, whereas each slave node includes a DataNode and
a TaskTracker.
Hadoop Distributed File System
The Hadoop Distributed File System (HDFS) is a distributed file system for Hadoop. It follows
a master/slave architecture, consisting of a single NameNode performing the role of master
and multiple DataNodes performing the role of slaves.

Both NameNode and DataNode are capable enough to run on commodity machines. The Java
language is used to develop HDFS. So, any machine that supports Java language can easily run
the NameNode and DataNode software.

NameNode
o It is a single master server exist in the HDFS cluster.
o As it is a single node, it may become a single point of failure.
o It manages the file system namespace by executing an operation like the opening,
renaming and closing the files.
o It simplifies the architecture of the system.

DataNode
o The HDFS cluster contains multiple DataNodes.
o Each DataNode contains multiple data blocks.
o These data blocks are used to store data.
o It is the responsibility of the DataNode to serve read and write requests from the file
system's clients.
o It performs block creation, deletion, and replication upon instruction from the
NameNode.

Job Tracker
o The role of Job Tracker is to accept the MapReduce jobs from client and process the
data by using NameNode.
o In response, NameNode provides metadata to Job Tracker.

Task Tracker
o It works as a slave node for Job Tracker.
o It receives tasks and code from the Job Tracker and applies that code to the file. This
process can also be called a Mapper.

What is MapReduce?
MapReduce is a processing technique and a program model for distributed
computing based on java. The MapReduce algorithm contains two important
tasks, namely Map and Reduce. Map takes a set of data and converts it into
another set of data, where individual elements are broken down into tuples
(key/value pairs). Secondly, reduce task, which takes the output from a map as
an input and combines those data tuples into a smaller set of tuples. As the
sequence of the name MapReduce implies, the reduce task is always performed
after the map job.
The major advantage of MapReduce is that it is easy to scale data processing
over multiple computing nodes. Under the MapReduce model, the data
processing primitives are called mappers and reducers.

The Algorithm
• Generally, the MapReduce paradigm is based on sending the computation to where
the data resides.
• MapReduce program executes in three stages, namely map stage,
shuffle stage, and reduce stage.
• Map stage − The map or mapper’s job is to process the input data.
Generally, the input data is in the form of file or directory and is stored
in the Hadoop file system (HDFS). The input file is passed to the mapper
function line by line. The mapper processes the data and creates several
small chunks of data.
• Reduce stage − This stage is the combination of the Shuffle stage and
the Reduce stage. The Reducer’s job is to process the data that comes
from the mapper. After processing, it produces a new set of output,
which will be stored in the HDFS.
• During a MapReduce job, Hadoop sends the Map and Reduce tasks to
the appropriate servers in the cluster.
• The framework manages all the details of data-passing such as issuing
tasks, verifying task completion, and copying data around the cluster
between the nodes.
• Most of the computing takes place on nodes with data on local disks that
reduces the network traffic.
• After completion of the given tasks, the cluster collects and reduces the
data to form an appropriate result, and sends it back to the Hadoop
server.
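The classic word-count job illustrates the model. The sketch below assumes Hadoop's Streaming interface, which pipes data through standalone scripts via standard input and output; Streaming sorts the mapper output by key before the reducer runs:

    # mapper.py - emits one (word, 1) pair per word on standard output
    import sys

    for line in sys.stdin:
        for word in line.strip().split():
            print(f"{word}\t1")

    # reducer.py - sums the counts for each word (input arrives sorted by key)
    import sys

    current_word, current_count = None, 0
    for line in sys.stdin:
        word, count = line.rstrip("\n").rsplit("\t", 1)
        if word == current_word:
            current_count += int(count)
        else:
            if current_word is not None:
                print(f"{current_word}\t{current_count}")
            current_word, current_count = word, int(count)
    if current_word is not None:
        print(f"{current_word}\t{current_count}")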
Google App Engine

• Google App Engine is a Platform as a Service (PaaS) product that provides
Web app developers and enterprises with access to Google's scalable
hosting and tier 1 Internet service.
• The App Engine requires that apps be written in Java or Python, store
data in Google BigTable and use the Google query language. Non-
compliant applications require modification to use App Engine.
• Google App Engine provides more infrastructure than other scalable
hosting services such as Amazon Elastic Compute Cloud (EC2). The App
Engine also eliminates some system administration and developmental
tasks to make it easier to write scalable applications.
• Google App Engine is free up to a certain amount of resource usage.
Users exceeding the per-day or per-minute usage rates for CPU
resources, storage, number of API calls or requests and concurrent
requests can pay for more of these resources.
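For illustration, here is a minimal Python web app of the kind App Engine's standard environment can host; this sketch assumes the Flask microframework, and the app.yaml descriptor that App Engine also requires is omitted:

    # main.py - minimal web app deployable to App Engine's Python runtime
    from flask import Flask

    app = Flask(__name__)

    @app.route("/")
    def hello():
        # App Engine routes incoming HTTP requests to this handler.
        return "Hello from Google App Engine!"

    if __name__ == "__main__":
        # Local development server; App Engine uses its own entrypoint.
        app.run(host="127.0.0.1", port=8080)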

Benefits of GAE
• All Time Availability
• Ensure Faster Time to Market
• Easy to Use Platform
• Increased Scalability
• Improved Savings
• Smart Pricing

Disadvantages of GAE
• Locked into Google App Engine?
• Developers have read-only access to the filesystem on App Engine.
• Users may upload arbitrary Python modules, but only if they are pure-Python; C and
Pyrex modules are not supported.
• App Engine limits the maximum rows returned from an entity get to 1000 rows per
Datastore call. (Update - App Engine now supports cursors for accessing larger
queries)
• Java applications cannot create new threads.

OpenStack
OpenStack is an open source platform that uses pooled virtual resources to build and
manage private and public clouds. The tools that comprise the OpenStack platform, called
"projects," handle the core cloud-computing services of compute, networking, storage,
identity, and image services. More than a dozen optional projects can also be bundled
together to create unique, deployable clouds.
Components of OpenStack
• Nova – compute service
• Swift – object storage
• Neutron – networking
• Cinder – block storage
• Horizon – dashboard
• Keystone – identity service
• Glance – image service
• Ceilometer – telemetry
• Heat – orchestration
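These services can be driven programmatically; a small sketch using the official openstacksdk library follows, where "my-private-cloud" is a placeholder name for an entry in the user's clouds.yaml configuration:

    import openstack

    # Connect using credentials from a clouds.yaml entry (name is a placeholder).
    conn = openstack.connect(cloud="my-private-cloud")

    # Nova: list running compute instances.
    for server in conn.compute.servers():
        print(server.name, server.status)

    # Glance: list available images.
    for image in conn.image.images():
        print(image.name)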

Cloud federation

A cloud federation is an Inter-Cloud where a set of cloud providers willingly interconnect their
cloud infrastructures in order to share resources among each other. The cloud providers in
the federation voluntarily collaborate to exchange resources.

Advantages of Cloud federation

1. Cloud federation allows scaling up of resources.
2. Cloud federation increases reliability.
3. Cloud federation increases collaboration of cloud resources.
4. It connects multiple cloud service providers globally to let providers buy and sell their
services on demand.
5. Dynamic scalability reduces the cost and time of providers.

Types/levels of Federation in Cloud

The federation can be classified into four types.

• Permissive federation
Permissive federation allows the interconnection of the cloud environments of two
service providers without verifying the identity of the peer cloud using DNS lookups. This
raises the chances of domain spoofing.

• Verified Federation
Verified federation allows interconnection of the cloud environments of two service
providers only after the peer cloud has been identified using information obtained from
DNS. Although this identity verification prevents spoofing, the connection is still not
encrypted, and there remain chances of a DNS attack.

• Encrypted Federation
Encrypted federation allows interconnection of the cloud environments of two service
providers only if the peer cloud supports transport layer security (TLS). The peer cloud
interested in the federation must provide a digital certificate for mutual authentication.
Because such certificates may be self-signed, encrypted federation still results in weak
identity verification.

• Trusted Federation
Trusted federation allows two clouds from different providers to connect only under the
provision that the peer cloud supports TLS and, in addition, provides a digital certificate
issued by a certification authority (CA) that is trusted by the authenticating cloud.

Future of federation
Federation creates a hybrid cloud environment with an increased focus on maintaining the
integrity of corporate policies and data integrity. Think of federation as a pool of clouds
connected through a channel of gateways; gateways which can be used to optimize a cloud
for a service or set of specific services. Such gateways can be used to segment service
audiences or to limit access to specific data sets. In essence, federation gives enterprises
the ability to serve their audiences with economies of scale without exposing critical
applications or vital data through weak policies or vulnerabilities.
