CS3551 Distributed Computing Unit5
The term cloud refers to a network or the internet. It is a technology that uses remote
servers on the internet to store, manage, and access data online rather than local drives.
The data can be anything such as files, images, documents, audio, video, and more.
1) Agility
2) High Availability and Reliability
The availability of servers is high and more reliable because the chances of
infrastructure failure are minimal.
3) High Scalability
4) Multi-Sharing
With the help of cloud computing, multiple users and applications can work more
efficiently with cost reductions by sharing common infrastructure.
6) Maintenance
7) Low Cost
By using cloud computing, costs are reduced because an IT company need not set up
its own infrastructure and pays only as per its usage of resources.
Application Programming Interfaces (APIs) are provided to the users so that they can
access services on the cloud by using these APIs and pay the charges as per the usage
of services.
Which cloud model is the ideal fit for a business depends on the organization's
computing and business needs. Choosing the right one from the various types of cloud
deployment models is essential: it ensures your business gets the performance,
scalability, privacy, security, compliance, and cost-effectiveness it requires. It is
therefore important to learn what the different deployment types offer and which
particular problems each can solve.
Read on as we cover the various cloud computing deployment and service models to help
discover the best choice for your business.
What Is A Cloud Deployment Model?
Most cloud hubs have tens of thousands of servers and storage devices to enable fast
loading. It is often possible to choose a geographic area to put the data "closer" to users.
Thus, deployment models for cloud computing are categorized based on their location.
To know which model would best fit the requirements of your organization, let us first
learn about the various types.
Public Cloud
The name says it all. It is accessible to the public. Public deployment models in the cloud
are perfect for organizations with growing and fluctuating demands. It also makes a great
choice for companies with low-security concerns. Thus, you pay a cloud service provider
for networking services, compute virtualization & storage available on the public internet.
It is also a great delivery model for development and testing teams. Its
configuration and deployment are quick and easy, making it an ideal choice for test
environments.
Limitations of Public Cloud
o Data Security and Privacy Concerns - Since it is accessible to all, it does not fully
protect against cyber-attacks and could lead to vulnerabilities.
o Reliability Issues - Since the same server network is open to a wide range of users,
it can lead to malfunctions and outages.
o Service/License Limitation - While there are many resources you can exchange
with tenants, there is a usage cap.
Private Cloud
Now that you understand what the public cloud offers, you will naturally want to know
what a private cloud can do. Companies that look for cost efficiency and greater
control over data and resources will find the private cloud a more suitable choice.
It means that it will be integrated with your data center and managed by your IT team.
Alternatively, you can also choose to host it externally. The private cloud offers bigger
opportunities that help meet specific organizations' requirements when it comes to
customization. It's also a wise choice for mission-critical processes that may have
frequently changing requirements.
o Data Privacy - It is ideal for storing corporate data where only authorized
personnel get access.
o Security - Segmentation of resources within the same Infrastructure can help with
better access and higher levels of security.
o Supports Legacy Systems - This model supports legacy systems that cannot access
the public cloud.
Community Cloud
The community cloud operates in a way that is similar to the public cloud. There's just
one difference - it allows access to only a specific set of users who share common
objectives and use cases. This type of cloud deployment model is managed and hosted
internally or by a third-party vendor, or through a combination of the two.
o Smaller Investment - A community cloud is much cheaper than the private &
public cloud and provides great performance
o Setup Benefits - The protocols and configuration of a community cloud must align
with industry standards, allowing customers to work much more efficiently.
Hybrid Cloud
As the name suggests, a hybrid cloud is a combination of two or more cloud architectures.
While each model in the hybrid cloud functions differently, it is all part of the same
architecture. Further, as part of this deployment of the cloud computing model, the
internal or external providers can offer resources.
Let's understand the hybrid model better. A company will prefer storing critical data
on a private cloud, while less sensitive data can be stored on a public cloud. The
hybrid cloud is also frequently used for 'cloud bursting': an organization runs an
application on-premises, but under heavy load it can burst into the public cloud.
Characteristics of IaaS
PaaS cloud computing platform is created for the programmer to develop, test, run, and
manage the applications.
Characteristics of PaaS
Example: AWS Elastic Beanstalk, Windows Azure, Heroku, Force.com, Google App
Engine, Apache Stratos, Magento Commerce Cloud, and OpenShift.
Characteristics of SaaS
o IaaS - It provides a virtual data center to store information and create platforms
for app development, testing, and deployment.
o PaaS - It provides virtual platforms and tools to create, test, and deploy apps.
o SaaS - It provides web software and apps to complete business tasks.
IaaS is also known as Hardware as a Service (HaaS). It is one of the layers of the cloud
computing platform. It allows customers to outsource their IT infrastructures such as
servers, networking, processing, storage, virtual machines, and other resources.
Customers access these resources on the Internet using a pay-as-per use model.
In traditional hosting services, IT infrastructure was rented out for a specific period of
time, with pre-determined hardware configuration. The client paid for the configuration
and time, regardless of the actual use. With the help of the IaaS cloud computing platform
layer, clients can dynamically scale the configuration to meet changing requirements and
are billed only for the services actually used.
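The billing difference described above can be sketched in a few lines of Python. The hourly rate and usage hours below are illustrative assumptions, not any provider's real pricing:

```python
# Sketch: traditional fixed-term rental vs. IaaS pay-as-you-go billing.
# All rates and hours are illustrative assumptions, not real provider pricing.

def fixed_rental_cost(hourly_rate: float, rented_hours: int) -> float:
    """Traditional hosting: pay for the whole rental period, used or not."""
    return hourly_rate * rented_hours

def pay_as_you_go_cost(hourly_rate: float, used_hours: int) -> float:
    """IaaS: pay only for the hours the instance actually ran."""
    return hourly_rate * used_hours

rate = 0.10          # assumed $/hour for one small instance
month_hours = 720    # a 30-day month
actual_usage = 200   # the instance was only needed ~200 hours this month

print(fixed_rental_cost(rate, month_hours))    # 72.0
print(pay_as_you_go_cost(rate, actual_usage))  # 20.0
```

The same workload costs far less under pay-as-you-go because idle hours are never billed.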
IaaS cloud computing platform layer eliminates the need for every organization to
maintain the IT infrastructure.
IaaS is offered in three models: public, private, and hybrid cloud. The private cloud
implies that the infrastructure resides at the customer's premises. In the case of the
public cloud, it is located at the cloud vendor's data center, and the hybrid cloud is
a combination of the two in which the customer selects the best of both public and
private clouds.
3. Pay-as-per-use model
IaaS providers provide services on a pay-as-you-go basis: users pay only for what they
have actually used.
4. Focus on the core business
IaaS lets an organization focus on its core business rather than on IT infrastructure.
5. On-demand scalability
On-demand scalability is one of the biggest advantages of IaaS. Using IaaS, users do
not need to worry about upgrading software or troubleshooting issues related to
hardware components.
1. Security
Security is one of the biggest issues in IaaS. Most of the IaaS providers are not able to
provide 100% security.
2. Maintenance and Upgrade
Although IaaS service providers maintain the software, they do not upgrade the
software for some organizations.
3. Interoperability issues
It is difficult to migrate a VM from one IaaS provider to another, so customers might
face problems related to vendor lock-in.
1. Programming languages
PaaS providers provide various programming languages for the developers to develop the
applications. Some popular programming languages provided by PaaS providers are Java,
PHP, Ruby, Perl, and Go.
2. Application frameworks
3. Databases
PaaS providers provide various databases such as ClearDB, PostgreSQL, MongoDB, and
Redis to communicate with the applications.
4. Other tools
PaaS providers provide various other tools that are required to develop, test, and deploy
the applications.
Advantages of PaaS
1) Simplified Development
PaaS allows developers to focus on development and innovation without worrying about
infrastructure management.
2) Lower risk
No need for up-front investment in hardware and software. Developers only need a PC
and an internet connection to start building applications.
3) Prebuilt business functionality
Some PaaS vendors also provide predefined business functionality so that users can
avoid building everything from scratch and can start their projects directly.
4) Instant community
PaaS vendors frequently provide online communities where developers can get ideas,
share experiences, and seek advice from others.
5) Scalability
Applications deployed can scale from one to thousands of users without any changes to
the applications.
1) Vendor lock-in
One has to write the applications according to the platform provided by the PaaS vendor,
so the migration of an application to another PaaS vendor would be a problem.
2) Data Privacy
Corporate data, whether critical or not, should remain private; if it is not located
within the walls of the company, there can be a risk in terms of data privacy.
3) Integration problem
It may happen that some applications are local and some are in the cloud, which
increases complexity when data in the cloud must be used together with local data.
Business Services - SaaS providers provide various business services to start up a
business. The SaaS business services include ERP (Enterprise Resource
Planning), CRM (Customer Relationship Management), billing, and sales.
Social Networks - As we all know, social networking sites are used by the general public,
so social networking service providers use SaaS for their convenience and handle the
general public's information.
Mail Services - To handle the unpredictable number of users and load on e-mail services,
many e-mail providers offer their services using SaaS.
Advantages of SaaS cloud computing layer
1. SaaS pricing
Unlike traditional software, which is sold under a license with an up-front cost (and
often an optional ongoing support fee), SaaS providers generally price their
applications using a subscription fee, most commonly a monthly or annual fee.
2. One to Many
SaaS services are offered on a one-to-many model, meaning a single instance of the
application is shared by multiple users.
Software as a Service removes the need for installation, set-up, and daily maintenance
for organizations. The initial set-up cost for SaaS is typically lower than for
enterprise software. SaaS vendors price their applications based on usage parameters,
such as the number of users using the application. SaaS also makes usage easy to
monitor and provides automatic updates.
6. Multidevice support
SaaS services can be accessed from any device such as desktops, laptops, tablets, phones,
and thin clients.
7. API Integration
SaaS services easily integrate with other software or services through standard APIs.
8. No client-side installation
SaaS services are accessed directly from the service provider over an internet
connection, so no client-side software installation is required.
1) Security
Actually, data is stored in the cloud, so security may be an issue for some users.
However, cloud computing is not necessarily less secure than in-house deployment.
2) Latency issue
Since data and applications are stored in the cloud at a variable distance from the end-
user, there is a possibility that there may be greater latency when interacting with the
application compared to local deployment. Therefore, the SaaS model is not suitable for
applications that demand response times in milliseconds.
3) Switching between SaaS vendors is difficult
Switching SaaS vendors involves the difficult and slow task of transferring very large
data files over the internet and then converting and importing them into the other
SaaS application.
*Cloud Computing Challenges*
2. Cost Management
Even though almost all cloud service providers have a "Pay As You Go" model, which
reduces the overall cost of the resources being used, there are times when an
enterprise using cloud computing incurs huge costs. Under-optimization of resources,
for example servers that are not used to their full potential, adds hidden costs.
Degraded application performance and sudden spikes or overages in usage also add to
the overall cost. Unused resources are another main reason why costs go up: if you
turn on a cloud service or instance and forget to turn it off over the weekend or
when it is not in use, it will keep increasing the cost without the resources even
being used.
3. Multi-Cloud Environments
Due to an increase in the options available to the companies, enterprises not only use a
single cloud but depend on multiple cloud service providers. Most of these companies
use hybrid cloud tactics, and close to 84% depend on multiple clouds. This often
becomes difficult for the infrastructure team to manage, and the process frequently
ends up being highly complex for the IT team due to the differences between multiple
cloud providers.
4. Interoperability and Flexibility
When an organization uses a specific cloud service provider and wants to switch to
another cloud-based solution, the migration often turns out to be a tedious
procedure, since applications written for one cloud's application stack must be
re-written for the other cloud. The complexities involved leave little flexibility
for switching from one cloud to another. Handling data movement and setting up
security and networking from scratch also add to the issues encountered when changing
cloud solutions, further reducing flexibility.
5. High Dependence on Network
Since cloud computing deals with provisioning resources in real time, it involves
enormous amounts of data transfer to and from the servers. This is only made possible
by the availability of a high-speed network. Because these data and resources are
exchanged over the network, operations can be highly vulnerable in cases of limited
bandwidth or sudden outages. Even when enterprises cut their hardware costs, they
need to ensure that internet bandwidth is high and that there are no network outages,
or else the result can be a potential business loss. This is therefore a major
challenge for smaller enterprises, which must maintain network bandwidth at a high
cost.
6. Lack of Knowledge and Expertise
Due to its complex nature and the high demand for research, working with the cloud
often ends up being a highly tedious task. It requires immense knowledge and wide
expertise on the subject. Although there are many professionals in the field, they
need to update themselves constantly. Cloud computing is a highly paid field due to
the extensive gap between demand and supply: there are many vacancies but few
talented cloud engineers, developers, and professionals. There is therefore a need
for upskilling so that these professionals can actively understand, manage, and
develop cloud-based applications with minimum issues and maximum reliability.
*Virtualization*
Virtualization is the "creation of a virtual (rather than actual) version of something, such
as a server, a desktop, a storage device, an operating system or network resources".
In other words, virtualization is a technique that allows sharing a single physical
instance of a resource or an application among multiple customers and organizations.
It does this by assigning a logical name to a physical resource and providing a
pointer to that physical resource on demand.
Creation of a virtual machine over existing operating system and hardware is known as
Hardware Virtualization. A Virtual machine provides an environment that is logically
separated from the underlying hardware.
The machine on which the virtual machine is created is known as the Host Machine,
and the virtual machine itself is referred to as the Guest Machine.
Types of Virtualization:
1. Hardware Virtualization.
2. Operating system Virtualization.
3. Server Virtualization.
4. Storage Virtualization.
1) Hardware Virtualization:
When the virtual machine software or virtual machine manager (VMM) is installed
directly on the hardware system, it is known as hardware virtualization.
The main job of the hypervisor is to control and monitor the processor, memory, and
other hardware resources.
Usage:
Hardware virtualization is mainly done for the server platforms, because controlling
virtual machines is much easier than controlling a physical server.
2) Operating System Virtualization:
When the virtual machine software or virtual machine manager (VMM) is installed on
the host operating system instead of directly on the hardware system, it is known as
operating system virtualization.
Usage:
Operating System Virtualization is mainly used for testing the applications on different
platforms of OS.
3) Server Virtualization:
When the virtual machine software or virtual machine manager (VMM) is installed
directly on the server system, it is known as server virtualization.
Usage:
Server virtualization is done because a single physical server can be divided into
multiple servers on demand and used for balancing the load.
4) Storage Virtualization:
Storage virtualization is the process of grouping the physical storage from multiple
network storage devices so that it looks like a single storage device.
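The grouping idea above can be sketched as one logical address space mapped onto several physical devices. The device names and capacities below are hypothetical:

```python
# Sketch of storage virtualization: several physical devices are pooled
# behind a single logical block address space. Names/sizes are hypothetical.

class StoragePool:
    def __init__(self, devices):
        # devices: list of (name, capacity_in_blocks)
        self.devices = devices

    @property
    def capacity(self):
        """Total capacity presented as one logical device."""
        return sum(cap for _, cap in self.devices)

    def locate(self, logical_block: int):
        """Map a logical block number to (device, physical block)."""
        offset = logical_block
        for name, cap in self.devices:
            if offset < cap:
                return name, offset
            offset -= cap
        raise IndexError("logical block out of range")

pool = StoragePool([("disk-a", 100), ("disk-b", 50)])
print(pool.capacity)     # 150 blocks, seen as a single device
print(pool.locate(120))  # ('disk-b', 20): block 120 lives on the second disk
```

Callers only ever see block numbers 0..149; the pool decides which physical device actually holds each block.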
Usage:
Storage virtualization is mainly done for back-up and recovery purposes.
Virtualization plays a very important role in cloud computing technology. Normally in
cloud computing, users share the data present in the cloud, such as applications, but
with the help of virtualization users actually share the infrastructure.
The main use of virtualization technology is to provide applications in their
standard versions to cloud users: whenever the next version of an application is
released, the cloud provider has to supply the latest version to its users, and
virtualization is what makes this practical, since doing it without shared
virtualized infrastructure would be more expensive.
*Load Balancing*
Load balancing is a method that allows you to balance the amount of work being done
across different devices or pieces of hardware. Typically, the load is balanced
between different servers or between the CPU and hard drives in a single cloud
server.
Load balancing was introduced for various reasons. One is to improve the speed and
performance of each single device, and another is to protect individual devices from
hitting their limits, which would degrade their performance.
Cloud load balancing is defined as dividing workload and computing properties in cloud
computing. It enables enterprises to manage workload demands or application demands
by distributing resources among multiple computers, networks or servers. Cloud load
balancing involves managing the movement of workload traffic and demands over the
Internet.
Traffic on the Internet is growing rapidly, increasing by almost 100% each year. As a
result, the workload on servers is growing just as rapidly, leading to server
overloading, mainly for popular web servers. There are two primary solutions to
overcome the problem of server overloading:
o First is a single-server solution in which the server is upgraded to a higher-
performance server. However, the new server may also be overloaded soon,
demanding another upgrade. Moreover, the upgrading process is arduous and
expensive.
o The second is a multiple-server solution, in which a scalable service system is
built on a cluster of servers. It is more cost-effective and more scalable to build
a server cluster system for network services.
Cloud-based server farms can achieve more precise scalability and availability by
using load balancing. Load balancing is beneficial with almost any type of service,
such as HTTP, SMTP, DNS, FTP, and POP/IMAP.
1. Static Algorithm
Static algorithms are built for systems with very little variation in load. The entire traffic
is divided equally between the servers in the static algorithm. This algorithm requires in-
depth knowledge of server resources for better performance of the processor, which is
determined at the beginning of the implementation.
However, the decision of load shifting does not depend on the current state of the system.
One of the major drawbacks of static load balancing algorithms is that load-balancing
decisions are fixed once tasks are created and cannot be moved to other devices
during execution.
2. Dynamic Algorithm
The dynamic algorithm first finds the lightest server in the entire network and gives it
priority for load balancing. This requires real-time communication with the network
which can help increase the system's traffic. Here, the current state of the system is used
to control the load.
The characteristic of dynamic algorithms is to make load transfer decisions in the current
system state. In this system, processes can move from a highly used machine to an
underutilized machine in real time.
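A minimal sketch of this dynamic, least-loaded strategy (the server names and load counts are hypothetical):

```python
# Sketch of dynamic load balancing: route each task to the currently
# least-loaded server. Server names and load values are hypothetical.

def pick_least_loaded(loads: dict) -> str:
    """Return the server with the smallest current load."""
    return min(loads, key=loads.get)

def dispatch(task: str, loads: dict) -> str:
    server = pick_least_loaded(loads)
    loads[server] += 1  # the chosen server's load grows
    return server

loads = {"web1": 5, "web2": 2, "web3": 7}
print(dispatch("req-1", loads))  # web2 (lightest server)
print(dispatch("req-2", loads))  # web2 again (load 3, still lightest)
print(loads)                     # {'web1': 5, 'web2': 4, 'web3': 7}
```

Note how the decision uses the current state: the second request still goes to web2 only because its updated load remains the smallest.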
3. Round Robin Algorithm
As the name suggests, round robin load balancing algorithm uses round-robin method to
assign jobs. First, it randomly selects the first node and assigns tasks to other nodes in a
round-robin manner. This is one of the easiest methods of load balancing.
Processes are assigned to processors circularly, without defining any priority. Round
robin gives fast responses when the workload is distributed uniformly among the
processes. However, processes have different loading times, so some nodes may be
heavily loaded while others remain under-utilised.
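The round-robin rotation can be sketched as follows (server names are hypothetical):

```python
# Sketch of round-robin dispatch: servers take turns regardless of their
# current load. Server names are hypothetical.
from itertools import cycle

servers = ["web1", "web2", "web3"]
turn = cycle(servers)  # endlessly repeats web1, web2, web3, web1, ...

assignments = [(f"req-{i}", next(turn)) for i in range(1, 6)]
print(assignments)
# [('req-1', 'web1'), ('req-2', 'web2'), ('req-3', 'web3'),
#  ('req-4', 'web1'), ('req-5', 'web2')]
```

The rotation never inspects server load, which is exactly why uneven task sizes can leave some nodes overloaded.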
4. Weighted Round Robin Algorithm
Weighted round robin load balancing algorithms were developed to address the main
weakness of the round robin algorithm. In this algorithm, each server is assigned a
weight, and tasks are distributed according to the weight values.
Processors with higher capacity are given a higher weight and therefore receive more
tasks. When the full load level is reached, the servers receive stable traffic.
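A sketch of the weighted rotation, using hypothetical weights. This naive expansion sends each server's turns in a burst; production balancers often interleave the turns instead ("smooth" weighted round robin):

```python
# Sketch of weighted round robin: higher-capacity servers appear more
# often in the rotation. The weights are hypothetical.
from itertools import cycle

weights = {"big": 3, "medium": 2, "small": 1}
# Expand each server into the rotation as many times as its weight.
rotation = [s for s, w in weights.items() for _ in range(w)]
turn = cycle(rotation)

out = [next(turn) for _ in range(6)]
print(out)  # ['big', 'big', 'big', 'medium', 'medium', 'small']
```

Over every full cycle, "big" receives three times as many tasks as "small", matching its assumed capacity.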
5. Opportunistic Load Balancing (OLB) Algorithm
The opportunistic load balancing algorithm tries to keep every node busy. It never
considers the current workload of each system: regardless of a node's current
workload, OLB distributes all unfinished tasks to the nodes.
Tasks may therefore be processed slowly, because OLB does not take the execution time
of a node into account, which causes bottlenecks even when some nodes are free.
6. Minimum to Minimum (Min-Min) Load Balancing Algorithm
In the minimum to minimum (min-min) load balancing algorithm, the tasks with the
minimum completion time are considered first. Among all pending tasks, the one with
the overall minimum completion time is selected and scheduled on the corresponding
machine. The machine's load is then updated, and the task is removed from the list.
This process continues until the final task is assigned. The algorithm works best
where many small tasks outweigh large tasks.
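A compact sketch of min-min scheduling, using hypothetical task execution times:

```python
# Sketch of min-min scheduling: repeatedly pick the task whose best-case
# completion time is smallest, assign it to that machine, and update the
# machine's ready time. Task/machine names and times are hypothetical.

def min_min(exec_time: dict) -> dict:
    """exec_time[task][machine] = running time of the task on that machine."""
    ready = {m: 0 for m in next(iter(exec_time.values()))}
    schedule = {}
    pending = set(exec_time)
    while pending:
        # Completion time of each pending task on its best machine.
        best = {t: min(ready[m] + exec_time[t][m] for m in ready)
                for t in pending}
        task = min(best, key=best.get)  # overall minimum across tasks
        machine = min(ready, key=lambda m: ready[m] + exec_time[task][m])
        ready[machine] += exec_time[task][machine]
        schedule[task] = machine
        pending.remove(task)
    return schedule

times = {"t1": {"m1": 4, "m2": 6},
         "t2": {"m1": 3, "m2": 8},
         "t3": {"m1": 9, "m2": 2}}
print(min_min(times))  # {'t3': 'm2', 't2': 'm1', 't1': 'm1'}
```

The shortest job (t3 on m2) is scheduled first, and each assignment updates the machine's ready time before the next choice is made.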
Load balancing solutions can be categorized into two types -
o Software-based load balancers: Software-based load balancers run on standard
hardware (desktop, PC) and standard operating systems.
o Hardware-based load balancers: Hardware-based load balancers are dedicated
boxes that contain application-specific integrated circuits (ASICs) optimized for a
particular use. ASICs allow network traffic to be forwarded at high speeds and are
often used for transport-level load balancing, because hardware-based load balancing
is faster than a software solution.
Network Load Balancing
Cloud load balancing takes advantage of network-layer information to decide where
network traffic should be sent. This is accomplished through Layer 4 load balancing,
which handles TCP/UDP traffic. It is the fastest load balancing solution, but it
cannot consider application-level content when distributing traffic across servers.
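A Layer 4 balancer of this kind can be sketched as hashing connection data, with no inspection of the request content. The backend addresses are hypothetical:

```python
# Sketch of Layer-4 style balancing: pick a backend from connection data
# (source IP and port) alone, with no knowledge of the HTTP payload.
# The backend addresses are hypothetical.
import zlib

backends = ["10.0.0.1", "10.0.0.2", "10.0.0.3"]

def pick_backend(src_ip: str, src_port: int) -> str:
    """Hash the connection identity onto one of the backends."""
    key = f"{src_ip}:{src_port}".encode()
    return backends[zlib.crc32(key) % len(backends)]

print(pick_backend("203.0.113.9", 51514))
```

Because the choice is a pure function of the connection data, the same client connection always maps to the same backend, which is cheap and fast but blind to what the request actually asks for.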
HTTP(S) Load Balancing
HTTP(S) load balancing is the oldest type of load balancing, and it relies on Layer
7, meaning the load balancer operates at the application layer. It is the most
flexible type of load balancing because it lets you make delivery decisions based on
information retrieved from HTTP requests.
Internal Load Balancing
It is very similar to network load balancing but is leveraged to balance the
infrastructure internally.
Load balancers can be further divided into hardware, software and virtual load balancers.
Hardware Load Balancer
A hardware load balancer is a physical device that distributes network and
application traffic. Such devices can handle a large traffic volume, but they come
with a hefty price tag and have limited flexibility.
Software Load Balancer
A software load balancer comes in open-source or commercial form and must be
installed before it can be used. Software solutions are more economical than hardware
ones.
Virtual Load Balancer
A virtual load balancer differs from a software load balancer in that it deploys the
software of a hardware load-balancing device on a virtual machine.
The technology of load balancing is less expensive and also easy to implement. This
allows companies to work on client applications much faster and deliver better results at
a lower cost.
Cloud load balancing can provide scalability to control website traffic. By using effective
load balancers, it is possible to manage high-end traffic, which is achieved using network
equipment and servers. E-commerce companies that need to deal with multiple visitors
every second use cloud load balancing to manage and distribute workloads.
Load balancers can handle any sudden bursts of traffic they receive. For example,
when university results are published, a website may go down because of too many
requests. With a load balancer, one does not need to worry about the traffic flow:
whatever the size of the traffic, the load balancer divides the entire load of the
website equally across different servers and delivers maximum results in minimum
response time.
Greater Flexibility
The main reason for using a load balancer is to protect the website from sudden crashes.
When the workload is distributed among different network servers or units, if a single
node fails, the load is transferred to another node. It offers flexibility, scalability and the
ability to handle traffic better.
Scalability: Scalability is the ability of a load balancing system to handle a
growing workload by adding resources. Key mechanisms include:
1. Horizontal Scaling: This involves adding more nodes or servers to the system to handle
increased load. Load balancers distribute incoming requests across these additional
resources, allowing the system to handle a larger volume of traffic.
2. Load Balancer Redundancy: To ensure high availability and avoid single points of failure,
load balancers themselves can be scaled by implementing redundancy. Multiple load
balancers can be deployed in parallel, distributing the load across them and providing
fault tolerance. If one load balancer fails, others can take over seamlessly.
3. Dynamic Configuration: Scalable load balancing systems often have dynamic
configurations that allow for automatic adjustment of resources based on demand. This
includes dynamically adding or removing nodes from the load balancing pool based on
factors like CPU utilization, network traffic, or predefined thresholds.
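Load balancer redundancy (point 2 above) can be sketched as a simple failover check; the balancer names and health states below are hypothetical:

```python
# Sketch of load-balancer redundancy: if the active balancer is unhealthy,
# a standby seamlessly takes over. Names and health states are hypothetical.

def active_balancer(balancers):
    """Return the first healthy balancer in priority order, or None."""
    for name, healthy in balancers:
        if healthy:
            return name
    return None

print(active_balancer([("lb-primary", True), ("lb-standby", True)]))   # lb-primary
print(active_balancer([("lb-primary", False), ("lb-standby", True)]))  # lb-standby
```

Real deployments typically drive this decision with health-check probes and a shared virtual IP, but the failover logic reduces to exactly this priority scan.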
Elasticity: Elasticity is closely related to scalability but emphasizes the ability of a system
to dynamically adapt its resource allocation in response to workload changes. In load
balancing, elasticity refers to the ability to scale resources up or down based on real-time
demand.
1. Auto Scaling: Auto scaling allows the system to automatically adjust the number of nodes
or servers based on predefined metrics or policies. When the workload increases, new
nodes can be provisioned to handle the additional load, and when the demand decreases,
unnecessary resources can be removed.
2. Load Balancer Health Monitoring: Elastic load balancing systems continuously monitor
the health and performance of individual nodes or servers. If a node becomes overloaded
or unresponsive, the load balancer can dynamically redirect traffic to healthier nodes,
ensuring efficient resource utilization.
3. Dynamic Load Distribution: Elastic load balancers can intelligently distribute incoming
requests based on real-time conditions. For example, they can route requests to nodes
with lower resource utilization or closer proximity to minimize latency.
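The auto-scaling behaviour described above can be sketched as a threshold policy. The CPU thresholds and node counts are hypothetical policy choices, not any provider's defaults:

```python
# Sketch of threshold-based auto scaling: add a node when average CPU is
# high, remove one when it is low. Thresholds are hypothetical policy values.

def scale_decision(avg_cpu: float, nodes: int,
                   scale_up_at: float = 80.0, scale_down_at: float = 20.0,
                   min_nodes: int = 1) -> int:
    """Return the new node count for the observed average CPU utilisation."""
    if avg_cpu > scale_up_at:
        return nodes + 1
    if avg_cpu < scale_down_at and nodes > min_nodes:
        return nodes - 1
    return nodes

print(scale_decision(92.0, nodes=3))  # 4  (overloaded: scale out)
print(scale_decision(55.0, nodes=3))  # 3  (steady state: no change)
print(scale_decision(10.0, nodes=3))  # 2  (idle: scale in)
```

Keeping a gap between the up and down thresholds prevents the system from oscillating (flapping) around a single boundary value.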
By combining scalability and elasticity, load balancing systems can efficiently distribute
workload across distributed resources, ensuring optimal performance, responsiveness,
and resource utilization. These characteristics are particularly important in cloud
computing environments, where workloads can vary significantly over time.
*Cloud services and platforms: Compute services*
Compute services are a fundamental component of cloud computing platforms. They
provide the necessary computing resources to run applications, process data, and perform
various computational tasks. Here are some prominent compute services offered by cloud
providers:
1. Amazon EC2 (Elastic Compute Cloud): EC2 is a web service provided by Amazon Web
Services (AWS) that offers resizable virtual servers in the cloud. It allows users to rent
virtual machines (EC2 instances) and provides flexibility in terms of instance types,
operating systems, and configurations. EC2 instances can be rapidly scaled up or down
based on demand, offering a highly scalable compute infrastructure.
2. Microsoft Azure Virtual Machines: Azure Virtual Machines provide users with on-
demand, scalable computing resources in the Microsoft Azure cloud. Users can deploy
virtual machines with various operating systems and configurations, choosing from a
wide range of instance types to meet their specific requirements.
3. Google Compute Engine: Compute Engine is the Infrastructure as a Service (IaaS)
offering of Google Cloud Platform (GCP). It allows users to create and manage virtual
machines with customizable configurations, including options for various CPU and
memory sizes. Compute Engine provides scalable and flexible compute resources in the
Google Cloud environment.
4. IBM Virtual Servers: IBM Cloud offers Virtual Servers, which are scalable and
customizable compute resources. Users can choose from a variety of instance types,
including bare metal servers, virtual machines, and GPU-enabled instances. IBM Virtual
Servers provide the flexibility to customize network and storage configurations according
to specific workload needs.
5. Oracle Compute: Oracle Cloud Infrastructure (OCI) provides compute services through
Oracle Compute, allowing users to provision and manage virtual machines in the Oracle
Cloud. It offers a range of compute shapes, including general-purpose instances,
memory-optimized instances, and GPU instances, enabling users to optimize their
compute resources for different workloads.
These compute services provide the necessary infrastructure to deploy and manage
applications, whether they require simple virtual machines or more specialized instances.
They offer scalability, flexibility, and on-demand provisioning, allowing users to scale
their compute resources up or down based on workload demands. Additionally, these
services often integrate with other cloud services like storage, networking, and databases,
enabling users to build comprehensive cloud-based solutions
*Storage services*
1. Amazon S3 (Simple Storage Service): Amazon S3 is a highly scalable object storage
service provided by AWS. It allows users to store and retrieve any amount of data from
anywhere on the web. S3 provides high durability, availability, and low latency access to
data. It is commonly used for backup and restore, data archiving, content distribution,
and hosting static websites.
2. Azure Blob Storage: Azure Blob Storage is a scalable object storage service in Microsoft
Azure. It offers high availability, durability, and global accessibility for storing large
amounts of unstructured data, such as documents, images, videos, and log files. Blob
Storage provides various storage tiers to optimize costs based on data access patterns.
3. Google Cloud Storage: Google Cloud Storage is a scalable and secure object storage
service in Google Cloud Platform (GCP). It provides a simple and cost-effective solution
for storing and retrieving unstructured data. Google Cloud Storage offers multiple storage
classes, including multi-regional, regional, and nearline, to meet different performance
and cost requirements.
4. IBM Cloud Object Storage: IBM Cloud Object Storage is a scalable and secure storage
service offered by IBM Cloud. It provides durable and highly available storage for storing
large volumes of unstructured data. IBM Cloud Object Storage supports different storage
tiers, data encryption, and integration with other IBM Cloud services.
*Application Services*
1. AWS Lambda: AWS Lambda is a serverless compute service provided by AWS. It allows
developers to run code without provisioning or managing servers. Lambda functions can
be triggered by various events, such as changes in data, API calls, or scheduled events. It
is commonly used for building event-driven architectures, data processing, and executing
small, self-contained tasks.
2. Azure Functions: Azure Functions is a serverless compute service in Microsoft Azure. It
enables developers to run event-triggered code in a serverless environment. Azure
Functions supports multiple programming languages and integrates with various Azure
services, making it suitable for building event-driven applications, data processing
pipelines, and microservices.
3. Google Cloud Functions: Google Cloud Functions is a serverless compute service in
GCP. It allows developers to write and deploy event-driven functions that automatically
scale based on demand. Cloud Functions can be triggered by various events from Google
Cloud services, HTTP requests, or Pub/Sub messages.
4. IBM Cloud Functions: IBM Cloud Functions is a serverless compute service offered by
IBM Cloud. It allows developers to run event-driven functions in a serverless
environment. IBM Cloud Functions supports multiple programming languages and
integrates with other IBM Cloud services, making it suitable for building serverless
applications and event-driven architectures.
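The serverless model above can be illustrated with the handler shape that AWS Lambda's Python runtime expects. The `name` field used here is a hypothetical assumption about the triggering event, not part of any fixed Lambda schema:

```python
# Minimal sketch of a Python AWS Lambda handler. Lambda invokes the handler
# with the triggering event and a runtime context object; the handler name
# ("lambda_handler") is configured in the function's settings.
import json

def lambda_handler(event, context):
    # 'name' is a hypothetical field we assume the triggering event carries.
    name = event.get("name", "world")
    return {
        "statusCode": 200,
        "body": json.dumps({"message": f"Hello, {name}!"}),
    }

# Locally, the handler is just a function and can be exercised directly:
print(lambda_handler({"name": "cloud"}, None))
```

There is no server to provision: the platform calls this function once per event, and the statusCode/body shape shown is the convention used when the trigger is an HTTP request via API Gateway.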
These storage services and application services provided by cloud computing platforms
offer scalable, reliable, and cost-effective solutions for data storage, processing, and
application development. They enable organizations to leverage the benefits of cloud
computing while reducing the burden of managing infrastructure and focusing more on
their core business goals.