Module 2 - Virtualization

Virtualization technology is one of the fundamental components of Cloud computing, especially in the case of infrastructure-based services. It allows the creation of secure, customizable, and isolated execution environments for running applications, even untrusted ones, without affecting other users' applications.
Virtualization is the ability of a computer program—or, more generally, a combination of software and hardware—to emulate an execution environment separate from the one that hosts the program. For example, a Windows OS can run on top of a virtual machine, which itself runs on a Linux OS.
INTRODUCTION
Virtualization is a large umbrella of technologies and concepts that are meant to provide an
abstract environment—whether virtual hardware or an operating system—to run applications.
This term is often synonymous with hardware virtualization, which plays a fundamental role in
efficiently delivering Infrastructure-as-a-Service solutions for Cloud computing.
Virtualization technologies come in many flavors, providing virtual environments at the operating system level, programming language level, and application level. Moreover, virtualization technologies provide a virtual environment not only for executing applications but also for storage, memory, and networking.
Virtualization technologies have gained renewed interest recently due to the confluence of several phenomena:
(a) Increased Performance and Computing Capacity.
Nowadays, the average end-user desktop PC is powerful enough to fulfill almost all the needs of everyday computing, and there is extra capacity that is rarely used. Almost all of these PCs have enough resources to host a virtual machine manager and execute a virtual machine with acceptable performance. The same consideration applies to the high-end side of the PC market, where supercomputers can provide immense compute power that can accommodate the execution of hundreds or thousands of virtual machines.

(b) Underutilized Hardware and Software Resources.


Hardware and software underutilization occurs due to
(1) the increased performance and computing capacity, and
(2) the effect of limited or sporadic use of resources.
Computers today are so powerful that in most cases only a fraction of their capacity is used by an application or the system. Moreover, if we consider the IT infrastructure of an enterprise, there are many computers that are only partially utilized, while they could be used without interruption on a 24/7/365 basis. As an example, desktop PCs mostly required by administrative staff for office automation tasks are used only during work hours, while overnight they remain completely unused. Using these resources for other purposes after work hours could improve the efficiency of the IT infrastructure. To transparently provide such a service, it would be necessary to deploy a completely separate environment, which can be achieved through virtualization.
(c) Lack of Space.

• The continuous need for additional capacity, whether this is storage or compute power,
makes datacentres grow quickly. Companies like Google and Microsoft expand their
infrastructure by building datacentres, as large as football fields, that are able to host
thousands of nodes.
• But most enterprises cannot afford to keep building new datacentres whenever they need
more capacity.
• At the same time, much of their existing hardware is underutilized (not fully used).
• To solve both problems (lack of space + underutilization), a method called server
consolidation is used.
• Server consolidation = combining multiple workloads onto fewer physical machines
using virtualization.

(d) Greening Initiatives.

• Companies want to reduce energy consumption and lower their carbon footprint.
• Data centers are one of the major power consumers and contribute consistently to the
impact that a company has on the environment.
• Keeping a datacenter operational involves not only powering the servers: a lot of energy is also consumed to keep them cool, and the cooling infrastructure has a significant impact on the carbon footprint of a data center. Hence, reducing the number of servers through server consolidation helps, and virtualization technologies provide an efficient way of consolidating servers.

(e) Rise of Administrative Costs.


Power consumption and cooling costs have now become higher than the cost of the IT equipment itself. As the demand for additional capacity grows, the number of servers in a datacenter increases, which further increases administrative costs.

Computers, in particular servers, do not operate all on their own, but they require care and feeding
from system administrators. Common system administration tasks include: hardware monitoring;
defective hardware replacement; server setup and updates; server resources monitoring; and
backups. These are labor-intensive operations, and the higher the number of servers that have to be managed, the higher the administrative costs. Virtualization can help reduce the number of required servers for a given workload, thus reducing the cost of the administrative personnel.

These can be considered the major causes for the diffusion of hardware virtualization solutions
and, together with them, the other kinds of virtualization.

3.2 CHARACTERISTICS OF VIRTUALIZED ENVIRONMENTS


Virtualization is a broad concept and it refers to the creation of a virtual version of something,
whether this is hardware, software environment, storage, or network.
In a virtualized environment, there are three major components: guest, host, and virtualization
layer.
Guest
The system component that runs inside the virtual environment.
Instead of directly interacting with the host hardware, it interacts with the virtualization layer.

Host
The host represents the original environment where the guest is supposed to be managed (the actual physical system or environment).

Virtualization layer
The virtualization layer is responsible for recreating the same or a different environment where
the guest will operate.

Hardware virtualization is the most common and original form of virtualization.

In case of hardware virtualization, the guest is represented by a system image comprising an operating system and installed applications.

These are installed on top of virtual hardware that is controlled and managed by the virtualization layer, also called the virtual machine manager.

The host is represented by the physical hardware, and in some cases the operating system, that
defines the environment where the virtual machine manager is running.

In case of virtual storage, the guest might be client applications or users that interact with the
virtual storage management software deployed on top of the real storage system.
The case of virtual networking is also similar: the guest—applications and users—interact with
a virtual network, such as a Virtual Private Network (VPN), which is managed by specific
software (VPN client) using the physical network available on the node. VPNs are useful for
creating the illusion of being within a different physical network and thus accessing the resources
in it, which would be otherwise not available.

The technologies of today allow a profitable use of virtualization, and make it possible to fully
exploit the advantages that come with it. Such advantages have always been characteristics of
virtualized solutions.

1. Increased Security

The virtual machine represents an emulated environment in which the guest is executed. All the
operations of the guest are generally performed against the virtual machine, which then translates
and applies them to the host. This level of indirection allows the virtual machine manager to
control and filter the activity of the guest, thus preventing some harmful operations from being
performed.

Resources exposed by the host can then be hidden or simply protected from the guest. Moreover, sensitive information contained in the host can be naturally hidden without the need to install complex security policies.

Increased security is a requirement when dealing with untrusted code.

For example, applets downloaded from the Internet run in a sandboxed version of the Java Virtual
Machine (JVM), which provides them with limited access to the hosting operating system
resources. Both the JVM and the .NET runtime provide extensive security policies for
customizing the execution environment of applications.

Tools like VMware Desktop, VirtualBox, Parallels let users create a complete virtual
computer. You can install a separate operating system inside it. If malware infects the virtual
machine, it is contained inside the VM and does not affect the host OS.

2. Managed Execution

Through virtualization, a wider range of features can be implemented. In particular, sharing, aggregation, emulation, and isolation are the most relevant.

(a)Sharing:

Virtualization allows the creation of a separate computing environment within the same host. In
this way, it is possible to fully exploit the capabilities of a powerful host, which would be
otherwise underutilized. Sharing is a particularly important feature in virtualized datacenters,
which helps reduce the number of active servers and limit power consumption.

(b) Aggregation.

It is not only possible to share the physical resource among several guests, but virtualization
also allows the aggregation, which is the opposite process. A group of separate hosts can be
tied together and represented to guests as a single virtual host. Example: Cluster management
software takes a group of servers and makes them appear as one unified resource.
(c ) Emulation.

Guests run inside a controlled environment managed by the virtualization layer (which is
itself a program).

This allows for controlling and tuning the environment that is exposed to guests. Virtualization
can emulate a completely different environment compared to the host system. This allows
execution of guest systems that need special characteristics (hardware/OS) not available on the
physical host.

Example:

• Running Linux guest OS on a Windows host using VirtualBox.


• Running an application that requires older hardware/architecture on a modern
machine.

This feature becomes very useful for testing purposes where a specific guest has to be validated
against different platforms or architectures.

A virtual machine can use a virtual SCSI device for file I/O, even if the host computer does not
have a physical SCSI controller installed.

Old or legacy software can run on virtual/emulated hardware without modification. Example: MS-DOS mode in Windows 95/98.

(d) Isolation.

Virtualization provides guests (operating systems, applications, or other entities) with a separate execution environment. Guests interact with the abstraction layer (hypervisor) instead of directly using host resources.

Benefits of Isolation:

1. No interference – Multiple guests can run on the same host independently.


2. Security – Separation between host and guest prevents harmful operations from affecting
the host.
3. Control – The hypervisor can filter and monitor guest activities.
Modern hardware and software make performance tuning of virtual machines possible.
Administrators can adjust the resources (CPU, memory, storage, bandwidth) exposed to each
guest. This ensures better control over how the guest performs. Enables building a Quality of
Service (QoS) infrastructure, so that performance matches the Service Level Agreements
(SLAs) promised to users.

For instance, software implementing hardware virtualization solutions can expose to a guest operating system only a fraction of the memory of the host machine, or can set the maximum frequency of the virtual machine's processor.
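
As a concrete illustration, the following is a minimal sketch, assuming the libvirt Python bindings (libvirt-python) are installed and a running guest named "guest1" exists; the guest name, URI, and values are illustrative only:

import libvirt

# Connect to the local hypervisor (a QEMU/KVM URI is assumed here).
conn = libvirt.open("qemu:///system")
dom = conn.lookupByName("guest1")  # hypothetical guest name

# Expose only a fraction of the host memory to the guest (value in KiB).
dom.setMemoryFlags(1024 * 1024, libvirt.VIR_DOMAIN_AFFECT_LIVE)

# Limit the number of virtual CPUs visible to the guest.
dom.setVcpusFlags(2, libvirt.VIR_DOMAIN_AFFECT_LIVE)

conn.close()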

Managed execution in virtualization allows capturing the state of a guest VM, saving it, and
resuming later.

Virtual Machine Managers (like Xen Hypervisor) can:

1. Pause a guest operating system.


2. Move its virtual image to another physical machine.
3. Resume execution seamlessly, without the guest noticing.

This process is called Virtual Machine Migration.
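
A minimal sketch of the save-and-resume capability, again assuming the libvirt Python bindings and a running guest named "guest1" (the name and file path are illustrative):

import libvirt

conn = libvirt.open("qemu:///system")
dom = conn.lookupByName("guest1")

# Pause the guest and dump its full state (memory and CPU) to a file on disk.
dom.save("/var/tmp/guest1.state")

# Later, possibly after moving the file elsewhere, resume execution exactly
# where it stopped, without the guest noticing.
conn.restore("/var/tmp/guest1.state")

conn.close()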

Portability

The concept of portability applies in different ways, according to the specific type of
virtualization considered.

In hardware virtualization, the guest OS and applications are stored as a virtual image. This
image can usually be moved and executed on other virtual machines, similar to how a picture
can be opened on different computers.

In programming-level virtualization (e.g., the Java Virtual Machine (JVM) or the .NET runtime), applications are compiled into intermediate binary code (JARs in Java, assemblies in .NET). This code can run on any system that has the corresponding virtual machine, without recompilation. One version of the application can run on multiple platforms (Windows, Linux, macOS, etc.) with no changes, which simplifies the development cycle and makes deployment easier and faster.

Finally, portability allows having your own system always with you and ready to use, given that the required virtual machine manager is available.
3.3 TAXONOMY OF VIRTUALIZATION TECHNIQUES

The first classification is based on the service or entity that is being emulated.

Virtualization is mainly used to emulate execution environments, storage, and networks. Among
these categories, execution virtualization constitutes the oldest, most popular, and most
developed area. Therefore, it deserves a major investigation and a further categorization.

In particular, we can divide these execution virtualization techniques into two major categories,
by considering the type of host they require.

Process level techniques are implemented on top of an existing operating system, which has full
control of the hardware.

System level techniques are implemented directly on hardware and do not require—or require a
minimum support from—an existing operating system.

Within these two categories we can list different techniques, which offer to the guest a different
type of virtual computation environment: bare hardware, operating system resources, low-level
programming language, and application libraries.
3.3.1 Execution Virtualization

Execution virtualization includes all those techniques whose aim is to emulate an execution
environment that is separate from the one hosting the virtualization layer. All these techniques
concentrate their interest on providing support for the execution of programs.

Execution virtualization provides support for the execution of programs whether they are:

1. Entire Operating Systems (via hypervisors).


2. Programs compiled against an abstract machine model (e.g., JVM bytecode).
3. Individual applications (through emulation or compatibility layers).

Therefore, execution virtualization can be implemented directly on top of the hardware, by the
operating system, an application, or libraries.

Machine Reference Model

Modern computing systems can be expressed in terms of the reference model.

• Virtualizing an execution environment at different levels of the computing stack requires a reference model. It defines the interfaces between the levels of abstraction, which hide implementation details. Each layer hides implementation details of the layer below and provides an interface (APIs, system calls, machine instructions) to the layer above.

• Virtualization replaces one of these layers with a “virtual” version.


• It then intercepts calls made to that layer and either:

o Emulates them (pretends to be the real layer), or


o Passes them to the real underlying layer with some modification

At the bottom layer is the model for the hardware. It is expressed in terms of the Instruction Set Architecture (ISA), which defines the instruction set for the processor, registers, memory, and interrupt management.
Types of ISA:

• System ISA → used by OS developers for managing the hardware.

• User ISA → the portion of the ISA available to application developers.

The Application Binary Interface (ABI) separates the operating system layer from the
applications and libraries, which are managed by the OS.

ABI covers details such as low-level data types, alignment, and call conventions and defines a
format for executable programs. System calls are defined at this level.

The highest level of abstraction is represented by the Application Programming Interface (API),
which interfaces applications to libraries and/or the underlying operating system.

For any operation performed at the application-level API, the ABI and ISA are responsible for making it happen. The high-level abstraction is converted into machine-level instructions to perform the actual operations supported by the processor.
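
The layering can be made concrete with a small sketch (Linux is assumed; the example simply writes the same bytes at different levels of the stack):

import ctypes
import os

msg = b"hello\n"

# API level: a call provided by the language runtime and its libraries.
print(msg.decode(), end="")

# ABI level: os.write() wraps the write() system call exposed by the OS.
os.write(1, msg)

# Going through the C library explicitly: libc's write() in turn issues the
# machine-level system-call instruction defined by the ISA.
libc = ctypes.CDLL(None, use_errno=True)
libc.write(1, msg, len(msg))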

Security Rings and Privileged Modes

A minimal security model is defined by the hardware itself: the instruction set exposed by the hardware is divided into different security classes, which define who can operate with them.

The first distinction can be made between privileged and non-privileged instructions.

Non-privileged instructions are those instructions that can be used without interfering with other
tasks because they do not access shared resources. This category contains, for example, all the
floating, fixed point, and arithmetic instructions.

Privileged instructions are those that are executed under specific restrictions. They are mostly used for sensitive operations, which expose (behavior-sensitive) or modify (control-sensitive) the privileged state.

For instance, behavior-sensitive instructions are those that operate on the I/O, while control-
sensitive instructions alter the state of the CPU registers.

Some types of architecture feature more than one class of privileged instructions.

A hierarchy of privileges in the form of ring based security is shown in Fig 3.5.

Ring 0, Ring 1, Ring 2, and Ring 3: Ring 0 is the most privileged level and Ring 3 the least privileged. Ring 0 is used by the kernel of the OS, Rings 1 and 2 are used by OS-level services, and Ring 3 is used by user applications. Recent systems support only two levels, with Ring 0 for supervisor mode and Ring 3 for user mode.
Most current systems support at least two different execution modes: supervisor mode and user
modes.

• Ring 0 → Kernel (supervisor mode).


• Ring 3 → User applications (user mode).
• The first mode denotes an execution mode where all the instructions (privileged and non-
privileged) can be executed without any restriction. This mode is also called master mode,
or kernel mode and it is generally used by the operating system (or the hypervisor) to
perform sensitive operations on hardware level resources.
• In user mode, there are restrictions to control the machine level resources.

• If code running in user mode invokes a privileged instruction, a hardware trap occurs. The trap hands control to the operating system kernel, which decides what to do: usually it blocks the operation or provides controlled access.

Hardware-Level Virtualization

Hardware-level virtualization is a virtualization technique that provides an abstract execution environment in terms of computer hardware, on top of which a guest operating system can be run.

In this model, the guest is represented by the operating system, the host by the physical computer hardware, the virtual machine by its emulation, and the virtual machine manager by the hypervisor.

The hypervisor is generally a program, or a combination of software and hardware, that allows
the abstraction of the underlying physical hardware.
Hypervisors

A fundamental element of hardware virtualization is the hypervisor, or Virtual Machine Manager (VMM). It recreates a hardware environment, where guest operating systems are installed. There are two major types of hypervisors:

• Type I hypervisor
• Type II hypervisor

Type I hypervisors run directly on top of the hardware. Therefore, they take the place of the operating system, interact directly with the ISA interface exposed by the underlying hardware, and emulate this interface in order to allow the management of guest operating systems. This type of hypervisor is also called a native virtual machine, since it runs natively on hardware.

Type II hypervisors require the support of an operating system to provide virtualization services. This means that they are programs managed by the operating system, which interact with it through the ABI and emulate the ISA of virtual hardware for guest operating systems. This type of hypervisor is also called a hosted virtual machine, since it is hosted within an operating system.
Fig: 3.7 Hosted and Native virtual machine

A virtual machine manager is internally organized as described in Fig. 3.8. Three main modules
coordinate their activity in order to emulate the underlying hardware: dispatcher, allocator, and
interpreter.

The dispatcher is the entry point for all instructions coming from a VM. It reroutes the instructions issued by the virtual machine instance to one of the two other modules.

The allocator is responsible for deciding the system resources to be provided to the VM:
whenever a virtual machine tries to execute an instruction that results in changing the machine
resources associated with that VM, the allocator is invoked by the dispatcher.

The interpreter module consists of interpreter routines. These are executed whenever a virtual
machine executes a privileged instruction: a trap is triggered and the corresponding routine is
executed.
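
A conceptual toy in Python showing how the three modules cooperate (the instruction names and behaviors are invented for illustration; this is not how a real VMM is implemented):

PRIVILEGED = {"HLT", "OUT", "LOAD_CR3"}

class ToyVMM:
    def __init__(self):
        self.pages_granted = 0

    def dispatcher(self, vm_id, instruction):
        # Entry point: reroutes each instruction issued by the VM instance.
        if instruction == "ALLOC_PAGE":
            return self.allocator(vm_id, instruction)
        if instruction in PRIVILEGED:
            return self.interpreter(vm_id, instruction)  # privileged -> trap
        return f"{instruction}: executed directly"

    def allocator(self, vm_id, instruction):
        # Invoked when an instruction changes the resources bound to the VM.
        self.pages_granted += 1
        return f"{instruction}: page {self.pages_granted} granted to VM {vm_id}"

    def interpreter(self, vm_id, instruction):
        # Interpreter routine run when a privileged instruction traps.
        return f"{instruction}: emulated on behalf of VM {vm_id}"

vmm = ToyVMM()
for ins in ("ADD", "ALLOC_PAGE", "HLT"):
    print(vmm.dispatcher(vm_id=1, instruction=ins))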
Properties and Theorems proposed by Goldberg and Popek

The criteria that need to be met by a virtual machine manager to efficiently support virtualization
were established by Goldberg and Popek in 1974. Three properties have to be satisfied:

Equivalence: a guest running under the control of a virtual machine manager should exhibit the
same behavior as when executed directly on the physical host.

Resource Control : The virtual machine manager should be in complete control of virtualized
resources.

Efficiency. A statistically dominant fraction of the machine instructions should be executed


without intervention from the virtual machine manager.

Popek and Goldberg also proposed three theorems that define the properties hardware instructions need to satisfy in order to efficiently support virtualization (the Popek and Goldberg theorems):

Theorem 1: For any conventional third-generation computer, a VMM may be constructed


if the set of sensitive instructions for that computer is a subset of the set of privileged
instructions.

This theorem establishes that all the instructions that change the configuration of the system resources should trap from user mode and be executed under the control of the VMM. The theorem guarantees the resource control property when the hypervisor runs in Ring 0 (the most privileged mode). All non-privileged instructions should be executed normally without the intervention of the VMM.

Theorem 2: A conventional third-generation computer is recursively virtualizable if :

• it is virtualizable, and
• a VMM without any timing dependencies can be constructed for it.

This theorem talks about Recursive virtualization. Recursive virtualization is the ability of
running a VMM on top of another VMM. This allows “Nesting Hypervisors” as long as the
capacity of the underlying resources can accommodate that. Virtualizable hardware is a
prerequisite to recursive virtualization.

Theorem 3

A hybrid VMM may be constructed for any conventional third generation machine, in
which the set of user sensitive instructions are a subset of the set of privileged instructions.

In a hybrid VMM (HVM), the instructions issued by the guest operating system in virtual supervisor mode are interpreted by the VMM rather than executed directly. This makes a hybrid VMM less efficient than a full VMM, but it can be constructed for a wider class of machines, since only the user-sensitive instructions need to be a subset of the privileged instructions.
Hardware Virtualization Techniques

➢ Hardware-Assisted Virtualization: Normally, virtualization is achieved purely in software (hypervisors). This software emulates the hardware, which comes with a performance overhead. Hardware-assisted virtualization allows the CPU itself to recognise and support virtualization instructions, reducing the overhead and complexity of the hypervisors. The hardware provides architectural support for building a VMM that is able to run a guest operating system in complete isolation.

The CPU adds a special execution mode (VMX root/non-root). This allows the guest to run directly on the CPU without full emulation.

This technique was originally introduced in the IBM System/370. At present, examples of hardware-assisted virtualization are the extensions to the x86-64 architecture introduced with Intel VT-x and AMD-V. After 2006, Intel and AMD introduced these processor extensions, and a wide range of virtualization solutions took advantage of them: Kernel-based Virtual Machine (KVM), VirtualBox, Xen, VMware, Hyper-V, Sun xVM, Parallels, and others. (A short detection sketch follows this list of techniques.)

➢ Full Virtualization: Full virtualization refers to the ability to run a program, most likely
an operating system, directly on top of a virtual machine and without any modification, as
though it were run on the raw hardware. To make this possible, virtual machine managers
are required to provide a complete emulation of the entire underlying hardware. The
principal advantage of full virtualization is complete isolation, which leads to enhanced
security, ease of emulation of different architectures, and coexistence of different systems
on the same platform. Here the guest OS runs unmodified, i.e., the guest OS does not need to know that it is running in a virtualized environment. E.g., you install Windows 10 as a guest OS using VirtualBox on a Linux host (VirtualBox emulates the entire hardware, and Windows does not know it is in a virtualized environment).

➢ Paravirtualization: The guest OS is modified to be aware that it is running in a virtualized environment. Instead of emulating all hardware, the guest OS communicates directly with the VMM (hypervisor) using a special interface called hypercalls. Unlike full virtualization, it requires modifying the guest OS, which then collaborates with the hypervisor for better performance. The guest OS replaces or avoids certain privileged instructions; instead, it uses hypercalls to request services from the hypervisor (e.g., for memory, CPU scheduling, I/O, etc.). This reduces the need for expensive emulation and context switching. E.g., using the Xen hypervisor with a modified Linux as the guest OS: the Linux kernel includes code that uses hypercalls instead of privileged CPU instructions, which lets Xen avoid full hardware emulation and improves speed.

➢ Partial Virtualization: Only part of the hardware environment is emulated. Unlike full virtualization, the entire system is not fully emulated, and the guest OS may need some modification or may have limited functionality. This is achieved by simulating only certain parts of the hardware; other parts are exposed directly to the guest OS. The guest OS may need to be aware of the virtualization or may only be able to access limited resources. E.g., only some hardware components, such as the CPU or memory, are virtualized, while others are either not virtualized or are shared directly with the host system.
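
As referenced under hardware-assisted virtualization above, here is a minimal sketch (Linux is assumed, since it reads /proc/cpuinfo) that checks whether the host CPU advertises the Intel VT-x (vmx) or AMD-V (svm) extensions:

def hw_virtualization_flags(path="/proc/cpuinfo"):
    # Scan the CPU flags exported by the kernel for the virtualization bits.
    with open(path) as f:
        for line in f:
            if line.startswith("flags"):
                flags = set(line.split(":", 1)[1].split())
                return {"intel_vt_x": "vmx" in flags, "amd_v": "svm" in flags}
    return {"intel_vt_x": False, "amd_v": False}

if __name__ == "__main__":
    print(hw_virtualization_flags())
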
4. Operating System Level Virtualization

This is different from hardware virtualization: there is no hypervisor or VMM. The virtualization is done within a single OS, where the OS kernel allows for multiple isolated user-space instances (containers). OS-level virtualization, or containerization, allows multiple isolated user-space instances to run on a single OS kernel, making them appear as independent systems. A user-space instance in general contains its own completely isolated view of the file system, a separate IP address, its own software configuration, and access to devices. Unlike hypervisor-based virtualization, which creates full VMs with separate kernels, OS-level virtualization shares the host kernel, resulting in faster deployment, lower overhead, reduced cost, and higher performance. This technique is an efficient solution for "server consolidation", in which we can aggregate different servers into one physical server, where each server runs in a different space completely isolated from the others. E.g., Docker, FreeBSD Jails, IBM LPARs, etc. (A short Docker example follows the advantages list below.)
Below is a comparison between VMs using hypervisors and OS-level virtualization using Docker.

Advantages:

• Fast – Containers start in seconds, unlike VMs that take minutes.

• Lightweight – Uses less memory and CPU since there’s no extra OS for each container.

• Portable – Run the same container on any machine (laptop, server, cloud).

• Efficient – Many containers can run on one machine without wasting resources.
• Isolated – Each container is separate, so apps don’t interfere with each other.

• Scalable – Easy to add or remove containers when demand changes.
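
A minimal sketch using the Docker SDK for Python (it assumes the Docker engine and the docker package are installed; the image and memory limit are illustrative) that starts an isolated user-space instance sharing the host kernel:

import docker

client = docker.from_env()

# Run a container: an isolated user-space instance on the shared host kernel.
container = client.containers.run(
    "alpine:3.19",
    command="echo hello from a container",
    mem_limit="128m",   # the resources exposed to the instance can be capped
    detach=True,
)

container.wait()                    # wait for the command to finish
print(container.logs().decode())    # read the container's output
container.remove()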

5. Programming Language Level Virtualization

Programming language-level virtualization is mostly used to achieve ease of deployment of applications, managed execution, and portability across different platforms and operating systems. This allows applications written in one language to run independently of the underlying machine or OS. It provides a uniform execution environment for programs across different platforms. Programs compiled into bytecode can be executed on any OS and platform for which a virtual machine able to execute that code is available (e.g., the JVM). Programming language-level virtualization involves a VM that executes bytecode or intermediate code, enabling applications to run on different platforms and operating systems by providing a consistent execution environment.

Steps:
1. Compilation: A program written in a high-level programming language (e.g., Java,
Python) is compiled into a platform-independent intermediate code, or bytecode.

2. Virtual Machine (VM): A virtual machine (e.g., JVM, Microsoft .NET CLR) acts as an
interpreter for this bytecode.
3. Execution:

The VM translates and executes the bytecode, providing a consistent runtime environment
regardless of the underlying physical hardware or operating system.

Examples:
• Java Virtual Machine (JVM): executes Java bytecode, enabling Java applications to run on various operating systems.
• Microsoft .NET Common Language Runtime (CLR): the execution environment for .NET applications, supporting languages like C# and Visual Basic.
• Early systems: technologies like UCSD Pascal and the work on BCPL in 1966 were early implementations of this concept.
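
The document's examples are the JVM and the .NET CLR; the same compile-to-bytecode pattern can be illustrated with CPython's own bytecode VM (a minimal sketch, offered only as an analogy):

import dis

def add(a, b):
    return a + b

# The source is compiled once into platform-independent bytecode...
dis.dis(add)

# ...and that bytecode is executed by the virtual machine (here, the CPython
# interpreter) on any OS where the VM is available.
print(add(2, 3))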

Advantages:

• Portability: Code written once can run anywhere with the language’s virtual machine.
Example: Java runs on Windows, Linux, or Mac if JVM is installed.
• Consistency Across Platforms: Same behaviour regardless of the underlying
hardware/OS.
• Security: Virtual machines act as a sandbox, restricting unsafe operations. Helps
prevent direct access to the host system.
• Isolation: Each program runs in its own VM instance → reduces interference.
• Rich Ecosystem: Mature language runtimes (JVM, CLR) support multiple languages
and frameworks.
• Simplified Development: Developers focus on writing code in the language, without
worrying about hardware or OS details.
• Cross-Language Interoperability (in some cases): Example: the JVM supports Java, Scala, and Kotlin; the CLR supports C#, F#, and VB.NET.

6. Application Level Virtualization

Application-level virtualization is a technique that allows applications to run in environments that don't natively support them. Applications are not installed locally, but appear as if they are. Apps are usually hosted on a central server and accessed remotely.
This technique creates a barrier or layer. This layer separates applications from the OS and:
• Intercepts app requests.
• Redirects them to the central server.
• Sends back the application interface to the user's device.
A thin layer or "virtual barrier" is installed on the user's device, separating the apps from the OS. The app itself is hosted and executed on a central server.

The users’ device acts as a display terminal, receiving only the app’s user interface and sending
input back to the server.

➢ Interpretation: Every source instruction is interpreted.

➢ Binary Translation: Every source instruction is converted into native instructions. After a block of instructions is translated, it is cached and reused.
Key benefits include centralized management, enhanced security, simplified deployment, and
the ability to run applications on different operating systems.
Example:
• Microsoft App-V: runs Windows applications without installing them locally on the user's machine.
• WINE: Allows Windows applications to run on Linux.
3.3.2 Other Types Of Virtualization

Other than execution virtualization, there exist other types of virtualization.

➢ Storage Virtualization
➢ Network Virtualization
➢ Desktop Virtualization
➢ Application server Virtualization

Storage Virtualization: Storage virtualization is a data management technique that works by pooling multiple physical storage devices (like hard drives, SSDs, or SANs) into a single virtual storage resource that looks like one unified storage system, i.e., consolidating all physical storage behind a single frontend for simplicity. Applications and users don't see the complexity of the underlying hardware; they just see a single, simplified storage space. Using this technique, users do not have to worry about the specific location of their data, which can be identified using a logical path. The virtualization software creates a map to dynamically locate data on the fly. This single pool of storage can be used by everyone.

There are different techniques for storage virtualization, one of the most popular being
network-based virtualization by means of storage area networks (SANs). SANs use a
network-accessible device through a large bandwidth connection to provide storage facilities.
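
A conceptual toy (not a real storage virtualization product; the device names are invented) of the logical-path-to-physical-location mapping described above:

class VirtualStoragePool:
    """Toy: presents one namespace while spreading data over several devices."""

    def __init__(self, devices):
        self.devices = devices
        self.mapping = {}          # logical path -> (device, data)
        self._next = 0

    def write(self, logical_path, data):
        # Place data on whichever backing device comes next (round-robin here).
        device = self.devices[self._next % len(self.devices)]
        self._next += 1
        self.mapping[logical_path] = (device, data)

    def read(self, logical_path):
        device, data = self.mapping[logical_path]
        return device, data

pool = VirtualStoragePool(["san-array-1", "ssd-shelf-2"])
pool.write("/projects/report.txt", b"quarterly numbers")
print(pool.read("/projects/report.txt"))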

Network Virtualization: Network virtualization is the process of logically grouping physical networks and making them appear as single or multiple independent networks called Virtual Networks.

It is a technology that abstracts physical network resources like routers, switches, firewalls, and cables into logical, software-based components. It allows multiple virtual networks to run independently on the same underlying physical infrastructure. It is like turning a real physical network (routers, cables, switches) into a software-based network that you can create, change, or remove quickly without touching the hardware.

Network virtualization can aggregate different physical networks into a single logical network
(external network virtualization), or provide network like functionality to an operating system
partition (internal network virtualization).

Examples:

A common example of external network virtualization is the Virtual LAN (VLAN). A VLAN is an aggregation of hosts that communicate with each other as if they were located under the same broadcasting domain.

Internal network virtualization: VMware virtual switches, Linux bridges for containers.
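
A minimal sketch of both flavors driven from Python (Linux with iproute2 and root privileges are assumed; the interface name eth0 and VLAN id 100 are illustrative):

import subprocess

def run(cmd):
    # Print and execute one iproute2 command, stopping on failure.
    print("+", " ".join(cmd))
    subprocess.run(cmd, check=True)

# Internal network virtualization: a software bridge for VMs/containers.
run(["ip", "link", "add", "name", "br0", "type", "bridge"])
run(["ip", "link", "set", "br0", "up"])

# External network virtualization: an 802.1Q VLAN on top of a physical NIC.
run(["ip", "link", "add", "link", "eth0", "name", "eth0.100", "type", "vlan", "id", "100"])
run(["ip", "link", "set", "eth0.100", "up"])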

Desktop Virtualization: Desktop virtualization is a type of virtualization technology where a user's desktop environment (operating system, applications, and settings) is separated from the physical device and delivered remotely over a network. Instead of running directly on a
local computer, the desktop runs on a centralized server (or cloud), and the user can access it
from different devices like PCs, thin clients, tablets, or even smartphones. Instead of being tied
to one physical computer, your desktop lives in the cloud or datacentre (you just connect it over
the internet). A specific desktop environment is stored in a virtual machine image that is loaded
and started on demand when a client connects to the desktop environment. This is a typical
cloud computing scenario in which the user leverages the virtual infrastructure for performing
the daily tasks on his computer. The advantages of desktop virtualization are high availability,
persistence, accessibility, and ease of management.

Application Server Virtualization: Application server virtualization is a type of virtualization where applications are separated from the underlying hardware and operating system of the server. Instead of installing applications directly on physical servers, they run
inside virtual machines (VMs) or containers on a virtualized server environment. This allows
multiple applications (and sometimes different versions of the same application) to run securely
and independently on the same physical server. Here, one physical server can host multiple
virtual servers. This is a particular form of virtualization and serves the same purpose of storage
virtualization: providing a better quality of service rather than emulating a different
environment.

3.4 Virtualization and Cloud Computing

Virtualization is the foundation of cloud computing. Cloud providers use virtualization to split
their massive datacentres into many virtual servers. Virtualization technologies are primarily
used to offer configurable computing environments and storage. Hardware and programming
language virtualization are the most popular techniques adopted in cloud computing systems.
Hardware virtualization is an enabling factor for solutions in the Infrastructure-as-a-Service
(IaaS) market segment, while programming language virtualization is a technology leveraged
in Platform-as-a-Service (PaaS) offerings. Virtualization also allows isolation and a finer
control, thus simplifying the leasing of services and their accountability on the vendor side.
We need to mention server consolidation and virtual machine migration in here to understand
better, the utilization of virtualization in cloud computing.

Server Consolidation: the process of reducing the number of physical servers in use by running multiple VMs on fewer, more powerful servers through virtualization.

Virtual Machine Migration: Virtual Machine Migration (VM Migration) is the process of
moving a running or powered-off virtual machine (VM) from one physical host/server to
another without affecting the VM’s availability or performance (in case of live migration). It’s
a key feature of virtualization and cloud computing because it helps in load balancing,
maintenance, and disaster recovery.
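
A hedged sketch of live migration using the libvirt Python bindings (the host URIs and the guest name "guest1" are assumptions, not part of the original text):

import libvirt

src = libvirt.open("qemu:///system")
dom = src.lookupByName("guest1")

# Move the running guest to another physical host without shutting it down.
dom.migrateToURI(
    "qemu+ssh://destination-host/system",  # hypothetical destination URI
    libvirt.VIR_MIGRATE_LIVE | libvirt.VIR_MIGRATE_PEER2PEER,
    None,   # keep the same domain name on the destination
    0,      # no bandwidth cap
)

src.close()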

Live migration and server consolidation

Server consolidation and virtual machine migration are principally used in the case of hardware
virtualization, even though they are also technically possible in the case of programming
language virtualization. Storage virtualization constitutes an interesting opportunity where
vendors backed by large computing infrastructures featuring huge storage facilities can harness
these facilities into a virtual storage service, easily partitionable into slices. These slices can be
dynamic and offered as a service. Finally, cloud computing revamps the concept of desktop virtualization, initially introduced in the mainframe era, by providing the ability to recreate the entire computing stack—from infrastructure to application services—on demand.

3.5 Pros and Cons Of Virtualization

Today, the widespread diffusion of Internet connectivity and advancements in computing technology have made virtualization an interesting opportunity to deliver on-demand IT infrastructure and services. Despite its renewed popularity, this technology has both benefits and drawbacks.

Advantages

• Managed Execution and Isolation

• Virtual environments provide security and control.


• A virtual execution environment can act as a sandbox, preventing harmful operations
from affecting the host.
• Resource allocation and partitioning among guests are simplified.
• Fine-tuning of resources supports server consolidation and quality of service
requirements.
• Portability

• Virtual machine (VM) instances are usually represented as files, making them easy to
move across systems.
• VMs are self-contained, requiring only the virtual machine manager (VMM).
• Administration is simplified due to portability and self-containment.
• Example: Java programs run anywhere with a JVM; hardware-level virtualization
provides similar portability.
• Enables building and carrying personalized operating environments (like having your
own laptop virtually).

• Reduced Maintenance Costs

• Fewer physical hosts compared to VM instances.


• Limited risk of guest programs damaging underlying hardware.
• Usually fewer VMMs than VM instances, simplifying management.

• Efficient Resource Utilization

• Multiple systems can securely share host resources without interference.


• Supports server consolidation, reducing the need for active physical resources.
• Resources can be dynamically adjusted based on system load.
• Leads to energy savings and lower environmental impact.

Disadvantages

1. Performance Degradation

• Main disadvantage is due to the abstraction layer between guest and host. This leads to
increased latencies and slower execution.
• Causes of performance issues in hardware virtualization:
o Maintaining virtual processor status.
o Handling privileged instructions (trap and simulate).
o Managing paging within VM.
• If the VMM runs on top of the host OS, it competes for resources with other
applications, causing further slowdown.
• In programming-level virtualization (Java, .NET):
o Binary translation and interpretation slow execution.
o Access to memory and physical resources filtered through the runtime, adding
delays.
• Mitigation:
o Advances like paravirtualization improve performance by offloading
execution to the host.
o Just-in-time (JIT) compilation in JVM and .NET reduces slowdowns by
converting code to native machine code.
2. Inefficiency and Degraded User Experience

• Virtualization may cause inefficient use of host resources.


• Some host features may be hidden by the abstraction layer.
• Examples:
o In hardware virtualization: Default drivers (e.g., graphics) expose only limited
features of the real hardware.
o In programming-level virtualization: Some OS features are inaccessible
without special libraries.
3. Security Holes and New Threats

• Virtualization enables new forms of malware and phishing.


• Attack vectors:
o Hardware virtualization:
▪ Malicious programs (like BluePill and SubVirt) act as thin VMMs,
loading before the OS.
▪ They control the OS, extracting sensitive data.
▪ Spread easily since early CPUs were not designed for virtualization.
▪ Countermeasure: Hardware support like Intel VT and AMD Pacifica.
o Programming-level virtualization:
▪ Modified runtime environments can spy on memory or extract
sensitive data.
▪ Requires replacing the legitimate runtime → often possible if malware
has admin privileges or exploits OS vulnerabilities.
3.6 TECHNOLOGY EXAMPLES

There is a wide range of virtualization technologies available, especially for virtualizing computing environments.

3.6.1 Xen: Paravirtualization


• Xen is an open-source virtualization platform.
• It was developed by researchers at the University of Cambridge.
• It was later commercialized by XenSource, which was subsequently acquired by Citrix.
• Xen is widely used in desktop virtualization, server virtualization.
• It has also been used to provide Cloud computing solutions by means of Xen Cloud
Platform (XCP).

• It is now maintained by the Xen Project, hosted by the Linux Foundation. Xen also supports full virtualization using hardware-assisted virtualization. Xen is the most popular implementation of paravirtualization, in which portions of the guest operating systems are modified.
• Below is the architecture of Xen and its mappings onto a x86 model.
• The Xen hypervisor is the core component of the Xen virtualization platform.
• It runs directly on the hardware (bare-metal) — similar to other Type-1 hypervisors
like VMware ESXi.
• It manages how guest operating systems access the CPU, memory, and I/O devices.

Many x86 implementations support four different security levels, called rings, where Ring 0 represents the level with the highest privileges and Ring 3 the level with the lowest ones. In x86 processors, the different "rings" represent levels of privilege:

• Ring 0 → Highest privilege (full hardware access) → Kernel / Hypervisor
• Ring 1 & 2 → Medium privilege → Device drivers
• Ring 3 → Lowest privilege → User applications

In Xen:
• Xen Hypervisor runs in Ring 0 (highest privilege).
• Guest OS kernels run in Ring 1 (since Ring 0 is taken by Xen).
• Applications in guest OSes run in Ring 3, as usual
• Guest operating systems are executed within domains, which represent virtual machine instances. Moreover, specific control software, which has privileged access to the host and controls all the other guest operating systems, is executed in a special domain called Domain 0. This is the first domain loaded once the virtual machine manager has completely booted, and it hosts a HyperText Transfer Protocol (HTTP) server that serves requests for virtual machine creation, configuration, and termination.
On x86, system calls normally transfer control from Ring 3 (user mode) into Ring 0 (kernel mode). Since the guest OS kernel now runs in Ring 1 rather than Ring 0, its attempts to execute privileged operations result in a trap or a silent fault, preventing the normal operation of the guest operating system. To fix this, Xen uses hypercalls: special calls that the guest OS uses to "politely ask" the hypervisor to perform sensitive tasks (like memory access or device control). The guest OS must be modified to use hypercalls instead of direct hardware instructions.
Xen works well with Linux but not with Windows, because paravirtualization requires the operating system codebase to be modified; hence not all operating systems can be used as guests in a Xen-based environment. Linux can be easily modified, since its code is publicly available, and Xen provides full support for its virtualization, whereas components of the Windows family are generally not supported by Xen unless hardware-assisted virtualization is available.

3.6.2 VMware Full Virtualization:

VMware’s technology is based on the concept of full virtualization, where the underlying
hardware is replicated and made available to the guest operating system, which runs unaware of
such abstraction layers and does not need to be modified. VMware implements full virtualization
either in the desktop environment, by means of Type II hypervisors, or in the server environment,
by means of Type I hypervisors. In both cases, full virtualization is made possible by means of,
➢ Direct Execution (for non-sensitive instructions)
➢ Binary Translation (for sensitive instructions).

Besides these two core solutions, VMware provides additional tools and software that simplify
the use of virtualization technology either in a desktop environment, or in a server
environment.
Full Virtualization and Binary Translation
VMware is well known for its capability to virtualize x86 architectures, which run unmodified on top of its hypervisors. With the new generation of hardware architectures and the introduction of hardware-assisted virtualization (Intel VT-x and AMD-V) in 2006, full virtualization is made possible with hardware support; before that date, dynamic binary translation was the only solution that allowed running x86 guest operating systems unmodified in a virtualized environment.

All the privileged instructions need to be executed in Ring 0, while the guest OS runs in Ring 1. In the case of binary translation, the sensitive/privileged instructions are translated into an equivalent set of instructions that achieves the same goal without generating exceptions. These translated instructions are cached so that translation is not repeated on future occurrences of the
same instructions. This approach has both advantages and disadvantages. The major advantage
is that guests can run unmodified in a virtualized environment, which is a crucial feature for
operating systems for which source code is not available. Disadvantage is that translating
instructions at runtime introduces an additional overhead that is not present in other approaches.
Even though such disadvantage exists, binary translation is applied to only a subset of the
instruction set, whereas the others are managed through direct execution on the underlying
hardware. This somehow reduces the impact on performance of binary translation.
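
A conceptual toy (the "instructions" are invented strings, not real x86 opcodes) illustrating the translate-once, cache-and-reuse idea described above:

SAFE_EQUIVALENTS = {
    "READ_CR0": ["LOAD_SHADOW_CR0"],        # sensitive -> safe replacement
    "OUT_PORT": ["CALL_VMM_IO_HANDLER"],
}
translation_cache = {}

def execute(instruction):
    if instruction not in SAFE_EQUIVALENTS:
        return [instruction]                  # non-sensitive: direct execution
    if instruction not in translation_cache:  # sensitive: translate once...
        translation_cache[instruction] = SAFE_EQUIVALENTS[instruction]
    return translation_cache[instruction]     # ...then reuse the cached result

for ins in ("ADD", "READ_CR0", "READ_CR0"):
    print(ins, "->", execute(ins))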

Memory virtualization constitutes another challenge of virtualized environments, which can deeply impact performance without the appropriate hardware support. The main reason for this is the presence of a Memory Management Unit (MMU), which needs to be emulated as part of the virtual hardware.

Every operating system manages memory using a component called the Memory Management Unit (MMU), which converts virtual addresses into physical addresses. Especially in the case of hosted hypervisors (Type II), where the virtual MMU and the host-OS MMU are traversed sequentially before getting to the physical memory page, the impact on performance can be significant. The guest OS thinks it has full control of the MMU, as if it were running on the real hardware. However, the physical addresses it produces are not the machine's actual physical memory addresses; they are guest physical addresses. VMware also provides full virtualization of I/O devices such as network controllers and other peripherals such as keyboards, mice, disks, and universal serial bus (USB) controllers.
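
A conceptual toy of the double lookup described above; all addresses are invented:

# guest virtual -> guest physical (maintained by the guest OS's page tables)
guest_page_table = {0x1000: 0x4000}
# guest physical -> machine physical (maintained by the hypervisor)
hypervisor_map = {0x4000: 0x9000}

def translate(guest_virtual):
    guest_physical = guest_page_table[guest_virtual]   # first MMU traversal
    machine_address = hypervisor_map[guest_physical]   # second traversal
    return machine_address

print(hex(translate(0x1000)))   # every guest memory access pays for both lookups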

Virtualization Solutions

(a) End-user (desktop) virtualization:

VMware supports virtualization of operating system environments and single applications on end-user computers. The first option is the most popular and allows installing a different operating system and applications in an environment completely isolated from the hosting operating system. Specific VMware software

➢ VMware Workstation, for Windows operating systems, and
➢ VMware Fusion, for Mac OS X environments
—is installed in the host operating system to create virtual machines and manage their execution.
VMware Workstation Architecture

Here, we need to install a specific driver (VMware Driver) in the host operating system that
provides two main services:
• Creates a Virtual Machine Manager (VMM) to manage the virtual machine.
• Helps process special commands (like using USBs or saving files)

This setup is called the Hosted Virtual Machine Architecture, because the virtual machine runs "hosted" on your regular operating system. Normal instructions (like math or logic) run directly on the hardware. More complex operations (like using a USB device or printing) go through VMware's layer, i.e., the intervention of the VMware application is required only for instructions, such as device I/O, that require binary translation. Virtual machine images are saved as a collection of files on the host file system, i.e., each VM is saved as a group of files on your computer. You can pause and resume the VM, take snapshots (save the current state), and undo changes by going back to an earlier state.

Other Solutions:

• VMware Player: A simpler version of VMware Workstation. It lets you run virtual
machines on Windows or Linux.
• VMware ACE: Lets companies create secure, policy-controlled VMs for employees.
• VMware ThinApp: a tool that lets you run applications without installing them on your computer. Instead of installing an application as you usually would, ThinApp packages the application into a file you can just run; no installation is needed. It is useful for running old apps that may not work on newer systems, or for using apps across different computers without installation problems. You do not need admin rights to run virtualized applications.

(b) Server Virtualization

Server virtualization means running many virtual servers on one physical machine. Initial support for server virtualization was provided by the VMware GSX Server (Type II), as shown in the figure below. The architecture is mostly designed to serve the virtualization of Web servers.

VMware GSX server architecture

A daemon process, called “serverd”, controls and manages VMware application processes.
These applications are then connected to the virtual machine instances by means of the
VMware driver installed on the host operating system. Virtual machine instances are managed
by the VMM. User requests for virtual machine management and provisioning are routed from
the Web server through the VMM by means of ‘serverd’.
The hypervisor based approaches (Type I) to achieve server virtualization are,

➢ VMware ESX Server


➢ VMWare ESXi Server (Enhanced version of ESX)

Both can be installed on bare metal servers and provide services for virtual machine
management. The two solutions provide the same services but differ in their internal architecture, more specifically in the organization of the hypervisor kernel. VMware ESX embeds a modified version of a Linux operating system, which provides access to the hypervisor through a service console. VMware ESXi implements a very thin OS layer and replaces the service console with management tools, as shown in the figure below.

VMware ESXi server architecture

The base of the infrastructure is the VMkernel, which is a thin Portable Operating System
Interface (POSIX) compliant operating system that provides the minimal functionality for
processes and thread management, file system, I/O stacks, and resource scheduling. The kernel
is accessible through specific APIs called User world API. Remote management of an ESXi
server is provided by the CIM Broker. The ESXi installation can also be managed locally by
a Direct Client User Interface (DCUI), which provides a BIOS-like interface for the
management of local users.

(c ) Infrastructure virtualization and cloud computing solutions

VMware provides a set of products covering the entire stack of cloud computing, from
infrastructure management to Software-as-a-Service solutions hosted in the cloud. Below
diagram (VMware cloud solution stack) gives an overview of the different solutions offered
and how they relate to each other.
VMware offers many products that cover the full cloud computing stack—from managing
infrastructure to providing Software-as-a-Service (SaaS).

• ESX and ESXi: The base of VMware’s virtualization. They let multiple servers work
together as one system, managed by vSphere i.e for base virtualization.
• vSphere: Provides core virtualization services like virtual storage, virtual networks,
and virtual file systems. It also supports features such as virtual machine migration,
storage migration, data recovery, and security zones i.e virtualization platform +
services.
• vCenter: The management tool that centrally controls and administers vSphere in a
data center i.e. console/webportal for management.
• vCloud: Converts data centers into cloud services (Infrastructure-as-a-Service). It
lets users create and manage virtual machines on demand through a web portal.
• vFabric: Helps developers build scalable web applications on virtual infrastructure. It
includes tools for monitoring, data management, and running Java applications(app
development tools).
• Zimbra: A cloud-based SaaS solution for email, messaging, and office collaboration.
3.6.3 Microsoft Hyper-V

Hyper-V is an infrastructure virtualization solution developed by Microsoft for server virtualization. As the name suggests, it uses a hypervisor-based approach to hardware virtualization, which leverages several techniques to support a variety of guest operating systems. Hyper-V is currently shipped as a component of Windows Server 2008 R2 that installs the hypervisor as a role within the server.

1. Architecture

Hyper-V supports the concurrent execution of multiple guest operating systems by means
of partitions. A partition is a completely isolated environment in which an operating system is
installed and run.

Figure 3.17 provides an overview of the architecture of Hyper-V. Hyper-V uses the concept of
partitions to run multiple operating systems at the same time.

When Hyper-V is installed, it takes control of the computer hardware.

The original host operating system doesn’t directly control the hardware anymore. Instead, it
becomes the parent partition (also called the root partition).

This parent partition has special privileges:

• Direct access to hardware


• Runs the virtualization stack
• Hosts all the drivers needed for guest operating systems
• Creates child partitions through the hypervisor

Child partitions are used to host guest operating systems. They do not have direct access to
the underlying hardware; their interaction with it is mediated by either the parent partition
or the hypervisor itself.
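As a concrete, heavily simplified illustration of the parent partition's role in creating child partitions, the sketch below drives the Hyper-V PowerShell cmdlets from Python. The cmdlet module ships with Windows Server releases newer than the 2008 R2 version described here, and the VM name and memory size are placeholders.

```python
# Conceptual sketch: an administrator in the parent partition asks the
# virtualization stack to create and start a child partition (guest VM)
# by driving Hyper-V PowerShell cmdlets from Python.
import subprocess

def powershell(command: str) -> str:
    """Run a PowerShell command on the parent partition and return its output."""
    result = subprocess.run(
        ["powershell", "-NoProfile", "-Command", command],
        capture_output=True, text=True, check=True)
    return result.stdout

# Create a new child partition (guest VM) and boot it; names/sizes are placeholders.
powershell('New-VM -Name "demo-guest" -MemoryStartupBytes 2GB')
powershell('Start-VM -Name "demo-guest"')
print(powershell('Get-VM -Name "demo-guest" | Format-List Name, State'))
```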
(a) Hypervisor

The hypervisor is the component that directly manages the underlying hardware (processors and
memory). It is logically defined by the following components:

Hypercalls Interface. This is the entry point for all partitions to execute sensitive
instructions. It is an implementation of the paravirtualization approach already discussed with
Xen. This interface is used by drivers in the partitioned operating system to contact the hypervisor
using the standard Windows calling convention. The parent partition also uses this interface
to create child partitions.

Memory Service Routines (MSRs). These are the set of functionalities that control memory
and its access from partitions. By leveraging hardware-assisted virtualization, the hypervisor uses
the Input Output Memory Management Unit (I/O MMU or IOMMU) to fast-track access to
devices from partitions by translating virtual memory addresses.

Advanced Programmable Interrupt Controller (APIC). This component represents the
interrupt controller, which manages the signals coming from the underlying hardware when
some event occurs (timer expired, I/O ready, exceptions and traps). Each virtual processor is
equipped with a Synthetic Interrupt Controller (SynIC), which constitutes an extension of the
local APIC. The hypervisor is responsible for dispatching, when appropriate, the physical
interrupts to the synthetic interrupt controllers.

Scheduler. This component schedules the virtual processors to run on the available physical
processors. The scheduling is controlled by policies that are set by the parent partition.

Address Manager. This component is used to manage the virtual network addresses that are
allocated to each guest operating system.

Partition Manager. This component is in charge of performing partition creation, finalization,
destruction, enumeration, and configuration. Its services are available through the hypercalls
interface API.

(b) Enlightened I/O and Synthetic Devices.

Enlightened I/O is a faster and more efficient way for guest operating systems (OS) to
perform I/O operations.

Instead of going through the full hardware emulation layer (which is slower), the guest OS
communicates directly with the parent partition using a special channel called VMBus.

The Enlightened I/O architecture has three main parts: VMBus, Virtual Service Providers
(VSPs), and Virtual Service Clients (VSCs). VMBus is the communication channel between
partitions. VSPs, in the parent partition, are drivers that access the actual hardware. VSCs, in the
child partitions, are virtual (synthetic) drivers that the guest OS uses to communicate with VSPs.
This setup lets guest OSs efficiently perform I/O for storage, networking, graphics, and input,
and improves performance for child-to-child communication via virtual networks.
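To make the VSP/VSC split more tangible, here is a purely conceptual Python sketch: it models the idea of a shared channel between partitions and is in no way the real VMBus protocol or API.

```python
# Purely conceptual sketch of the VSP/VSC split -- not the real VMBus protocol.
# A synthetic client (VSC) in a child partition forwards I/O requests over a
# shared channel to a provider (VSP) in the parent partition, which is the only
# component allowed to touch the physical device driver.
from queue import Queue

class VMBusChannel:
    """Stand-in for the inter-partition ring buffer used by VMBus."""
    def __init__(self):
        self.requests, self.responses = Queue(), Queue()

class VirtualServiceProvider:
    """Runs in the parent partition; owns the real device access."""
    def __init__(self, channel, physical_write):
        self.channel, self.physical_write = channel, physical_write

    def pump_once(self):
        block, data = self.channel.requests.get()
        self.physical_write(block, data)          # real driver call in the parent
        self.channel.responses.put(("ok", block))

class VirtualServiceClient:
    """Synthetic driver inside a child partition; never touches hardware."""
    def __init__(self, channel):
        self.channel = channel

    def write(self, block, data):
        self.channel.requests.put((block, data))

# Usage: the guest writes a block, the parent services it.
chan = VMBusChannel()
vsp = VirtualServiceProvider(chan, lambda b, d: print(f"disk write: block {b}"))
vsc = VirtualServiceClient(chan)
vsc.write(7, b"hello")
vsp.pump_once()
print(chan.responses.get())
```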

(c) Parent Partition.

The parent partition runs the host operating system and manages the virtualization stack that
helps the hypervisor run guest OSs. It always hosts a Windows Server 2008 R2 instance, which
handles the virtualization services for the child partitions. The parent partition is the only one
that can directly access hardware drivers and provides access to child partitions through
Virtual Service Providers (VSPs).
It also manages the creation, running, and deletion of child partitions using the Virtualization
Infrastructure Driver (VID), which controls the hypervisor, virtual processors, and memory.
For each child partition, a Virtual Machine Worker Process (VMWP) runs in the parent partition
to manage it via the VID. Additionally, virtualization management services can be accessed
remotely through WMI, as sketched below.
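As a small illustration of this remote WMI path, the sketch below uses the third-party Python wmi package. The host name and credentials are placeholders, and the root\virtualization\v2 namespace applies to newer Hyper-V releases (the 2008 R2 generation described here exposed root\virtualization instead).

```python
# Minimal sketch: querying Hyper-V's management services remotely through WMI
# with the third-party "wmi" package. Host and credentials are placeholders.
import wmi

conn = wmi.WMI(computer="hyperv-host.example.com",
               user="DOMAIN\\admin",
               password="secret",
               namespace=r"root\virtualization\v2")

# Msvm_ComputerSystem covers both the host and every child partition it runs.
for system in conn.Msvm_ComputerSystem():
    print(system.ElementName, system.EnabledState)
```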

(d) Child Partitions. Child partitions are used to execute guest operating systems. These
are isolated environments, which allow a secure and controlled execution of guests. There are
two types of child partitions, depending on whether the guest operating system is supported
by Hyper-V or not: these are called Enlightened and Unenlightened partitions, respectively. The
first can benefit from Enlightened I/O, while the latter are executed by leveraging
hardware emulation from the hypervisor.

2. Cloud Computing and Infrastructure Management

Hyper-V constitutes the basic building block of Microsoft's virtualization infrastructure. Other
components contribute to creating a full-featured platform for server virtualization.

Windows Server Core is a lightweight version of Windows Server 2008 designed to improve
performance in virtualized environments. It has a smaller footprint because it removes features
that are not strictly required on a server, such as the graphical user interface, the .NET
Framework, and built-in tools like PowerShell. The advantages are less maintenance, a smaller
attack surface, easier management, and reduced disk usage. The disadvantage is that some
features are missing locally, but administrators can still use remote management tools such as
PowerShell and WMI from a full Windows installation to manage the server, as sketched below.
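As an illustration of such remote management, the sketch below drives PowerShell remoting from Python on a full Windows installation. The Server Core host name is a placeholder, and WinRM remoting is assumed to be enabled on it.

```python
# Minimal sketch: managing a Server Core host from a full Windows installation
# by invoking PowerShell remoting (Invoke-Command) from Python.
import subprocess

command = ('Invoke-Command -ComputerName core-host.example.com '
           '-ScriptBlock { Get-VM | Select-Object Name, State }')
result = subprocess.run(["powershell", "-NoProfile", "-Command", command],
                        capture_output=True, text=True, check=True)
print(result.stdout)
```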

System Center Virtual Machine Manager (SCVMM) 2008 is a tool from Microsoft’s System
Center suite that provides advanced management for virtual machines. It enhances Hyper-V by
offering features like:

• A management portal to create and manage virtual machines


• V2V (Virtual-to-Virtual) and P2V (Physical-to-Virtual) conversions
• Delegated administration for controlled access
• A library for templates and deep PowerShell integration
• Intelligent placement of virtual machines in the environment
• Host capacity management to optimize resources
Observations on Hyper-V:

Hyper-V is a hybrid virtualization solution, combining paravirtualization and full hardware
virtualization. Its hypervisor architecture relies on paravirtualization, allowing guest OSs to use
hypercalls for services and VMBus for fast I/O. The parent partition in Hyper-V plays a role
similar to Domain 0 in Xen, and child partitions are analogous to Domains U. The main difference
is that Xen is installed directly on bare hardware, while Hyper-V runs as a role on Windows
Server, similar to VMware's hosted solutions in how it interacts with partitions.

Advantages: A flexible platform that supports many guest operating systems.

Disadvantages: It requires a 64-bit processor with hardware-assisted virtualization and Data
Execution Prevention (DEP), and it needs Windows Server 2008 R2 to run, unlike Xen or
VMware ESX/ESXi, which can be installed directly on bare hardware.
