Cloud Computing Notes Module2
• The continuous need for additional capacity, whether this is storage or compute power,
makes datacentres grow quickly. Companies like Google and Microsoft expand their
infrastructure by building datacentres, as large as football fields, that are able to host
thousands of nodes.
• But most enterprises cannot afford to keep building new datacentres whenever they need
more capacity.
• At the same time, much of their existing hardware is underutilized (not fully used).
• To solve both problems (lack of space + underutilization), a method called server
consolidation is used.
• Server consolidation = combining multiple workloads onto fewer physical machines
using virtualization.
• Companies want to reduce energy consumption and lower their carbon footprint.
• Data centers are one of the major power consumers and contribute significantly to the
impact that a company has on the environment.
• Keeping a datacenter operational does not only involve keeping servers on; a lot of energy
is also consumed to keep them cool. Cooling infrastructure has a significant impact on the
carbon footprint of a data center. Hence, reducing the number of servers through server
consolidation helps. Virtualization technologies provide an efficient way of consolidating
servers.
Computers, in particular servers, do not operate all on their own, but they require care and feeding
from system administrators. Common system administration tasks include: hardware monitoring;
defective hardware replacement; server setup and updates; server resources monitoring; and
backups. These are labor-intensive operations, and the higher the number of servers that have to
be managed, the higher the administrative costs. Virtualization can help in reducing the number
of required servers for a given workload, thus reducing the cost of the administrative personnel.
These can be considered the major causes for the diffusion of hardware virtualization solutions
and, together with them, the other kinds of virtualization.
Host
The host represents the original environment where the guest is supposed to be managed (the
actual physical system or environment).
Virtualization layer
The virtualization layer is responsible for recreating the same or a different environment where
the guest will operate.
In hardware virtualization, the guest (an operating system together with its applications) is
installed on top of virtual hardware that is controlled and managed by the virtualization layer,
also called the virtual machine manager.
The host is represented by the physical hardware, and in some cases the operating system, that
defines the environment where the virtual machine manager is running.
In the case of virtual storage, the guest might be client applications or users that interact with the
virtual storage management software deployed on top of the real storage system.
The case of virtual networking is also similar: the guest—applications and users—interact with
a virtual network, such as a Virtual Private Network (VPN), which is managed by specific
software (VPN client) using the physical network available on the node. VPNs are useful for
creating the illusion of being within a different physical network and thus accessing the resources
in it, which would otherwise not be available.
The technologies of today allow a profitable use of virtualization, and make it possible to fully
exploit the advantages that come with it. Such advantages have always been characteristics of
virtualized solutions.
1. Increased Security
The virtual machine represents an emulated environment in which the guest is executed. All the
operations of the guest are generally performed against the virtual machine, which then translates
and applies them to the host. This level of indirection allows the virtual machine manager to
control and filter the activity of the guest, thus preventing some harmful operations from being
performed.
Resources exposed by the host can then be hidden or simply protected from the guest. Moreover,
sensitive information that is contained in the host can be naturally hidden without the need of
installing complex security policies
For example, applets downloaded from the Internet run in a sandboxed version of the Java Virtual
Machine (JVM), which provides them with limited access to the hosting operating system
resources. Both the JVM and the .NET runtime provide extensive security policies for
customizing the execution environment of applications.
Tools like VMware Desktop, VirtualBox, Parallels let users create a complete virtual
computer. You can install a separate operating system inside it. If malware infects the virtual
machine, it is contained inside the VM and does not affect the host OS.
2. Managed Execution
(a)Sharing:
Virtualization allows the creation of a separate computing environment within the same host. In
this way, it is possible to fully exploit the capabilities of a powerful host, which would be
otherwise underutilized. Sharing is a particularly important feature in virtualized datacenters,
which helps reduce the number of active servers and limit power consumption.
(b) Aggregation.
It is not only possible to share the physical resource among several guests, but virtualization
also allows the aggregation, which is the opposite process. A group of separate hosts can be
tied together and represented to guests as a single virtual host. Example: Cluster management
software takes a group of servers and makes them appear as one unified resource.
(c) Emulation.
Guests run inside a controlled environment managed by the virtualization layer (which is
itself a program).
This allows for controlling and tuning the environment that is exposed to guests. Virtualization
can emulate a completely different environment compared to the host system. This allows
execution of guest systems that need special characteristics (hardware/OS) not available on the
physical host.
Example:
This feature becomes very useful for testing purposes where a specific guest has to be validated
against different platforms or architectures.
A virtual machine can use a virtual SCSI device for file I/O, even if the host computer does not
have a physical SCSI controller installed.
Old or legacy software can run on virtual/emulated hardware without modification. Example:
MS-DOS mode in Windows 95/98.
(d) Isolation.
Virtualization allows providing guests (operating systems, applications, or other entities) with a
completely separate environment in which they are executed; the guest interacts with the
virtualization layer rather than with the host.
Benefits of Isolation:
• Multiple guests can run on the same host without interfering with each other.
• The performance of each guest can be controlled by finely tuning the resources allocated to it.
For instance, software implementing hardware virtualization can expose to a guest operating
system only a fraction of the memory of the host machine, or set a maximum frequency for the
processor of the virtual machine.
Managed execution in virtualization allows capturing the state of a guest VM, saving it, and
resuming later.
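A minimal sketch of this save-and-resume idea (purely illustrative; the vm_state contents and the
function names save_snapshot/resume_snapshot are invented for the example, not part of any real
hypervisor API):

    import json

    # Hypothetical, highly simplified view of a guest's state: a real hypervisor
    # would capture CPU registers, full RAM contents, and device state.
    def save_snapshot(vm_state, path):
        with open(path, "w") as f:
            json.dump(vm_state, f)       # persist the captured state

    def resume_snapshot(path):
        with open(path) as f:
            return json.load(f)          # reload it later, possibly on another host

    vm_state = {"vcpu_registers": {"pc": 4096, "sp": 65536}, "memory_pages": {"0": "..."}}
    save_snapshot(vm_state, "guest.snap")
    restored = resume_snapshot("guest.snap")
    assert restored == vm_state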
3. Portability
The concept of portability applies in different ways, according to the specific type of
virtualization considered.
In hardware virtualization, the guest OS and applications are stored as a virtual image. This
image can usually be moved and executed on other virtual machines, similar to how a picture
can be opened on different computers.
Finally, portability allows having your own system always with you and ready to use, as long as
the required virtual machine manager is available.
3.3 TAXONOMY OF VIRTUALIZATION TECHNIQUES
The first classification is based on the service or entity that is being emulated.
Virtualization is mainly used to emulate execution environments, storage, and networks. Among
these categories, execution virtualization constitutes the oldest, most popular, and most
developed area. Therefore, it deserves a major investigation and a further categorization.
In particular, we can divide these execution virtualization techniques into two major categories,
by considering the type of host they require.
Process level techniques are implemented on top of an existing operating system, which has full
control of the hardware.
System level techniques are implemented directly on hardware and do not require—or require a
minimum support from—an existing operating system.
Within these two categories we can list different techniques, which offer to the guest a different
type of virtual computation environment: bare hardware, operating system resources, low-level
programming language, and application libraries.
3.3.1 Execution Virtualization
Execution virtualization includes all those techniques whose aim is to emulate an execution
environment that is separate from the one hosting the virtualization layer. All these techniques
concentrate their interest on providing support for the execution of programs.
Execution virtualization provides support for the execution of programs whether they are the
operating system, a binary specification of a program compiled against an abstract machine
model, or an application.
Therefore, execution virtualization can be implemented directly on top of the hardware, by the
operating system, an application, or libraries.
At the bottom layer is the model for the hardware. It is expressed in terms of the Instruction Set
Architecture (ISA), which defines the instruction set for the processor, registers, memory, and
interrupt management.
Types of ISA:
The Application Binary Interface (ABI) separates the operating system layer from the
applications and libraries, which are managed by the OS.
ABI covers details such as low-level data types, alignment, and call conventions and defines a
format for executable programs. System calls are defined at this level.
The highest level of abstraction is represented by the Application Programming Interface (API),
which interfaces applications to libraries and/or the underlying operating system.
For any operation to be performed at the application level, the API, ABI, and ISA are responsible
for making it happen: the high-level abstraction is converted into machine-level instructions that
perform the actual operations supported by the processor.
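As a small illustration of this layering (a sketch, using Python only as a convenient host language):
a high-level API call such as print() ultimately reaches the operating system through the write
system call defined by the ABI, which the kernel carries out with ISA-level machine instructions.

    import os
    import sys

    # API level: the high-level library call an application normally uses.
    print("hello via the API")

    # ABI level: the same effect expressed through the write() system call wrapper;
    # the OS defines its number, calling convention, and data layout.
    os.write(sys.stdout.fileno(), b"hello via the ABI (write system call)\n")

    # ISA level: inside the kernel, the system call is ultimately carried out by
    # machine instructions (e.g., a syscall/trap instruction) defined by the processor.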
The instruction set exposed by the hardware has been divided into different security classes,
which define who can operate with them.
The first distinction can be made between privileged and non-privileged instructions.
Non-privileged instructions are those instructions that can be used without interfering with other
tasks because they do not access shared resources. This category contains, for example, all the
floating-point, fixed-point, and arithmetic instructions.
Privileged instructions are those that are executed under specific restrictions. They are mostly
used for sensitive operations, which expose (behavior-sensitive) or modify (control-sensitive) the
privileged state.
For instance, behavior-sensitive instructions are those that operate on the I/O, while control-
sensitive instructions alter the state of the CPU registers.
Some types of architecture feature more than one class of privileged instructions.
A hierarchy of privileges in the form of ring based security is shown in Fig 3.5.
Ring 0, Ring 1, Ring 2, and Ring 3: Ring 0 is the most privileged level and Ring 3 the least
privileged. Ring 0 is used by the kernel of the OS, Rings 1 and 2 are used by OS-level services,
and Ring 3 is used by user applications. Recent systems support only two levels, with Ring 0 for
supervisor mode and Ring 3 for user mode.
Most current systems support at least two different execution modes: supervisor mode and user
mode.
• If code running in user mode invokes a privileged instruction, a hardware trap occurs. The
trap hands control to the operating system (kernel), which decides what to do: usually it
blocks the operation or provides controlled access.
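The trap mechanism described above can be sketched as a toy model (the instruction names and
the handler below are invented for illustration):

    PRIVILEGED = {"HLT", "OUT", "LOAD_CR3"}    # assumed privileged instructions

    class TrapToKernel(Exception):
        """Raised when user-mode code issues a privileged instruction."""

    def execute(instruction, mode):
        if instruction in PRIVILEGED and mode == "user":
            raise TrapToKernel(instruction)     # hardware traps to the kernel/VMM
        return f"{instruction} executed directly"

    def kernel_trap_handler(instruction):
        # The kernel (or hypervisor) decides: block the operation or emulate it safely.
        return f"{instruction} emulated on behalf of the caller"

    try:
        execute("LOAD_CR3", mode="user")
    except TrapToKernel as trap:
        print(kernel_trap_handler(trap.args[0]))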
Hardware-Level Virtualization
In this model, the guest is represented by the operating system, the host by the physical computer
hardware, the virtual machine by its emulation, and the virtual machine manager by the hypervisor.
The hypervisor is generally a program, or a combination of software and hardware, that allows
the abstraction of the underlying physical hardware.
Hypervisors
• Type I hypervisor
• Type II hypervisor
Type I hypervisors run directly on top of the hardware. Therefore, they take the place of the
operating system, interact directly with the ISA interface exposed by the underlying hardware, and
emulate this interface in order to allow the management of guest operating systems. This type of
hypervisor is also called a native virtual machine, since it runs natively on hardware.
Type II hypervisors require the support of an operating system to provide virtualization services.
This means that they are programs managed by the operating system, which interact with it
through the ABI, and emulate the ISA of virtual hardware for guest operating systems. This type
of hypervisor is also called a hosted virtual machine, since it is hosted within an operating system.
Fig: 3.7 Hosted and Native virtual machine
A virtual machine manager is internally organized as described in Fig. 3.8. Three main modules
coordinate their activity in order to emulate the underlying hardware: dispatcher, allocator, and
interpreter.
The dispatcher is the entry point for all instructions coming from a VM. It reroutes the instructions
issued by the virtual machine instance to one of the two other modules.
The allocator is responsible for deciding the system resources to be provided to the VM:
whenever a virtual machine tries to execute an instruction that results in changing the machine
resources associated with that VM, the allocator is invoked by the dispatcher.
The interpreter module consists of interpreter routines. These are executed whenever a virtual
machine executes a privileged instruction: a trap is triggered and the corresponding routine is
executed.
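The cooperation of the three modules can be sketched as follows (a simplified toy model; the
instruction prefixes SET_ and PRIV_ are invented, and a real VMM operates on machine
instructions rather than strings):

    class Allocator:
        def handle(self, instr):
            return f"allocator: adjust machine resources for '{instr}'"

    class Interpreter:
        def handle(self, instr):
            return f"interpreter: trap taken, emulate privileged '{instr}'"

    class Dispatcher:
        # Entry point for every instruction coming from a VM instance.
        def __init__(self):
            self.allocator = Allocator()
            self.interpreter = Interpreter()

        def dispatch(self, instr):
            if instr.startswith("SET_"):        # changes machine resources
                return self.allocator.handle(instr)
            if instr.startswith("PRIV_"):       # privileged instruction, trap
                return self.interpreter.handle(instr)
            return f"dispatcher: '{instr}' runs directly on hardware"

    vmm = Dispatcher()
    for i in ["ADD", "SET_MEMORY_LIMIT", "PRIV_IO_READ"]:
        print(vmm.dispatch(i))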
Properties and Theorems proposed by Goldberg and Popek
The criteria that need to be met by a virtual machine manager to efficiently support virtualization
were established by Goldberg and Popek in 1974. Three properties have to be satisfied:
Equivalence: a guest running under the control of a virtual machine manager should exhibit the
same behavior as when executed directly on the physical host.
Resource Control: the virtual machine manager should be in complete control of virtualized
resources.
Efficiency: a statistically dominant fraction of the machine instructions of the guest should be
executed directly by the hardware, without intervention from the virtual machine manager.
Popek and Goldberg also proposed three theorems that define the properties that hardware
instructions need to satisfy in order to efficiently support virtualization (the Popek and Goldberg
theorems).
Theorem 1
For any conventional third-generation computer, a VMM may be constructed if the set of
sensitive instructions for that computer is a subset of the set of privileged instructions.
This theorem establishes that all the instructions that change the configuration of the system
resources should trap from user mode and be executed under the control of the VMM. The
theorem guarantees the resource control property when the hypervisor is in Ring 0 (most
privileged mode). All the non-privileged instructions are executed normally, without the
intervention of the VMM.
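Theorem 1 amounts to a simple set inclusion: an efficient VMM can be built if every sensitive
instruction is also privileged, so that it always traps to the VMM. A toy check (instruction names
are illustrative; POPF is the classic x86 instruction that is sensitive but not privileged):

    def virtualizable(sensitive, privileged):
        # Popek-Goldberg Theorem 1: sensitive instructions must be a subset of the
        # privileged ones, so every sensitive instruction traps to the VMM.
        return sensitive <= privileged

    # Idealized machine: every sensitive instruction traps.
    print(virtualizable({"IO_OUT", "SET_TIMER"}, {"IO_OUT", "SET_TIMER", "HALT"}))  # True

    # Classic x86 problem: POPF behaves differently in user mode instead of
    # trapping, so the condition fails and trap-and-emulate alone is not enough.
    print(virtualizable({"IO_OUT", "POPF"}, {"IO_OUT", "HALT"}))                    # False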
Theorem 2
A conventional third-generation computer is recursively virtualizable if:
• it is virtualizable, and
• a VMM without any timing dependencies can be constructed for it.
This theorem concerns recursive virtualization, which is the ability to run a VMM on top of
another VMM. This allows nesting hypervisors, as long as the capacity of the underlying
resources can accommodate it. Virtualizable hardware is a prerequisite for recursive
virtualization.
Theorem 3
A hybrid VMM may be constructed for any conventional third generation machine, in
which the set of user sensitive instructions are a subset of the set of privileged instructions.
In the Popek and Goldberg framework, a hybrid VMM (HVM) is a virtual machine monitor in
which more instructions are interpreted rather than executed directly: in particular, all the
instructions issued in virtual supervisor mode are interpreted. A hybrid VMM is therefore less
efficient than a regular VMM, but it can be constructed for a wider class of architectures.
Hardware Virtualization Techniques
➢ Hardware-Assisted Virtualization: The CPU provides a special execution mode (VMX
root/non-root) that allows guest operating systems to run directly on the CPU without full
emulation.
This technique was originally introduced in the IBM System/370. At present, examples of
hardware-assisted virtualization are the extensions to the x86-64 bit architecture introduced
with Intel VT and AMD V. After 2006, Intel and AMD introduced processor extensions,
and a wide range of virtualization solutions took advantage of them: Kernel-based Virtual
Machine (KVM), VirtualBox, Xen, VMware, Hyper-V, Sun xVM, Parallels, and others.
➢ Full Virtualization: Full virtualization refers to the ability to run a program, most likely
an operating system, directly on top of a virtual machine and without any modification, as
though it were run on the raw hardware. To make this possible, virtual machine managers
are required to provide a complete emulation of the entire underlying hardware. The
principal advantage of full virtualization is complete isolation, which leads to enhanced
security, ease of emulation of different architectures, and coexistence of different systems
on the same platform. Here the guest OS runs unmodified i.e. the guest OS does not need
to know that it is running in a virtualised environment. Eg: You install Windows 10 as a
guest OS using VirtualBox on a LINUX host (VirtualBox emulates entire hardware and
windows doesn’t know it is in a virtualized environment).
➢ Partial Virtualization: Only part of the hardware environment is emulated. Unlike full
virtualization, the entire system is not fully emulated, and the guest OS may need some
modification or may have limited functionality. This is achieved by simulating only
certain parts of the hardware; other parts are exposed directly to the guest OS. The guest
OS may need to be aware of virtualization or may only be able to access limited resources.
Eg: only some hardware components, such as the CPU or memory, are virtualized, while
others are either not virtualized or are shared directly with the host system.
4. Operating System Level Virtualization
Operating system level virtualization creates multiple isolated user-space instances (often called
containers) that all share the kernel of the host operating system, so no separate guest OS is
needed for each instance. Examples include Docker/LXC containers, FreeBSD jails, and Solaris
Zones.
Advantages:
• Lightweight – Uses less memory and CPU since there’s no extra OS for each container.
• Portable – Run the same container on any machine (laptop, server, cloud).
• Efficient – Many containers can run on one machine without wasting resources.
• Isolated – Each container is separate, so apps don’t interfere with each other.
5. Programming Language Level Virtualization
In programming language level virtualization, programs are compiled into an intermediate
bytecode that is executed by a process virtual machine.
Steps:
1. Compilation: A program written in a high-level programming language (e.g., Java,
Python) is compiled into a platform-independent intermediate code, or bytecode.
2. Virtual Machine (VM): A virtual machine (e.g., JVM, Microsoft .NET CLR) acts as an
interpreter for this bytecode.
3. Execution:
The VM translates and executes the bytecode, providing a consistent runtime environment
regardless of the underlying physical hardware or operating system.
Examples: Java Virtual Machine (JVM): Executes Java bytecode, enabling Java applications
to run on various operating systems. Microsoft .NET Common Language Runtime (CLR):
The execution environment for .NET applications, supporting languages like C# and Visual
Basic. Early systems: Technologies like the UCSD Pascal and the work on BCPL in 1966
were early implementations of this concept.
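The compile-to-bytecode-then-execute flow described in the steps above can be observed directly
with Python's own process virtual machine, used here purely as an illustration (the JVM and CLR
follow the same pattern with class files and CIL assemblies):

    import dis

    source = "x = 2 + 3\nprint(x)"

    # Step 1: compile the high-level source into platform-independent bytecode.
    code = compile(source, "<example>", "exec")

    # Steps 2-3: the language VM interprets that bytecode on any host OS/hardware.
    dis.dis(code)    # inspect the intermediate instructions
    exec(code)       # the Python VM executes them and prints 5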
Advantages:
• Portability: Code written once can run anywhere with the language’s virtual machine.
Example: Java runs on Windows, Linux, or Mac if JVM is installed.
• Consistency Across Platforms: Same behaviour regardless of the underlying
hardware/OS.
• Security: Virtual machines act as a sandbox, restricting unsafe operations. Helps
prevent direct access to the host system.
• Isolation: Each program runs in its own VM instance → reduces interference.
• Rich Ecosystem: Mature language runtimes (JVM, CLR) support multiple languages
and frameworks.
• Simplified Development: Developers focus on writing code in the language, without
worrying about hardware or OS details.
• Cross-Language Interoperability (in some cases): Example: JVM supports Java,
Scala, Kotlin; CLR supports C#, F#, VB.NET.
In desktop virtualization and application server virtualization, the user's device acts as a display
terminal, receiving only the application's user interface and sending input back to the server.
Other types of virtualization include:
➢ Storage Virtualization
➢ Network Virtualization
➢ Desktop Virtualization
➢ Application Server Virtualization
There are different techniques for storage virtualization, one of the most popular being
network-based virtualization by means of storage area networks (SANs). SANs use a
network-accessible device through a large bandwidth connection to provide storage facilities.
Network virtualization is a technology that abstracts physical network resources such as routers,
switches, firewalls, and cables into logical, software-based components. It allows multiple virtual
networks to run independently on the same underlying physical infrastructure. It is like turning a
real physical network (routers, cables, switches) into a software-based network that you can
create, change, or remove quickly without touching the hardware.
Network virtualization can aggregate different physical networks into a single logical network
(external network virtualization), or provide network-like functionality to an operating system
partition (internal network virtualization).
Examples:
External network virtualization: VLANs, which aggregate hosts on different physical LANs into
a single logical network.
Internal network virtualization: VMware virtual switches, Linux bridges for containers.
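As a hedged sketch of internal network virtualization on Linux, the standard iproute2 commands
below create a purely software bridge and a virtual cable (must be run as root; the interface names
br0, veth0, and veth1 are arbitrary choices for this example):

    import subprocess

    def run(cmd):
        print("+", cmd)
        subprocess.run(cmd.split(), check=True)

    # Create a software bridge: a virtual switch living entirely in the kernel.
    run("ip link add name br0 type bridge")
    run("ip link set br0 up")

    # Create a virtual cable (veth pair) and plug one end into the bridge; the
    # other end could be moved into a container or VM network namespace.
    run("ip link add veth0 type veth peer name veth1")
    run("ip link set veth0 master br0")
    run("ip link set veth0 up")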
3.4 Virtualization and Cloud Computing
Virtualization is the foundation of cloud computing. Cloud providers use virtualization to split
their massive datacentres into many virtual servers. Virtualization technologies are primarily
used to offer configurable computing environments and storage. Hardware and programming
language virtualization are the most popular techniques adopted in cloud computing systems.
Hardware virtualization is an enabling factor for solutions in the Infrastructure-as-a-Service
(IaaS) market segment, while programming language virtualization is a technology leveraged
in Platform-as-a-Service (PaaS) offerings. Virtualization also allows isolation and a finer
control, thus simplifying the leasing of services and their accountability on the vendor side.
We need to mention server consolidation and virtual machine migration here to better understand
the use of virtualization in cloud computing.
Server Consolidation: It is the process of reducing the number of physical servers in use by
running multiple VMs on fewer, more powerful servers through virtualization.
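A toy consolidation planner illustrating the idea (a first-fit-decreasing sketch under the simplifying
assumption that each VM is described by a single load number; real tools also consider CPU,
memory, affinity, and headroom):

    def consolidate(vm_loads, host_capacity):
        """Place VM loads onto as few hosts as possible (first-fit decreasing)."""
        hosts = []                                    # each host is a list of placed loads
        for load in sorted(vm_loads, reverse=True):
            for host in hosts:
                if sum(host) + load <= host_capacity:
                    host.append(load)
                    break
            else:
                hosts.append([load])                  # nothing fits: power on another host
        return hosts

    # Eight lightly loaded VMs fit on two hosts instead of eight physical servers.
    print(consolidate([0.3, 0.2, 0.4, 0.1, 0.25, 0.15, 0.3, 0.2], host_capacity=1.0))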
Virtual Machine Migration: Virtual Machine Migration (VM Migration) is the process of
moving a running or powered-off virtual machine (VM) from one physical host/server to
another without affecting the VM’s availability or performance (in case of live migration). It’s
a key feature of virtualization and cloud computing because it helps in load balancing,
maintenance, and disaster recovery.
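Live migration is commonly implemented as an iterative pre-copy of memory pages followed by
a brief stop-and-copy. A simplified sketch of that loop (dirty-page tracking is simulated with a
callback, and the threshold and page counts are invented):

    def live_migrate(memory_pages, dirty_after_round, max_rounds=5):
        """Pre-copy migration: keep copying pages dirtied while the VM keeps running."""
        transferred = 0
        to_copy = set(memory_pages)                   # round 0: copy everything
        for round_no in range(max_rounds):
            transferred += len(to_copy)
            print(f"round {round_no}: copied {len(to_copy)} pages")
            to_copy = dirty_after_round(round_no)     # pages dirtied in the meantime
            if len(to_copy) < 3:                      # small enough: brief stop-and-copy
                break
        transferred += len(to_copy)
        print(f"final stop-and-copy of {len(to_copy)} pages, {transferred} copied in total")

    live_migrate(range(100), dirty_after_round=lambda r: set(range(20 >> r)))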
Server consolidation and virtual machine migration are principally used in the case of hardware
virtualization, even though they are also technically possible in the case of programming
language virtualization. Storage virtualization constitutes an interesting opportunity where
vendors backed by large computing infrastructures featuring huge storage facilities can harness
these facilities into a virtual storage service, easily partitionable into slices. These slices can be
dynamic and offered as a service. Finally, cloud computing revamps the concept of desktop
virtualization, initially introduced in the mainframe era: the entire computing stack, from
infrastructure to application services, can be recreated on demand.
3.5 Pros and Cons of Virtualization
Today, the capillary diffusion of Internet connectivity and the advancements in computing
technology have made virtualization an interesting opportunity to deliver on-demand IT
infrastructure and services. Despite its renewed popularity, this technology has benefits and
also drawbacks.
Advantages
• Virtual machine (VM) instances are usually represented as files, making them easy to
move across systems.
• VMs are self-contained, requiring only the virtual machine manager (VMM).
• Administration is simplified due to portability and self-containment.
• Example: Java programs run anywhere with a JVM; hardware-level virtualization
provides similar portability.
• Enables building and carrying personalized operating environments (like having your
own laptop virtually).
Disadvantages
1. Performance Degradation
• The main disadvantage is the abstraction layer between guest and host, which leads to
increased latencies and slower execution.
• Causes of performance issues in hardware virtualization:
o Maintaining virtual processor status.
o Handling privileged instructions (trap and simulate).
o Managing paging within VM.
• If the VMM runs on top of the host OS, it competes for resources with other
applications, causing further slowdown.
• In programming-level virtualization (Java, .NET):
o Binary translation and interpretation slow execution.
o Access to memory and physical resources filtered through the runtime, adding
delays.
• Mitigation:
o Advances like paravirtualization improve performance by offloading
execution to the host.
o Just-in-time (JIT) compilation in JVM and .NET reduces slowdowns by
converting code to native machine code.
2. Inefficiency and Degraded User Experience
• Some features of the host may not be exposed by the abstraction layer, so specific device
drivers or advanced hardware features may be unavailable or inefficient inside the guest,
degrading the user experience.
3.6 Technology Examples
3.6.1 Xen: Paravirtualization
• Xen is an open-source virtualization platform based on paravirtualization, originally
developed at the University of Cambridge and now maintained by the Xen Project, hosted
under the Linux Foundation. Xen also supports full virtualization using hardware-assisted
virtualization. Xen is the most popular implementation of paravirtualization, in which
portions of the guest operating system are modified.
• Below is the architecture of Xen and its mapping onto the x86 privilege model.
• The Xen hypervisor is the core component of the Xen virtualization platform.
• It runs directly on the hardware (bare-metal) — similar to other Type-1 hypervisors
like VMware ESXi.
• It manages how guest operating systems access the CPU, memory, and I/O devices.
Many of the x86 implementations support four different security levels, called rings, where Ring
0 represents the level with the highest privileges and Ring 3 the level with the lowest.
In x86 processors, different “rings” represent levels of privilege:
In Xen:
• Xen Hypervisor runs in Ring 0 (highest privilege).
• Guest OS kernels run in Ring 1 (since Ring 0 is taken by Xen).
• Applications in guest OSes run in Ring 3, as usual.
• Guest operating systems are executed within domains, which represent virtual machine
instances. Specific control software, which has privileged access to the host and controls
all the other guest operating systems, is executed in a special domain called Domain 0.
This is the first domain to be loaded once the virtual machine manager has completely
booted, and it hosts a HyperText Transfer Protocol (HTTP) server that serves requests for
virtual machine creation, configuration, and termination.
In a normal system, some operations require code to jump from Ring 3 into Ring 0 (kernel
mode), for example when the OS executes system calls that access hardware directly. Under
Xen the guest OS kernel no longer runs in Ring 0 but in Ring 1, so such privileged operations
result in a trap or a silent fault, preventing the normal operation of the guest operating system.
To fix this, Xen uses hypercalls: special calls that the guest OS uses to politely ask the hypervisor
to perform sensitive tasks (such as memory management or device control). The guest OS must
be modified to use hypercalls instead of direct privileged instructions.
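The difference between issuing a privileged operation directly and asking the hypervisor via a
hypercall can be sketched as follows (a toy model; Xen's real hypercall interface is a low-level
ABI of numbered calls such as mmu_update, not Python methods):

    class Hypervisor:
        # Runs in Ring 0 and exposes an explicit hypercall interface to guests.
        def hypercall(self, name, **args):
            print(f"hypervisor performs '{name}' with {args} on behalf of the guest")

    class ParavirtualizedGuestKernel:
        # Runs in Ring 1: the guest kernel is modified to ask the hypervisor politely.
        def __init__(self, hypervisor):
            self.hv = hypervisor

        def update_page_table(self, entry, value):
            # Writing the hardware page table directly would trap or silently fail
            # in Ring 1, so the modified guest issues a hypercall instead.
            self.hv.hypercall("mmu_update", entry=entry, value=value)

    guest = ParavirtualizedGuestKernel(Hypervisor())
    guest.update_page_table(entry=0x42, value=0xBEEF)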
Xen works well with Linux but not with Windows, because paravirtualization requires the
operating system codebase to be modified, and hence not all operating systems can be used as
guests in a Xen-based environment. Linux can be easily modified, since its code is publicly
available, and Xen provides full support for its virtualization, whereas components of the
Windows family are generally not supported by Xen unless hardware-assisted virtualization is
available.
3.6.2 VMware: Full Virtualization
VMware's technology is based on the concept of full virtualization, where the underlying
hardware is replicated and made available to the guest operating system, which runs unaware of
such abstraction layers and does not need to be modified. VMware implements full virtualization
either in the desktop environment, by means of Type II hypervisors, or in the server environment,
by means of Type I hypervisors. In both cases, full virtualization is made possible by means of,
➢ Direct Execution (for non-sensitive instructions)
➢ Binary Translation (for sensitive instructions).
Besides these two core solutions, VMware provides additional tools and software that simplify
the use of virtualization technology either in a desktop environment, or in a server
environment.
Full Virtualization and Binary Translation
VMware is well known for the capability to virtualize x86 architectures, which run unmodified
on top of its hypervisors. With the new generation of hardware architectures and the
introduction of hardware-assisted virtualization (Intel VT-x and AMD V) in 2006, full
virtualization is made possible with hardware support, but before that date, the use of dynamic
binary translation was the only solution that allowed running x86 guest operating systems
unmodified in a virtualized environment.
All the privileged instructions need to be executed in Ring 0, while the guest OS runs in Ring 1.
In the case of binary translation, the sensitive/privileged instructions are translated into an
equivalent set of instructions that achieves the same goal without generating exceptions. These
translated instructions are cached so that the translation does not have to be repeated for future
occurrences of the same instructions. This approach has both advantages and disadvantages. The
major advantage is that guests can run unmodified in a virtualized environment, which is a crucial
feature for operating systems for which source code is not available. The disadvantage is that
translating instructions at runtime introduces an additional overhead that is not present in other
approaches. Even though this disadvantage exists, binary translation is applied only to a subset of
the instruction set, whereas the others are managed through direct execution on the underlying
hardware. This somewhat reduces the performance impact of binary translation.
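The translate-once-and-cache behaviour described above can be sketched like this (a toy model
with invented instruction names; real binary translators work on blocks of x86 machine code, not
single mnemonics):

    SENSITIVE = {"CLI", "POPF", "OUT"}        # assumed sensitive instructions

    translation_cache = {}                    # original instruction -> safe replacement

    def translate(instr):
        if instr not in translation_cache:
            print(f"translating '{instr}' (slow path, done only once)")
            translation_cache[instr] = f"SAFE_{instr}"   # equivalent, non-trapping sequence
        return translation_cache[instr]

    def run(stream):
        for instr in stream:
            if instr in SENSITIVE:
                print("execute", translate(instr))       # binary translation
            else:
                print("execute", instr, "(direct execution)")

    run(["MOV", "CLI", "ADD", "CLI", "OUT"])  # the second CLI hits the cache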
Every operating system manages memory using a component called the Memory Management
Unit (MMU), which converts virtual addresses into physical addresses. The guest OS thinks it
has full control of the MMU, as if it were running on real hardware; however, the physical
addresses it produces are not the machine's actual physical memory addresses but guest physical
addresses, which the hypervisor must in turn map onto real machine memory. Especially in the
case of hosted hypervisors (Type II), where the virtual MMU and the host-OS MMU are traversed
sequentially before getting to the physical memory page, the impact on performance can be
significant. VMware also provides full virtualization of I/O devices such as network controllers
and other peripherals such as keyboards, mice, disks, and Universal Serial Bus (USB) controllers.
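The double translation implied above (guest virtual address to guest physical address to real
machine address) can be sketched with two lookup tables; the page numbers are invented, and
real hypervisors collapse the two walks with shadow page tables or hardware support such as
EPT/NPT:

    # Guest OS page table: guest virtual page -> guest physical page.
    guest_page_table = {0x10: 0x2, 0x11: 0x7}

    # Hypervisor mapping: guest physical page -> real machine page.
    machine_page_map = {0x2: 0x90, 0x7: 0x3A}

    def translate(guest_virtual_page):
        guest_physical = guest_page_table[guest_virtual_page]   # first walk (guest MMU)
        machine_page = machine_page_map[guest_physical]         # second walk (hypervisor)
        return machine_page

    print(hex(translate(0x10)))   # 0x90: the page actually used in host RAM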
Virtualization Solutions
For desktop (end-user) virtualization, a specific driver (the VMware driver) must be installed in
the host operating system; it provides two main services:
• Creates a Virtual Machine Manager (VMM) to manage the virtual machine.
• Helps process special commands (like using USBs or saving files)
This setup is called Hosted Virtual Machine Architecture, because the virtual machine runs
"hosted" on your regular operating system. Normal instructions (like math or logic) run directly
on the virtual machine. More complex things (like using a USB or printing) go through
VMware's system, i.e., intervention of the VMware application is required only for instructions,
such as device I/O, that require binary translation. Virtual machine images are saved as a
collection of files on the host file system, i.e., each VM is saved as a group of files on your
computer. You can pause and resume the VM, take snapshots (save the current state), and undo
changes by going back to an earlier state.
Other Solutions:
• VMware Player: A simpler version of VMware Workstation. It lets you run virtual
machines on Windows or Linux.
• VMware ACE: Lets companies create secure, policy-controlled VMs for employees.
• VMware ThinApp: A tool that lets you run applications without installing them on
your computer. Instead of installing an application the usual way, ThinApp packages
the application into a single file you can just run; no installation is needed. This is useful
for running old apps that may not work on newer systems, or for using apps across
different computers without installation problems. You do not need admin rights to run
virtualized applications.
Server virtualization means running many virtual servers on one physical machine. Initial
support for server virtualization was provided by VMware GSX (Type II) Server, as shown in
the figure below. The architecture is mostly designed to serve the virtualization of Web servers.
A daemon process, called “serverd”, controls and manages VMware application processes.
These applications are then connected to the virtual machine instances by means of the
VMware driver installed on the host operating system. Virtual machine instances are managed
by the VMM. User requests for virtual machine management and provisioning are routed from
the Web server through the VMM by means of ‘serverd’.
The hypervisor-based approaches (Type I) to achieve server virtualization are VMware ESX
Server and VMware ESXi Server.
Both can be installed on bare metal servers and provide services for virtual machine
management. The two solutions provide the same services but differ in the internal
architecture, more specifically in the organization of the hypervisor kernel. VMware ESX
embeds a modified version of a Linux operating system, which provides access to the hypervisor
through a service console. VMware ESXi implements a very thin OS layer and replaces the
service console with management tools, as shown in the figure below.
The base of the infrastructure is the VMkernel, which is a thin Portable Operating System
Interface (POSIX) compliant operating system that provides the minimal functionality for
processes and thread management, file system, I/O stacks, and resource scheduling. The kernel
is accessible through specific APIs called User world API. Remote management of an ESXi
server is provided by the CIM Broker. The ESXi installation can also be managed locally by
a Direct Client User Interface (DCUI), which provides a BIOS-like interface for the
management of local users.
VMware provides a set of products covering the entire stack of cloud computing, from
infrastructure management to Software-as-a-Service solutions hosted in the cloud. Below
diagram (VMware cloud solution stack) gives an overview of the different solutions offered
and how they relate to each other.
VMware offers many products that cover the full cloud computing stack—from managing
infrastructure to providing Software-as-a-Service (SaaS).
• ESX and ESXi: The base of VMware's virtualization. They let multiple servers work
together as one system, managed by vSphere (i.e., they provide the base virtualization).
• vSphere: Provides core virtualization services like virtual storage, virtual networks,
and virtual file systems. It also supports features such as virtual machine migration,
storage migration, data recovery, and security zones (i.e., the virtualization platform
plus services).
• vCenter: The management tool that centrally controls and administers vSphere in a
data center (i.e., the console/web portal for management).
• vCloud: Converts data centers into cloud services (Infrastructure-as-a-Service). It
lets users create and manage virtual machines on demand through a web portal.
• vFabric: Helps developers build scalable web applications on virtual infrastructure. It
includes tools for monitoring, data management, and running Java applications (app
development tools).
• Zimbra: A cloud-based SaaS solution for email, messaging, and office collaboration.
3.6.3 Microsoft Hyper-V
1. Architecture
Hyper-V supports the concurrent execution of multiple guest operating systems by means
of partitions. A partition is a completely isolated environment in which an operating system is
installed and run.
Figure 3.17 provides an overview of the architecture of Hyper-V. Hyper-V uses the concept of
partitions to run multiple operating systems at the same time.
The original host operating system doesn’t directly control the hardware anymore. Instead, it
becomes the parent partition (also called the root partition).
Child partitions are used to host guest operating systems, and do not have access to the
underlying hardware, but their interaction with it is controlled by either the parent partition
or the hypervisor itself.
(a) Hypervisor
The hypervisor is the component that directly manages the underlying hardware (processors and
memory). It is logically defined by the following components:
Hypercalls Interface. This is the entry point for all the partitions for the execution of sensitive
instructions. This is an implementation of the paravirtualization approach already discussed with
Xen. This interface is used by drivers in the partitioned operating system to contact the hypervisor
using the standard Windows calling convention. The parent partition also uses this interface to
create child partitions.
Memory Service Routines (MSRs). These are the set of functionalities that control the memory,
and its access from partitions. By leveraging hardware-assisted virtualization, the hypervisor uses
the Input Output Memory Management Unit (I/O MMU or IOMMU) to fast-track access to
devices from partitions by translating virtual memory addresses.
Scheduler. This component schedules the virtual processors to run on the available physical
processors. The scheduling is controlled by policies that are set by the parent partition.
Address Manager. This component is used to manage the virtual network addresses that are
allocated to each guest operating system.
(b) Enlightened I/O
Enlightened I/O is a faster and more efficient way for guest operating systems (OS) to perform
I/O operations.
Instead of going through the full hardware emulation layer (which is slower), the guest OS
communicates directly with the parent partition using a special channel called VMBus.
The Enlightened I/O architecture has three main parts: VMBus, Virtual Service Providers
(VSPs), and Virtual Service Clients (VSCs). VMBus is the communication channel between
partitions. VSPs, in the parent partition, are drivers that access the actual hardware. VSCs, in the
child partitions, are virtual (synthetic) drivers that the guest OS uses to communicate with VSPs.
This setup lets guest OSs efficiently perform I/O for storage, networking, graphics, and input,
and improves performance for child-to-child communication via virtual networks.
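The VSC to VMBus to VSP path can be sketched as a simple message channel (a toy model; the
names mirror the roles described above, not Microsoft's actual interfaces):

    import threading
    from queue import Queue

    class VMBus:
        """Channel between a child partition and the parent partition."""
        def __init__(self):
            self.requests, self.replies = Queue(), Queue()

    def virtual_service_provider(bus):
        # VSP: driver in the parent partition with access to the real hardware.
        req = bus.requests.get()
        bus.replies.put(f"parent partition completed: {req}")

    def virtual_service_client(bus, lba):
        # VSC: synthetic driver inside the guest; no hardware emulation involved.
        bus.requests.put(f"read disk block {lba}")
        return bus.replies.get()

    bus = VMBus()
    threading.Thread(target=virtual_service_provider, args=(bus,)).start()
    print(virtual_service_client(bus, lba=7))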
(c) Parent Partition
The parent partition runs the host operating system and manages the virtualization stack that
helps the hypervisor run guest OSs. It always hosts a Windows Server 2008 R2 instance, which
handles the virtualization services for the child partitions. The parent partition is the only one
that can directly access hardware drivers and provides access to child partitions through
Virtual Service Providers (VSPs).
It also manages the creation, running, and deletion of child partitions using the Virtualization
Infrastructure Driver (VID), which controls the hypervisor, virtual processors, and memory.
For each child partition, a Virtual Machine Worker Process (VMWP) runs in the parent to
manage it via the VID. Additionally, virtualization management services can be accessed
remotely through WMI.
(d) Children Partitions. Children partitions are used to execute guest operating systems. These
are isolated environments, which allow a secure and controlled execution of guests. There are
two types of children partitions depending on whether the guest operating system is supported
by Hyper-V or not. These are called Enlightened and Unenlightened partitions, respectively. The
first can benefit from Enlightened I/O, while the others are executed by leveraging hardware
emulation from the hypervisor.
Hyper-V constitutes the basic building block of Microsoft's virtualization infrastructure. Other
components contribute to creating a full-featured platform for server virtualization.
Windows Server Core is a lightweight version of Windows Server 2008 designed to improve
performance in virtualized environments. It has a smaller footprint because it removes
unnecessary features for servers, such as the graphical user interface, .NET framework, and built-
in applications like PowerShell. The advantages are less maintenance, smaller attack surface,
easier management, and reduced disk usage. The disadvantage is that some features are
missing, but administrators can still use remote management tools like PowerShell and WMI
from a full Windows installation to manage the server.
System Center Virtual Machine Manager (SCVMM) 2008 is a tool from Microsoft’s System
Center suite that provides advanced management for virtual machines. It enhances Hyper-V by
offering features like: