CC Rec

The document provides an overview of cloud computing, detailing its characteristics, service models (SaaS, PaaS, IaaS), and deployment models (private, public, hybrid, community). It also introduces OpenStack as an open-source cloud computing platform and outlines procedures for installing virtual machines and compilers, as well as creating applications using Google App Engine. The document emphasizes the advantages of cloud computing, such as reliability and manageability.

Uploaded by Sowmiya Usha

PANIMALAR INSTITUTE OF TECHNOLOGY DEPT. OF IT

INTRODUCTION TO CLOUD COMPUTING

Cloud computing is a technology that uses the internet and central remote servers to
maintain data and applications. It is a model for enabling ubiquitous, on-demand access to a
shared pool of configurable computing resources (e.g., computer networks, servers, storage,
applications and services) that can be rapidly provisioned and released with minimal
management effort. This cloud model promotes availability and is composed of five essential
characteristics, three service models and four deployment models.

Figure: 1 Cloud Architecture

A large-scale distributed computing paradigm driven by economies of scale, in which a pool of abstracted, virtualized, dynamically scalable, managed computing power, storage, platforms, and services are delivered on demand to external customers over the Internet.

DEPLOYMENT MODELS

Private cloud

Private cloud is cloud infrastructure operated solely for a single organization. It may be
managed by the organization or by a third-party and may exist on premise or off premise.

SOWMIYA B 211521205151

Public cloud

The cloud infrastructure is made available to the general public or a large industry group and
is owned by an organization selling cloud services.

Hybrid cloud

The cloud infrastructure is a composition of two or more clouds (private, community, public)
that remain unique entities but are bound together by standardized or proprietary technology
that enables data and application portability (e.g., cloud bursting for load balancing between clouds).

Community cloud

The cloud infrastructure is shared by several organizations and supports a specific community
that has shared concerns (security, compliance, jurisdiction, etc.). It may be managed by the
organization or by a third-party and may exist on premise or off premise.

SERVICE MODELS

Software as a Service (SaaS)

The capability provided to the consumer is to use the provider’s applications running
on a cloud infrastructure. The applications are accessible from various client devices through
either a thin client interface, such as a web browser (e.g., web-based email), or a program
interface. The consumer does not manage or control the underlying cloud infrastructure
including network, servers, operating systems, storage, or even individual application
capabilities, with the possible exception of limited user-specific application configuration
settings.

Platform as a Service (PaaS)

The capability provided to the consumer is to deploy onto the cloud infrastructure
consumer-created or acquired applications created using programming languages, libraries,
services, and tools supported by the provider. The consumer does not manage or control the
underlying cloud infrastructure including network, servers, operating systems, or storage, but
has control over the deployed applications and possibly configuration settings for the
application-hosting environment.


Figure: 2 Cloud Service Models

Infrastructure as a Service (IaaS)


The capability provided to the consumer is to provision processing, storage, networks,
and other fundamental computing resources where the consumer is able to deploy and run
arbitrary software, which can include operating systems and applications. The consumer does
not manage or control the underlying cloud infrastructure but has control over operating
systems, storage, and deployed applications; and possibly limited control of select networking
components (e.g., host firewalls).
ESSENTIAL CHARACTERISTICS

On-Demand self service

A consumer can unilaterally provision computing capabilities, such as server time and
network storage, as needed automatically without requiring human interaction with each
service provider.

Broad network access

Capabilities are available over the network and accessed through standard mechanisms
that promote use by heterogeneous thin or thick client platforms (e.g., mobile phones, tablets,
laptops and workstations).

Resource pooling

The provider's computing resources are pooled to serve multiple consumers using a
multi-tenant model, with different physical and virtual resources dynamically assigned and


reassigned according to consumer demand. There is a sense of location independence in that the customer generally has no control or knowledge over the exact location of the provided resources but may be able to specify location at a higher level of abstraction (e.g., country, state or datacenter). Examples of resources include storage, processing, memory and network bandwidth.

Rapid elasticity

Capabilities can be elastically provisioned and released, in some cases automatically, to


scale rapidly outward and inward commensurate with demand. To the consumer, the
capabilities available for provisioning often appear to be unlimited and can be appropriated in
any quantity at any time.

Measured service

Cloud systems automatically control and optimize resource use by leveraging a metering capability at some level of abstraction appropriate to the type of service (e.g., storage, processing, bandwidth and active user accounts). Resource usage can be monitored, controlled and reported, providing transparency for both the provider and the consumer.

COMMON CHARACTERISTICS (VIRTUALIZATION)

Virtualization describes a technology in which an application, guest operating system


or data storage is abstracted away from the true underlying hardware or software. Virtualization
has three characteristics that make it ideal for cloud computing:

Partitioning

In virtualization, many applications and operating systems (OSes) are supported in a


single physical system by partitioning (separating) the available resources.

Isolation

Each virtual machine is isolated from its host physical system and other virtualized
machines. Because of this isolation, if one virtual instance crashes, it doesn’t affect the other
virtual machines. In addition, data isn’t shared between one virtual container and another.

Encapsulation

A virtual machine can be represented (and even stored) as a single file, so you can
identify it easily based on the service it provides. In essence, the encapsulated process could

be a business service. This encapsulated virtual machine can be presented to an application as


a complete entity. Therefore, encapsulation can protect each application so that it doesn’t
interfere with another application.

Applications of virtualization

Virtualization can be applied broadly to just about everything that you could imagine:

• Memory
• Networks
• Storage
• Hardware
• Operating systems
• Applications

Forms of virtualization

• Virtual memory – Virtual memory is a memory management technique that is implemented using both hardware and software.
• Software – Software is the part of a computer system that consists of data or computer instructions.

ADVANTAGES OF CLOUD COMPUTING



• Reliability
• Manageability
• Strategic Edge

INTRODUCTION TO OPENSTACK

OpenStack is a software package that provides a cloud platform for public and private clouds, covering various use cases including enterprise and telecom. The main focus is on Infrastructure as a Service (IaaS) and additional services built upon IaaS. The services developed by the community are available as tarballs for installation from source, and are also picked up and packaged for different Linux distributions or as part of OpenStack distributions.


Map of OpenStack projects

Community

OpenStack is a community working towards one mission:

To produce a ubiquitous Open Source Cloud Computing platform that is easy to use, simple to
implement, interoperable between deployments, works well at all scales, and meets the needs
of users and operators of both public and private clouds.

OpenStack provides an ecosystem for collaboration. It has infrastructure for

• Code review
• Testing
• CI
• Version control
• Documentation
• a set of collaboration tools, like a wiki, IRC channels, Etherpad and Ethercalc.

The four opens

The basic principles of the OpenStack community are the four opens.

• Open source
• Open design
• Open development
• Open community

EX NO: 1 INSTALLATION OF VIRTUALBOX WITH DIFFERENT FLAVOURS OF LINUX/WINDOWS

DATE:

AIM:

To write the procedure to install the virtual box with different flavours
of Linux/Windows.

PROCEDURE:

Step 1: Run the VirtualBox setup and click the “Next” option.

Step 2: Keep clicking the “Next” option and finally click the “Install” option.

Step 3: To install “Ubuntu” as a virtual machine in “Oracle VM VirtualBox”, open the
Oracle VM VirtualBox Manager.


Step 4: Create new virtual machine

Step 5: Select the memory size

Step 6: Create a virtual hard drive


Step 7: Select the type of hard drive file

Step 8: Select the type of storage on physical hard drive

Step 9: Select the size of virtual hard drive


Step 10: Select the new virtual OS created and click on “Settings”

Step 11: Select “Storage” from the left panel of the window

Step 12: Click on the first icon “Add CD/DVD device” in Controller: IDE


Step 13: Select “Choose Disk”, choose the disk image to be used, and click “Open”

Step 14: Click “OK” and select “Start” to run the virtual machine

Step 15: To install Ubuntu, click the 'Install Ubuntu' button


Step 16: Ensure the 'Erase disk and install Ubuntu' option is selected and click the 'Install
Now' button

Step 17: Click the 'Continue' button in the upcoming dialogue boxes

Step 18: Enter the name, username and password, and click the 'Continue' button


Step 19: After completion of installation process, click on 'Restart Now' button.

Step 20: In the same way, we can install a Windows OS.

OUTPUT:

(i) Ubuntu Operating System in Virtual Machine

(ii) Windows 7 Operating System in Virtual Machine

RESULT:

Thus VirtualBox with different flavours of Linux/Windows has been
installed successfully.


EX NO: 2 INSTALLATION OF A C COMPILER IN THE VIRTUAL MACHINE AND EXECUTING A SIMPLE PROGRAM

DATE:

AIM:

To install a C compiler in the virtual machine and execute a simple program.

PROCEDURE:

STEP:1 Before installing a C compiler in a virtual machine, we have to create a virtual machine by opening the Kernel-based Virtual Machine (KVM) manager. Open the installed Virtual Machine Manager, and install different Ubuntu and Windows OS instances in the virtual machine with different names.

STEP:2 Open the Ubuntu OS in our Virtual Machine.

STEP:3 Open Text Editor in Ubuntu OS and type a C program and save it in desktop
to execute.


STEP:4 To install the C compiler in Ubuntu OS, open the terminal and type the command.

$ sudo apt-get install gcc

STEP:5 And then compile the C program and execute it.

OUTPUT:

RESULT:

Thus the C compiler was installed in the virtual machine and the program was executed
successfully.


EX NO: 3 INSTALLATION OF GOOGLE APP ENGINE AND CREATE HELLO WORLD APP USING JAVA

DATE:

AIM:

To install Google App Engine and create a Hello World app using Java.

PROCEDURE:

• Create a project

Projects bundle code, VMs, and other resources together for easier development and
monitoring.

• Build and run your "Hello, world!" app

You will learn how to run your app using Cloud Shell, right in your browser. At the
end, you'll deploy your app to the web using the App Engine Maven plugin.

GCP (Google Cloud Platform) organizes resources into projects, which collect all of the
related resources for a single application in one place.

Begin by creating a new project or selecting an existing project for this tutorial.

Step1:

Select a project, or create a new one

Step2:

Using Cloud Shell

Cloud Shell is a built-in command-line tool for the console. You're going to use Cloud Shell
to deploy your app.

Open Cloud Shell

Open Cloud Shell by clicking the

Activate Cloud Shell button in the navigation bar in the upper-right corner of the console

Clone the sample code

Use Cloud Shell to clone and navigate to the "Hello World" code. The sample code is
cloned from your project repository to the Cloud Shell.

Note: If the directory already exists, remove the previous files before cloning:

rm -rf appengine-try-java


git clone [Link]

cd appengine-try-java

Step3:

Configuring your deployment

You are now in the main directory for the sample code. You'll look at the files that
configure your application.

Exploring the application

Enter the following command to view your application code:

cat src/main/java/myapp/DemoServlet.java

This servlet responds to any request by sending a response containing the message Hello,
world!.

Exploring your configuration

For Java, App Engine uses XML files to specify a deployment's configuration.

Enter the following command to view your configuration file:

cat pom.xml

The hello world app uses Maven, which means you must specify a Project Object Model, or
POM, which contains information about the project and configuration details used by Maven
to build the project.
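For illustration, a POM for an App Engine Maven project has roughly the following shape. This is a minimal sketch only: the groupId, artifactId, and plugin version shown here are placeholders, not the actual values from the sample app.

```xml
<project xmlns="http://maven.apache.org/POM/4.0.0">
  <modelVersion>4.0.0</modelVersion>

  <!-- Placeholder coordinates, not the sample app's real ones -->
  <groupId>com.example</groupId>
  <artifactId>hello-appengine</artifactId>
  <version>1.0-SNAPSHOT</version>
  <packaging>war</packaging>

  <build>
    <plugins>
      <!-- App Engine Maven plugin: supplies the appengine:run and
           appengine:deploy goals used in the next steps -->
      <plugin>
        <groupId>com.google.cloud.tools</groupId>
        <artifactId>appengine-maven-plugin</artifactId>
        <version>2.4.4</version> <!-- placeholder version -->
      </plugin>
    </plugins>
  </build>
</project>
```

The plugin declared in the build section is what makes `mvn appengine:run` and `mvn appengine:deploy` available in the following steps.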

Step4:

Testing your app

Test your app on Cloud Shell

Cloud Shell lets you test your app before deploying to make sure it's running as intended, just
like debugging on your local machine.

To test your app enter the following:

mvn appengine:run


Preview your app with "Web preview"

Your app is now running on Cloud Shell. You can access the app by clicking the Web preview button at the top of the Cloud Shell pane and choosing Preview on port 8080.

Terminating the preview instance

Terminate the instance of the application by pressing Ctrl+C in the Cloud Shell

Step5:

Deploying to App Engine

Create an application

To deploy your app, you need to create an app in a region:

gcloud app create

Note: If you already created an app, you can skip this step.

Deploying with Cloud Shell

Now you can use Cloud Shell to deploy your app.

First, set which project to use:

gcloud config set project <YOUR-PROJECT>

Then deploy your app:

mvn appengine:deploy

Visit your app

Congratulations! Your app has been deployed.

The default URL of your app is a subdomain on appspot.com that starts with your
project's ID: <your-project>.appspot.com.

Try visiting your deployed application.

View your app's status

You can check in on your app by monitoring its status on the App Engine dashboard.

Open the Navigation menu in the upper-left corner of the console.

Then, select the App Engine section


RESULT:

Thus Google App Engine was installed and a Hello World app was created using
Java successfully.


EX NO: 4 LAUNCH THE WEB APPLICATIONS USING GAE LAUNCHER

DATE:

AIM:
To launch web applications using the GAE Launcher.

PROCEDURE:
Before you can host your website on Google App Engine:

1. Create a new Cloud Console project or retrieve the project ID of an existing project to use. Go to the Projects page in the Cloud Console. Tip: You can retrieve a list of your existing project IDs with the gcloud command-line tool.

2. Install and then initialize the Google Cloud SDK.

Creating a website to host on Google App Engine

Basic structure for the project

This guide uses the following structure for the project:

app.yaml: Configures the settings of your App Engine application.
www/: Directory to store all of your static files, such as HTML, CSS, images, and JavaScript.
css/: Directory to store style sheets.
style.css: Basic style sheet that formats the look and feel of your site.
images/: Optional directory to store images.
index.html: An HTML file that displays content for your website.
js/: Optional directory to store JavaScript files.
Other asset directories.
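The structure above can be created from a terminal as sketched below. The directory name my-gae-site stands in for your project ID, and the standard App Engine file names app.yaml, index.html, and style.css are assumed for the files described in the list.

```shell
# Create the static-site skeleton described above
# (my-gae-site is a placeholder for your project ID)
mkdir -p my-gae-site/www/css my-gae-site/www/images my-gae-site/www/js
touch my-gae-site/app.yaml
touch my-gae-site/www/index.html
touch my-gae-site/www/css/style.css

# List the files that were created
find my-gae-site -type f | sort
```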

Creating the app.yaml file

The app.yaml file is a configuration file that tells App Engine how to map URLs to your
static files. In the following steps, you will add handlers that will load www/index.html when
someone visits your website, and all static files will be stored in and called from the
www directory.
Create the app.yaml file in your application's root directory:
1. Create a directory that has the same name as your project ID. You can find
your project ID in the Console.
2. In the directory that you just created, create a file named app.yaml.
3. Edit the app.yaml file and add the following code to the file:

runtime: python27
api_version: 1
threadsafe: true

handlers:
- url: /
  static_files: www/index.html
  upload: www/index.html

- url: /(.*)
  static_files: www/\1
  upload: www/(.*)
More reference information about the app.yaml file can be found in the app.yaml reference
documentation.

Creating the index.html file

Create an HTML file that will be served when someone navigates to the root page of
your website. Store this file in your www directory:
<html>
<head>
<title>Hello, world!</title>
<link rel="stylesheet" type="text/css" href="/css/style.css">
</head>
<body>
<h1>Hello, world!</h1>
<p>
This is a simple static HTML file that will be served from Google App
Engine.
</p>
</body>
</html>

Deploying your application to App Engine

When you deploy your application files, your website will be uploaded to App Engine.
To deploy your app, run the following command from within the root directory of your
application where the app.yaml file is located:

gcloud app deploy

Optional flags:
Include the --project flag to specify an alternate Cloud Console project ID to the one
you initialized as the default in the gcloud tool. Example: --project [YOUR_PROJECT_ID]
Include the -v flag to specify a version ID; otherwise one is generated for you.
Example: -v [YOUR_VERSION_ID]

To learn more about deploying your app from the command line, see Deploying a Python
2 App.

Viewing your application

To launch your browser and view the app at [YOUR_PROJECT_ID].appspot.com, run the
following command:

gcloud app browse

RESULT:
Thus the web applications were launched using the GAE Launcher successfully.


EX NO: 5 SIMULATE A CLOUD SCENARIO USING CLOUDSIM AND RUN A SCHEDULING ALGORITHM THAT IS NOT PRESENT IN CLOUDSIM

DATE:

What is CloudSim?
CloudSim is a simulation toolkit that supports the modelling and simulation of the core
functionality of a cloud, like the job/task queue, processing of events, creation of cloud entities
(datacentre, datacentre brokers, etc.), communication between different entities, and
implementation of broker policies. This toolkit allows users to:

• Test application services in a repeatable and controllable environment.
• Tune system bottlenecks before deploying apps in an actual cloud.
• Experiment with different workload mixes and resource performance scenarios on
simulated infrastructure for developing and testing adaptive application provisioning
techniques.

Core features of CloudSim are:

• Support for modelling and simulation of large-scale computing environments such as
federated cloud data centres and virtualized server hosts, with customizable policies for
provisioning host resources to virtual machines, and energy-aware computational
resources.
• It is a self-contained platform for modelling the cloud's service brokers, provisioning, and
allocation policies.
• It supports the simulation of network connections among simulated system elements.
• Support for simulation of a federated cloud environment that inter-networks resources
from both private and public domains.
• Availability of a virtualization engine that aids in the creation and management of
multiple independent and co-hosted virtual services on a data centre node.
• Flexibility to switch between space-shared and time-shared allocation of processing
cores to virtualized services.

import java.text.DecimalFormat;
import java.util.Calendar;
import java.util.List;

import org.cloudbus.cloudsim.Cloudlet;
import org.cloudbus.cloudsim.Datacenter;
import org.cloudbus.cloudsim.Log;
import org.cloudbus.cloudsim.Vm;
import org.cloudbus.cloudsim.core.CloudSim;

/**
 * FCFS Task scheduling
 * @author Linda J
 */
public class FCFS {

    /** The cloudlet list. */
    private static List<Cloudlet> cloudletList;

    /** The vmlist. */
    private static List<Vm> vmlist;

    private static int reqTasks = 5;
    private static int reqVms = 2;

    /**
     * Creates main() to run this example
     */
    public static void main(String[] args) {

        Log.printLine("Starting FCFS...");

        try {
            // First step: Initialize the CloudSim package. It should be
            // called before creating any entities.
            int num_user = 1; // number of cloud users
            Calendar calendar = Calendar.getInstance();
            boolean trace_flag = false; // mean trace events

            // Initialize the CloudSim library
            CloudSim.init(num_user, calendar, trace_flag);

            // Second step: Create Datacenters
            // Datacenters are the resource providers in CloudSim. We need at
            // least one of them to run a CloudSim simulation
            @SuppressWarnings("unused")
            Datacenter datacenter0 = createDatacenter("Datacenter_0");

            // Third step: Create Broker
            FcfsBroker broker = createBroker();
            int brokerId = broker.getId();

            // Fourth step: Create the virtual machines
            vmlist = new VmsCreator().createRequiredVms(reqVms, brokerId);

            // submit vm list to the broker
            broker.submitVmList(vmlist);

            // Fifth step: Create the Cloudlets
            cloudletList = new CloudletCreator().createUserCloudlet(reqTasks, brokerId);

            // submit cloudlet list to the broker
            broker.submitCloudletList(cloudletList);

            // call the scheduling function via the broker
            // (custom FCFS scheduling method defined in FcfsBroker)
            broker.scheduleTaskstoVms();

            // Sixth step: Starts the simulation
            CloudSim.startSimulation();

            // Final step: Print results when simulation is over
            List<Cloudlet> newList = broker.getCloudletReceivedList();

            CloudSim.stopSimulation();

            printCloudletList(newList);

            Log.printLine("FCFS finished!");
        } catch (Exception e) {
            e.printStackTrace();
            Log.printLine("The simulation has been terminated due to an unexpected error");
        }
    }

    private static Datacenter createDatacenter(String name) {
        Datacenter datacenter = new DataCenterCreator().createUserDatacenter(name, reqVms);
        return datacenter;
    }

    // We strongly encourage users to develop their own broker policies, to
    // submit vms and cloudlets according
    // to the specific rules of the simulated scenario
    private static FcfsBroker createBroker() {
        FcfsBroker broker = null;
        try {
            broker = new FcfsBroker("Broker");
        } catch (Exception e) {
            e.printStackTrace();
            return null;
        }
        return broker;
    }

    /**
     * Prints the Cloudlet objects
     * @param list list of Cloudlets
     */
    private static void printCloudletList(List<Cloudlet> list) {
        int size = list.size();
        Cloudlet cloudlet;

        String indent = "    ";
        Log.printLine();
        Log.printLine("========== OUTPUT ==========");
        Log.printLine("Cloudlet ID" + indent + "STATUS" + indent
                + "Data center ID" + indent + "VM ID" + indent + "Time"
                + indent + "Start Time" + indent + "Finish Time");

        DecimalFormat dft = new DecimalFormat("###.##");
        for (int i = 0; i < size; i++) {
            cloudlet = list.get(i);
            Log.print(indent + cloudlet.getCloudletId() + indent + indent);

            if (cloudlet.getCloudletStatus() == Cloudlet.SUCCESS) {
                Log.print("SUCCESS");

                Log.printLine(indent + indent + cloudlet.getResourceId()
                        + indent + indent + indent + cloudlet.getVmId()
                        + indent + indent + dft.format(cloudlet.getActualCPUTime())
                        + indent + indent + dft.format(cloudlet.getExecStartTime())
                        + indent + indent + dft.format(cloudlet.getFinishTime()));
            }
        }
    }
}

RESULT:
Thus a cloud scenario was simulated using CloudSim and an FCFS scheduling algorithm was run successfully.


EX NO: 6 PROCEDURE TO TRANSFER THE FILES FROM ONE VIRTUAL MACHINE TO ANOTHER VIRTUAL MACHINE

DATE:


AIM:

To write a procedure to transfer the files from one Virtual Machine to another
Virtual Machine.

PROCEDURE:

STEP:1 Select the VM and click File->Export Appliance

STEP:2 Select the VM to be exported and click NEXT.

STEP:3 Note the file path and click “Next”

STEP:4 Click “Export”. The virtual machine is being exported.

STEP:5 Install “ssh” to access the neighbour's VM.


STEP:6 Go to File->Computer:/home/sam/Documents/

STEP:7 Type the neighbour's URL (an sftp:// address).

STEP:8 Give the password and get connected.

STEP:9 Select the VM and copy it in desktop.

STEP:10 Open Virtual Box and select File->Import Appliance->Browse


STEP:11 Select the VM to be imported and click “Open”.

STEP:12 Click “Next”

STEP:13 Click “Import”.

STEP:14 VM is being imported.


STEP:15 VM is imported.

RESULT:

Thus the files were transferred from one Virtual Machine to another Virtual
Machine successfully.


EX NO: 7.a PROCEDURE TO INSTALL HADOOP SINGLE NODE CLUSTER


DATE:

AIM:

To find the procedure to set up a single-node Hadoop cluster.

PROCEDURE:

sam@sysc40:~$ sudo apt-get update

sam@sysc40:~$ sudo apt-get install default-jdk

sam@sysc40:~$ java -version

openjdk version "1.8.0_131"


Open JDK Runtime Environment (build 1.8.0_131-8u131-b11-0ubuntu1.16.04.2-b11)

Open JDK 64-Bit Server VM (build 25.131-b11, mixed mode)

sam@sysc40:~$ sudo addgroup hadoop

Adding group `hadoop' (GID 1002) ...

Done.

sam@sysc40:~$ sudo adduser --ingroup hadoop hduser

Adding user `hduser' ...

Adding new user `hduser' (1002) with group `hadoop' ...

Creating home directory `/home/hduser' ...

Copying files from `/etc/skel' ...

Enter new UNIX password: \\Note: Enter any password and remember it; this is only
for Unix (applicable for hduser)

Retype new UNIX password:

passwd: password updated successfully

Changing the user information for hduser

Enter the new value, or press ENTER for the default \\Note: Just enter your name and then
press the Enter key for the remaining fields

Full Name []:

Room Number []:

Work Phone []:

Home Phone []:

Other []:

Is the information correct? [Y/n] y

sam@sysc40:~$ groups hduser



hduser : hadoop

sam@sysc40:~$ sudo apt-get install ssh

Reading package lists... Done

Building dependency tree

Reading state information... Done

The following NEW packages will be installed: ssh

0 upgraded, 1 newly installed, 0 to remove and 139 not upgraded.

Need to get 7,076 B of archives.

After this operation, 99.3 kB of additional disk space will be used.

Get:1 [Link] xenial-updates/main amd64 ssh all 1:7.2p2-4ubuntu2.2 [7,076 B]

Fetched 7,076 B in 0s (16.2 kB/s)

Selecting previously unselected package ssh.

(Reading database ... 233704 files and directories currently installed.)

Preparing to unpack .../ssh_1%3a7.2p2-4ubuntu2.2_all.deb ...

Unpacking ssh (1:7.2p2-4ubuntu2.2) ...

Setting up ssh (1:7.2p2-4ubuntu2.2) ...

sam@sysc40:~$ which ssh

/usr/bin/ssh

sam@sysc40:~$ which sshd

/usr/sbin/sshd

sam@sysc40:~$ su hduser

Password: \\Note: Enter the password that we have given above for hduser

hduser@sysc40:/home/sam$

hduser@sysc40:/home/sam$ cd

hduser@sysc40:~$ ssh-keygen -t rsa -P ""

Generating public/private rsa key pair.

Enter file in which to save the key (/home/hduser/.ssh/id_rsa): \\Note: Just click Enter button

Your identification has been saved in /home/hduser/.ssh/id_rsa.

Your public key has been saved in /home/hduser/.ssh/id_rsa.pub.

The key fingerprint is:

SHA256:QWYjqMI0g/ElhpXhVvgVITSn4O4HWS98MDqCX7Gsf/g hduser@sysc40

The key's randomart image is:

+--- [RSA 2048] --- +

|o+*=*.=o= |

|oOo=.=.= . |

|o Bo*. . |

|o+.*.* . |

|o.* * o S |

|+=o |

| + .. |

| o. . |

| .oE |

+---- [SHA256] -------+

hduser@sysc40:~$ cat $HOME/.ssh/id_rsa.pub >> $HOME/.ssh/authorized_keys

hduser@sysc40:~$ ssh localhost

The authenticity of host 'localhost ([Link])' can't be established.

ECDSA key fingerprint is SHA256:+kILEX2sGtgsoPfCQ+Vw2cWHbbWGJt0qTEMu9tEvaX8.

Are you sure you want to continue connecting (yes/no)? yes



Warning: Permanently added 'localhost' (ECDSA) to the list of known hosts.

Welcome to Ubuntu 16.04.2 LTS (GNU/Linux 4.8.0-58-generic x86_64)

* Documentation: [Link]

* Management: [Link]

* Support: [Link]

143 packages can be updated.

15 updates are security updates.

The programs included with the Ubuntu system are free software;

the exact distribution terms for each program are described in the

individual files in /usr/share/doc/*/copyright.

Ubuntu comes with ABSOLUTELY NO WARRANTY, to the extent permitted by

applicable law.
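Password-less login to localhost works above because ssh-keygen created a key pair and the public half was appended to authorized_keys. That idea can be sketched in isolation with a throwaway directory (demo_ssh below is a placeholder used instead of the real ~/.ssh):

```shell
# Generate a throwaway RSA key pair with an empty passphrase,
# written to a demo directory rather than ~/.ssh
mkdir -p demo_ssh
ssh-keygen -t rsa -P "" -f demo_ssh/id_rsa -q

# Authorize the key by appending the public half,
# mirroring the cat >> authorized_keys step above
cat demo_ssh/id_rsa.pub >> demo_ssh/authorized_keys
```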

hduser@sysc40:~$ wget [Link]/hadoop-2.6.5/hadoop-2.6.5.tar.gz

--2017-07-21 [Link]-- [Link]/hadoop-2.6.5/hadoop-2.6.5.tar.gz

Resolving [Link] ([Link])... [Link]

Connecting to [Link] ([Link])|[Link]|:80...

connected. HTTP request sent, awaiting response... 200 OK Length: 199635269

(190M) [application/x-gzip]

Saving to: '[Link].2'

[Link] 100%[===================>] 190.39M 180KB/s in 17m 4s



2017-07-21 [Link] (190 KB/s) - '[Link].2' saved [199635269/199635269]

hduser@sysc40:~$ tar xvzf [Link]

hadoop-2.6.5/

hadoop-2.6.5/include/

hadoop-2.6.5/include/hdfs.h

hadoop-2.6.5/include/[Link]

hadoop-2.6.5/include/[Link]

hadoop-2.6.5/include/[Link]

hadoop-2.6.5/include/[Link]

hadoop-2.6.5/[Link]

hadoop-2.6.5/[Link]

hadoop-2.6.5/share/hadoop/tools/lib/[Link]

hadoop-2.6.5/share/hadoop/tools/lib/[Link]

hadoop-2.6.5/share/hadoop/tools/lib/[Link]

hduser@sysc40:~$ sudo mkdir -p /usr/local/hadoop

[sudo] password for hduser:

hduser is not in the sudoers file. This incident will be reported.

hduser@sysc40:~$ cd hadoop-2.6.5

hduser@sysc40:~/hadoop-2.6.5$ su sam

Password: sam123

sam@sysc40:/home/hduser/hadoop-2.6.5$ sudo adduser hduser sudo


[sudo] password for sam:

Adding user `hduser' to group `sudo' ...

Adding user hduser to group sudo

Done.

sam@sysc40:/home/hduser/hadoop-2.6.5$ su hduser

Password: \\Note: Enter the password that we have given above for hduser

hduser@sysc40:~/hadoop-2.6.5$ sudo mkdir /usr/local/hadoop

hduser@sysc40:~/hadoop-2.6.5$ sudo mv * /usr/local/hadoop

hduser@sysc40:~/hadoop-2.6.5$ sudo chown -R hduser:hadoop /usr/local/hadoop

hduser@sysc40:~/hadoop-2.6.5$ cd

hduser@sysc40:~$ update-alternatives --config java

There is only one alternative in link group java (providing /usr/bin/java): /usr/lib/jvm/java-8-openjdk-amd64/jre/bin/java

Nothing to configure.

hduser@sysc40:~$ nano ~/.bashrc

Add the below content at the end of the file and save it

#HADOOP VARIABLES START

export JAVA_HOME=/usr/lib/jvm/java-8-openjdk-amd64
export HADOOP_INSTALL=/usr/local/hadoop

export PATH=$PATH:$HADOOP_INSTALL/bin

export PATH=$PATH:$HADOOP_INSTALL/sbin

export HADOOP_MAPRED_HOME=$HADOOP_INSTALL

export HADOOP_COMMON_HOME=$HADOOP_INSTALL

export HADOOP_HDFS_HOME=$HADOOP_INSTALL

export YARN_HOME=$HADOOP_INSTALL

export HADOOP_COMMON_LIB_NATIVE_DIR=$HADOOP_INSTALL/lib/native

export HADOOP_OPTS="-[Link]=$HADOOP_INSTALL/lib"
#HADOOP VARIABLES END

hduser@sysc40:~$ source ~/.bashrc
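The exports above append Hadoop's bin and sbin directories to the end of the existing PATH rather than replacing it. A quick sketch of the same string logic in Python (the starting PATH value here is an illustrative assumption):

```python
# Simulate what `export PATH=$PATH:$HADOOP_INSTALL/bin` (and the sbin
# line) do to the PATH string on Linux, where entries are ':'-separated.
existing_path = "/usr/bin:/bin"        # stand-in for the current $PATH
hadoop_install = "/usr/local/hadoop"   # matches HADOOP_INSTALL above

new_path = ":".join([existing_path,
                     hadoop_install + "/bin",
                     hadoop_install + "/sbin"])
# Earlier entries still win lookups; Hadoop's commands are now reachable.
```

Once source ~/.bashrc takes effect, commands such as hadoop and the start/stop scripts resolve without typing their full paths.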

hduser@sysc40:~$ javac -version

javac 1.8.0_131

hduser@sysc40:~$ which javac

/usr/bin/javac

hduser@sysc40:~$ readlink -f /usr/bin/javac

/usr/lib/jvm/java-8-openjdk-amd64/bin/javac

hduser@sysc40:~$ nano /usr/local/hadoop/etc/hadoop/hadoop-[Link]

Add the below line at the end of the file

export JAVA_HOME=/usr/lib/jvm/java-8-openjdk-amd64

hduser@sysc40:~$ sudo mkdir -p /app/hadoop/tmp

hduser@sysc40:~$ sudo chown hduser:hadoop /app/hadoop/tmp

hduser@sysc40:~$ nano /usr/local/hadoop/etc/hadoop/[Link]

Add the below line inside the <configuration></configuration> tag.

<configuration>

<property>

<name>[Link]</name>

<value>/app/hadoop/tmp</value>


<description>A base for other temporary directories.</description>

</property>

<property>

<name>[Link]</name>

<value>hdfs://localhost:54310</value>

<description>The name of the default file system. A URI whose

scheme and authority determine the FileSystem implementation. The

uri's scheme determines the config property ([Link]) naming

the FileSystem implementation class. The uri's authority is used to

determine the host, port, etc. for a filesystem.</description>

</property>

</configuration>
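The description above says that the URI's scheme selects the FileSystem implementation and its authority gives the host and port. Parsing the configured value with Python's standard library makes that concrete (an illustration only, not how Hadoop itself parses it):

```python
from urllib.parse import urlparse

# The default-filesystem value configured in the snippet above.
uri = urlparse("hdfs://localhost:54310")

scheme = uri.scheme    # "hdfs" -> selects the HDFS FileSystem implementation
host = uri.hostname    # "localhost" -> where the NameNode listens
port = uri.port        # 54310, as set above
```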

hduser@sysc40:~$ cp /usr/local/hadoop/etc/hadoop/mapred-[Link] /usr/local/hadoop/etc/hadoop/[Link]

hduser@sysc40:~$ nano /usr/local/hadoop/etc/hadoop/[Link]

Add the below line inside the <configuration></configuration> tag.

<configuration>

<property>

<name>[Link]</name>

<value>localhost:54311</value>

<description>The host and port that the MapReduce job tracker

runs at. If "local", then jobs are run in-process as a single map and

reduce task.

</description>

</property>

</configuration>


hduser@sysc40:~$ sudo mkdir -p /usr/local/hadoop_store/hdfs/namenode

hduser@sysc40:~$ sudo mkdir -p /usr/local/hadoop_store/hdfs/datanode

hduser@sysc40:~$ sudo chown -R hduser:hadoop /usr/local/hadoop_store

hduser@sysc40:~$ nano /usr/local/hadoop/etc/hadoop/[Link]

<configuration>

<property>

<name>[Link]</name>

<value>1</value>

<description>Default block replication.

The actual number of replications can be specified when the file is created.

The default is used if replication is not specified in create time.

</description>

</property>

<property>

<name>[Link]</name>

<value>file:/usr/local/hadoop_store/hdfs/namenode</value>

</property>

<property>

<name>[Link]</name>

<value>file:/usr/local/hadoop_store/hdfs/datanode</value>

</property>

</configuration>
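Hadoop reads each of these files as plain name/value pairs nested under a configuration element. A minimal sketch of that structure with Python's standard XML parser — the property name below is an illustrative placeholder, since the listing above redacts the real keys:

```python
import xml.etree.ElementTree as ET

# Same <configuration>/<property> shape as the files edited above;
# "example.replication" is a placeholder, not a real Hadoop key.
snippet = """
<configuration>
  <property>
    <name>example.replication</name>
    <value>1</value>
  </property>
</configuration>
"""

props = {p.findtext("name"): p.findtext("value")
         for p in ET.fromstring(snippet).iter("property")}
```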

hduser@sysc40:~$ hadoop namenode -format

DEPRECATED: Use of this script to execute hdfs command is deprecated.

Instead use the hdfs command for it.


16/11/10 [Link] INFO [Link]: STARTUP_MSG:

/************************************************************

STARTUP_MSG: Starting NameNode
STARTUP_MSG: host = laptop/[Link]

STARTUP_MSG: args = [-format]

STARTUP_MSG: version = 2.6.5

...

...

...

16/11/10 [Link] INFO [Link]: Exiting with status 0

16/11/10 [Link] INFO [Link]: SHUTDOWN_MSG:

/************************************************************

SHUTDOWN_MSG: Shutting down NameNode at laptop/[Link]

************************************************************/

Starting Hadoop

Now it's time to start the newly installed single node cluster.
We can use [Link] or ([Link] and [Link])

hduser@sysc40:~$ su sam

Password: sam123

sam@sysc40:/home/hduser$ cd

sam@sysc40:~$ cd /usr/local/hadoop/sbin

sam@sysc40:/usr/local/hadoop/sbin$ ls

[Link] [Link] [Link]

[Link] [Link] [Link]

[Link] [Link] [Link]

[Link] [Link] [Link]



[Link] [Link] [Link]

[Link] [Link] [Link]

[Link] [Link] [Link]

[Link] [Link] [Link]

[Link] [Link]

[Link] [Link]

sam@sysc40:/usr/local/hadoop/sbin$ sudo su hduser

[sudo] password for sam: sam123

Start NameNode daemon and DataNode daemon:

hduser@sysc40:/usr/local/hadoop/sbin$ [Link]

16/11/10 [Link] WARN [Link]: Unable to load native-hadoop library for


your platform... using builtin-java classes where applicable

Starting namenodes on [localhost]

localhost: starting namenode, logging to /usr/local/hadoop/logs/hadoop-hduser-namenode-[Link]

localhost: starting datanode, logging to /usr/local/hadoop/logs/hadoop-hduser-datanode-[Link]

Starting secondary namenodes [[Link]]

The authenticity of host '[Link] ([Link])' can't be established.

ECDSA key fingerprint is


SHA256:e9SM2INFNu8NhXKzdX9bOyKIKbMoUSK4dXKonloN7JY.

Are you sure you want to continue connecting (yes/no)? yes

[Link]: Warning: Permanently added '[Link]' (ECDSA) to the list of known hosts.

[Link]: starting secondary namenode, logging to /usr/local/hadoop/logs/hadoop-[Link]

16/11/10 [Link] WARN [Link]: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable

hduser@sysc40:/usr/local/hadoop/sbin$ [Link]

starting yarn daemons

starting resourcemanager, logging to /usr/local/hadoop/logs/yarn-hduser-[Link]

localhost: starting nodemanager, logging to /usr/local/hadoop/logs/yarn-hduser-[Link]

hduser@sysc70:/usr/local/hadoop/sbin$ jps

14306 DataNode

14660 ResourceManager

14505 SecondaryNameNode

14205 NameNode

14765 NodeManager

15166 Jps
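jps prints one "PID ProcessName" pair per line, and a healthy single-node cluster shows the five Hadoop daemons plus jps itself. A small Python check over output shaped like the listing above:

```python
# Output in the shape printed by `jps` above (PIDs will differ per run).
jps_output = """14306 DataNode
14660 ResourceManager
14505 SecondaryNameNode
14205 NameNode
14765 NodeManager
15166 Jps"""

running = {line.split(maxsplit=1)[1] for line in jps_output.splitlines()}
expected = {"NameNode", "DataNode", "SecondaryNameNode",
            "ResourceManager", "NodeManager"}
missing = expected - running   # an empty set means the cluster is up
```

If any daemon is missing here, check its log file under /usr/local/hadoop/logs before proceeding.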

hduser@laptop:/usr/local/hadoop/sbin$ [Link]

16/11/10 [Link] WARN [Link]: Unable to load native-hadoop library for


your platform... using builtin-java classes where applicable

Stopping namenodes on [localhost]

localhost: stopping namenode

localhost: stopping datanode

Stopping secondary namenodes [[Link]]

[Link]: stopping secondarynamenode

16/11/10 [Link] WARN [Link]: Unable to load native-hadoop library for


your platform... using builtin-java classes where applicable

hduser@laptop:/usr/local/hadoop/sbin$ [Link]

stopping yarn daemons

stopping resourcemanager

localhost: stopping nodemanager


no proxyserver to stop

Hadoop Web Interfaces

hduser@laptop:/usr/local/hadoop/sbin$ [Link]

hduser@laptop:/usr/local/hadoop/sbin$ [Link]

Type [Link] into the browser; we will then see the web UI of the
NameNode daemon. In the Overview tab, you can see the Overview, Summary,
NameNode Journal Status and NameNode Storage information.

Type in [Link] as the URL to get the Secondary NameNode:

The default port number to access all the applications of the cluster is 8088. Use the following URL
to visit the Resource Manager: [Link]

We need to click the Nodes option in the left Cluster panel, then it will show the node that
we have created.
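The web UIs above sit on fixed default ports in Hadoop 2.x. The NameNode (50070) and Resource Manager (8088) ports are stated above; the Secondary NameNode port (50090) is the stock Hadoop 2.x default and is an assumption here, since the transcript redacts that URL:

```python
# Default Hadoop 2.x web-UI ports; 50090 is assumed from stock defaults.
ui_ports = {
    "NameNode": 50070,           # Overview / Summary dashboard
    "SecondaryNameNode": 50090,  # checkpoint status (assumed default)
    "ResourceManager": 8088,     # cluster and Nodes view
}

urls = {name: "http://localhost:%d" % port
        for name, port in ui_ports.items()}
```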


RESULT:

Thus Hadoop one node cluster has been created.


EX NO: 7.b IMPLEMENT A SIMPLE WORDCOUNT APPLICATION


DATE:
AIM

To write a word count program to demonstrate the use of Map and Reduce task.

PROCEDURE

Step 0: We need to check whether the Hadoop environment is up, with all six
processes shown by jps and the dashboard reachable.

sam@sysc65:~$ su hduser

Password:

hduser@sysc65:/home/sam$ cd /usr/local/hadoop/sbin

hduser@sysc65:/usr/local/hadoop/sbin$ [Link]

hduser@sysc65:/usr/local/hadoop/sbin$ jps

3797 NameNode

4279 ResourceManager

4120 SecondaryNameNode


3916 DataNode

4396 NodeManager

5486 Jps

hduser@sysc65:/usr/local/hadoop/sbin$

Type "localhost:50070" into the browser. We should get the dashboard for the
Hadoop environment.

Only then can we continue with the lines below to execute this exercise. Open a
separate terminal to work out the commands below with Alt+Ctrl+T.
Step 1: sam@sysc65:~$ su hduser

Password:

Step 2:hduser@sysc65:/home/sam$ cd

Step 3: hduser@sysc65:~$ cd /home/hduser

Step 4: hduser@sysc65:~$ nano [Link]

Paste the program into that file and save it by Ctrl+o, Enter & Ctrl+x

import [Link];

import [Link];

import [Link];

import [Link];

import [Link];

import [Link];

import [Link];

import [Link];

import [Link];

import [Link];

import [Link];

import [Link];

public class WordCount {

    public static void main(String[] args) throws Exception {
        Configuration c = new Configuration();
        String[] files = new GenericOptionsParser(c, args).getRemainingArgs();
        Path input = new Path(files[0]);
        Path output = new Path(files[1]);
        Job j = new Job(c, "wordcount");
        [Link]([Link]);
        [Link]([Link]);
        [Link]([Link]);
        [Link]([Link]);
        [Link]([Link]);
        [Link](j, input);
        [Link](j, output);
        [Link]([Link](true) ? 0 : 1);
    }

    public static class MapForWordCount extends Mapper<LongWritable, Text, Text, IntWritable> {

        public void map(LongWritable key, Text value, Context con)
                throws IOException, InterruptedException {
            String line = [Link]();
            String[] words = [Link](",");
            for (String word : words) {
                Text outputKey = new Text([Link]().trim());
                IntWritable outputValue = new IntWritable(1);
                [Link](outputKey, outputValue);
            }
        }
    }

    public static class ReduceForWordCount extends Reducer<Text, IntWritable, Text, IntWritable> {

        public void reduce(Text word, Iterable<IntWritable> values, Context con)
                throws IOException, InterruptedException {
            int sum = 0;
            for (IntWritable value : values) {
                sum += [Link]();
            }
            [Link](word, new IntWritable(sum));
        }
    }
}

Step5: hduser@sysc65:~$ /usr/local/hadoop/bin/hadoop classpath

/usr/local/hadoop/etc/hadoop:/usr/local/hadoop/share/hadoop/common/lib/*:/usr/local/hadoop/share/hadoop/common/*:/usr/local/hadoop/share/hadoop/hdfs:/usr/local/hadoop/share/hadoop/hdfs/lib/*:/usr/local/hadoop/share/hadoop/hdfs/*:/usr/local/hadoop/share/hadoop/yarn/lib/*:/usr/local/hadoop/share/hadoop/yarn/*:/usr/local/hadoop/share/hadoop/mapreduce/lib/*:/usr/local/hadoop/share/hadoop/mapreduce/*:/contrib/capacity-scheduler/*.jar

Step 6: Copy and paste the above classpath to compile the java program and make it as jar
file.

hduser@sysc65:~$ javac -cp


"/usr/local/hadoop/etc/hadoop:/usr/local/hadoop/share/hadoop/common/lib/*:/usr/local/
hadoop/share/hadoop/common/*:/usr/local/hadoop/share/hadoop/hdfs:/usr/local/hadoop
/share/hadoop/hdfs/lib/*:/usr/local/hadoop/share/hadoop/hdfs/*:/usr/local/hadoop/share
/hadoop/yarn/lib/*:/usr/local/hadoop/share/hadoop/yarn/*:/usr/local/hadoop/share/hado
op/mapreduce/lib/*:/usr/local/hadoop/share/hadoop/mapreduce/*:/contrib/capacity-
scheduler/*.jar" [Link]

Note: [Link] uses or overrides a deprecated API.

Note: Recompile with -Xlint:deprecation for details.

Step 7: hduser@sysc65:~$ jar cvf [Link] *.class

added manifest

adding: [Link](in = 1678) (out= 885)(deflated 47%)

adding: WordCount$[Link](in = 1788) (out= 752)(deflated 57%)

adding: WordCount$[Link](in = 1651) (out= 690)(deflated 58%)

Step 8: hduser@sysc65:~$ nano [Link]

Paste the below lines into that file and save it by Ctrl+o, Enter & Ctrl+x

bus,train,car,bUs,TrAiN,cAr,bus,train,car,bUs,TrAiN,cAr,bus,train,car,bUs,TrAiN,cAr,bus,tr
ain,car,bUs,TrAiN,cAr,bus,train,car,bUs,TrAiN,cAr,bus,train,car,bUs,TrAiN,cAr,bus,train,ca
r,bUs,TrAiN,cAr,bus,train,car,bUs,TrAiN,cAr,bus,train,car,bUs,TrAiN,cAr,bus,train,car,bUs,
TrAiN,cAr,bus,train,car,bUs,TrAiN,cAr,train,bus,bus
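The input above is the six-word group "bus,train,car,bUs,TrAiN,cAr" repeated eleven times followed by "train,bus,bus", so the job's result can be predicted locally. A plain-Python sketch of the same map (split on commas, trim, uppercase — the uppercasing is inferred from the job output in Step 14) and reduce (sum per key) logic:

```python
from collections import Counter

# Reconstruction of the input file: 11 repetitions of the six-word
# group, then a trailing "train,bus,bus" (69 words in total).
data = ",".join(["bus,train,car,bUs,TrAiN,cAr"] * 11) + ",train,bus,bus"

# Map phase: emit an (UPPERCASED_WORD, 1) pair per comma-separated token.
pairs = [(word.strip().upper(), 1) for word in data.split(",")]

# Reduce phase: sum the ones per key.
counts = Counter()
for word, one in pairs:
    counts[word] += one
```

This predicts BUS 24, CAR 22, TRAIN 23, matching the part-r-00000 output read back in Step 14.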

Step 9: hduser@sysc65:~$ unset HADOOP_COMMON_HOME



Step 10: hduser@sysc65:~$ /usr/local/hadoop/bin/hadoop fs -mkdir -p /home/hduser

17/08/23 [Link] WARN [Link]: Unable to load native-hadoop library for


your platform... using builtin-java classes where applicable

Step 11:

hduser@sysc65:~$ /usr/local/hadoop/bin/hadoop fs -put /home/hduser/[Link] /home/hduser/[Link]

17/08/23 [Link] WARN [Link]: Unable to load native-hadoop library for


your platform... using builtin-java classes where applicable

Step 12:

hduser@sysc65:~$ /usr/local/hadoop/bin/hadoop jar /home/hduser/[Link] WordCount /home/hduser/[Link] /home/hduser/MRDir1

17/08/23 [Link] WARN [Link]: Unable to load native-hadoop library for


your platform... using builtin-java classes where applicable

17/08/23 [Link] INFO [Link]: [Link] is deprecated. Instead, use [Link]-id

Map-Reduce Framework

Map input records=1

Map output records=69

Map output bytes=598

Map output materialized bytes=742

Merged Map outputs=1

GC time elapsed (ms)=77

CPU time spent (ms)=0

Physical memory (bytes) snapshot=0

Virtual memory (bytes) snapshot=0


Total committed heap usage (bytes)=446169088

Shuffle Errors

BAD_ID=0

CONNECTION=0

IO_ERROR=0

WRONG_LENGTH=0

WRONG_MAP=0

WRONG_REDUCE=0

File Input Format Counters

Bytes Read=322

File Output Format Counters

Bytes Written=23

Step 13: hduser@sysc65:~$ /usr/local/hadoop/bin/hadoop fs -ls /home/hduser/MRDir1

17/08/23 [Link] WARN [Link]: Unable to load native-hadoop library for


your platform... using builtin-java classes where applicable

Found 2 items

-rw-r--r-- 1 hduser supergroup 0 2017-08-23 13:15 /home/hduser/MRDir1/_SUCCESS

-rw-r--r-- 1 hduser supergroup 23 2017-08-23 13:15 /home/hduser/MRDir1/part-r-00000

Step 14:

hduser@sysc65:~$ /usr/local/hadoop/bin/hadoop fs -cat /home/hduser/MRDir1/part-r-00000

17/08/23 [Link] WARN [Link]: Unable to load native-hadoop library for


your platform... using builtin-java classes where applicable

BUS 24

CAR 22

TRAIN 23


RESULT:

Thus the word count program to demonstrate the use of Map and Reduce task has
been created and executed.


EXP NO: 8 CREATING AND EXECUTING YOUR FIRST CONTAINER USING DOCKER

DATE:
AIM:
To create and execute your first container using Docker.
PROCEDURE
Follow the instructions specific to your operating system. Here are the general steps for the
most popular operating systems:
Windows:
a. Visit the Docker website ([Link]) and navigate to the "Get Docker"
section.
b. Click on the "Download for Windows" button to download the Docker Desktop installer.
c. Run the installer and follow the prompts. During the installation, Docker may require you
to enable Hyper-V and Containers features, so make sure to enable them if prompted.
d. Once the installation is complete, Docker Desktop will be installed on your Windows
machine. You can access it from the Start menu or the system tray.
Mac:
a. Visit the Docker website ([Link]) and navigate to the "Get Docker"
section.
b. Click on the "Download for Mac" button to download the Docker Desktop installer.
c. Run the installer and drag the Docker icon to the Applications folder to install Docker
Desktop.
d. Launch Docker Desktop from the Applications folder or the Launchpad. It will appear in
the status bar at the top of your screen.
Linux:
Docker supports various Linux distributions. The exact installation steps may vary based on
your distribution. Here's a general outline:
a. Visit the Docker website ([Link]) and navigate to the "Get Docker"
section.
b. Click on the "Download for Linux" button.
c. Docker provides installation instructions for various Linux distributions such as Ubuntu,
CentOS, Debian, Fedora, and more. Follow the instructions specific to your distribution.
d. Once Docker is installed, start the Docker service using the appropriate command for your
Linux distribution.
After completing the installation, you can open a terminal or command prompt and run the
docker --version command to verify that Docker is installed correctly. It should display the
version of Docker installed on your system.


That's it! You now have Docker installed on your machine and can start using it to manage
containers.
Install Docker: First, you need to install Docker on your machine. Docker provides platform-
specific installation instructions on their website for different operating systems. Follow the
instructions to install Docker for your particular OS.
Docker Image: Docker containers are created based on Docker images. An image is a
lightweight, standalone, and executable package that includes everything needed to run a
piece of software, including the code, runtime, libraries, and system tools. Docker Hub
([Link]) is a popular online repository of Docker images. You can search for
existing images on Docker Hub or create your own. For this example, we'll use an existing
image.
Pull an Image: Open your terminal or command prompt and execute the following command
to pull an existing Docker image from Docker Hub. We'll use the official hello-world image
as an example:
docker pull hello-world
Docker will download the image from the Docker Hub repository.
Run a Container: Once you have the Docker image, you can create and run a container based
on that image. Execute the following command to run the hello-world container:
docker run hello-world
Docker will create a container from the image and execute it. The container will print a
"Hello from
Docker!" message along with some information about your Docker installation.
Note: If you haven't pulled the hello-world image in the previous step, Docker will
automatically download it before running the container.
Congratulations! You've created and executed your first Docker container. Docker will
handle the container lifecycle, including starting, stopping, and managing resources for you.
This simple example demonstrates the basic concept of running a container using Docker.
You can explore further by trying out different Docker images and running more complex
applications within containers.

RESULT:
Thus the first container using docker is created and executed successfully.


EXP NO :9 RUN A CONTAINER FROM DOCKER HUB


DATE:
AIM:
To run a container from docker hub
PROCEDURE:
Run a Container from Docker Hub
To run a container from Docker Hub, you need to follow these steps:
Search for an Image: Visit the Docker Hub website ([Link]) and use the
search bar to find the image you want to run. You can search for popular images like nginx,
mysql, redis, etc., or specific images based on your requirements.
Pull the Image: Once you've found the desired image, open a terminal or command prompt
and execute the following command to pull the image from Docker Hub:
docker pull <image_name>
Replace <image_name> with the name of the image you want to pull. For example, if you
want to pull the nginx image, you would use:
docker pull nginx
Docker will download the image and store it on your local machine.
Run the Container: After pulling the image, you can create and run a container based on that
image. Use the following command:
docker run <image_name>
Replace <image_name> with the name of the image you pulled. For example:
docker run nginx
Docker will create a container from the image and start it. The container will run the default
command specified in the image, such as starting a web server, database, or any other
application.
Note: By default, Docker will allocate a random port on your host machine and map it to the
container's exposed ports. If you want to specify a specific port mapping, you can use the -p
option. For example:

docker run -p 8080:80 nginx


This will map port 8080 on your host machine to port 80 inside the container.
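The -p flag takes a "HOST:CONTAINER" pair; a two-line Python sketch of how "8080:80" reads:

```python
# Docker's -p argument has the form "<host_port>:<container_port>".
mapping = "8080:80"
host_port, container_port = (int(p) for p in mapping.split(":"))
# Requests to port 8080 on the host are forwarded to port 80 inside
# the container (nginx's default listen port).
```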
Interact with the Container: Once the container is running, you can interact with it as needed.
For example,
if you ran an nginx container, you can access it in your web browser by visiting
[Link] or
[Link] (if you specified a port mapping).
To stop the container, you can use the docker stop command followed by the container ID or
name:
docker stop <container_id_or_name>
You can list the running containers using the docker ps command and stop them as necessary.

(or)
Flow-1: Pull Docker Image from Docker Hub and Run it
Step-1: Verify Docker version and also login to Docker Hub
docker version
docker login
Step-2: Pull Image from Docker Hub
docker pull stacksimplify/dockerintro-springboot-helloworld-rest-api:1.0.0-RELEASE
Step-3: Run the downloaded Docker Image & Access the Application
Copy the docker image name from Docker Hub
docker run --name app1 -p 80:8080 -d stacksimplify/dockerintro-springboot-helloworld-rest-api:1.0.0-RELEASE
[Link]
Step-4: List Running Containers
docker ps
docker ps -a
docker ps -a -q
Step-5: Connect to Container Terminal
docker exec -it <container-name> /bin/sh
Step-6: Container Stop, Start
docker stop <container-name>
docker start <container-name>


RESULT:
Thus a container was run from Docker Hub successfully.


ADDITIONAL EXERCISES

EX NO: 1 CLIENT SERVER COMMUNICATION BETWEEN TWO VIRTUAL MACHINE INSTANCES, EXECUTION OF CHAT APPLICATION

DATE:

AIM:
To create communication between two virtual machines in a virtual environment.

PROCEDURE:

Step 1: Implement two host operating systems onto a single virtual box

Step 2: Then implement internal networking between them by the following steps

OS -> Settings -> Network -> Internal Network -> intnet

Step 3: Connect the two machines internally

Step 4: Run the virtual machines

Step 5: Open terminal in one VM, give ifconfig command

Step 6: Then ping the Ip of one machine in the other terminal

ping [Link]

Step 7: Then run the communication between the terminals
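Once the two VMs can reach each other over the internal network, the chat itself is ordinary TCP sockets. A minimal Python sketch, shown looping back on one host for illustration — on the real VMs the client would connect to the other machine's address from ifconfig instead of 127.0.0.1, and the port number is an arbitrary choice:

```python
import socket
import threading

HOST, PORT = "127.0.0.1", 5050   # on real VMs, use the peer's address
ready = threading.Event()

def serve_once():
    """One-shot chat peer: accept a connection, reply to one message."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
        srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        srv.bind((HOST, PORT))
        srv.listen(1)
        ready.set()                   # now safe for the client to connect
        conn, _ = srv.accept()
        with conn:
            msg = conn.recv(1024).decode()
            conn.sendall(("server got: " + msg).encode())

server = threading.Thread(target=serve_once)
server.start()
ready.wait(timeout=5)

# The "other VM": connect, send one chat line, read the reply.
with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as cli:
    cli.connect((HOST, PORT))
    cli.sendall(b"hello from VM1")
    reply = cli.recv(1024).decode()

server.join()
```

A real chat application would loop on both sides, reading lines from the keyboard and the socket in turn.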

RESULT:
Thus the communication between two virtual machines in a virtual environment
was established successfully.


EX NO: 2 STUDY AND IMPLEMENTATION OF STORAGE AS A SERVICE

DATE:
AIM:
To study and implementation of Storage as a Service.

PROCEDURE:

Step 1: Sign into the Google Drive website with your Google account.
If you don't have a Google account, you can create one for free. Google Drive will allow you
to store your files in the cloud, as well as create documents and forms through the Google
Drive web interface.

Step 2: Add files to your drive.


There are two ways to add files to your drive. You can create Google Drive documents, or
you can upload files from your computer. To create a new file, click the CREATE button. To
upload a file, click the “Up Arrow” button next to the CREATE button.


Step 3: Change the way your files are displayed.


You can choose to display files by large icons (Grid) or as a list (List). The List mode will
show you at a glance the owner of the document and when it was last modified. The Grid
mode will show each file as a preview of its first page. You can change the mode by clicking
the buttons next to the gear icon in the upper right corner of the page. // List Mode

Step 4: Use the navigation bar on the left side to browse your files.
“My Drive” is where all of your uploaded files and folders are stored. “Shared with Me” are
documents and files that have been shared with you by other Drive users. “Starred” files are


files that you have marked as important, and “Recent” files are the ones you have most
recently edited.
•You can drag and drop files and folders around your Drive to organize them as you see fit.
•Click the Folder icon with a “+” sign to create a new folder in your Drive. You can create
folders inside of other folders to organize your files.

Step 5: Search for files.


You can search through your Google Drive documents and folders using the search bar at the
top of your page. Google Drive will search through titles, content, and owners. If a file is found
with the exact term in the title, it will appear under the search bar as you type so that you can
quickly select it.


Step 1: Click the NEW button.


A menu will appear that allows you to choose what type of document you want to create. You
have several options by default, and more can be added by clicking the “More “ link at the
bottom of the menu:

Step 2: Create a new file.


Once you've selected your document type, you will be taken to your blank document. If you
chose Google Docs/Sheets/Slides, you will be greeted by a wizard that will help you configure
the feel of your document.

Step 3: Name the file.


At the top of the page, click the italic gray text that says “Untitled <file type>”. When you
click it, the “Rename document” window will appear, allowing you to change the name of your
file.


Step 4: Edit your document.


Begin writing your document as you would in its commercially-equivalent. You will most
likely find that Google Drive has most of the basic features, but advanced features you may
be used to are not available.
[Link] document saves automatically as you work on it.

Step 5: Export and convert the file.


If you want to make your file compatible with similar programs, click File and place your
cursor over “Download As”. A menu will appear with the available formats. Choose the format
that best suits your needs. You will be asked to name the file and select a download location.
When the file is downloaded, it will be in the format you chose.


Step 6: Share your document.


Click File and select Share, or click the blue Share button in the upper right corner to open
the Sharing settings. You can specify who can see the file as well as who can edit it.


Other Capabilities:
1. Edit photos
2. Listen Music
3. Do drawings
4. Merge PDFs

CONCLUSION:
Google Docs provides an efficient way to store data. It fits well in the Storage as a
Service (STaaS) model. It has varied options to create documents, presentations and spreadsheets.
It saves documents automatically after a few seconds, and files can be shared anywhere on the
Internet at the click of a button.


EX NO: 3 STUDY OF AMAZON WEB SERVICES


DATE:

AIM:

To Study of Amazon Web Services

PROCEDURE:

Security using an MFA (Multi-Factor Authentication) device code:


Step 1: Go to [Link] and click on "My Account"
Step 2 : Select "AWS management console" and click on it.


Step 3: Give an email id in the required field. If you are registering for the first time, select
the "I am a new user" radio button and click on the "sign in using our secure server" button.
Step 4: Again go to "My Account", select "AWS management console" and click on it.
Sign in again by entering the user name and a valid password (check the "I am returning user
and my password is" radio button).
Step 5: All AWS projects can be viewed by you, but you cannot make any changes or create
anything new, as you are not paying any charges to Amazon.
Step 6: To create a user under the root account, follow the steps below:
1) Click on "Identity and Access Management" in the Security and Identity section
2) Click on "Users" in the dashboard; it will take you to "Create New Users". Click on
the "Create New User" button, enter the "User Name" and click on the "Create"
button at the bottom right
3) Once the user is created, click on it
4) Go to the "Security Credentials" tab
5) Click on "Create Access Key"; it will create an access key for the user.
6) Click on "Manage MFA device"; a QR code will be displayed on the screen. Scan
that QR code with a barcode scanner on your mobile phone (install one if needed).
You also need to install "Google Authenticator" on your mobile phone to generate
the MFA code.
7) Google Authenticator keeps generating a new MFA code every 30 seconds, and that
code has to be entered while logging in as the user. Hence, security is maintained by
the MFA device code: no one can use your AWS account even if they have your user
name and password, because the MFA code is on your MFA device (the mobile
phone in this case) and changes every 30 seconds.
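The rotating code comes from the standard HOTP/TOTP construction (RFC 4226/6238): an HMAC-SHA1 over a moving counter, dynamically truncated to six digits; TOTP derives the counter from the current time in fixed steps (30 seconds by default). A stdlib-only Python sketch, checked against the published RFC 4226 test secret:

```python
import hashlib
import hmac
import struct

def hotp(secret: bytes, counter: int, digits: int = 6) -> str:
    """RFC 4226 HOTP: HMAC-SHA1 over the counter, dynamically truncated."""
    mac = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = mac[-1] & 0x0F
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

def totp(secret: bytes, unix_time: int, step: int = 30) -> str:
    """RFC 6238 TOTP: HOTP with the counter taken from the clock."""
    return hotp(secret, unix_time // step)
```

With the RFC 4226 test secret b"12345678901234567890", counter 0 yields "755224" and counter 1 yields "287082" — six-digit codes of the kind the authenticator app displays.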
Step 7: Permissions in the user account:
After creating the user by following the steps above, you can give specific
permissions to a user:
1) Click on the created user
2) Go to the "Permissions" tab
3) Click on the "Attach Policy" button
4) Click on "Apply".


SAMPLE OUTPUT:

Click on "My Account". Select "AWS management console" and click on it. Give Email id in
the required field


Addition of security features


Sign in to an AWS account


Creation of users

Adding users to group


Creating Access key


Setting permissions to users


CONCLUSION:

We have studied how to secure the cloud and its data. Amazon AWS provides strong
security through extended facilities and services like the MFA device. It also gives you the
ability to add your own permissions and policies to keep data more secure.
