Unit 5 Lecture No-1 (Hive)

Big Data

(KCS061)
Unit-5
3rd YEAR (6th Sem)
(2023-24)
Presented By:
Saurabh Singh Tomar
Asst. Prof.(CSE)
UCER, Prayagraj
HIVE
Big Data and Hadoop
• The term ‘Big Data’ is used for collections of
large datasets that include huge volume, high
velocity, and a variety of data that is increasing
day by day.
• Using traditional data management systems, it
is difficult to process Big Data. Therefore, the
Apache Software Foundation introduced a
framework called Hadoop to solve Big Data
management and processing challenges.

Hadoop
• Hadoop is an open-source framework to store and process Big Data in a distributed environment. It contains two core modules: MapReduce and the Hadoop Distributed File System (HDFS).
– MapReduce: It is a parallel programming model for
processing large amounts of structured, semi-
structured, and unstructured data on large clusters of
commodity hardware.
– HDFS: The Hadoop Distributed File System is the part of the Hadoop framework used to store the datasets. It provides a fault-tolerant file system that runs on commodity hardware.

Hadoop Tools
• The Hadoop ecosystem contains different sub-projects (tools) such as Sqoop, Pig, and Hive that complement the core Hadoop modules.
– Sqoop: It is used to import and export data between HDFS and an RDBMS.
– Pig: It is a procedural language platform used to develop scripts for MapReduce operations.
– Hive: It is a platform used to develop SQL-type scripts to perform MapReduce operations.
Ways to execute MapReduce
• The traditional approach: using a Java MapReduce program for structured, semi-structured, and unstructured data.
• The scripting approach: using Pig for MapReduce to process structured and semi-structured data.
• The Hive Query Language (HiveQL or HQL): using Hive for MapReduce to process structured data.
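To make the third approach concrete, here is a minimal HiveQL sketch; the sales table and its region and amount columns are hypothetical, used purely for illustration:

-- Hypothetical table: Hive compiles this single statement
-- into a MapReduce job, replacing a hand-written Java program
SELECT region, SUM(amount) AS total_sales
FROM sales
GROUP BY region;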

What is Hive?
• Hive is a data warehouse infrastructure tool to
process structured data in Hadoop.
• It resides on top of Hadoop to summarize Big
Data, and makes querying and analyzing easy.
• Hive was initially developed by Facebook; later, the Apache Software Foundation took it up and developed it further as open source under the name Apache Hive.
• It is used by different companies. For example,
Amazon uses it in Amazon Elastic MapReduce.
Hive is not:
• A relational database
• A design for OnLine Transaction Processing
(OLTP)
• A language for real-time queries and row-level
updates

Features of Hive
• It stores the schema in a database and the processed data in HDFS.
• It is designed for OLAP.
• It provides SQL type language for querying
called HiveQL or HQL.
• It is familiar, fast, scalable, and extensible.

Hive Architecture

(Figure: Hive architecture diagram)
Hive Architecture
• User Interface
– Hive is data warehouse infrastructure software that mediates interaction between the user and HDFS. The user interfaces that Hive supports are the Hive Web UI, the Hive command line, and Hive HDInsight (on Windows Server).
• Meta Store
– Hive stores the schema or metadata of tables, databases, columns in a table, their data types, and the HDFS mapping in a dedicated database server.

Hive Architecture
• HiveQL Process Engine
– HiveQL is similar to SQL and queries the schema information held in the Metastore. It is one replacement for the traditional MapReduce approach: instead of writing a MapReduce program in Java, we write a query for the MapReduce job and let Hive process it.
• Execution Engine
– The Hive Execution Engine is the bridge between the HiveQL Process Engine and MapReduce. It processes the query and generates the same results a MapReduce job would, using the MapReduce model.
• HDFS or HBase
– The Hadoop Distributed File System (HDFS) or HBase serves as the storage layer where the data resides.

Working of Hive

(Figure: Working of Hive)
Execution of Hive
1. Execute Query
The Hive interface, such as the Command Line or Web UI, sends the query to the Driver (over a database driver such as JDBC or ODBC) to execute.
2. Get Plan
The driver takes the help of the query compiler, which parses the query to check the syntax and build the query plan.
3. Get Metadata
The compiler sends a metadata request to the Metastore (any database).
4. Send Metadata
The Metastore sends the metadata as a response to the compiler.

Execution of Hive
5. Send Plan
The compiler checks the requirements and resends the plan to the driver. Up to here, the parsing and compiling of the query is complete.
6. Execute Plan
The driver sends the execute plan to the execution engine.
7. Execute Job
Internally, the execution of the job is a MapReduce job. The execution engine sends the job to the JobTracker, which resides on the Name node; the JobTracker assigns the job to TaskTrackers, which reside on the Data nodes. Here, the query runs as a MapReduce job.

Execution of Hive
7.1 Metadata Ops
Meanwhile, during execution, the execution engine can perform metadata operations against the Metastore.
8. Fetch Result
The execution engine receives the results from the Data nodes.
9. Send Results
The execution engine sends those result values to the driver.
10. Send Results
The driver sends the results to the Hive interfaces.
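The plan produced in steps 2-5 can be inspected directly with Hive's EXPLAIN statement; a minimal sketch, run here against the employee table created later in this lecture:

EXPLAIN
SELECT * FROM employee WHERE salary > 40000;
-- prints the stages (including the MapReduce stage) the driver will execute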

Difference between RDBMS and Hive

• RDBMS is used to maintain a database; Hive is used to maintain a data warehouse.
• RDBMS uses SQL (Structured Query Language); Hive uses HQL (Hive Query Language).
• In RDBMS, the schema is fixed; in Hive, the schema can vary.
• RDBMS stores normalized data; Hive stores both normalized and de-normalized data.
• RDBMS does not support partitioning; Hive supports automatic partitioning.
• RDBMS uses no partitioning method; Hive uses the bucketing method for partitioning.
HIVE Components
• The major components of Apache Hive are:
• Hive Client
• Hive Services
• Processing and Resource Management
• Distributed Storage

Hive Client
• Hive clients are categorized into three types:
1. Thrift Clients
• The Hive server is built on Apache Thrift so that it can serve requests from Thrift clients.
2. JDBC client
• Hive allows Java applications to connect to it using the JDBC driver. The JDBC driver uses Thrift to communicate with the Hive server.
3. ODBC client
• The Hive ODBC driver allows applications based on the ODBC protocol to connect to Hive. Similar to the JDBC driver, the ODBC driver uses Thrift to communicate with the Hive server.
Hive Service
• Hive CLI (Command Line Interface): This is the default shell provided by Hive, where you can execute Hive queries and commands directly.
• Apache Hive Web Interfaces: Apart from the command line interface, Hive also provides a web-based GUI for executing Hive queries and commands.
• Hive Server: The Hive server is built on Apache Thrift and is therefore also referred to as the Thrift Server; it allows different clients to submit requests to Hive and retrieve the final result.
• Apache Hive Driver: The Hive driver receives the HiveQL statements submitted by the user through the command shell.
• Metastore: You can think of the Metastore as a central repository for all Hive metadata. Hive metadata includes information such as the structure of tables and partitions, along with the columns, column types, etc.

Data Model
• Data in Hive can be categorized into three types at the granular level:
• Table: Hive tables are the same as the tables present in a relational database.
• Partition: Hive organizes tables into partitions, grouping similar data together based on a partition column.
• Bucket: Data in each partition may be further subdivided into buckets based on the hash of a column in the table, as illustrated below.
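As a minimal sketch of how partitions and buckets combine in a single table definition (the users table and its columns are hypothetical, not from the slides):

-- Hypothetical table: partitioned by country, hash-bucketed by user_id
CREATE TABLE IF NOT EXISTS users (
  user_id INT,
  name STRING)
PARTITIONED BY (country STRING)
CLUSTERED BY (user_id) INTO 4 BUCKETS
STORED AS TEXTFILE;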
(Figure: Hive data model)
Create Database Statement
• The following query is used to create a database:

CREATE DATABASE [IF NOT EXISTS] userdb;

• The following query lists the existing databases:

SHOW DATABASES;
default
userdb

• Drop Database Statement
DROP DATABASE IF EXISTS userdb;
(To drop a database along with all the tables in it:)
DROP DATABASE userdb CASCADE;
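Once created, a database can be made the current one so that subsequent tables are created inside it; USE is standard HiveQL:

USE userdb;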

Create Table Statement
• Let us assume you need to create a table named employee using the CREATE TABLE statement. The following table lists the fields and their data types in the employee table:

Sr. No.   Field Name    Data Type
1         Eid           int
2         Name          string
3         Salary        float
4         Designation   string
Create Table Statement
• CREATE TABLE IF NOT EXISTS employee (eid INT, name STRING, salary FLOAT, designation STRING)
COMMENT 'Employee details'
ROW FORMAT DELIMITED FIELDS TERMINATED BY '\t'
LINES TERMINATED BY '\n'
STORED AS TEXTFILE;

* A comment can also be added to the table to provide more details.
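To confirm the table was created with the intended schema, Hive's standard DESCRIBE statement can be used:

DESCRIBE employee;            -- lists the columns and their data types
DESCRIBE FORMATTED employee;  -- also shows storage location, format, and more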

Load Data Statement
• Generally, after creating a table in SQL, we can insert
data using the Insert statement. But in Hive, we can
insert data using the LOAD DATA statement.
• Example
• We will insert the following data into the table. It is a text file named [Link] in the /home/user directory.
• 1201 Gopal 45000 Technical manager
• 1202 Manisha 45000 Proof reader
• 1203 Maheshwari 40000 Technical writer
• 1204 Kiran 40000 Hr Admin
• 1205 Kranthi 30000 Op Admin

Load Data Statement
• LOAD DATA LOCAL INPATH '/home/user/[Link]' OVERWRITE INTO TABLE employee;
• INSERT INTO TABLE employee VALUES (1201, 'Gopal', 45000, 'Technical Manager');
• INSERT OVERWRITE TABLE employee SELECT * FROM emp;

Select Statement
• SELECT * FROM employee;
• SELECT * FROM employee WHERE name='Ranvijay';
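Aggregate queries are where Hive's translation to MapReduce does real work; a small sketch against the employee table defined earlier:

SELECT designation, COUNT(*) AS num_employees, AVG(salary) AS avg_salary
FROM employee
GROUP BY designation;   -- runs as a MapReduce job under the hood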

Creating a Partitioned Table
• create table state_part(District string, Enrolments string) PARTITIONED BY (state string);
• set hive.exec.dynamic.partition.mode=nonstrict;
• (The above statement is required for execution in dynamic partition mode.)
• Loading Data
• INSERT OVERWRITE TABLE state_part PARTITION(state) SELECT district, enrolments, state FROM allstates;
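After the load, the partitions that dynamic partitioning created can be listed, and a filter on the partition column reads only the matching partition (the value 'UP' below is an illustrative placeholder, not from the slides):

SHOW PARTITIONS state_part;
SELECT * FROM state_part WHERE state='UP';   -- scans only the state=UP partition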

Thank You!
