
Hierarchical clustering in data mining

Hierarchical clustering refers to an unsupervised learning procedure that determines
successive clusters based on previously defined clusters. It works by grouping data into a
tree of clusters. Hierarchical clustering starts by treating each data point as an individual
cluster. The endpoint is a set of clusters in which each cluster is distinct from the others,
and the objects within each cluster are broadly similar to one another.

There are two types of hierarchical clustering:

o Agglomerative Hierarchical Clustering
o Divisive Clustering

Agglomerative hierarchical clustering

Agglomerative clustering is one of the most common types of hierarchical clustering, used to
group similar objects into clusters. Agglomerative clustering is also known as AGNES
(Agglomerative Nesting). Each data point starts as an individual cluster, and at each step
clusters are merged in a bottom-up fashion: at every iteration, the two closest clusters are
combined, until a single cluster containing all the data remains.

Agglomerative hierarchical clustering algorithm

1. Consider each data point as an individual cluster.
2. Compute the similarity between every pair of clusters (build the proximity matrix).
3. Merge the two most similar clusters.
4. Recalculate the proximity matrix for the newly formed cluster.
5. Repeat steps 3 and 4 until only a single cluster remains.
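
As a concrete illustration of these steps, here is a minimal Python sketch using SciPy's scipy.cluster.hierarchy module; the six 2D points are invented purely for demonstration:

    import numpy as np
    from scipy.cluster.hierarchy import linkage, fcluster

    # Six made-up 2D data points (purely illustrative).
    X = np.array([[1.0, 1.0], [1.2, 0.8], [1.1, 1.1],
                  [5.0, 5.0], [5.2, 4.8], [9.0, 9.0]])

    # linkage() runs the whole agglomerative procedure: it starts from
    # singleton clusters and repeatedly merges the closest pair.
    # 'single' linkage measures cluster distance by the minimum
    # pairwise distance between the clusters' members.
    Z = linkage(X, method='single')

    # Each row of Z records one merge: (cluster i, cluster j, distance, size).
    print(Z)

    # Cut the tree to recover, for example, three flat clusters.
    print(fcluster(Z, t=3, criterion='maxclust'))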

Let's understand this concept with the help of a graphical representation using a
dendrogram.

The demonstration below shows how the algorithm works. No actual calculation is
performed here; all the proximities among the clusters are assumed.

Let's suppose we have six different data points P, Q, R, S, T, V.


Step 1:

Consider each letter (P, Q, R, S, T, V) as an individual cluster and find the distance
between each individual cluster and all of the other clusters.

Step 2:

Now, merge the most comparable clusters into single clusters. Let's say cluster Q and
cluster R are similar to each other, and likewise clusters S and T, so we merge them in
this step. We are left with the clusters [(P), (QR), (ST), (V)].

Step 3:

Here, we recalculate the proximity as per the algorithm and combine the two closest
clusters, (ST) and (V), to form the new set of clusters [(P), (QR), (STV)].

Step 4:

Repeat the same process. The clusters QR and STV are now the closest, so they are
combined to form a new cluster. We are left with [(P), (QRSTV)].

Step 5:

Finally, the two remaining clusters are merged to form the single cluster
[(PQRSTV)].
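
The same merge sequence can be drawn as a dendrogram. In the hedged sketch below, the six coordinates are invented so that Q and R (and likewise S and T) end up close together, reproducing the walkthrough above:

    import numpy as np
    import matplotlib.pyplot as plt
    from scipy.cluster.hierarchy import linkage, dendrogram

    # Invented coordinates: Q/R and S/T are near neighbours by construction.
    labels = ['P', 'Q', 'R', 'S', 'T', 'V']
    X = np.array([[0.0, 4.0], [1.0, 1.0], [1.2, 1.1],
                  [4.0, 1.0], [4.1, 1.2], [5.5, 3.0]])

    Z = linkage(X, method='single')
    dendrogram(Z, labels=labels)   # leaves P..V, merge height on the y-axis
    plt.ylabel('merge distance')
    plt.show()
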
Divisive Hierarchical Clustering
Divisive hierarchical clustering is exactly the opposite of agglomerative hierarchical
clustering. In divisive hierarchical clustering, all the data points start out in a
single cluster, and in every iteration the data points that are not similar to the
rest are split off into clusters of their own. If the process is carried to the end,
we are left with N clusters, one for each of the N data points.
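
Divisive procedures are less standardized than agglomerative ones; one common top-down approximation is bisecting k-means, which repeatedly splits an existing cluster in two. The sketch below is an assumed illustration of that idea (the divisive() helper and the largest-cluster splitting heuristic are not a library routine):

    import numpy as np
    from sklearn.cluster import KMeans

    def divisive(X, n_clusters):
        # Start with every point in one cluster (top-down).
        clusters = [np.arange(len(X))]
        while len(clusters) < n_clusters:
            # Heuristic: split the largest remaining cluster in two.
            idx = max(range(len(clusters)), key=lambda i: len(clusters[i]))
            members = clusters.pop(idx)
            labels = KMeans(n_clusters=2, n_init=10).fit_predict(X[members])
            clusters.append(members[labels == 0])
            clusters.append(members[labels == 1])
        return clusters

    X = np.random.rand(20, 2)
    for c in divisive(X, 4):
        print(c)   # indices of the points in each final cluster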

Advantages of Hierarchical clustering

o It is simple to implement and gives the best output in some cases.
o It is easy to use and produces a hierarchy, a structure that carries more information than a flat partition of the data.
o It does not require us to pre-specify the number of clusters.

Disadvantages of hierarchical clustering

o It can break large clusters.
o It has difficulty handling clusters of different sizes and convex shapes.
o It is sensitive to noise and outliers.
o A merge (or split), once performed, can never be undone.

DBSCAN

DBSCAN stands for Density-Based Spatial Clustering of Applications with Noise. It
relies on a density-based notion of a cluster and identifies clusters of arbitrary
shape in a spatial database that contains outliers.
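
A minimal sketch using scikit-learn's DBSCAN implementation on invented data; the eps and min_samples values below are illustrative, not recommendations:

    import numpy as np
    from sklearn.cluster import DBSCAN

    # Two dense blobs plus one far-away outlier (made-up data).
    X = np.array([[1.0, 1.0], [1.1, 1.0], [0.9, 1.1],
                  [5.0, 5.0], [5.1, 4.9], [4.9, 5.1],
                  [9.0, 0.0]])

    # eps: neighbourhood radius; min_samples: points needed for a core point.
    db = DBSCAN(eps=0.5, min_samples=3).fit(X)

    # Label -1 marks noise (the lone outlier); other labels are cluster ids.
    print(db.labels_)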

OPTICS

OPTICS stands for Ordering Points To Identify the Clustering Structure. It produces an
ordering of the database with respect to its density-based clustering structure. This
cluster ordering contains information equivalent to the density-based clusterings
obtained across a wide range of parameter settings. OPTICS is useful for both
automatic and interactive cluster analysis, including finding an intrinsic clustering
structure.
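
A brief sketch with scikit-learn's OPTICS on synthetic blobs; unlike DBSCAN, no single radius eps has to be fixed in advance:

    import numpy as np
    from sklearn.cluster import OPTICS

    # Two synthetic Gaussian blobs (made-up data).
    X = np.vstack([np.random.normal(0, 0.3, (30, 2)),
                   np.random.normal(5, 0.3, (30, 2))])

    # min_samples controls the density requirement.
    opt = OPTICS(min_samples=5).fit(X)

    # reachability_[ordering_] is the reachability plot; its valleys
    # correspond to clusters. labels_ gives an automatic extraction.
    print(opt.labels_)
    print(opt.reachability_[opt.ordering_][:10])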

DENCLUE

DENCLUE (DENsity-based CLUstEring) is a density-based clustering method introduced by
Hinneburg and Keim. It enables a compact mathematical description of arbitrarily
shaped clusters in high-dimensional data, and it performs well on data sets with a
large amount of noise.
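
DENCLUE models the overall density as a sum of influence functions (commonly Gaussian kernels) centred at the data points; clusters form around local maxima of this function, the density attractors. The sketch below shows only the density estimate, not the full hill-climbing step, and the sigma value is an assumption:

    import numpy as np

    def density(x, X, sigma=0.5):
        # Sum the Gaussian influence of every data point at location x.
        d2 = np.sum((X - x) ** 2, axis=1)
        return np.sum(np.exp(-d2 / (2 * sigma ** 2)))

    X = np.array([[1.0, 1.0], [1.2, 0.9], [5.0, 5.0]])
    print(density(np.array([1.1, 1.0]), X))  # high: near a dense region
    print(density(np.array([3.0, 3.0]), X))  # low: between clusters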
