import numpy as np
import scipy as sp
import matplotlib.pyplot as plt
import pandas as pd
import sklearn.datasets as sk_data
from sklearn.cluster import KMeans
import seaborn as sns
Today we will look at a fairly different approach to clustering.
So far, we have been thinking of clustering as finding a partition of our dataset.
That is, a set of nonoverlapping clusters, in which each data item is in one cluster.
However, in many cases, the notion of a strict partition is not as useful.
How many clusters would you say there are here?
X_rand, y_rand = sk_data.make_blobs(
    n_samples = [100, 100, 250, 70, 75, 80],
    centers = [[1, 2], [1.5, 1], [3, 2], [1.75, 3.25], [2, 4], [2.25, 3.25]],
    n_features = 2,
    center_box = (-10.0, 10.0),
    cluster_std = [.2, .2, .3, .1, .15, .15],
    random_state = 0
)

df_rand = pd.DataFrame(np.column_stack([X_rand[:, 0], X_rand[:, 1], y_rand]), columns = ['X', 'Y', 'label'])
df_rand = df_rand.astype({'label': 'int'})
df_rand['label2'] = [{0: 0, 1: 1, 2: 2, 3: 3, 4: 3, 5: 3}[x] for x in df_rand['label']]
df_rand['label3'] = [{0: 0, 1: 0, 2: 1, 3: 2, 4: 2, 5: 2}[x] for x in df_rand['label']]

# kmeans = KMeans(init = 'k-means++', n_clusters = 3, n_init = 100)
# df_rand['label'] = kmeans.fit_predict(df_rand[['X', 'Y']])
df_rand.plot('X', 'Y', kind = 'scatter',
             colorbar = False, figsize = (6, 6))
plt.axis('square')
plt.axis('off')
plt.show()
df_rand.plot('X', 'Y', kind = 'scatter', c = 'label3', colormap='viridis',
             colorbar = False, figsize = (6, 6))
plt.axis('square')
plt.axis('off')
plt.show()
df_rand.plot('X', 'Y', kind = 'scatter', c = 'label2', colormap='viridis',
             colorbar = False, figsize = (6, 6))
plt.axis('square')
plt.axis('off')
plt.show()
df_rand.plot('X', 'Y', kind = 'scatter', c = 'label', colormap='viridis',
             colorbar = False, figsize = (6, 6))
plt.axis('square')
plt.axis('off')
plt.show()
df_rand.plot('X', 'Y', kind = 'scatter', c = 'label', colormap='viridis',
             colorbar = False, figsize = (4, 4))
plt.axis('square')
plt.axis('off')
plt.show()
This dataset shows clustering on multiple scales.
To fully capture the structure in this dataset, two things are needed: the ability for clusters to be contained within other clusters, and a way to organize those nested clusters across different scales.
These observations motivate the notion of hierarchical clustering.
In hierarchical clustering, we move away from the partition notion of \(k\)-means,
and instead capture a more complex arrangement that includes containment of one cluster within another.
A hierarchical clustering produces a set of nested clusters organized into a tree.
A hierarchical clustering is visualized using a dendrogram.
Hierarchical clustering has a number of advantages. For one, any desired number of clusters can be obtained by cutting the dendrogram at the appropriate level. Another advantage is that the dendrogram may itself correspond to a meaningful structure, for example, a taxonomy.
Another aspect of hierarchical clustering is that it can handle certain cases better than \(k\)-means.
Because of the nature of the \(k\)-means algorithm, \(k\)-means tends to produce round clusters of roughly equal size that do not overlap.
In many real-world situations, clusters may not be round, they may be of unequal size, and they may overlap.
Hence we would like clustering algorithms that can work in those cases also.
There are two main approaches to hierarchical clustering: “bottom-up” and “top-down.”
Agglomerative Clustering (“bottom-up”): start with each point in its own cluster, and at each step merge the closest pair of clusters, until only one cluster (or some desired number of clusters) remains.
Divisive Clustering (“top-down”): start with one all-inclusive cluster, and at each step split a cluster, until each cluster contains a single point (or the desired number of clusters is reached).
To illustrate the impact of the choice of cluster distances, we’ll focus on agglomerative clustering.
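To make the bottom-up idea concrete, here is a minimal sketch of the agglomerative loop, written from scratch purely for illustration (the function naive_agglomerative and the toy points are made up for this example; in the rest of the lecture we rely on SciPy's implementation):

# Illustrative sketch only: a naive agglomerative loop using single-linkage.
# Start with each point in its own cluster and repeatedly merge the closest pair.
import numpy as np
from itertools import combinations

def naive_agglomerative(X):
    clusters = {i: [i] for i in range(len(X))}   # each point starts as its own cluster
    merges = []
    while len(clusters) > 1:
        # find the pair of clusters with the smallest single-linkage distance
        best = None
        for a, b in combinations(clusters, 2):
            d = min(np.linalg.norm(X[p] - X[q])
                    for p in clusters[a] for q in clusters[b])
            if best is None or d < best[0]:
                best = (d, a, b)
        d, a, b = best
        merges.append((a, b, d))                 # record the merge step
        clusters[a] = clusters[a] + clusters.pop(b)
    return merges

toy = np.array([[0.0, 0.0], [0.1, 0.0], [1.0, 1.0], [1.2, 1.0]])
print(naive_agglomerative(toy))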
SciPy's linkage function, which we will use below, returns an \((n-1, 4)\) shaped linkage matrix. Each row [idx1, idx2, dist, sample_count] records a single step in the clustering process: idx1 and idx2 are the indices of the two clusters being merged, where an index less than \(n\) refers to an original data point and an index \(i \geq n\) refers to the cluster formed at step \(i - n\); dist is the distance between those two clusters; and sample_count is the number of original data samples in the new cluster.
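For example (a hypothetical four-point dataset, used only to show the format), the linkage matrix looks like this:

# Hypothetical four points, used only to illustrate the linkage matrix format.
import numpy as np
import scipy.cluster.hierarchy as hierarchy

pts = np.array([[0.0, 0.0], [0.1, 0.0], [1.0, 1.0], [1.2, 1.0]])
Z = hierarchy.linkage(pts, method = 'single', metric = 'euclidean')
print(Z)
# Each row is [idx1, idx2, dist, sample_count]: the first row merges points 0 and 1
# (distance 0.1) into a new cluster that receives index 4 (= n), and the last row
# merges the two intermediate clusters into one containing all 4 samples.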
Given two clusters, how do we define the distance between them?
Here are three natural ways to do it.
Single-Linkage: the distance between two clusters is the distance between the closest two points that are in different clusters.
\[ D_\text{single}(i,j) = \min_{x, y}\{d(x, y) \,|\, x \in C_i, y \in C_j\} \]
Complete-Linkage: the distance between two clusters is the distance between the farthest two points that are in different clusters.
\[ D_\text{complete}(i,j) = \max_{x, y}\{d(x, y) \,|\, x \in C_i, y \in C_j\} \]
Average-Linkage: the distance between two clusters is the average distance between all pairs of points from different clusters.
\[ D_\text{average}(i,j) = \frac{1}{|C_i|\cdot|C_j|}\sum_{x \in C_i,\, y \in C_j}d(x, y) \]
Notice that it is easy to express the definitions above in terms of similarity instead of distance.
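As a quick numeric sketch of these three definitions (the two toy clusters below are chosen only for illustration), all three cluster distances can be read off the matrix of pairwise point distances:

# Toy clusters, chosen for illustration, and the three linkage distances defined above.
import numpy as np
from scipy.spatial.distance import cdist

C_i = np.array([[0.0, 0.0], [0.0, 1.0]])
C_j = np.array([[2.0, 0.0], [3.0, 1.0]])

pairwise = cdist(C_i, C_j)        # d(x, y) for every x in C_i, y in C_j

D_single   = pairwise.min()       # closest pair across clusters
D_complete = pairwise.max()       # farthest pair across clusters
D_average  = pairwise.mean()      # average over all |C_i|*|C_j| pairs

print(D_single, D_complete, D_average)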
Here is a set of 6 points that we will cluster to show the differences between these cluster distances (linkage methods).
pt_x = [0.4, 0.22, 0.35, 0.26, 0.08, 0.45]
pt_y = [0.53, 0.38, 0.32, 0.19, 0.41, 0.30]

plt.plot(pt_x, pt_y, 'o', markersize = 10, color = 'k')
plt.ylim([.15, .60])
plt.xlim([0.05, 0.70])
for i in range(6):
    plt.annotate(f'{i}', (pt_x[i]+0.02, pt_y[i]-0.01), fontsize = 12)
plt.axis('on')
plt.xticks([])
plt.yticks([])
plt.show()
We can calculate the distance matrix for these points:
X = np.array([pt_x, pt_y]).T

from scipy.spatial import distance_matrix

labels = ['p0', 'p1', 'p2', 'p3', 'p4', 'p5']
D = pd.DataFrame(distance_matrix(X, X), index = labels, columns = labels)
D.style.format('{:.2f}')
|    | p0   | p1   | p2   | p3   | p4   | p5   |
|----|------|------|------|------|------|------|
| p0 | 0.00 | 0.23 | 0.22 | 0.37 | 0.34 | 0.24 |
| p1 | 0.23 | 0.00 | 0.14 | 0.19 | 0.14 | 0.24 |
| p2 | 0.22 | 0.14 | 0.00 | 0.16 | 0.28 | 0.10 |
| p3 | 0.37 | 0.19 | 0.16 | 0.00 | 0.28 | 0.22 |
| p4 | 0.34 | 0.14 | 0.28 | 0.28 | 0.00 | 0.39 |
| p5 | 0.24 | 0.24 | 0.10 | 0.22 | 0.39 | 0.00 |
import scipy.cluster
import scipy.cluster.hierarchy as hierarchy
Z = hierarchy.linkage(X, method='single', metric='euclidean')
hierarchy.dendrogram(Z)
plt.show()
Single-linkage clustering can handle non-elliptical shapes.
Here we use SciPy’s fcluster to form flat clusters from the hierarchical clustering.
X_moon_05, y_moon_05 = sk_data.make_moons(random_state = 0, noise = 0.05)

Z = hierarchy.linkage(X_moon_05, method='single')
labels = hierarchy.fcluster(Z, 2, criterion = 'maxclust')

plt.scatter(X_moon_05[:, 0], X_moon_05[:, 1], c = [['b','g'][i-1] for i in labels])
plt.axis('off')
plt.show()
Single-Linkage can find different-sized clusters:
X_rand_lo, y_rand_lo = sk_data.make_blobs(n_samples = [20, 200], centers = [[1, 1], [3, 1]], n_features = 2,
                                          center_box = (-10.0, 10.0), cluster_std = [.1, .5], random_state = 0)

Z = hierarchy.linkage(X_rand_lo, method='single')
labels = hierarchy.fcluster(Z, 2, criterion = 'maxclust')

plt.scatter(X_rand_lo[:, 0], X_rand_lo[:, 1], c = [['b','g'][i-1] for i in labels])
# plt.title('Single-Linkage Can Find Different-Sized Clusters')
plt.axis('off')
plt.show()
Single-linkage clustering can be sensitive to noise and outliers.
The results can change drastically on even slightly more noisy data.
X_moon_10, y_moon_10 = sk_data.make_moons(random_state = 0, noise = 0.1)

Z = hierarchy.linkage(X_moon_10, method='single')
labels = hierarchy.fcluster(Z, 2, criterion = 'maxclust')

plt.scatter(X_moon_10[:, 0], X_moon_10[:, 1], c = [['b','g'][i-1] for i in labels])
# plt.title('Single-Linkage Clustering Changes Drastically on Slightly More Noisy Data')
plt.axis('off')
plt.show()
And here’s another example where we bump the standard deviation on the clusters slightly.
X_rand_hi, y_rand_hi = sk_data.make_blobs(n_samples = [20, 200], centers = [[1, 1], [3, 1]], n_features = 2,
                                          center_box = (-10.0, 10.0), cluster_std = [.15, .6], random_state = 0)

Z = hierarchy.linkage(X_rand_hi, method='single')
labels = hierarchy.fcluster(Z, 2, criterion = 'maxclust')

plt.scatter(X_rand_hi[:, 0], X_rand_hi[:, 1], c = [['b','g'][i-1] for i in labels])
plt.axis('off')
plt.show()
Z = hierarchy.linkage(X, method='complete')
hierarchy.dendrogram(Z)
plt.show()
Complete-linkage clustering produces more balanced clusters, with more nearly equal diameters.
X_moon_05, y_moon_05 = sk_data.make_moons(random_state = 0, noise = 0.05)

Z = hierarchy.linkage(X_moon_05, method='complete')
labels = hierarchy.fcluster(Z, 2, criterion = 'maxclust')

plt.scatter(X_moon_05[:, 0], X_moon_05[:, 1], c = [['b','g'][i-1] for i in labels])
plt.axis('off')
plt.show()
Z = hierarchy.linkage(X_rand_hi, method='complete')
labels = hierarchy.fcluster(Z, 2, criterion = 'maxclust')

plt.scatter(X_rand_hi[:, 0], X_rand_hi[:, 1], c = [['b','g'][i-1] for i in labels])
plt.axis('off')
plt.show()
Less susceptible to noise:
Z = hierarchy.linkage(X_moon_10, method='complete')
labels = hierarchy.fcluster(Z, 2, criterion = 'maxclust')

plt.scatter(X_moon_10[:, 0], X_moon_10[:, 1], c = [['b','g'][i-1] for i in labels])
plt.axis('off')
plt.show()
Some disadvantages of complete-linkage clustering are that it tends to break up large clusters and that it is biased toward globular (round) clusters.
Z = hierarchy.linkage(X, method='average')
hierarchy.dendrogram(Z)
plt.show()
Average-Linkage clustering is in some sense a compromise between Single-linkage and Complete-linkage clustering.
Strengths: it is less susceptible to noise and outliers than single-linkage.
Limitations: like complete-linkage, it is biased toward globular clusters.
It produces more isotropic clusters, as the next example shows:
Z = hierarchy.linkage(X_moon_10, method='average')
labels = hierarchy.fcluster(Z, 2, criterion = 'maxclust')

plt.scatter(X_moon_10[:, 0], X_moon_10[:, 1], c = [['b','g'][i-1] for i in labels])
plt.axis('off')
plt.show()
More resistant to noise than Single-Linkage.
Z = hierarchy.linkage(X_rand_hi, method='average')
labels = hierarchy.fcluster(Z, 2, criterion = 'maxclust')

plt.scatter(X_rand_hi[:, 0], X_rand_hi[:, 1], c = [['b','g'][i-1] for i in labels])
plt.axis('off')
plt.show()
Finally, we consider one more cluster distance.
Ward’s distance asks “What if we combined these two clusters – how would clustering improve?”
To define “how would clustering improve?” we appeal to the \(k\)-means criterion.
So:
Ward’s Distance between clusters \(C_i\) and \(C_j\) is the increase in the total within-cluster sum of squares that results from merging the two clusters into a new cluster \(C_{i+j}\), compared to keeping them separate:
\[ D_\text{Ward}(i, j) = \sum_{x \in C_{i+j}} (x - c_{i+j})^2 - \sum_{x \in C_i} (x - c_i)^2 - \sum_{x \in C_j} (x - c_j)^2 \]
where \(c_i, c_j, c_{i+j}\) are the corresponding cluster centroids.
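As a small numeric sketch of this formula (the two toy clusters below are made up for illustration):

# Toy clusters, chosen for illustration; Ward's distance computed per the formula above.
import numpy as np

C_i = np.array([[0.0, 0.0], [0.0, 2.0]])
C_j = np.array([[4.0, 0.0], [4.0, 2.0]])

def wcss(C):
    # within-cluster sum of squared distances to the centroid
    return ((C - C.mean(axis = 0)) ** 2).sum()

merged = np.vstack([C_i, C_j])
D_ward = wcss(merged) - wcss(C_i) - wcss(C_j)
print(wcss(C_i), wcss(C_j), wcss(merged), D_ward)   # 2.0 2.0 20.0 16.0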
In a sense, this cluster distance results in a hierarchical analog of \(k\)-means.
As a result, it has properties similar to \(k\)-means: in particular, it is biased toward globular clusters.
Hence it tends to behave more like average-linkage hierarchical clustering.
Now we’ll look at doing hierarchical clustering in practice.
We’ll use the same synthetic data as we did in the k-means case – i.e., three “blobs” living in 30 dimensions.
X, y = sk_data.make_blobs(n_samples=100, centers=3, n_features=30,
                          center_box=(-10.0, 10.0), random_state=0)
The raw data is shown in the following visualization:
sns.heatmap(X, xticklabels=False, yticklabels=False, linewidths=0, cbar=False)
plt.show()
Then we compute an embedding into 2-D (using MDS):
import sklearn.manifold
import sklearn.metrics as metrics
euclidean_dists = metrics.euclidean_distances(X)

mds = sklearn.manifold.MDS(n_components = 2, max_iter = 3000, eps = 1e-9, random_state = 0,
                           dissimilarity = "precomputed", n_jobs = 1)
fit = mds.fit(euclidean_dists)
pos = fit.embedding_

plt.axis('equal')
plt.scatter(pos[:, 0], pos[:, 1], s = 8)
plt.show()
Hierarchical clustering is available in sklearn, but there is a much more fully developed set of tools in the scipy package, and that is the one to use.
Let’s run hierarchical clustering on our synthetic dataset.
Try the other linkage methods and see how the clustering and dendrogram changes.
import scipy.cluster
import scipy.cluster.hierarchy as hierarchy
import scipy.spatial.distance
# linkages = ['single','complete','average','weighted','ward']
Z = hierarchy.linkage(X, method = 'single')
And draw the dendrogram.
R = hierarchy.dendrogram(Z)
Once again we’ll use the “20 Newsgroup” data provided as example data in sklearn.
Load three of the newsgroups.
from sklearn.datasets import fetch_20newsgroups
categories = ['comp.os.ms-windows.misc', 'sci.space', 'rec.sport.baseball']
news_data = fetch_20newsgroups(subset = 'train', categories = categories)
Vectorize the data.
from sklearn.feature_extraction.text import TfidfVectorizer
vectorizer = TfidfVectorizer(stop_words='english', min_df = 4, max_df = 0.8)
data = vectorizer.fit_transform(news_data.data).todense()
data.shape
(1781, 9409)
Cluster hierarchically and display dendrogram. Feel free to experiment with different metrics.
# linkages are one of 'single', 'complete', 'average', 'weighted', 'ward'
#
# metrics can be 'braycurtis', 'canberra', 'chebyshev', 'cityblock', 'correlation', 'cosine',
# 'dice', 'euclidean', 'hamming', 'jaccard', 'kulsinski', 'mahalanobis', 'matching',
# 'minkowski', 'rogerstanimoto', 'russellrao', 'seuclidean', 'sokalmichener', 'sokalsneath',
# 'sqeuclidean', 'yule'.

Z_20ng = hierarchy.linkage(data, method = 'ward', metric = 'euclidean')

plt.figure(figsize=(14,4))
R_20ng = hierarchy.dendrogram(Z_20ng, p=4, truncate_mode = 'level', show_leaf_counts=True)
Let’s flatten the hierarchy to different numbers of clusters and calculate the Silhouette Score.
max_clusters = 20
s = np.zeros(max_clusters+1)

for k in range(2, max_clusters+1):
    clusters = hierarchy.fcluster(Z_20ng, k, criterion = 'maxclust')
    s[k] = metrics.silhouette_score(np.asarray(data), clusters, metric = 'euclidean')

plt.plot(range(2, len(s)), s[2:], '.-')
plt.xlabel('Number of clusters')
plt.ylabel('Silhouette Score')
plt.show()
We see a first peak at 5.
Here are the top terms per cluster when we flatten the hierarchy to 5 clusters.
k = 5
clusters = hierarchy.fcluster(Z_20ng, k, criterion = 'maxclust')

for i in range(1, k+1):
    items = np.array([item for item, clust in zip(data, clusters) if clust == i])
    centroids = np.squeeze(items).mean(axis = 0)
    asc_order_centroids = centroids.argsort()#[:, ::-1]
    order_centroids = asc_order_centroids[::-1]
    terms = vectorizer.get_feature_names_out()
    print(f'Cluster {i}:')
    for ind in order_centroids[:10]:
        print(f' {terms[ind]}')
    print('')
Cluster 1:
space
nasa
edu
henry
gov
alaska
access
com
moon
digex
Cluster 2:
ax
max
b8f
g9v
a86
145
1d9
pl
2di
0t
Cluster 3:
edu
com
year
baseball
article
writes
cs
team
game
university
Cluster 4:
risc
instruction
ghhwang
csie
set
nctu
cisc
tw
reduced
mq
Cluster 5:
windows
edu
file
dos
com
files
card
drivers
driver
use
Scikit-Learn has a very nice notebook and plot, copied here, that shows the different clusters resulting from different linkage methods.
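As a rough sketch of how such a comparison can be set up (this is not the Scikit-Learn notebook itself; the dataset and figure layout here are chosen just for illustration):

# Sketch: compare linkage methods with sklearn's AgglomerativeClustering on the moons data.
import matplotlib.pyplot as plt
import sklearn.datasets as sk_data
from sklearn.cluster import AgglomerativeClustering

X_cmp, _ = sk_data.make_moons(n_samples = 200, noise = 0.05, random_state = 0)

fig, axes = plt.subplots(1, 4, figsize = (16, 4))
for ax, linkage in zip(axes, ['single', 'complete', 'average', 'ward']):
    labels = AgglomerativeClustering(n_clusters = 2, linkage = linkage).fit_predict(X_cmp)
    ax.scatter(X_cmp[:, 0], X_cmp[:, 1], c = labels, cmap = 'viridis', s = 10)
    ax.set_title(linkage)
    ax.axis('off')
plt.show()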
This wraps up our clustering topics. In this lecture we covered hierarchical clustering: the agglomerative approach, dendrograms, the single-, complete-, average-, and Ward-linkage cluster distances, and how to apply hierarchical clustering in practice with SciPy.