Welcome back! On our journey into unsupervised learning, we've become quite familiar with the Iris dataset and have explored its captivating world. Today's expedition invites us further into this adventure by introducing an integral concept of unsupervised learning - K-means clustering. This fascinating algorithm groups data into K non-overlapping clusters, with every data point belonging to the cluster with the nearest centroid, or mean. Intrigued? Let's dive together into this riveting world and discover the beauty and elegance of K-means clustering!
Before we start, let's take a moment to appreciate what clustering is all about. Imagine you're at a party, and you notice people clustering together. Groups usually form around shared interests — sports enthusiasts gather in one corner, movie buffs in another, and foodies crowd around the buffet. That's clustering in action!
In machine learning, clustering performs a similar role but with data. It's a type of unsupervised learning that helps us categorize data into different groups or clusters. The key here is that we don't know what we're looking for ahead of time, which is what makes it exciting—it's like embarking on a voyage of discovery!
After understanding clustering, let's move on to our star of the show - K-means. K-means is a type of partition-based clustering that's popular because of its simplicity and efficiency. The algorithm partitions the data into K clusters such that each observation belongs to the cluster with the closest mean.
Going deeper, we need to understand that K is an input parameter representing the number of clusters. Each centroid is calculated as the mean of the data points that belong to its cluster. The algorithm alternates between assigning points to their nearest centroid and recomputing the centroids until it reaches a stable equilibrium, which is what we call convergence.
Outlined in a step-by-step manner, the algorithm process would look as follows:
1. The algorithm randomly selects K data points (from the n available) to be the initial centroids.
2. The algorithm calculates the Euclidean distance between each data point and the centroids and assigns each data point to the centroid nearest to it.
3. The algorithm calculates the new centroid (mean) for each cluster.
4. Steps 2 and 3 are repeated until the algorithm converges, i.e., the centroids don't change significantly between iterations, or until we reach the desired number of iterations.
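To make these steps concrete, here's a minimal from-scratch sketch using NumPy. The function name `simple_kmeans` is purely illustrative; in practice you'd use scikit-learn's `KMeans`, which we'll see shortly.

```python
import numpy as np

def simple_kmeans(X, k, max_iter=100, seed=0):
    """Bare-bones K-means following the steps above (illustrative only)."""
    rng = np.random.default_rng(seed)
    # Step 1: pick k random data points as the initial centroids
    centroids = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(max_iter):
        # Step 2: assign each point to its nearest centroid (Euclidean distance)
        distances = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
        labels = distances.argmin(axis=1)
        # Step 3: recompute each centroid as the mean of its cluster
        # (note: this simple version doesn't handle clusters that become empty)
        new_centroids = np.array([X[labels == j].mean(axis=0) for j in range(k)])
        # Step 4: stop once the centroids no longer move
        if np.allclose(new_centroids, centroids):
            break
        centroids = new_centroids
    return labels, centroids
```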
K-means clustering is incredibly versatile and has applications in numerous scenarios. Whether it's customer segmentation in business, identifying patterns in spatial phenomena in geography, or even compressing colors in computer graphics, k-means clustering has proven its value.
Let's illustrate this with our awesome Iris dataset. Suppose we have a botanist friend who has collected some Iris flowers but has mixed them up. They know that the flowers come from three specific species but cannot distinguish them merely by looking. Here's where k-means clustering comes to our rescue!
Let's see how this would work:
```python
from sklearn.cluster import KMeans
from sklearn.datasets import load_iris
import pandas as pd

# Load the iris dataset
iris = load_iris()
iris_df = pd.DataFrame(data=iris.data, columns=iris.feature_names)

# Initialise KMeans
kmeans = KMeans(n_clusters=3, n_init=10)

# Fit the model
kmeans.fit(iris_df)

# Get the cluster assignments for each data point
assignments = kmeans.labels_

print(assignments)
"""
[1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1
 1 1 1 1 1 1 1 1 1 1 1 1 1 0 0 2 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
 0 0 0 2 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 2 0 2 2 2 2 0 2 2 2 2
 2 2 0 0 2 2 2 2 0 2 0 2 0 2 2 0 0 2 2 2 2 2 0 2 2 2 2 0 2 2 2 0 2 2 2 0 2
 2 0]
"""
```
The code above will return an array where each number represents the cluster of that data point. Isn't that interesting? We've just helped our friend categorize their mixed-up flowers into three distinct species!
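Since `load_iris` also provides the true species labels (`iris.target`), we can sanity-check the result by cross-tabulating them against the cluster assignments. Keep in mind that K-means never saw these labels, and that the cluster numbers themselves are arbitrary:

```python
# Compare the (unsupervised) cluster assignments with the known species
comparison = pd.crosstab(iris.target, assignments,
                         rownames=['True species'], colnames=['Cluster'])
print(comparison)
```

Each cluster should be dominated by a single species, even though the cluster numbers won't line up with the species codes.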
One might wonder how K-means knows when the assignments of data points to clusters are optimal so that it can stop updating them. This is where convergence comes in! K-means has reached convergence when the centroids do not change substantially from one iteration to the next, or when we reach the maximum number of iterations defined for the algorithm.
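In scikit-learn, both of these stopping criteria are exposed as parameters: `max_iter` caps the number of iterations, and `tol` defines how little the centroids must move for the run to count as converged. Here's a small sketch, reusing `iris_df` from above:

```python
# Stop after at most 300 iterations, or earlier once the centroids move
# less than `tol` between iterations (these are also the library defaults)
km = KMeans(n_clusters=3, n_init=10, max_iter=300, tol=1e-4)
km.fit(iris_df)

# n_iter_ reports how many iterations the best run actually needed
print("Iterations until convergence:", km.n_iter_)
```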
So, how do we define a substantial change? That's through a term called inertia, or within-cluster sum-of-squares (WCSS). Inertia tells us the total squared distance from each point to its centroid; thus, the more compact our clusters are (i.e., points are close to their centroid), the smaller our inertia value will be. In simpler words, a lower inertia means better clusters!
We can calculate the inertia using scikit-learn's inertia_ attribute:
```python
print("Inertia: ", kmeans.inertia_)
# Inertia:  78.85144142614601
```
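If you'd like to see where that number comes from, you can recompute it by hand: sum the squared distances from every point to the centroid of its cluster. The result should match `kmeans.inertia_` up to floating-point rounding:

```python
import numpy as np

# Recompute the within-cluster sum of squares (inertia) by hand
X = iris_df.to_numpy()
labels = kmeans.labels_
centroids = kmeans.cluster_centers_

wcss = sum(((X[labels == j] - centroids[j]) ** 2).sum() for j in range(3))
print("Manual WCSS:", wcss)  # should match kmeans.inertia_
```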
Although K-means is an incredibly useful tool, it's not without its quirks. K-means is sensitive to the initial placement of centroids. Random initializations can lead to different clusters, and not all these clusters may be meaningful. This drawback can be remedied using strategies like the K-means++ initialization technique.
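In scikit-learn, K-means++ is actually the default initialization; the `init` parameter lets you switch to purely random starting centroids if you want to compare the two, for example:

```python
# K-means++ spreads the initial centroids apart, which typically gives more
# stable, lower-inertia solutions than purely random starting points
kmeans_pp = KMeans(n_clusters=3, init='k-means++', n_init=10)
kmeans_rand = KMeans(n_clusters=3, init='random', n_init=10)

print("k-means++ inertia:", kmeans_pp.fit(iris_df).inertia_)
print("random init inertia:", kmeans_rand.fit(iris_df).inertia_)
```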
The K-means algorithm also struggles with clusters of different densities and is sensitive to outliers. Additionally, K-means assumes that all clusters are spherical and tends to perform poorly with non-convex shapes.
Another consideration is the choice of k. While we need to specify the number of clusters beforehand, we often won't know the right number. Various techniques can help us find the optimal number of clusters, such as the Elbow Method and the Average Silhouette Method. In the Elbow Method, we calculate the sum of squared errors (SSE) for a range of values of k (for example, 1 to 10), and the k at which the decrease in SSE slows down abruptly (forming an elbow shape in the graph) is the optimal value of k.
Here's a quick look at how we can use the Elbow Method to find the best k to use:
```python
from sklearn.cluster import KMeans
import matplotlib.pyplot as plt

# A list holds the SSE values for each k
sse = []

for k in range(1, 11):
    kmeans = KMeans(n_clusters=k, n_init=10)
    kmeans.fit(iris_df)
    sse.append(kmeans.inertia_)

# Plot sse against k
plt.figure(figsize=(6, 6))
plt.plot(range(1, 11), sse, '-o')
plt.xlabel('Number of clusters $k$')
plt.ylabel('Sum of squared distance')
plt.grid(True)
plt.show()
```
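The Average Silhouette Method mentioned above works similarly: scikit-learn's `silhouette_score` measures how well each point sits inside its cluster (values closer to 1 are better), and we pick the k with the highest average score. A quick sketch, again reusing `iris_df`:

```python
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score

# Silhouette scores are only defined for k >= 2
for k in range(2, 11):
    labels = KMeans(n_clusters=k, n_init=10).fit_predict(iris_df)
    print(f"k={k}: average silhouette score = {silhouette_score(iris_df, labels):.3f}")
```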
Wow, look how far we've come! We've transitioned from understanding the nitty-gritty of what makes K-means clustering tick to a formal introduction to Python's sklearn library. Armed with our newfound knowledge of inertia and how to choose the right number of clusters, we're now ready to uncover hidden patterns within any dataset.
In conclusion, we embarked on an illuminating journey today, brushing up on our understanding of the principles of K-means clustering and how it works, covering different aspects like convergence, inertia, and the choice of k, and finally reflecting on the limitations of K-means clustering. Ready to move from theory into practice? We've got just the thing for you!
Get ready to be excited, as it's time for some practice! Brace yourself for the upcoming hands-on sessions, where you'll have the chance to use the theory you've learned today to tackle practical problems. Sharpen your coding skills, put on your logical reasoning hat, and let's put that theory to the test! Best of luck, explorer! We're ready when you are. Let's leap into the world of practice together!