Clustering is the task of grouping data, dividing a large data set into smaller subsets of similar items. A well-known clustering algorithm in unsupervised machine learning is K-Means. The K-Means algorithm takes n observations (data points) and groups them into k clusters, assigning each observation to the cluster with the nearest mean (cluster centroid). The distance between a data point and a cluster centroid is measured with the Euclidean distance.
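As a quick illustration of the distance measure just mentioned, here is a minimal sketch of the Euclidean distance between a data point and a centroid (the function name is my own, not from the source):

```python
import math

def euclidean_distance(point, centroid):
    # Straight-line distance: square root of the sum of squared
    # per-dimension differences between the point and the centroid.
    return math.sqrt(sum((p - c) ** 2 for p, c in zip(point, centroid)))

print(euclidean_distance((0, 0), (3, 4)))  # → 5.0
```

Because K-Means only ever compares distances, implementations often skip the square root and compare squared distances instead, which gives the same nearest centroid.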

How the K-Means algorithm works is relatively straightforward. We just follow these steps:

1. Choose the number of clusters, k.
2. Pick k initial centroids.
3. Assign each data point to its nearest centroid.
4. Recompute each centroid as the mean of the points assigned to it.
5. Repeat steps 3 and 4 until the assignments no longer change.
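The whole loop can be sketched in a few lines of Python. This is a minimal illustration, not a production implementation: it assumes points are tuples of numbers and uses simple random initialization.

```python
import random

def k_means(points, k, iterations=100):
    # Pick k random points as the initial centroids (simplest possible init).
    centroids = random.sample(points, k)
    for _ in range(iterations):
        # Assign each point to its nearest centroid (squared Euclidean distance).
        clusters = [[] for _ in range(k)]
        for p in points:
            nearest = min(
                range(k),
                key=lambda i: sum((a - b) ** 2 for a, b in zip(p, centroids[i])),
            )
            clusters[nearest].append(p)
        # Move each centroid to the mean of its assigned points; a centroid
        # with no points keeps its old position.
        new_centroids = [
            tuple(sum(dim) / len(cluster) for dim in zip(*cluster))
            if cluster else centroids[i]
            for i, cluster in enumerate(clusters)
        ]
        # Stop once the centroids no longer move.
        if new_centroids == centroids:
            break
        centroids = new_centroids
    return centroids, clusters
```

On two well-separated blobs of points, the loop converges in a handful of iterations; in general, K-Means finds a local optimum that depends on the initial centroids, which is why libraries typically run it several times with different seeds.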

The value of k is usually chosen by a separate method, for example the Bayesian information criterion, the elbow method, or the rule of thumb, which is simply k = √(n/2).
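The rule of thumb is trivial to compute; a small sketch (rounding to the nearest whole number of clusters, which is my assumption, since the source does not say how to round):

```python
import math

def rule_of_thumb_k(n):
    # k = sqrt(n / 2), rounded to the nearest integer.
    return round(math.sqrt(n / 2))

print(rule_of_thumb_k(200))  # → 10
```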

Given a set of observations (x1, x2, …, xn), where each observation is a d-dimensional real vector, k-means clustering aims to partition the n observations into k (≤ n) sets S = {S1, S2, …, Sk} so as to minimize the within-cluster sum of squares (WCSS):

arg min over S of Σᵢ₌₁ᵏ Σ_{x ∈ Si} ‖x − μi‖²

where μi is the mean of the points in Si. (Wikipedia)
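The WCSS objective is easy to evaluate directly. A minimal sketch, assuming clusters are lists of point tuples and centroids are the corresponding means (the function name is my own):

```python
def wcss(clusters, centroids):
    # Within-cluster sum of squares: for each cluster, add up the squared
    # Euclidean distance from every point to that cluster's mean.
    total = 0.0
    for cluster, mu in zip(clusters, centroids):
        for p in cluster:
            total += sum((a - b) ** 2 for a, b in zip(p, mu))
    return total

# One cluster with points (0,0) and (2,0), mean (1,0): WCSS = 1 + 1 = 2.
print(wcss([[(0, 0), (2, 0)]], [(1, 0)]))  # → 2.0
```

Each iteration of K-Means can only decrease (or leave unchanged) this quantity, which is why the algorithm is guaranteed to converge, though only to a local minimum.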

Most of my understanding of the K-Means algorithm and its implementation came from this article by Burak Kanber.