Clustering is an unsupervised machine learning task, and in general there is no single optimal clustering result. For example, the following dataset can be clustered in multiple ways: by rows, by columns, or by squarish shapes.
The aim of Directed Kmeans is to give direct control over the shape and distribution of clusters. Given partial information about cluster formations, we want to extrapolate the shape and distribution of the clustered data points to the unclustered portion of the dataset.
The standard Kmeans algorithm uses the regular Euclidean distance metric. In Directed Kmeans, a weighted Euclidean distance metric is used instead to change the shape and distribution of clusters. In the graph above, the first clustering (by row) is achieved by weighing the y-axis over the x-axis; the second clustering (by column) is achieved by weighing the x-axis over the y-axis.
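A minimal sketch of the weighted distance and its effect on cluster shape (the function name and weight vectors below are our own illustration, not part of the method's specification):

```python
import numpy as np

def weighted_euclidean(x, y, w):
    """Weighted Euclidean distance: sqrt(sum_i w_i * (x_i - y_i)^2)."""
    d = x - y
    return np.sqrt(np.sum(w * d * d))

a = np.array([0.0, 0.0])
b = np.array([5.0, 0.0])        # same row as a, far apart along x

# Down-weighting the x-axis makes row-mates look close, so Kmeans
# tends to cluster by row; the reverse weighting clusters by column.
w_row = np.array([0.01, 1.0])
w_col = np.array([1.0, 0.01])
```

Under `w_row` the distance between `a` and `b` shrinks from 5 to 0.5, which is exactly the mechanism that steers the cluster shapes.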
We also present the following algorithm to learn the weights of the weighted Euclidean distance from the partially given cluster formations. Suppose we know the true cluster assignments for a subset of the data points; we can then define an objective function measuring the clustering error on those points, and through gradient descent we will be able to identify an optimal set of weights.
However, we want to keep the sum of the weights constant; otherwise the above algorithm is likely to converge to the zero vector, since uniformly shrinking all the weights shrinks every pairwise distance and therefore the objective itself.
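Parameterizing the weights through a softmax is one standard way to enforce this constraint, since the softmax output always sums to one. A quick sketch (the vector `g` here is an arbitrary example):

```python
import numpy as np

def softmax(g):
    e = np.exp(g - np.max(g))   # subtract max for numerical stability
    return e / e.sum()

g = np.array([2.0, -1.0, 0.5])  # unconstrained parameters
w = softmax(g)
# w is strictly positive and sums to 1, so gradient descent on g
# cannot shrink all weights toward zero to trivially minimize distances.
```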
1. Calculate the weights using the softmax function: $w = \text{softmax}(g)$.
2. Compute the gradient of the objective function with respect to the weights: $\frac{\partial J}{\partial w}$.
3. Compute the gradient of the weights with respect to $g$, i.e. the softmax Jacobian: $\frac{\partial w}{\partial g} = \text{diag}(w) - w w^{\top}$.
4. Update the parameter $g$ using the learning rate $\eta$: $g \leftarrow g - \eta \, \frac{\partial w}{\partial g} \frac{\partial J}{\partial w}$.
5. Repeat steps 1-4 until convergence.
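The loop above can be sketched as follows. The source does not specify the objective, so as an assumption we minimize the within-cluster weighted squared distance on the labeled points; the function names and hyperparameters are ours:

```python
import numpy as np

def softmax(g):
    e = np.exp(g - np.max(g))
    return e / e.sum()

def learn_weights(X, labels, eta=0.5, steps=200):
    """Learn distance weights from partially labeled points.

    Assumed objective: J(w) = sum_k sum_{i in k} sum_j w_j (x_ij - mu_kj)^2,
    the within-cluster weighted squared distance. The softmax keeps the
    weights positive and summing to 1.
    """
    n, d = X.shape
    g = np.zeros(d)                       # unconstrained parameters
    for _ in range(steps):
        w = softmax(g)                    # step 1: weights from softmax
        # Step 2: dJ/dw_j is the total within-cluster squared
        # deviation along axis j (the centroids do not depend on w).
        dJ_dw = np.zeros(d)
        for k in np.unique(labels):
            Xk = X[labels == k]
            mu = Xk.mean(axis=0)
            dJ_dw += ((Xk - mu) ** 2).sum(axis=0)
        # Step 3: dw/dg, the softmax Jacobian diag(w) - w w^T.
        jac = np.diag(w) - np.outer(w, w)
        # Step 4: chain rule, then a gradient step on g.
        g = g - eta * (jac @ dJ_dw)
    return softmax(g)

# Toy example: two "row" clusters; within-cluster spread is along x only,
# so the learned weights should concentrate on the y-axis.
X = np.array([[0., 0.], [1., 0.], [2., 0.], [3., 0.],
              [0., 5.], [1., 5.], [2., 5.], [3., 5.]])
labels = np.array([0, 0, 0, 0, 1, 1, 1, 1])
w = learn_weights(X, labels)
```

On this toy data the weight on the y-axis grows toward 1, which is the "cluster by row" behavior described above: weight flows to the axes along which the labeled clusters are tight.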
Here is a visualized overview of this process: