KMEANS
The K-Means Data Clustering Problem


KMEANS is a C++ library which handles the K-Means problem, that of organizing a set of N points in M dimensions into K clusters.

In the K-Means problem, a set of N points X(I) in M dimensions is given. The goal is to arrange these points into K clusters, with each cluster having a representative point Z(J), usually chosen as the centroid of the points in the cluster.

        Z(J) = Sum ( all X(I) in cluster J ) X(I) /
               Sum ( all X(I) in cluster J ) 1.
      
The energy of cluster J is
        E(J) = Sum ( all X(I) in cluster J ) || X(I) - Z(J) ||^2
      

For a given set of clusters, the total energy is then simply the sum of the cluster energies E(J). The goal is to choose the clusters in such a way that the total energy is minimized. Usually, a point X(I) goes into the cluster with the closest representative point Z(J). So to define the clusters, it's enough simply to specify the locations of the cluster representatives.
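
To make these definitions concrete, the following C++ fragment computes the centroids Z(J) and the total energy for a given assignment of points to clusters. The flat array layout and the function name cluster_energy are chosen only for this illustration; it is a sketch of the formulas above, not the library's own code.

        #include <vector>

        //  Compute the centroids Z(J) and the total energy E for N points in
        //  M dimensions assigned to K clusters.
        //  points:  N*M values; point I occupies points[I*M .. I*M+M-1].
        //  cluster: cluster[I], in 0..K-1, is the cluster containing point I.
        double cluster_energy ( int m, int n, int k,
          const std::vector<double> &points, const std::vector<int> &cluster )
        {
          std::vector<double> center ( k * m, 0.0 );
          std::vector<int> population ( k, 0 );
        //
        //  Z(J) = sum of the X(I) in cluster J, divided by the number of such points.
        //
          for ( int i = 0; i < n; i++ )
          {
            population[cluster[i]] = population[cluster[i]] + 1;
            for ( int d = 0; d < m; d++ )
            {
              center[cluster[i]*m+d] = center[cluster[i]*m+d] + points[i*m+d];
            }
          }
          for ( int j = 0; j < k; j++ )
          {
            if ( 0 < population[j] )
            {
              for ( int d = 0; d < m; d++ )
              {
                center[j*m+d] = center[j*m+d] / ( double ) population[j];
              }
            }
          }
        //
        //  E = sum over all points of || X(I) - Z(cluster(I)) ||^2.
        //
          double e = 0.0;
          for ( int i = 0; i < n; i++ )
          {
            for ( int d = 0; d < m; d++ )
            {
              double diff = points[i*m+d] - center[cluster[i]*m+d];
              e = e + diff * diff;
            }
          }
          return e;
        }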

This is actually a fairly hard problem. Most algorithms do reasonably well, but cannot guarantee that the best solution has been found. It is very common for algorithms to get stuck at a solution which is merely a "local minimum". At such a local minimum, every slight rearrangement of the solution increases the energy, even though some major rearrangement could produce a much lower energy.

A simple algorithm for the problem is known as the "H-Means algorithm". It alternates between two procedures:

  1. Given the current cluster centers Z(J), assign each point X(I) to the cluster whose center is nearest;
  2. Given the current assignment of points to clusters, replace each center Z(J) by the centroid of the points assigned to cluster J.

These steps are repeated until no points are moved, or some other termination criterion is reached.
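
Using the same flat storage as the sketch above, one pass of this alternation might look like the fragment below. The function name hmeans_step is invented for the illustration; it is not the library's HMEANS_01 or HMEANS_02 routine, though it follows the same two steps. It returns the number of points that changed cluster, so a caller can stop when the count reaches zero.

        #include <algorithm>
        #include <limits>
        #include <vector>

        //  Perform one H-Means pass: reassign points to the nearest center,
        //  then recompute each center as the centroid of its cluster.
        //  Returns the number of points whose assignment changed.
        int hmeans_step ( int m, int n, int k,
          const std::vector<double> &points, std::vector<double> &center,
          std::vector<int> &cluster )
        {
        //
        //  Step 1: assign each point to the cluster with the nearest center.
        //
          int swaps = 0;
          for ( int i = 0; i < n; i++ )
          {
            int best = cluster[i];
            double best_d = std::numeric_limits<double>::max ( );
            for ( int j = 0; j < k; j++ )
            {
              double d = 0.0;
              for ( int dd = 0; dd < m; dd++ )
              {
                double diff = points[i*m+dd] - center[j*m+dd];
                d = d + diff * diff;
              }
              if ( d < best_d )
              {
                best_d = d;
                best = j;
              }
            }
            if ( best != cluster[i] )
            {
              cluster[i] = best;
              swaps = swaps + 1;
            }
          }
        //
        //  Step 2: recompute each center as the centroid of its points.
        //  (An empty cluster simply keeps a zero center in this simplified sketch.)
        //
          std::vector<int> population ( k, 0 );
          std::fill ( center.begin ( ), center.end ( ), 0.0 );
          for ( int i = 0; i < n; i++ )
          {
            population[cluster[i]] = population[cluster[i]] + 1;
            for ( int dd = 0; dd < m; dd++ )
            {
              center[cluster[i]*m+dd] = center[cluster[i]*m+dd] + points[i*m+dd];
            }
          }
          for ( int j = 0; j < k; j++ )
          {
            if ( 0 < population[j] )
            {
              for ( int dd = 0; dd < m; dd++ )
              {
                center[j*m+dd] = center[j*m+dd] / ( double ) population[j];
              }
            }
          }
          return swaps;
        }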

A more sophisticated algorithm, known as the "K-Means algorithm", takes advantage of the fact that it is possible to quickly determine the decrease in energy caused by moving a point from its current cluster to another. It repeats the following procedure:

  1. For each point X(I), compute the change in total energy that would result from moving it from its current cluster to each of the other clusters;
  2. If the best such move lowers the total energy, transfer the point and update the centroids of the two affected clusters.

This procedure is repeated until no points are moved, or some other termination criterion is reached.
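
The energy bookkeeping can be made explicit. If point X currently belongs to a cluster of size N1 with center Z1, and is a candidate for transfer to a cluster of size N2 with center Z2, a standard expression for the resulting change in total energy is

        dE = N2 / ( N2 + 1 ) * || X - Z2 ||^2
           - N1 / ( N1 - 1 ) * || X - Z1 ||^2

since both centroids shift when the point moves. The fragment below sketches a single candidate move based on this expression, with incremental centroid updates; the names kmeans_move and dist2 are invented for the illustration, and the library's KMEANS_01, KMEANS_02 and KMEANS_03 routines differ in their details.

        #include <vector>

        //  Squared Euclidean distance between two M-vectors.
        double dist2 ( int m, const double *a, const double *b )
        {
          double d = 0.0;
          for ( int dd = 0; dd < m; dd++ )
          {
            double diff = a[dd] - b[dd];
            d = d + diff * diff;
          }
          return d;
        }

        //  Try to move point I to the cluster that gives the largest energy
        //  decrease.  Returns true if the point was moved.
        bool kmeans_move ( int m, int k, int i,
          const std::vector<double> &points, std::vector<double> &center,
          std::vector<int> &cluster, std::vector<int> &population )
        {
          int j1 = cluster[i];
          if ( population[j1] <= 1 )
          {
            return false;  //  Do not empty a cluster in this sketch.
          }
        //
        //  Energy released by removing the point from its current cluster.
        //
          double remove_cost = ( double ) population[j1] / ( double ) ( population[j1] - 1 )
            * dist2 ( m, &points[i*m], &center[j1*m] );

          int j2 = j1;
          double best_de = 0.0;
          for ( int j = 0; j < k; j++ )
          {
            if ( j == j1 )
            {
              continue;
            }
        //
        //  Energy added by placing the point in cluster J.
        //
            double add_cost = ( double ) population[j] / ( double ) ( population[j] + 1 )
              * dist2 ( m, &points[i*m], &center[j*m] );
            if ( add_cost - remove_cost < best_de )
            {
              best_de = add_cost - remove_cost;
              j2 = j;
            }
          }
          if ( j2 == j1 )
          {
            return false;
          }
        //
        //  Move the point and update the two affected centroids incrementally.
        //
          for ( int dd = 0; dd < m; dd++ )
          {
            center[j1*m+dd] = ( center[j1*m+dd] * population[j1] - points[i*m+dd] )
              / ( double ) ( population[j1] - 1 );
            center[j2*m+dd] = ( center[j2*m+dd] * population[j2] + points[i*m+dd] )
              / ( double ) ( population[j2] + 1 );
          }
          population[j1] = population[j1] - 1;
          population[j2] = population[j2] + 1;
          cluster[i] = j2;
          return true;
        }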

The Weighted K-Means Problem

A natural extension of the K-Means problem allows us to include some more information, namely, a set of weights associated with the data points. These might represent a measure of importance, a frequency count, or some other information. The intent is that a point with a weight of 5.0 is twice as "important" as a point with a weight of 2.5, for instance. This gives rise to the "weighted" K-Means problem.

In the weighted K-Means problem, we are given a set of N points X(I) in M dimensions, and a corresponding set of nonnegative weights W(I). The goal is to arrange the points into K clusters, with each cluster having a representative point Z(J), usually chosen as the weighted centroid of the points in the cluster:

        Z(J) = Sum ( all X(I) in cluster J ) W(I) * X(I) /
               Sum ( all X(I) in cluster J ) W(I).
      
The weighted energy of cluster J is
        E(J) = Sum ( all X(I) in cluster J ) W(I) * || X(I) - Z(J) ||^2
      
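
The earlier centroid-and-energy sketch carries over to the weighted case with small changes: the centroid becomes a weighted average, and each squared distance is scaled by W(I). As before, the function name weighted_cluster_energy and the storage layout are chosen for illustration only, and do not come from the library's HMEANS_W or KMEANS_W routines.

        #include <vector>

        //  Compute the weighted centroids Z(J) and the total weighted energy
        //  for N points in M dimensions, with weights W, assigned to K clusters.
        double weighted_cluster_energy ( int m, int n, int k,
          const std::vector<double> &points, const std::vector<double> &w,
          const std::vector<int> &cluster )
        {
          std::vector<double> center ( k * m, 0.0 );
          std::vector<double> weight_sum ( k, 0.0 );
        //
        //  Z(J) = sum of W(I)*X(I) over cluster J / sum of W(I) over cluster J.
        //
          for ( int i = 0; i < n; i++ )
          {
            weight_sum[cluster[i]] = weight_sum[cluster[i]] + w[i];
            for ( int d = 0; d < m; d++ )
            {
              center[cluster[i]*m+d] = center[cluster[i]*m+d] + w[i] * points[i*m+d];
            }
          }
          for ( int j = 0; j < k; j++ )
          {
            if ( 0.0 < weight_sum[j] )
            {
              for ( int d = 0; d < m; d++ )
              {
                center[j*m+d] = center[j*m+d] / weight_sum[j];
              }
            }
          }
        //
        //  E = sum over all points of W(I) * || X(I) - Z(cluster(I)) ||^2.
        //
          double e = 0.0;
          for ( int i = 0; i < n; i++ )
          {
            for ( int d = 0; d < m; d++ )
            {
              double diff = points[i*m+d] - center[cluster[i]*m+d];
              e = e + w[i] * diff * diff;
            }
          }
          return e;
        }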

Licensing:

The computer code and data files described and made available on this web page are distributed under the GNU LGPL license.

Languages:

KMEANS is available in a C version and a C++ version and a FORTRAN90 version and a MATLAB version.

Related Data and Programs:

ASA058, a C++ library which implements the K-means algorithm of Sparks.

ASA136, a C++ library which implements the Hartigan and Wong clustering algorithm.

CITIES, a C++ library which handles various problems associated with a set of "cities" on a map.

CITIES, a dataset directory which contains sets of data defining groups of cities.

POINT_MERGE, a C++ library which considers N points in M-dimensional space, and counts or indexes the unique or "tolerably unique" items.

SPAETH, a dataset directory which contains a set of test data.

SPAETH2, a dataset directory which contains a set of test data.

Reference:

  1. John Hartigan, Manchek Wong,
    Algorithm AS 136: A K-Means Clustering Algorithm,
    Applied Statistics,
    Volume 28, Number 1, 1979, pages 100-108.
  2. Wendy Martinez, Angel Martinez,
    Computational Statistics Handbook with MATLAB,
    Chapman and Hall / CRC, 2002.
  3. David Sparks,
    Algorithm AS 58: Euclidean Cluster Analysis,
    Applied Statistics,
    Volume 22, Number 1, 1973, pages 126-130.

Source Code:

Examples and Tests:

The sample code reads the data files points_100.txt, weights_equal_100.txt, and weights_unequal_100.txt in the following tests:

TEST01 applies HMEANS_01 to points_100.txt.

TEST02 applies HMEANS_02 to points_100.txt.

TEST03 applies KMEANS_01 to points_100.txt.

TEST04 applies KMEANS_02 to points_100.txt.

TEST05 applies KMEANS_03 to points_100.txt.

TEST06 applies HMEANS_01 + KMEANS_01 to points_100.txt.

TEST07 applies HMEANS_01 + KMEANS_02 to points_100.txt.

TEST08 applies KMEANS_01 + KMEANS_03 to points_100.txt.

TEST09 applies HMEANS_W_01 to points_100.txt and weights_equal_100.txt.

TEST10 applies HMEANS_W_02 to points_100.txt and weights_equal_100.txt.

TEST11 applies KMEANS_W_01 to points_100.txt and weights_equal_100.txt.

TEST12 applies KMEANS_W_03 to points_100.txt and weights_equal_100.txt.

TEST13 applies HMEANS_W_01 to points_100.txt and weights_unequal_100.txt.

TEST14 applies HMEANS_W_02 to points_100.txt and weights_unequal_100.txt.

TEST15 applies KMEANS_W_01 to points_100.txt and weights_unequal_100.txt.

TEST16 applies KMEANS_W_03 to points_100.txt and weights_unequal_100.txt.

List of Routines:



Last revised on 10 October 2011.