Wednesday, July 1, 2009

Data Clustering with Self-Organizing Graph

We have just released VisuMap version 2.7.832. In this release we have added a very interesting clustering service called the self-organizing graph (SOG). SOG is basically an extension of the self-organizing map (SOM), which simulates a homogeneous artificial neural network. Unlike SOM, SOG simulates a network of arbitrary structure: the network can in fact be any weighted undirected graph. The network acts as a kind of parameter for the algorithm: the user defines the network according to his/her knowledge of, or interest in, the dataset.

The following picture illustrates an application of SOG. On the left side is a map of a dataset that shows roughly two data point clouds. The right side shows the network we used to classify the dataset. During the learning process each node of the network becomes associated with some data points. After the learning process has finished, each data point is classified according to its corresponding node in the SOG network.
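To give a rough idea of how such an algorithm can work, here is a small Python sketch of SOM-style training over an arbitrary graph, where the neighborhood of the winning node is defined by hop distance in the graph. This is our own illustration; VisuMap's actual implementation may differ in its details, and all function names and parameters below are made up for the sketch.

import numpy as np
from collections import deque

def graph_distances(adj, source):
    """Hop distance (BFS) from source to every reachable node."""
    dist = {source: 0}
    queue = deque([source])
    while queue:
        u = queue.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                queue.append(v)
    return dist

def train_sog(data, adj, epochs=30, lr0=0.5, radius0=2.0, seed=0):
    """SOM-style training over an arbitrary undirected graph.

    data: (n, m) array of data points; adj: dict node -> neighbor list."""
    rng = np.random.default_rng(seed)
    nodes = list(adj)
    # one prototype ("codebook") vector per graph node, seeded from the data
    w = {n: data[rng.integers(len(data))].astype(float) for n in nodes}
    for t in range(epochs):
        lr = lr0 * (1.0 - t / epochs)                  # decaying learning rate
        radius = max(radius0 * (1.0 - t / epochs), 0.5)
        for x in rng.permutation(data):
            # best-matching node for this data point
            bmu = min(nodes, key=lambda n: np.sum((x - w[n]) ** 2))
            # pull the BMU and its graph neighborhood toward x, with a
            # Gaussian falloff over the hop distance in the graph
            for n, d in graph_distances(adj, bmu).items():
                w[n] += lr * np.exp(-(d / radius) ** 2) * (x - w[n])
    return w

def classify(data, w):
    """Assign every data point to its nearest prototype node."""
    return [min(w, key=lambda n: np.sum((x - w[n]) ** 2)) for x in data]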


We see that the SOG algorithm not only correctly classified the two data point clouds, but also captured those data points located at the peripheries of the clouds (shown as blue triangles).

The following is a short video tutorial for the SOG clustering service in VisuMap:




Compared with traditional clustering algorithms, such as k-means and hierarchical clustering, SOG provides a more flexible clustering service. With SOG the user can specify not only the number of clusters, but also their sizes and other geometric or topological relationships between them. In fact, SOG enables the user to specify a model that reflects existing knowledge of and interest in the data.
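For example, a hypothetical input network could combine a small clique with a larger ring, encoding the expectation of one compact cluster and one loop-shaped cluster. In the terms of the sketch above:

# A hypothetical SOG input network: a compact 3-node clique for a small,
# dense cluster, plus an 8-node ring for a larger, loop-shaped cluster.
# The two components are deliberately left unconnected.
clique = {f"a{i}": [f"a{j}" for j in range(3) if j != i] for i in range(3)}
ring = {f"b{i}": [f"b{(i - 1) % 8}", f"b{(i + 1) % 8}"] for i in range(8)}
model = {**clique, **ring}   # pass as `adj` to train_sog above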

There are many variants of the self-organizing map that use irregular networks. Those algorithms often employ adaptive strategies to generate the irregular networks depending on the data; the irregular networks are results of those algorithms. SOG, by contrast, takes the network as input to the algorithm, so the user can express his/her interest in and knowledge about the data through the network.

SOG is our first attempt to provide a new kind of clustering service. While its framework promises to provide general pattern-matching capability, the learning algorithm it employs has only a limited ability to capture structural information. In view of this, there are still many challenges and much potential for the SOG algorithm in the future.

Friday, March 13, 2009

Visualizing the real projective plane.

I have just uploaded a video clip to YouTube that shows some ways to explore the real projective plane with VisuMap. The link to this video is included here:



This dataset contains 14,358 data points (4-dimensional vectors) sampled from the surface of RP² embedded into R⁴. The analytical formula is given in the video clip. A smaller sample dataset in VisuMap format can be downloaded at KleinBottle4D, which also contains the dataset for the Klein bottle surface.
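For readers who want to produce such a sample themselves, the sketch below draws points on the unit sphere and pushes them through one standard embedding of RP² into R⁴, f(x, y, z) = (xy, xz, yz, x² − y²), which identifies antipodal points. This may differ from the exact formula used in the video and for the published dataset.

import numpy as np

rng = np.random.default_rng(0)
n = 2000
# sample uniformly on the unit sphere S^2
p = rng.normal(size=(n, 3))
p /= np.linalg.norm(p, axis=1, keepdims=True)
x, y, z = p.T
# f(-p) = f(p), so the image is a copy of RP^2 sitting in R^4
rp2 = np.column_stack([x * y, x * z, y * z, x ** 2 - y ** 2])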

It is interesting to notice that the RPM algorithm flattens the 2D sphere by cutting it into two equal-sized halves, but it flattens the real projective plane in such a way that the whole plane is kept in a single connected region.

Monday, March 9, 2009

Projective and Injective Exploration Methods

In a recent blog post Terence Tao has put mathematical approaches to studying a space X (which can be a topological space, a manifold, etc.) into two fundamental camps:

1. By looking at a map f: Y->X that maps a certain (better known) space Y into X.

2. By looking at a map f: X->Y that maps X into a certain (better known) space Y.

I think that these statements not only apply to mathematical studies; they also provide a useful framework for the exploration of high dimensional data. In our case, the unknown space X becomes a high dimensional dataset that we wish to understand. The space Y is usually a lower dimensional space that we know better and that is normally easy to visualize.

For the sake of simplicity I would like to refer to methods from the above two camps as injective and projective methods, respectively. That is, injective methods inject a known space into the target space, whereas projective methods project the target space into a better-known space.

Following this point of view, algorithms to visualize high dimensional data can be grouped as injective or projective. Most MDS (multidimensional scaling) based visualization algorithms are projective. Algorithms like the Sammon map and PCA (principal component analysis) map the high dimensional dataset into a low dimensional Euclidean space. The RPM algorithm (relational perspective map) maps high dimensional data to the flat torus.
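As a minimal example of a projective method, a PCA projection can be written in a few lines of NumPy (a generic sketch, not VisuMap's implementation):

import numpy as np

def pca_project(X, k=2):
    """Map the rows of X into R^k along the top-k principal directions."""
    Xc = X - X.mean(axis=0)                        # center the data
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:k].T                           # f: X -> Y with Y = R^k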

On the other side, most clustering algorithms can be considered injective algorithms. For instance, the method of vector quantization maps a finite discrete set into the target space, whereas the Kohonen net (or self-organizing map) maps a discrete set with a mesh topology into the target space. Hierarchical clustering algorithms (like the agglomerative clustering algorithm) basically map binary tree structures into the target space.
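For the injective camp, vector quantization is the simplest instance: a finite set of k code vectors is injected into the data space and iteratively pulled toward the data. A plain Lloyd-style sketch (again generic, not any particular product's code):

import numpy as np

def vector_quantize(X, k=2, iters=20, seed=0):
    """Inject a k-point codebook f: {1..k} -> data space via Lloyd iterations."""
    rng = np.random.default_rng(seed)
    codebook = X[rng.choice(len(X), size=k, replace=False)].astype(float)
    for _ in range(iters):
        # assign every point to its nearest code vector
        labels = np.argmin(((X[:, None, :] - codebook) ** 2).sum(-1), axis=1)
        # move every code vector to the mean of its assigned points
        for j in range(k):
            if np.any(labels == j):
                codebook[j] = X[labels == j].mean(axis=0)
    return codebook, labels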

One major benefit of this classification of algorithms is that it can guide us in applying existing algorithms to new data, in the sense that algorithms from the same camp might be able to share some methods/concepts/tricks. For instance, we might apply an optimization method used by one clustering algorithm to another injective algorithm.

Furthermore, as pointed out in Tao's blog post, there is a certain "duality" between injective and projective methods. That means, for a given method in one camp you can sometimes find its corresponding counterpart in the other camp. This duality can then help us transfer a method from one camp to the other. As an example of this "duality", the RPM algorithm is basically a dual algorithm of the self-organizing map: they are in different camps, but are dual to each other, as they share the flat torus as the image space (i.e. the space Y mentioned at the beginning of this post).

Similar to the duality concept, I have blogged before about the "symmetry" between these two camps in more restricted situations, namely when the datasets are multidimensional vector spaces (e.g. data tables). In these cases, we can often transfer a method from one camp to the other in a straightforward way (e.g. by transposing the data matrix).
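A minimal illustration of the transposition trick, reusing the vector_quantize sketch from above on a made-up table:

import numpy as np

X = np.random.default_rng(1).normal(size=(100, 12))   # toy data table
_, row_labels = vector_quantize(X, k=3)    # cluster the samples (rows)
_, col_labels = vector_quantize(X.T, k=3)  # same method on X.T clusters the features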

The injective and projective algorithms are realized in VisuMap by clustering and mapping services, respectively, which form the two main groups of services in the VisuMap application.

Thursday, March 5, 2009

Dimensional analysis on multidimensional scaling

This post brings forward an inconsistency problem in the study of multidimensional scaling from the perspective of dimensional analysis. This post is rather of a philosophical nature. Nevertheless, I find it worthwhile to be aware of this issue for further theoretical investigations.

Dimensional analysis is a simple and useful rule to check the consistency of a mathematical formula or equation. Roughly speaking, this rule demands that both sides of an equation (that has any physical or chemical bearing) must have the same dimension, or unit of measurement. So for instance the equation E = mc² is consistent with dimensional analysis, since both sides of the equation have energy as their dimension. The equation E = mc is not consistent, since the right side would have momentum as its dimension.

Similarly, if a formula is composed of multiple terms by addition, then all terms must have the same dimension. So for instance, the formula S := π·r² + r²/2 makes sense, but not S := π·r² + r/2.
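The rule is mechanical enough to be checked by a few lines of code. The following sketch tracks dimensions as exponent dictionaries over the base units kg, m, s; it is a toy model of my own, not a full units library:

def dim_mul(a, b):
    """Multiplying quantities adds their unit exponents."""
    c = dict(a)
    for unit, exp in b.items():
        c[unit] = c.get(unit, 0) + exp
    return {u: e for u, e in c.items() if e != 0}

def dim_pow(a, n):
    return {u: e * n for u, e in a.items()}

MASS, LENGTH, TIME = {"kg": 1}, {"m": 1}, {"s": 1}
speed = dim_mul(LENGTH, dim_pow(TIME, -1))         # m·s⁻¹
ENERGY = {"kg": 1, "m": 2, "s": -2}                # dimension of the joule
print(ENERGY == dim_mul(MASS, dim_pow(speed, 2)))  # True:  E = mc² is consistent
print(ENERGY == dim_mul(MASS, speed))              # False: E = mc is not

The same equality test also covers the addition rule: two terms may be summed only if their exponent dictionaries are equal.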

Now, let's turn to a problem of multidimensional scaling (MDS) from the perspective of dimensional analysis. Roughly speaking, MDS maps data points from an abstract space with a distance function into a low dimensional, normally Euclidean, space like R². The goal of an MDS algorithm is to construct the map in such a way that the distances in the low dimensional space approximate the distances in the original abstract space.

More formally, let Dᵢⱼ be the distance between two data points in the original space, and dᵢⱼ the distance in the low dimensional space after the MDS mapping. An MDS algorithm then tries to minimize the following quantity (often called the stress):

S := Σ (Dᵢⱼ − dᵢⱼ)²

Now, from the viewpoint of dimensional analysis the stress function in its above form is not consistent, since Dᵢⱼ and dᵢⱼ arguably have different dimensions, as they measure distances in different spaces.

One workaround to resolve this inconsistency is to add a special constant or to apply a monotonic function to dᵢⱼ, like:

S := Σ (Dᵢⱼ − C·dᵢⱼ)²     or     S := Σ (Dᵢⱼ − f(dᵢⱼ))²

The constant C or the function f should magically convert the dimension of dᵢⱼ to that of Dᵢⱼ. This kind of ad-hoc technique looks pretty artificial; most scientists probably won't like this method.
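To make the workaround concrete: for the constant variant, C can at least be chosen optimally in the least-squares sense, C = Σ Dᵢⱼ·dᵢⱼ / Σ dᵢⱼ². A small NumPy sketch of both stress forms (my own illustration, not VisuMap's code):

import numpy as np

def stress(D, d):
    """Classic MDS stress S = Σ (D_ij − d_ij)² over distinct pairs."""
    i, j = np.triu_indices_from(D, k=1)
    return np.sum((D[i, j] - d[i, j]) ** 2)

def scaled_stress(D, d):
    """S = Σ (D_ij − C·d_ij)² with the least-squares optimal constant C."""
    i, j = np.triu_indices_from(D, k=1)
    C = np.sum(D[i, j] * d[i, j]) / np.sum(d[i, j] ** 2)
    return np.sum((D[i, j] - C * d[i, j]) ** 2), C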

It should be noticed that the RPM (relational perspective map) algorithm avoids this inconsistency by using the following stress function:

S := Σ Dᵢⱼ/dᵢⱼ

The dimensional analysis rule does not prohibit composing terms of different dimensions by multiplication. In the above formula, both sides of the equation carry a completely new dimension, namely the dimension of Dᵢⱼ divided by the dimension of dᵢⱼ.
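For comparison, the ratio-style stress stated above is just as short to compute (again a sketch of my own):

import numpy as np

def rpm_stress(D, d):
    """RPM-style stress S = Σ D_ij / d_ij: the dimensions combine by
    division, so no unit mismatch between the two spaces arises."""
    i, j = np.triu_indices_from(D, k=1)
    return np.sum(D[i, j] / d[i, j])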