Saturday, November 8, 2014

Geometric Modeling of Sequential Data (continued)

In a previous note, On geometric modeling of sequential data, I described a framework (GMS) to convert discrete sequential data to 3D geometrical shapes. This note extends GMS to a more general form that allows more efficient sampling of higher dimensional vectors from sequence data.

The GMS framework

Recall that GMS consists of three basic steps: 1. Scanning a sequence to produce a collection of high dimensional vectors; 2. Reordering the components of these vectors to harmonize the sampled vectors; 3. Applying a dimensionality reduction algorithm to embed these vectors into a low dimensional space. For the first step, we start with a sequence sk∈S; k=0, 1, 2, ..., K; where S is a finite alphabet. Imagine that the sequence has been put through a scanning machine that quickly takes a series of snapshots of the part that is currently in the machine:

In the above diagram, the illustrated scanning machine is sized to hold 3 nodes of the sequence. As output the machine produces a series of sampling vectors Vt for a series of discrete time points. A sampling vector Vt is actually a 2n+1 dimensional vector (α0, ..., αn-1; sk, ..., sk+n-1; t). The above diagram illustrates the case n=3. Here, sk, ..., sk+n-1 are the types of the n nodes currently passing through the machine; α0, ..., αn-1 are the coefficients, called amplitudes, for the corresponding nodes. For the sake of compactness we denote the pair (αk, sk) simply as αksk in the above diagram.

The amplitude αk for the k-th node at time t is calculated as follows:

The right side of the above formula is called the amplifying function. This amplifying function implies that the node sequence passes through the scanning machine at a speed of one node per time unit. In general, any continuous function that is zero at the entry point 0 and the exit point n could be used as an amplifying function.
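As a concrete illustration, the following sketch computes such an amplitude, assuming the simple amplifying function f(x) = sin(πx/n), which is continuous and vanishes at both the entry point 0 and the exit point n; the actual amplifying function used in VisuMap may differ:

```python
import math

def amplitude(k, t, n):
    """Amplitude of the k-th node in a machine of size n at time t.

    Assumes the sequence moves through the machine at one node per
    time unit, so the k-th node sits at position (t mod 1) + k, and
    assumes the amplifying function f(x) = sin(pi * x / n).
    """
    x = (t % 1.0) + k  # position of the k-th node inside the machine
    return math.sin(math.pi * x / n)
```

With n = 3, a node's amplitude starts at zero as it enters, peaks in the middle of the machine, and returns to zero as it exits.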

We notice that the above formula specifies a continuous function with respect to t as long as no new node enters the machine to replace an old one. In case a new node, say sk-1, enters the machine to replace an old one, say sk+2, during a small interval between t and t', the sampling vector will undergo the following change:

Since the amplifying function vanishes at the entry point 0 and the exit point n, the amplitudes α2 and α'0 will be close to zero; and since the amplifying function is continuous, we'll have α0≈α'1 and α1≈α'2. Thus, when we apply a circular shift to Vt' as illustrated in the following diagram, the shifted vector will be close to Vt:

Notice that the node-type components of the shifted vector and Vt may hold different node types (e.g. sk-1 ≠ sk+2), but the corresponding amplitudes α2 and α'0 are close to zero, so the shifted vector will still be close to Vt as a high dimensional vector. Thus, after applying the circular shifting operation, the scanning machine produces a series of gradually changing vectors in high dimensional space.

More generally, for the implementation of the second step, the scanning machine has a circular shifting operator that circularly shifts the sampled vectors; the shifting operator increments its shift length by 1 every time a new node enters the machine.
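A minimal Python sketch of this reordering step (the function names are mine, not VisuMap's): the amplitude block and the node-type block are shifted in lockstep, with the shift length equal to the number of nodes that have entered the machine so far:

```python
def circular_shift(block, shift):
    """Circularly shift a list to the left by `shift` positions."""
    shift %= len(block)
    return block[shift:] + block[:shift]

def harmonize(amplitudes, nodes, nodes_entered):
    """Reorder a sampled vector so consecutive samplings line up.

    `nodes_entered` is the number of nodes that have entered the
    machine so far; it grows by 1 with every new node, so the shift
    length grows by 1 as well.
    """
    return (circular_shift(amplitudes, nodes_entered),
            circular_shift(nodes, nodes_entered))
```

After one new node has entered, the shift moves the freshly entered (near-zero) amplitude to the end, so the shifted vector lines up component-wise with the previous sampling vector.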

As the third step, a dimensionality reduction algorithm is applied to the vectors to embed them into a low dimensional space. For this study I picked the affinity embedding (AE) algorithm. I used the t-SNE algorithm in the previous note, but in my experiments AE normally worked better for this purpose, as it runs much faster on large datasets. To apply the AE algorithm we need either a distance metric that measures the dissimilarity between data points, or an affinity function that measures the similarity (a kind of attraction) between data points. For this study we use the following affinity function:

where the summation runs over all indices between 0 and n-1 such that sk = s'k; λ is an algorithmic constant that can take any positive value. In plain words: if we define the affinity between two nodes as the product of their amplitudes, then the affinity between two sampling vectors is the sum of the affinities between all matching nodes, decayed by the time elapsed between the two samplings. The constant λ, called the decay speed, controls how fast the affinity diminishes as the time elapsed between the two samplings grows. More concretely, the affinity reduces to half of its value when the two sampling vectors are separated by λn nodes.
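The following Python sketch computes such an affinity between two sampling vectors. The exponential decay form exp(-λ·|t - t'|) is my assumption; the text only requires that the affinity diminish with the elapsed time at a speed controlled by λ, and VisuMap's exact decay form may differ:

```python
import math

def sequence_affinity(amps1, nodes1, t1, amps2, nodes2, t2, lam):
    """Affinity between two sampling vectors: the sum of amplitude
    products over positions holding the same node type, decayed by
    the time elapsed between the two samplings.

    `lam` is the decay speed; the exponential decay form used here
    is an assumption, not necessarily VisuMap's exact formula.
    """
    matched = sum(a1 * a2
                  for (a1, n1), (a2, n2) in zip(zip(amps1, nodes1),
                                                zip(amps2, nodes2))
                  if n1 == n2)
    return matched * math.exp(-lam * abs(t1 - t2))
```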

The effect of the scanning size n

The scanning size n is a key parameter of the scanning machine: it determines how many consecutive nodes of the sequence are read (or scanned) to construct an output vector. For a larger scanning size, the affinities between the data points aggregate over a larger number of consecutive nodes. Thus, the scanning size controls a kind of granularity of the scanning machine.

To see the effect of the scanning size, I downloaded the DNA sequence of a relatively short gene, the CD8 gene, which consists of 744 base pairs. This sequence was then processed by the GMS framework to create 25000 data points, which were embedded into 3D space. The following video shows the resulting 3D maps created with different scanning sizes:

We see clearly that the 3D curve gradually becomes simpler and smoother as the scanning size grows from 3 to 24.

The effect of the decay speed λ

The decay speed λ provides a way to differentiate samplings created at different times. A larger λ means that the affinity between similar node patterns will diminish faster as the elapsed time between them grows. To demonstrate the effect of the decay speed I created a series of maps for the CD8 gene sequence with increasing λ. The following short video shows those maps:

We can see clearly that the map stretches gradually as λ grows from 0 to 0.08.

Implementing GMS in VisuMap

VisuMap version 4.2.905 supports the GMS framework through two new metrics, "Sequence Metric" and "Sequence Affinity". The former is a distance metric that can be used by most mapping algorithms; the latter is an affinity metric that can only be used by the affinity embedding algorithm.

To create a model for a sequence with VisuMap, we first create a data table with a single column and 5 to 10 thousand rows. The content of the table is not relevant for the modelling; only the number of rows is used by the scanning machine as the number of samplings. (The link SeqVis provides such a sample dataset together with some sample maps and supporting scripts.) We then open the map configuration window to select "Sequence Affinity" as the metric, and specify a new filter as the settings for the scanning machine. The following picture shows the settings for the CD8 sequence used in the previous examples.

Notice that the field labeled "Stretch Factor" sets the value for the decay speed, since in normal cases this parameter determines how far the resulting map will be stretched along the sequence direction.

Also notice that spaces and newline characters in the sequence entered in the Sequence Definition window are ignored by the scanning machine. With this filter definition window we can easily generate 3D maps for arbitrary sequences. The following pictures show some 3D maps together with their corresponding sequences:


We have extended the GMS framework with larger scanning sizes and a more efficient dimensionality reduction algorithm. With these extensions we can model much larger sequences from different perspectives, and therefore capture more information from more realistic sequences.

We notice that the proposed framework bears a certain similarity to the ribosome, the machine that translates RNA sequences to proteins. Just as biologists believe all macroscopic patterns and features are ultimately encoded in DNA sequences, I believe that GMS can capture a large class of relevant patterns in discrete sequences with 3-dimensional geometrical models. GMS thus offers us a toy ribosome to simulate the translation of sequential information into geometrical models.

Tuesday, September 23, 2014

On geometric modeling of sequential data

The biological cellular system appears to be a universal machine that is capable of translating discrete sequential data, i.e. the genetic code, into all kinds of organisms with diverse phenotypic features. In this note I put forward a computational framework that translates discrete sequential data into lower dimensional geometrical shapes. Inspired by the biological cellular system, this framework first samples high dimensional vectors from the discrete data; then it applies dimensionality reduction methods to embed those vectors into a low dimensional space. The resulting map normally forms shapes with geometrical properties that reflect information in the initial sequence.

Let's start with an input sequence of nucleic acids sk; k=0, 1, 2, ..., K with sk ∈ S := {C, G, A, T}. To illustrate the sampling process, we imagine that the sequence is a series of marbles in 4 different imaginary colors. Mathematically, these "colors", i.e. the set S, can be represented by 4 dimensional vectors, each with just a single non-zero component, as follows:

Now, imagine we have a camera that moves slowly from one end of the sequence to the other; during the movement we quickly take a series of pictures of the two closest marbles. These pictures record the different colors and intensities of the two closest marbles. As depicted in the following schema, we code those "pictures" as 8 dimensional vectors:

In the above schema, α and β are the intensity coefficients for the two closest marbles (they can be multiplied directly into the first and second halves of the 8-vector, respectively). α and β are calculated as follows: We assume that the camera moves evenly at one marble per time unit. So at a given time, say t1, the camera is located between the k-th and the k+1-th marble; the phase of the camera is then calculated as p1 := t1 - k, where k is the largest integer smaller than t1, i.e. k = ⌊t1⌋. Then α and β are calculated as α := cos(p1π/2) and β := sin(p1π/2). We notice here that as the camera moves smoothly from k to k+1-ε for any infinitesimally small number ε, the pair (α, β) changes smoothly from (1, 0) to (0, 1).

However, at the time point k+1 the pair (α, β) takes the value (1, 0), since the camera is now considered to be between the k+1-th and the k+2-th marble. That means (α, β), as a time dependent 2-dimensional variable, is not continuous at the integer time points; consequently, the corresponding 8-vectors in general won't change smoothly in the 8-dimensional space.

In order to avoid this discontinuity, we swap the two 4-vectors of an 8-vector for those vectors produced when ⌊t⌋ is an odd integer. That means in the above schema, at time t2, the 8-vector produced is swapped from v2 to v'2 as shown in the following diagram:
With this swapping operation, we can easily verify that the sequence of 8-dimensional vectors actually changes smoothly as the camera smoothly proceeds.

In addition to these 8 components, the sampling process at each step also appends a timestamp ct as the 9-th component of the output vector, where c is a small constant chosen as an algorithm parameter.

Summing up the above steps, the sampling process can be denoted as an operator that operates on a discrete sequence s as follows:
where t ∈ [0, K] is the time parameter; c is an algorithmic parameter; k := ⌊t⌋ is the sequential index; α := cos(pπ/2) and β := sin(pπ/2), with p := t - k, are the intensity coefficients for the two closest marbles.
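The steps above can be sketched as a small Python function (a minimal rendering of the formulas, including the odd/even swap; not VisuMap's actual implementation):

```python
import math

# One-hot 4-vectors for the four "marble colors".
ONE_HOT = {'C': (1, 0, 0, 0), 'G': (0, 1, 0, 0),
           'A': (0, 0, 1, 0), 'T': (0, 0, 0, 1)}

def sample(seq, t, c):
    """Return the 9-dimensional sampling vector at time t in [0, K)."""
    k = math.floor(t)                 # sequential index
    p = t - k                         # phase of the camera
    alpha = math.cos(p * math.pi / 2)
    beta = math.sin(p * math.pi / 2)
    u = tuple(alpha * x for x in ONE_HOT[seq[k]])      # closest marble
    v = tuple(beta * x for x in ONE_HOT[seq[k + 1]])   # next marble
    if k % 2 == 1:                    # swap the two 4-vectors at odd k
        u, v = v, u
    return u + v + (c * t,)           # append the timestamp ct
```

One can check the continuity at integer time points: approaching t = 1 from the left gives nearly the same 8 leading components as t = 1 itself, thanks to the swap.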

After we have sampled a set of 9-dimensional vectors, we use a dimensionality reduction (DR) algorithm to embed those vectors into 2- or 3-dimensional space. For our framework we need a DR algorithm that is strong in preserving local information, since the "camera" basically samples only local information. Algorithms like RPM, CCA, Affinity Embedding and t-SNE appear to be good candidates for this purpose. For this note I picked the t-SNE algorithm implemented in the VisuMap software.

As a first example we use our algorithm to model the short nucleic acid sequence "CCCTGTGGAGCCACACCCTAG". Notice that the timestamp component in the above description is the only component that grows with time without bound, whereas the other 8 components are confined to the range between 0 and 1.0. Thus, a larger c will likely stretch the model further apart. To show this property, we created four different data sets with c = 0, 0.5/N, 2.5/N, and 10/N, where N is the number of samplings, which in this note is 15000. The following four clips show the resulting geometric models of these data sets. For the sake of clarity, I have colored the data points with increasing brightness as time progresses; and when the scanning camera is closest to a node, the corresponding sampling data point is represented with an icon that indicates the type of the nucleic acid.

The above models show clearly that the coefficient c controls how far the model gets stretched. The other 8 components contribute information to the folding structure.

The second example compares models created from a reversed sequence. The following clip compares the model of the previous sequence with the model of its reversed sequence. We can see that the two models are geometrically more or less unchanged, except that the gradient direction is reversed. This means that the described modeling method is invariant under sequence inversion.

The third example shows how the model changes when the sequence is duplicated multiple times. The following video clip shows the model of our sample sequence and the model of 5 duplicates of that sequence. We can see that the model of the 5-fold duplicate resembles 5 overlapping copies of the single sequence model.

As a next example, we created a model for the protein sequence MELNSSFWTLIKTKMKSRNDNNKLLDTWLDPIEYVSTTGSADRP. The slightly more complicated model is shown in the following video clip:

We notice that the sampled vectors are all sparse vectors. In practical implementations, we can directly calculate the Euclidean distance between those vectors without explicitly constructing the high dimensional vectors. In this way, we can model sequences over any base set. As a final example, I have created a 3D model for the text sequence "One fish two fish red fish blue fish", as shown in the following video clip:
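The distance shortcut mentioned above can be sketched as follows: representing a sampling vector by its intensities, symbols, and timestamp, the Euclidean distance falls out of a case analysis on whether the symbols match, with no one-hot vectors ever built (the representation and names here are mine, not VisuMap's):

```python
import math

def sparse_distance(v1, v2):
    """Euclidean distance between two sampling vectors, each given as
    ([(intensity, symbol), ...], timestamp).

    Matching symbols contribute (a1 - a2)^2 to the squared distance;
    mismatching symbols contribute a1^2 + a2^2, since their one-hot
    blocks are orthogonal. Works for any base set of symbols.
    """
    blocks1, ts1 = v1
    blocks2, ts2 = v2
    d2 = (ts1 - ts2) ** 2
    for (a1, s1), (a2, s2) in zip(blocks1, blocks2):
        if s1 == s2:
            d2 += (a1 - a2) ** 2
        else:
            d2 += a1 ** 2 + a2 ** 2
    return math.sqrt(d2)
```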


The framework described here demonstrates a new way to derive geometrical models from discrete data. The framework is abstract in the sense that it is independent of the physical and chemical properties of the sequence. The experiments demonstrate that a high dimensional feature space might be an effective intermediate stage for deriving 3-dimensional phenotypic structures.

We notice that the models created above are all basically 1-dimensional curves folded in different ways. For future study, it would be interesting to develop a 2-dimensional sampling method that produces 2- or 3-dimensional models. More speculatively: can we develop an evolutionary process to find a sequence that gives rise to models with certain properties?

Monday, September 15, 2014

On affinity embedding

We have just released VisuMap version 4.2.902. A new mapping algorithm called Affinity Embedding (AE) has been introduced in this release. I'll briefly describe AE as a multidimensional scaling (MDS) method; or, in more recent terminology, as a dimensionality reduction method.

Like many MDS methods, AE tries to create a low dimensional representation of high dimensional data points by minimizing a stress function. AE's stress function is as follows:
where aij is a positive number representing the affinity between data points i and j, and dij is the Euclidean distance between those two points in the low dimensional space. Affinity, as a generic concept, quantifies a certain kind of similarity between data entities. We notice that most MDS methods try to minimize the difference (e.g. squared error) between a dissimilarity matrix and the dij; whereas AE tries to approximate the similarity matrix.

Since many data sets in practice are characterized by a distance matrix δij, VisuMap uses the following exponential affinity function to convert a distance matrix to an affinity matrix:
where the cij are constants estimated based on the densities around the data points in the high dimensional space. This affinity function is just one simple way to convert dissimilarity to similarity; in fact, any monotonically decreasing function may be used as an affinity function. For more general applications, VisuMap offers a plugin interface that allows users to implement custom affinities.
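As a sketch, assuming the simple form aij = exp(-δij/cij) (the exact formula VisuMap uses is not spelled out here; any monotonically decreasing function would qualify):

```python
import math

def exponential_affinity(delta, c):
    """Convert a dissimilarity delta >= 0 into an affinity in (0, 1],
    assuming the form exp(-delta / c). Here c plays the role of the
    per-pair constant c_ij estimated from local densities; the exact
    form is an assumption, not VisuMap's published formula.
    """
    return math.exp(-delta / c)
```

Identical points get affinity 1, and the affinity decays toward 0 as the dissimilarity grows, faster for smaller c.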

AE uses a simple version of the gradient descent method to minimize the stress function. The computational complexity of each iterative step is O(kN), where N is the number of data points and k is a constant that is typically around 100. Moreover, the memory complexity of AE is O(N). Thus, AE can be applied to relatively large data sets, i.e. with a million or more data points.

The main characteristic of AE is its focus on local information (i.e. data with high affinity or low dissimilarity). The reason for this is that low affinities are represented by close-to-zero values, which the optimization algorithm can effectively ignore; whereas other algorithms like CCA or Sammon mapping need to reduce the effect of large values (e.g. large distances or dissimilarities), which is normally harder to do.

AE has been largely inspired by the Stochastic Neighbor Embedding (SNE) algorithm and its more recent version t-SNE. Both SNE algorithms work pretty well at focusing on local information, and both of them use a kind of exponential affinity function in their stress functions. However, because of their probabilistic nature, they in general require quadratic computational and memory complexity, which makes it difficult to handle large data sets.

Like SNE, AE works pretty well at unfolding closed manifolds. The following pictures compare AE and t-SNE in mapping three spheres of different sizes in 3D space onto the 2D plane. Notice that AE somewhat preserves the sizes of the three spheres, whereas t-SNE basically ignores the size information:

Data set forming three spheres in 3D space.

The t-SNE map for the three sphere data set.

The AE map for the three sphere data set.

Wednesday, January 15, 2014

VisuMap 4.2.896 Released

I have just released our software package VisuMap version 4.2.896. In this release we have upgraded the graphics library DirectX from version 9 to version 11. Due to this upgrade, some of the 3D animation data views have improved performance by more than 10-fold. VisuMap is now capable of interactively exploring more than one million data points in the 3D view on a normal desktop computer.

Also with this upgrade, VisuMap requires Windows 7 or higher. Windows XP will continue to be supported, but without new features that take advantage of the DirectX 11 library. In the future, we'll likely implement more performance enhancements by tapping the computational power of the GPU.