Friday, June 28, 2019

Representation Learning with Deep Neural Networks

Representation Learning for Algebraic Relationships


This note puts forward a simple framework for learning data representations with deep neural networks. The framework is inspired by mathematical representation theory, where we look for vector representations of elements of algebraic structures like groups.

To describe the framework, let's assume that we have the following table for an abstract operation:

⊕ | a | b | c | d
a | 0 | 1 | 2 | 3
b | 1 | 2 | 3 | 0
c | 2 | 3 | 0 | 1
d | 3 | 0 | 1 | 2
Table 1: Binary operation table for abstract items a, b, c and d.

The table above shows that the abstract entities a, b, c and d fulfill an abstract operation ⊕, for instance a⊕a = 0 and b⊕c = 3. We now try to find representations for a, b, c and d as two-dimensional vectors. In order to do so, we start with a default zero-representation as in the following table:

a:(0, 0)
b:(0, 0)
c:(0, 0)
d:(0, 0)
Table 2: Initial representation table for a, b, c, d as 2-dimensional zero vectors.

That means a, b, c and d are all initially represented by the vector (0, 0). We then construct a feed-forward neural network as illustrated in the following:


The network has four input nodes to accept a pair of 2-dimensional vectors, and one output node to approximate the operation ⊕. Like a conventional feed-forward neural network, the input and output nodes get their data from the operation and representation tables before each optimization step. Unlike a conventional feed-forward network, the input nodes are configured as trainable variables which are adjusted during the learning process, and their values are saved back to the representation table after each learning step. More concretely, the learning process goes as follows:
  1. Randomly select a sample from the operation table, say b⊕c = 3.
  2. Look up the representations for b and c in the representation table and assign them to the input nodes of the network. Look up the expected output for b⊕c in the operation table and pass it to the output node as the learning target.
  3. Run the optimization algorithm, e.g. the Adam optimizer, for one step, which changes the states of the input nodes to new values b'0, b'1, c'0 and c'1.
  4. Save b'0, b'1, c'0 and c'1 back to the representation table as the new representations for b and c.
  5. Continue with step 1.
With appropriate choices for the network and the training algorithm, the above training process will eventually converge to representations for a, b, c and d, so that the network realizes the abstract operation described in table 1. A TensorFlow-based implementation of the above training process is available at
Z4GroupLearning, which can be run on the Google cloud computing platform from a browser.
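
To make the procedure concrete, here is a minimal sketch of the above training loop in TensorFlow 2. It is not the Z4GroupLearning code; the operation table is written out directly, and the layer sizes, learning rate and step count are assumptions chosen only for illustration.

```python
import numpy as np
import tensorflow as tf

# Operation table of table 1, with a, b, c, d identified with 0, 1, 2, 3 (addition mod 4).
items = ['a', 'b', 'c', 'd']
op_table = {(x, y): (i + j) % 4
            for i, x in enumerate(items) for j, y in enumerate(items)}

# Representation table: every item starts as a 2-dimensional zero vector (table 2).
rep_table = {x: np.zeros(2, dtype=np.float32) for x in items}

# Feed-forward network: 4 input nodes (a pair of 2-D vectors) -> 1 output node.
net = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(4,)),
    tf.keras.layers.Dense(32, activation='relu'),
    tf.keras.layers.Dense(32, activation='relu'),
    tf.keras.layers.Dense(1)])

# Trainable input node whose state is swapped with the representation table.
inp = tf.Variable(tf.zeros((1, 4)))
opt = tf.keras.optimizers.Adam(0.01)

for step in range(20000):
    # 1. Randomly select a sample from the operation table.
    x, y = np.random.choice(items), np.random.choice(items)
    target = tf.constant([[float(op_table[(x, y)])]])
    # 2. Load the current representations into the trainable input node.
    inp.assign([np.concatenate([rep_table[x], rep_table[y]])])
    # 3. One optimization step adjusts both the network weights and the input node.
    with tf.GradientTape() as tape:
        loss = tf.reduce_mean((net(inp) - target) ** 2)
    variables = net.trainable_variables + [inp]
    opt.apply_gradients(zip(tape.gradient(loss, variables), variables))
    # 4. Save the adjusted input state back to the representation table.
    new = inp.numpy()[0]
    rep_table[x], rep_table[y] = new[:2], new[2:]
```

After enough steps, feeding the stored representations of any pair back through the network should reproduce the corresponding entry of table 1.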

Representation Learning for Transformations

In the previous example, table 1 actually describes the finite group ℤ4, and the representations are linked to the input layer. In general, the framework is not restricted to binary operations, and the representation vectors can be linked to any state vectors of the network, i.e. its weights, biases and other trainable variables.

For a demonstration, we describe here a network that finds representations for transformations in 3D space. Let's assume that X is a 1000x3 table consisting of 1000 points sampled from a sphere in 3-dimensional space. Let {T0, T1, ...} be a sequence of 3D-to-3D transformations which transform X to {Y0, Y1, ...}. Assuming that X and {Y0, Y1, ...} are known, we can then find representations for the transformations {T0, T1, ...} with a deep neural network as illustrated in the following:

The transformations are linked to an internal part of the network's weight matrix. Before the training process starts, the transformations are initialized as zero matrices with the same shape as the linked weight matrix. At each training epoch, a 3-tuple (X, Tt, Yt) is randomly selected; Tt is pushed into the linked weight matrix; then the optimization process is run to learn the X-to-Yt mapping. After the optimization has looped several times over all data points in X and Yt, the linked weight matrix is pulled back into the representation Tt and stored as T't for future epochs. The training process then continues with other randomly selected 3-tuples, until all Yt have been sufficiently approximated by the network.
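
This push/pull scheme can be sketched roughly as follows, again in TensorFlow 2. The representation width D, the layer sizes and the two stand-in transformations are assumptions made only for illustration, not the setup of the experiment below.

```python
import numpy as np
import tensorflow as tf

# 1000 points on the unit sphere (the known X).
X = np.random.randn(1000, 3).astype(np.float32)
X /= np.linalg.norm(X, axis=1, keepdims=True)

# Two stand-in transformed images Y0, Y1 (a scaling and a translation of X).
Ys = [(X @ np.diag([1.0, 2.0, 0.5])).astype(np.float32),
      X + np.array([1.0, 0.0, 0.0], dtype=np.float32)]

D = 16                                    # width of the linked weight matrix (assumed)
T_link = tf.Variable(tf.zeros((3, D)))    # weight matrix linked to the transformations
dense1 = tf.keras.layers.Dense(64, activation='relu')
dense2 = tf.keras.layers.Dense(3)
opt = tf.keras.optimizers.Adam(1e-3)

# Representations T't, initialized as zero matrices of the same shape as T_link.
reps = [np.zeros((3, D), dtype=np.float32) for _ in Ys]

for epoch in range(200):
    t = np.random.randint(len(Ys))        # pick a 3-tuple (X, Tt, Yt)
    T_link.assign(reps[t])                # push Tt into the linked weight matrix
    for _ in range(10):                   # a few optimization loops for X -> Yt
        with tf.GradientTape() as tape:
            hidden = dense1(tf.matmul(X, T_link))
            loss = tf.reduce_mean((dense2(hidden) - Ys[t]) ** 2)
        vars_ = [T_link] + dense1.trainable_variables + dense2.trainable_variables
        opt.apply_gradients(zip(tape.gradient(loss, vars_), vars_))
    reps[t] = T_link.numpy()              # pull the weight matrix back as T't
```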

As an example, we have selected 30 linear translations and 30 rotations to transform the sphere in 3D space. The following video clip shows the transformed images Yt:



After the training process completed in about 100 epochs, we obtained 60 matrices as representations for the transformations. We then embedded the 60 matrices into 3-dimensional space with the PCA (principal component analysis) algorithm. The following picture shows the embedding of the 60 transformations:


In the above picture, each dot represents a transformation: a red dot corresponds to a linear translation, a yellow dot to a rotation. We see that the representations generated by the deep neural network correspond very well to the geometric structure of the transformations.

Representation Learning for Dimensionality Reduction

The representation learning (RL) described here can be used to perform non-linear dimensionality reduction in a straightforward way. For example, in order to reduce an m-dimensional dataset to k dimensions, we can simply take a feed-forward network with k input nodes and m output nodes. As depicted in the following diagram, the input nodes are configured as learning nodes whose state values are loaded per data sample and adjusted during the learning process.
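
A minimal sketch of this idea, assuming TensorFlow 2: each data point gets its own trainable k-dimensional code, and a decoder with k input nodes and m output nodes learns to reconstruct the point from its code. The names (reduce_dim, codes, decoder) and all hyper-parameters are illustrative.

```python
import numpy as np
import tensorflow as tf

def reduce_dim(data, k=2, steps=5000, batch=32):
    """Learn a k-dimensional code for every row of `data` with a decoder-only network."""
    data = np.asarray(data, dtype=np.float32)
    n, m = data.shape
    codes = tf.Variable(tf.zeros((n, k)))           # one trainable code per data point
    decoder = tf.keras.Sequential([
        tf.keras.layers.Input(shape=(k,)),
        tf.keras.layers.Dense(64, activation='relu'),
        tf.keras.layers.Dense(64, activation='relu'),
        tf.keras.layers.Dense(m)])
    opt = tf.keras.optimizers.Adam(1e-2)
    for _ in range(steps):
        idx = np.random.randint(0, n, size=batch)
        with tf.GradientTape() as tape:
            recon = decoder(tf.gather(codes, idx))  # decode the selected codes
            loss = tf.reduce_mean((recon - data[idx]) ** 2)
        variables = decoder.trainable_variables + [codes]
        opt.apply_gradients(zip(tape.gradient(loss, variables), variables))
    return codes.numpy()                            # the k-dimensional embedding
```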

We notice that autoencoder (AE) networks have often been used in various ways for dimensionality reduction and unsupervised learning. Compared to an AE, RL basically performs just the decoding service. The essential difference, however, is that an AE trains a network, whereas RL directly trains the lower-dimensional representation.

The following map shows an example of reducing a 3-dimensional sphere dataset to the 2-dimensional plane. An implementation with TensorFlow on the Colab platform can be found here.


We notice that the 2-dimensional map preserves a large part of the local topology, while unfolding and flattening the 3D sphere structure.

If an encoding service (like that in an AE) is needed, we can extend the above RL network with an encoding network that maps the high-dimensional data to its lower-dimensional representation. This encoding network can then be trained independently from the decoding part, or concurrently with it.
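
As a rough illustration of this extension (not the post's implementation), such an encoder could simply be fitted to map the original data points to the codes learned by RL:

```python
import tensorflow as tf

def fit_encoder(data_3d, codes_2d, epochs=100):
    """Fit a small encoder that maps the original 3-D data to the learned 2-D codes."""
    encoder = tf.keras.Sequential([
        tf.keras.layers.Input(shape=(3,)),
        tf.keras.layers.Dense(64, activation='relu'),
        tf.keras.layers.Dense(64, activation='relu'),
        tf.keras.layers.Dense(2)])
    encoder.compile(optimizer='adam', loss='mse')
    encoder.fit(data_3d, codes_2d, epochs=epochs, batch_size=32, verbose=0)
    return encoder
```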

Application to Data Imputation in Epigenomics

Epigenomic studies show that the different cell types of a multi-cellular organism develop from a common genome by selectively expressing different genes. Many types of assays have been developed to probe these epigenomic modifications. The result of such an assay is normally a sequence of numerical values profiling certain biochemical properties along the whole genome strand. With a set of assays and a set of cell types we can then obtain a profile matrix as depicted in the following:

Table 3: Profile matrix for cell and assay types.

A large profile matrix covering various cell types can help us understand the behaviors of those cells. However, it is currently rather expensive to conduct assays in large numbers. To alleviate this limitation, researchers have tried to build models based on the available data, so that they can impute profiles for new cell types or assay types (ChromImpute, AVOCADO, PREDICTD, ENCODE Imputation Challenge).

We see that table 3 has a similar structure to table 1, thus we can model table 3 in the same way as table 1. There are, however, two main differences between the two tables: firstly, table 3 is only sparsely defined, since only a limited number of experiments have been done; secondly, an element of table 3 is a whole sequence (potentially billions of numerical values), whereas an element of table 1 is a single value. The training algorithm for table 1 can handle the data sparsity problem without any change; in order to obtain representations for each cell and assay type, we only have to make sure that each row and each column contains some defined profile data.

In order to address the high-dimensional data issue, we extend our neural network as depicted in the following:


Figure 1: Neural network for cell- and assay-type representation learning.

The training algorithm runs roughly as follows:

  1. Randomly select a profile from table 3, say the profile P(A, b) for cell type A and assay type b.
  2. Segment the profile into sections of equal length N, and arrange the sections into a matrix PA,b that has L rows and N columns.
  3. Look up the representation RA for cell type A in the present set of representations. If no such representation is found, initialize RA as a zero matrix of dimension LxDc, where Dc is a preset constant.
  4. Look up the representation Rb for assay type b in the present set of representations. If no such representation is found, initialize Rb as a zero matrix of dimension LxDa, where Da is a preset constant.
  5. Randomly select an integer k between 0 and L; push the k-th row of RA and Rb into the input layer of the network; run the optimization process, e.g. with the Adam optimizer, for one step with the k-th row of PA,b as the target label.
  6. Pull the state of the input layer back into the k-th rows of RA and Rb respectively.
  7. Repeat steps 5 and 6 a preset number of times.
  8. Select another profile and repeat steps 1 to 7 until the training process converges to a state with small errors.
Notice that the network depicted above consists mainly of dense feed-forward layers; only the last layer is a transposed convolution layer. This layer significantly extends the output dimension and therefore speeds up the training process for long profile sequences. Another technique to speed up the training, which is not directly visible in the above diagram, is batched learning: the training process updates the weight matrices and input layer states only after 30 to 40 samples have been fed into the algorithm.
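
The push/pull step of the algorithm above, together with a transposed-convolution output layer, might look roughly like the following TensorFlow 2 sketch. All shapes (Dc, Da, N) and layer sizes are assumed placeholder values, and per-sample updates are used here instead of the batched learning mentioned above.

```python
import numpy as np
import tensorflow as tf

Dc, Da, N = 32, 32, 1024        # representation widths and segment length (assumed values)

# Trainable input node holding one row of R_A concatenated with one row of R_b.
inp = tf.Variable(tf.zeros((1, Dc + Da)))
net = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(Dc + Da,)),
    tf.keras.layers.Dense(256, activation='relu'),
    tf.keras.layers.Dense(64 * 8, activation='relu'),
    tf.keras.layers.Reshape((64, 8)),
    # The transposed convolution expands 64 positions x 8 channels to N = 1024 outputs.
    tf.keras.layers.Conv1DTranspose(1, kernel_size=16, strides=16),
    tf.keras.layers.Flatten()])
opt = tf.keras.optimizers.Adam(1e-3)

def train_one_row(R_A, R_b, P, k, steps=40):
    """Steps 5-7: push row k of R_A and R_b, fit row k of the profile matrix P, pull back."""
    inp.assign([np.concatenate([R_A[k], R_b[k]])])
    target = P[k][None, :].astype(np.float32)
    for _ in range(steps):
        with tf.GradientTape() as tape:
            loss = tf.reduce_mean((net(inp) - target) ** 2)
        variables = net.trainable_variables + [inp]
        opt.apply_gradients(zip(tape.gradient(loss, variables), variables))
    new = inp.numpy()[0]
    R_A[k], R_b[k] = new[:Dc], new[Dc:]   # pull the input state back into the representations
```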

Representation Learning versus Model Learning

Most deep learning frameworks perform model learning in the sense that they train models to fit data; compared to the amount of training data, the size of the trained model is relatively small. RL extends model training to learn vector representations for key entities which are subject to certain algebraic relationships. Those key entities can be elements of mathematical groups, 3D image transformations, or biological cell and assay types. Since RL exploits the modeling power of deep neural networks in the same way as model learning does, methods and techniques developed for model learning (like data batching, normalization and regularization) can be directly used by the RL framework.

It is interesting to notice that RL incidentally resembles the working mechanism of cellular organisms as revealed by molecular biology. There, the basic machinery of a cell is the ribosome, which is relatively simple and generic compared to the DNA sequences. The ribosome can be considered as a biological model, whereas the DNA sequences serve as representations (i.e. code) for a plethora of organic phenotypes. The learning processes of the two are, however, different: whereas the ribosome model learns by recombination and optimizing selection, RL learns by gradient back-propagation. The former seems to be more general, whereas the latter is more specific and, arguably, more efficient. Nevertheless, I do think that natural organisms at some level might host more efficient gradient-directed learning procedures.

Conclusion

We have presented a representation learning method based on deep neural networks. The method basically swaps data representations with the states of designated neuron nodes during the training process. Intuitively, the learned representations can be considered as target data back-propagated to those designated neuron nodes. With examples we have demonstrated that the RL algorithm can effectively learn representations of abstract data like algebraic entities and transformations, and can perform services like dimensionality reduction and data imputation.








