Von Neumann languages do not have useful properties for reasoning about programs. Axiomatic and denotational semantics are precise tools for describing and understanding conventional programs, but they only talk about them and cannot alter their ungainly properties. Unlike von Neumann languages, the language of ordinary algebra is suitable both for stating its laws and for transforming an equation into its solution, all within the language. - John Backus

Introduction

When John Backus was awarded the Turing Award in 1977 for his work on the Fortran compiler, he followed it with the lecture that became known as his apology for having done it. In the lecture, he condemned von Neumann languages (a set Fortran definitely belongs to) and the very architecture they are implemented for. He then delved deeper and touched on the algebra of programs, or rather the lack of it, calling for languages with the ability to reason about the programs they implement.

With recent developments in hardware catering to deep learning (e.g. see Nvidia and Google), his von Neumann bottleneck concern is being thoroughly addressed by the field. Yet his concern pertaining to algebra and reasoning still smolders, as the mathematical treatment of our models is usually thought of as separate from the language that implements them. A theory capturing the mechanics of deep learning as an algebraic language would not only assist the development of tensor processing units, but also enhance our ability to reason in the way Backus called for.

The development of an algebraic language that contains deep learning has been a recurring subject of our research. As the field is populated by gradients and Jacobians, we face the need for a language that allows analytic deductions through algebraic means; a need to be met by operational calculus and tensor algebra. We present a high-level overview of some of our findings, inviting those interested in an in-depth treatment of the mechanics to read Operational Calculus on Programming Spaces. We will not treat deep learning directly; yet, by seeking the simplest and most efficient extension to existing formalisms, we will arrive precisely at the calculus that contains it.

Programs as maps on the memory space

We begin by thinking of our model as an abstract concept; how it is to be manifested will shortly be made clear through examples of state-of-the-art projects that employ some aspects of it.

If we focus only on the real-valued variables (of type float or double), the set of all possible states of the program’s memory can be modeled by a finite dimensional real vector space \(\mathcal{V}\), a familiar concept to those in touch with current research on differentiable computing. As such, the effects of a program on its memory space \(\mathcal{V}\) can be described by a map \[P:\mathcal{V}\to\mathcal{V},\] motivating our starting definition.
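As a trivial illustration of this view (the three-dimensional memory and all names are our own choices for the sketch), a program is just a function on the full memory state, and running programs in sequence is function composition:

```python
import numpy as np

# Memory space V: the real-valued state of a program, modeled here as R^3.
v = np.array([1.0, 2.0, 0.5])

# Programs are maps P : V -> V acting on the whole memory state.
def P(state: np.ndarray) -> np.ndarray:
    # a toy "program": overwrite slot 2 with a product of slots 0 and 1
    out = state.copy()
    out[2] = out[0] * out[1]
    return out

def Q(state: np.ndarray) -> np.ndarray:
    # another program; the composition Q ∘ P is again a map V -> V
    return 2.0 * state

w = Q(P(v))  # running programs in sequence = composing maps on V
```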

The tuple \((\mathcal{V},\mathcal{F})\) is a Euclidean virtual machine, where

\(\mathcal{V}\) is a finite dimensional vector space over the field \(K\), serving as memory, and

\(\mathcal{F}\) is a set of differentiable maps \(P:\mathcal{V}\to\mathcal{V}\), serving as programs.

At first glance, the Euclidean machine seems to be a description of functional programming, with its compositions inherited from \(\mathcal{F}\). An intended impression, as we wish for our construction to be afforded the many fruits functional programming has gifted computer science. To that effect, we assume the standard model of \(\mathcal{F}\), with differentiability being the only restriction imposed on it, for now, as we turn our glance to the memory space \(\mathcal{V}\).

Virtual memory space

The memory space of a program is rarely treated as more than storage. But to endow the Euclidean machine with extra structure, this is precisely the place to look, while retaining backward compatibility with the standard model of functional programming. Loosely speaking, functional programming is described by monoids, so a (multi)linear-algebra description of the memory space seems to be the step to take in attaining the extra structure.

We desire our memory space to be the most general algebra containing \(\mathcal{V}\), in the sense of the corresponding universal property. We start with the memory space \(\mathcal{V}\) and its dual \(\mathcal{V}^*\), the space of linear maps \(\mathcal{V}\to K\) from the memory to the field over which it is defined, and let the virtual memory space be defined by \[\mathcal{V}_k=\mathcal{V}_{k-1}+(\mathcal{V}_{k-1}\otimes\mathcal{V^*})\tag{1}\label{eq:memRec},\] with initial condition \(\mathcal{V}_0=\mathcal{V}\). The space that satisfies the relation \(\eqref{eq:memRec}\) is \[\mathcal{V}_k=\mathcal{V}\otimes T_k(\mathcal{V^*}),\tag{2}\label{eq:memSpace}\] where \(T_k(\mathcal{V^*})\) is the subspace of the tensor algebra of the dual of the memory space \(\mathcal{V}\), consisting of linear combinations of tensors of rank less than or equal to \(k\). As such, an element \(W\in\mathcal{V}_k\) of the virtual memory space \eqref{eq:memSpace} is itself a map from the memory to itself, \[W:\mathcal{V}\to \mathcal{V},\] which we define to be the sum of multiple contractions \[W(v)=w_0+w_1\cdot v+\cdots +w_k\cdot(v)^{\otimes k}\tag{3}\label{contraction}\] (noting that the notational convenience of \(W(v)=w_0+w_1\cdot v\) has already been adopted in contemporary research). By such a construction, expansions and contractions of the memory space, reminiscent of the breathing of the stack, hold meaning parallel to storing values, motivating a new definition.
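The contraction in equation (3) can be checked directly in NumPy; the dimension \(d=3\), the truncation at rank \(2\), and all variable names below are illustrative choices of ours:

```python
import numpy as np

d = 3
rng = np.random.default_rng(0)
w0 = rng.standard_normal(d)          # element of V
w1 = rng.standard_normal((d, d))     # element of V ⊗ V*
w2 = rng.standard_normal((d, d, d))  # element of V ⊗ V* ⊗ V*

def W(v: np.ndarray) -> np.ndarray:
    # eq. (3) truncated at k = 2: W(v) = w0 + w1·v + w2·(v ⊗ v),
    # each term a tensor contraction yielding an element of V
    return (w0
            + np.einsum('ij,j->i', w1, v)
            + np.einsum('ijk,j,k->i', w2, v, v))

v = rng.standard_normal(d)
out = W(v)  # a vector in V: the virtual memory acting as a map V -> V
```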

Let \((\mathcal{V},\mathcal{F})\) be a Euclidean machine and let \[\mathcal{V}_\infty=\mathcal{V}\otimes T(\mathcal{V^*}),\] where \(T(\mathcal{V^*})\) is the tensor algebra of the dual memory space. We call \(\mathcal{V}_\infty\) the virtual memory space of the virtual computing machine \((\mathcal{V},\mathcal{F})\).

By seeking a natural extension of functional programming, we have converged on the very objects in terms of which deep learning is conceived. Such a virtual memory space is by itself the most efficient algebra of the memory, with its multilinear mappings ripe for parallel execution on the GPU, which is already thoroughly exploited. Yet the way in which it was derived suggests that there is more to be discovered about such machines, if we examine their memory for what it is, beyond an \(n\)-dimensional array.

Memory as a space

Treating the space itself as an object, the stored values can be seen as properties at a point (e.g. what the particular basis element is being tensored with). These properties are best described by distributions over the space, where the deterministic behavior of the memory is recovered by simply imposing the Dirac distribution at that point. Such concepts are already familiar in contemporary research (see the handling of register values in Kurach et al.). Reading is done by flipping a coin (or working with the distribution itself) and writing by modifying the distributions; operations that can be made differentiable by various reparametrization tricks (e.g. see section 2.4 in Jaderberg et al.). Moreover, we can accommodate enumerable memory locations upon transitioning from theory to practice, by having patches of space share distributions.
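Reading and writing under a distribution over locations can be sketched in a few lines of NumPy: a soft read is the expectation of the stored values under the addressing distribution, a soft write blends a new value in proportionally, and imposing a Dirac (one-hot) distribution recovers deterministic access. The four-location memory and all names here are our own, for illustration:

```python
import numpy as np

def softmax(x: np.ndarray) -> np.ndarray:
    e = np.exp(x - x.max())
    return e / e.sum()

M = np.array([1.0, 5.0, -2.0, 0.0])       # values at 4 memory locations
logits = np.array([0.1, 3.0, -1.0, 0.0])  # learned addressing scores
p = softmax(logits)                       # distribution over locations

# differentiable read: expectation of the stored values under p
read = p @ M

# differentiable write: blend the new value into memory proportionally to p
def write(M: np.ndarray, p: np.ndarray, value: float) -> np.ndarray:
    return (1 - p) * M + p * value

M2 = write(M, p, 7.0)
```

With a one-hot `p`, `read` returns exactly the addressed value and `write` overwrites only that location, so the deterministic memory is the Dirac special case.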

Viewing the memory as a space affords us a framework in which we can manipulate it through established methods from attention models, enhanced by methods from computer vision. A similar approach has already been used in contemporary research for manipulating inputs by Spatial Transformer Networks, which have been further integrated into other projects for ROI selection to great success (see DenseCap).

Example

A simple experiment to construct is to apply these methods to the memory itself. As a toy demonstration, we construct a network with a shared external memory, used by all layers for reading and writing. As all layers can read from and write to any part of the memory, this has a similar effect to the skip and residual connections frequently used today. When used with recurrent networks, it even allows gradients to flow directly between different time steps, skipping the steps in between. Moreover, as layers can choose to read and write to different parts of the memory when processing conceptually different inputs, this may lead to the formation of clusters in the memory, similar to what is observed with word2vec in a slightly different environment.

For the sake of simplicity, we modify well known models to achieve our construction. Let \(M\) be a rank \(2\) tensor serving as memory and \(C\) a word from it (similarly to how a byte is treated as a word of memory). With \(x\) standing for the input, we construct a layer from two LSTM layers (hereon denoted by \(L_i\)) as follows. The first part is the controller, which produces the transformation parameters \(\Theta\) \[\Theta^t,C_1^t=L_1(\Theta^{t-1}, x, C_1^{t-1})\tag{4}\label{L1}\] where \(C_1\) is the hidden cell of \(L_1\). We then feed \(\Theta\) to a spatial transformer, along with the memory \(M\), which selects our word \(C_2\) \[C^{\prime t}_2=STN(\Theta^t,M^{t-1}),\tag{5}\label{wordSel}\] which serves as the cell of \(L_2\) at the current time step \[h^t,C_2^t=L_2(h^{t-1},x,C^{\prime t}_2),\tag{6}\label{L2}\] producing the output \(h^t\) and the updated cell state \(C^t_2\). Now, instead of simply holding the state, we write it to the same location we read it from in \eqref{wordSel}, updating the memory state \(M^{t-1}\to M^t\). We call such a layer a spatially transformed memory \[STM: (\Theta^{t-1},x,C^{t-1}_1, h^{t-1},M^{t-1})\to(\Theta^t,C^{t}_1, h^t, M^{t})\tag{7}\label{STM}\] in honor of the LSTM and Spatial Transformer Networks it is derived from.

Experiments

We trained a small \(3\)-layer \(STM\) \eqref{STM} on the simple task of character generation (trained on the corpus of Nietzsche), to observe the activity in the memory. The memory \(M\) was a \(20\times 20\) grid and the word \(C\) a \(10\times 10\) grid (later flattened for use in the LSTM). The network generates \(10\) characters at a time.

After a few hours of training (Adam with scheduling was used), we see that each layer treats the memory \(M\) differently, with its own pattern of reading and writing. The first layer spreads its reading and writing across most of \(M\), as indicated by the blue colored cells in the figure below.

The second layer performs its operations on a more concentrated patch of memory, and moves across it through time, as the network generates characters.

The final, third layer (preceding the softmax) can be seen to operate on multiple contiguous blocks of memory, which are no longer spread out as they were in the previous two layers. This seems to indicate clustering of the data in the memory, mentioned before in reference to word2vec.

We observe that the sentences generated by \(STM\) \eqref{STM} are more sensible and connected than those produced by a regular LSTM (with a comparable number of parameters), and conjecture this is due to the network's ability to cluster "remembered" data in the memory \(M\), and to the fact that said locations can be shared between different layers at different time steps.

Code

For those interested in playing with the example, we include this code snippet to help get you started. It includes the \(STM\) layer and the code that produces the visualizations.
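To orient readers before diving into the full snippet, here is a condensed PyTorch sketch of a single \(STM\) step following eqs. (4)-(7). It is ours, not the released code: the class name, dimensions, and in particular the write-back of eq. (7), approximated here by pasting the updated word through the inverse affine transform, are our assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class STM(nn.Module):
    """One spatially-transformed-memory step: an LSTM controller emits the
    affine parameters Theta (eq. 4), a spatial transformer reads a word from
    the shared memory M (eq. 5), a second LSTM consumes it (eq. 6), and the
    updated word is written back to where it was read from (eq. 7)."""

    def __init__(self, in_dim: int, mem_hw: int = 20, word_hw: int = 10):
        super().__init__()
        self.mem_hw, self.word_hw = mem_hw, word_hw
        self.controller = nn.LSTMCell(in_dim, 6)            # L1: Theta has 6 affine params
        self.cell = nn.LSTMCell(in_dim, word_hw * word_hw)  # L2: works on a flattened word

    def forward(self, x, theta, c1, h, M):
        n = x.shape[0]
        theta, c1 = self.controller(x, (theta, c1))          # eq. (4)
        A = theta.view(n, 2, 3)
        # read: sample the word C2' from M at the window given by Theta, eq. (5)
        grid = F.affine_grid(A, (n, 1, self.word_hw, self.word_hw),
                             align_corners=False)
        c2_in = F.grid_sample(M, grid, align_corners=False).flatten(1)
        h, c2 = self.cell(x, (h, c2_in))                     # eq. (6)
        # write: paste the updated cell back via the inverse affine transform
        lin, t = A[:, :, :2], A[:, :, 2:]
        lin_inv = torch.linalg.inv(lin)
        A_inv = torch.cat([lin_inv, -lin_inv @ t], dim=2)
        back = F.affine_grid(A_inv, (n, 1, self.mem_hw, self.mem_hw),
                             align_corners=False)
        word = c2.view(n, 1, self.word_hw, self.word_hw)
        mask = F.grid_sample(torch.ones_like(word), back, align_corners=False)
        patch = F.grid_sample(word, back, align_corners=False)
        M = (1 - mask) * M + patch                           # eq. (7)
        return theta, c1, h, M

torch.manual_seed(0)
layer = STM(in_dim=8)
x = torch.randn(2, 8)
theta, c1 = torch.zeros(2, 6), torch.zeros(2, 6)
h, M = torch.zeros(2, 100), torch.zeros(2, 1, 20, 20)
theta, c1, h, M = layer(x, theta, c1, h, M)
```

Stacking such layers over a shared \(M\), and iterating over time steps, gives the network described above.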

A note on training

Spatial transformers are usually employed to sample patches from existing images (i.e. non-zero tensors), while with \(STM\) we start with an empty memory \(M\) (i.e. a zero tensor). Thus special care must be taken with initialization, so that the memory \(M\) is thoroughly filled from the start, facilitating information and gradient flow between different layers at different time steps.

Thus, when training \(STM\), naive methods can cause the network to always read and write to the same locations in the memory \(M\), effectively converging to an LSTM network, as seen in Figure 5. The choice of activation functions used when producing the parameters \(\Theta\) affects the way the network distributes data in the memory. This means they can serve the programmer as a means of control, but naive choices can again lead to unwanted behavior in said distributions.

Programming spaces

Motivated by our definition of the virtual memory space, we define the following function spaces \[\mathcal{F}_n=\{f:\mathcal{V}\to\mathcal{V}\otimes T_n(\mathcal{V}^*)\},\] where the Fréchet derivative is hereon denoted by \(\partial\), with higher derivatives simply being the images of its higher powers \[\partial^k:\mathcal{F}_n\to\mathcal{F}_{n+k}.\] We are now ready to restrict our set \(\mathcal{F}\) to best suit our needs. As the derivatives need to be implementable in our machine, we must impose closure of the set under the differential operator \(\partial\).
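Closure under \(\partial\) can be made concrete with forward-mode differentiation: lifting a program to operate on dual numbers turns \(P\in\mathcal{F}_0\) into the pair \((P,\partial P)\in\mathcal{F}_1\), the value together with its derivative. A minimal one-dimensional sketch (the class and function names are ours, and \(\mathcal{V}\) is one-dimensional for brevity):

```python
class Dual:
    """Dual number a + b*eps with eps^2 = 0; carries a derivative along."""
    def __init__(self, a: float, b: float = 0.0):
        self.a, self.b = a, b
    def __add__(self, o):
        o = o if isinstance(o, Dual) else Dual(o)
        return Dual(self.a + o.a, self.b + o.b)
    __radd__ = __add__
    def __mul__(self, o):
        o = o if isinstance(o, Dual) else Dual(o)
        return Dual(self.a * o.a, self.a * o.b + self.b * o.a)
    __rmul__ = __mul__

def P(v):
    # a differentiable program in F_0: works on floats and Duals alike
    return v * v + 3.0 * v

def d(P):
    # the operator ∂: maps P in F_0 to v -> (P(v), P'(v)) in F_1
    def dP(v):
        r = P(Dual(v, 1.0))
        return r.a, r.b
    return dP

val, deriv = d(P)(2.0)  # P(2) = 10, ∂P(2) = 2*2 + 3 = 7
```

Since `d(P)` is itself implementable in the machine, applying `d` again yields second derivatives, which is exactly the closure the restriction on \(\mathcal{F}\) demands.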