Fast Tensors in Clojure - a Sneak Peek

Need help with your custom Clojure software? I'm open to (selected) contract work.

August 26, 2019


These books fund my work! Please check them out.

I hope that many Clojurists are wondering about the progress of the books that I'm writing: "Where are the new chapters?" "Are you doing something?" "I'd like new chapters, please!" (I mean: I hope you're saying that. It means that you care!)

There's a lot going on behind the scenes, since I'm not only writing the books, but also creating the software to write about: for example, a slim and super-fast deep learning library.

I call it Deep Diamond, and here's a sneak peek of some basic things from my trusty REPL.
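To follow along, the functions used below need to be in scope. The exact namespaces are my assumption based on the current snapshots (they may well differ in the released version), but a setup along these lines should do:

(require '[uncomplicate.diamond.tensor
           :refer [tensor shape data-type transformer revert]]
         '[uncomplicate.neanderthal.core
           :refer [transfer! view scal! asum]])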

Let's create a 4D tensor \(x\). If you've ever used Neanderthal, you already expect this to be as straightforward as creating any plain Clojure structure. Just evaluate the tensor function, providing the shape of the tensor in all dimensions ([2 3 2 4]), the data type of its entries (:float), and the layout of its four dimensions in one-dimensional raw memory (:nchw).

(def x (tensor [2 3 2 4] :float :nchw))

We might be interested in its basic properties, such as its shape.

(shape x)
=> [2 3 2 4]

The shape is always reported in the canonical \(abcdefg\dots\) order, which corresponds to nchw in the 4D case, regardless of the layout of the dimensions.

Another key piece of information is the data type of the entries.

(data-type x)
=> :float

All tensors can be viewed as Neanderthal vectors, which gives us a convenient way to compute operations that are not implemented in low-level performance tensor libraries, but are provided by Neanderthal.

Via Neanderthal, we can inspect the values of tensor entries.

(view x)
=>
#RealBlockVector[float, n:48, offset: 0, stride:1]
[   0.00    0.00    0.00    ⋯      0.00    0.00 ]

We did not do anything to set particular values of this tensor, so all entries are zero. Fortunately, Neanderthal's transfer! function can be used to transfer a Clojure sequence to the tensor \(x\); it consumes only as many entries of the (possibly infinite, lazy) sequence as the tensor can hold.

(transfer! (range) x)
=>
#RealBlockVector[float, n:48, offset: 0, stride:1]
[   0.00    1.00    2.00    ⋯     46.00   47.00 ]
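Since the view is an ordinary Neanderthal vector, any Neanderthal vector function can work on it directly. For example, a quick sketch that sums all 48 entries with asum, which should give 1128.0, the sum of 0 through 47:

(asum (view x))
=> 1128.0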

What if we need to transfer the data from one tensor to another?

(def y (tensor [2 3 2 4] :float :nhwc))

Notice how \(x\) and \(y\) have different layouts: nchw vs. nhwc.
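Still, as discussed above, the shape of \(y\) is reported in the same canonical order, regardless of its layout.

(shape y)
=> [2 3 2 4]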

(transfer! x y)
=>
Execution error (ExceptionInfo) at uncomplicate.commons.utils/dragan-says-ex (utils.clj:101).

Dragan says: You need a specialized transformer to transfer these two MKL-DNN tensors.

If we let Neanderthal blindly transfer the underlying entries, the \(h\), \(w\), and \(c\) dimensions of \(y\) would get mixed up. Deep Diamond warns us about that and leaves us with two options. If we wanted to disregard the layout, we could simply transfer (view x) to (view y), as in the sketch below. Since we do want the different layouts taken into account, we should create the specialized transformer that Deep Diamond offers.
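For reference, the layout-ignoring route would be nothing more than a plain Neanderthal transfer between the two views. A minimal sketch, not evaluated here, since it would copy the raw entries of \(x\) into \(y\) in the wrong logical order:

(transfer! (view x) (view y))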

(def transf (transformer x y))

The transformer implements an optimized reordering between these two tensors. Invoke it to do the transfer in place.

(transf)

The entries have been transferred, with the dimensions properly reordered. Since nhwc keeps the channel dimension innermost, the first raw entries of \(y\) now hold the values that \(x\) stores for channels 0, 1, and 2 of the first pixel: 0, 8, and 16.

(view y)
=>
#RealBlockVector[float, n:48, offset: 0, stride:1]
[   0.00    8.00   16.00    ⋯     39.00   47.00 ]

When the layouts are compatible, we can use Neanderthal's transfer! directly, without a special transformer.

(def z (tensor [2 3 2 4] :float :nchw))
(transfer! x z)
=>
#'user/z
#RealBlockVector[float, n:48, offset: 0, stride:1]
[   0.00    1.00    2.00    ⋯     46.00   47.00 ]

All Neanderthal vector space functions work with tensor views, if you know what you're doing. The view shares memory with the tensor, so scaling the view scales the entries of \(y\) itself.

(scal! 100 (view y))
=>
#RealBlockVector[float, n:48, offset: 0, stride:1]
[   0.00  800.00 1600.00    ⋯   3900.00 4700.00 ]

After we have scaled \(y\), we can try out the reverse transformation, and move these values to \(x\).

((revert transf))
(view x)
=>
#RealBlockVector[float, n:48, offset: 0, stride:1]
[   0.00  100.00  200.00    ⋯   4600.00 4700.00 ]

The transfer! function works even when the logical layouts are different but the physical layout is the same, for example between :nchw, which is typically used for the inputs of neural network layers, and :oihw, the corresponding layout used for weights.

(def w (tensor [2 3 2 4] :float :oihw))
(transfer! x w)
=>
#RealBlockVector[float, n:48, offset: 0, stride:1]
[   0.00  100.00  200.00    ⋯   4600.00 4700.00 ]
(= x w)
=> true

I planned to write a short article as an appetizer, but, as usual, it started getting too long. So, let's rest :). What are we supposed to be doing with these tensors anyway?

You can expect this to work:

(network input-tz
         [(fully-connected [1000 64] :relu)
          (fully-connected [1000 64] :elu)
          (fully-connected [1000 2] :logistic)])

Expect typical tensor operations: inner products, convolutions, neural network layers, learning optimization algorithms, optimized CPU execution, optimized GPU execution, and all that.

Deep Diamond is almost as lean as Neanderthal, with a high-level Clojure API that does everything automatically. You rarely need to open the hood, plug into the lower levels, and get your hands dirty, but when you do, there is a progression of ways to do exactly what you need, at the level of control that you need.

Everything is in Clojure, and everything is optimized to squeeze the full performance out of the state-of-the-art low-level deep learning libraries provided by Intel and Nvidia.

So, you like it? Please mention the books online and offline and spread the word!

You're skeptical? Is Deep Diamond possible? I'd love to hear your questions!

Don't like it? Please voice your opinion and be specific. I'd love to hear how I can make this better!

So, where can you download it? Once it is released, Deep Diamond will be available on GitHub, so you can star the project if you'd like to follow its progress. (There is more development and API polishing that I'd like to do, and an upstream library is still in the snapshot phase, so please be patient.)

I'm funding the work on this library through the books that you can subscribe to, and there is also Patreon, where you can give direct support to the software itself.

I am also thinking of applying to Clojurists Together with Deep Diamond, so if you're donating there and you think that this project could be a valuable contribution to the Clojure ecosystem, it would be nice to mention Uncomplicate Deep Diamond in the surveys.

Anyway, Deep Diamond will be fully open source under the EPL license, for everyone. You do not have to support it to have full access to it, but your support would be highly appreciated, since it really does help.

Impatient to try this? Everything you learn from my series Deep Learning from Scratch to GPU applies to Deep Diamond, so fire up your REPL and brush up on your linear algebra and DL basics first, please :)
