Clojure Linear Algebra Refresher (3) - Matrix Transformations

June 13, 2017

A Clojure programmer will immediately feel at home with linear transformations - functions are also transformations! Linear transformations preserve the mathematical structure of a vector space. Just as functions define transformations, matrices define a kind of linear transformation - the matrix transformation. I often spot programmers using matrices and vectors as dumb data structures, writing their own loops, or accessing elements one by one in a haphazard fashion. Matrices are useful data structures, but using them as transformations is what really gives them power. This is very well understood in computer graphics, but is often neglected in other areas.

Before I continue, a reminder: this text covers the first part of chapter 6 of the alternate edition of the textbook.

Transformations

Consider a Clojure function (fn [x] (+ (* 3 x x) 4)). The set of allowed values for x is called the domain of the function. For value 2, for example, the result is 16. We say that the image of 2 is 16. In linear algebra, the term transformation is used, rather than the term function.
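
Just to confirm that at the REPL:

((fn [x] (+ (* 3 x x) 4)) 2)
16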

As an example, take the transformation \(T\) from \(R^3\) to \(R^2\) defined by \(T(x,y,z) = (2x,y-z)\) that the textbook I'm using presents. The domain, \(R^3\), is the set of all real vectors with dimension 3, while the codomain, the set of valid results, is \(R^2\), the set of all real vectors with dimension 2. The image of any vector can be found by plugging concrete values into the formula for \(T\). The image of \((1,4,-2)\) is \((2\times 1, 4-(-2))\), that is, \((2,6)\).

We could write a function in Clojure that does exactly that:

(require '[uncomplicate.neanderthal
           [native :refer [dv dge]]
           [core :refer [mv mm nrm2 dot axpy]]
           [math :refer [sqrt sin cos pi]]])
(defn transformation-1 [x]
  (dv (* 2.0 (x 0)) (- (x 1) (x 2))))
(transformation-1 (dv 1 4 -2))
#RealBlockVector[double, n:2, offset: 0, stride:1]
[  2.00   6.00 ]

Although this way of working with vectors seems natural to a programmer, I'll show you a much easier method - and a much faster one if you have lots of those numbers!

In general, a transformation T of \(R^n\) (domain) into \(R^m\) (codomain) is a rule that assigns to each vector \(\mathbf{u}\) in \(R^n\) a unique vector \(\mathbf{v}\) (the image) in \(R^m\); we write \(T(\mathbf{u})=\mathbf{v}\). We can also use the term mapping, which is more familiar to functional programmers.

Dilations, Contractions, Reflections, Rotations

Let's look at some useful geometric transformations, and find out that they can be described by matrices. For graphical representations of these examples, see the textbook, of course. Here, I concentrate on showing you the code.

Consider a transformation in \(R^2\) that simply scales a vector by some positive scalar \(r\). It maps every point in \(R^2\) into a point \(r\) times as far from the origin. If \(r>1\), it moves the point further from the origin (dilation); if \(0 < r < 1\), it moves it closer to the origin (contraction).

This transformation can be written in matrix form:

\begin{equation} T\left(\begin{bmatrix}x\\y\\\end{bmatrix}\right) = \begin{bmatrix} r & 0\\ 0 & r \end{bmatrix} \begin{bmatrix}x\\y\\\end{bmatrix} \end{equation}

For example, when \(r=3\):

(mv (dge 2 2 [3 0 0 3]) (dv 1 2))
#RealBlockVector[double, n:2, offset: 0, stride:1]
[  3.00   6.00 ]

By multiplying our transformation matrix with a vector, we dilated the vector!

Consider reflection: it maps every point in \(R^2\) into its mirror image in the x-axis.

\begin{equation} T\left(\begin{bmatrix}x\\y\\\end{bmatrix}\right) = \begin{bmatrix} 1 & 0\\ 0 & -1 \end{bmatrix} \begin{bmatrix}x\\y\\\end{bmatrix} \end{equation}

(mv (dge 2 2 [1 0 0 -1]) (dv 3 2))
#RealBlockVector[double, n:2, offset: 0, stride:1]
[  3.00  -2.00 ]

Rotation about the origin through an angle \(\theta\) is a bit more complicated to guess. After recalling a bit of knowledge about trigonometry, we can convince ourselves that it can be described by a matrix of sines and cosines of that angle.

\begin{equation} T\left(\begin{bmatrix}x\\y\\\end{bmatrix}\right) = \begin{bmatrix} \cos\theta & -\sin\theta\\ \sin\theta & \cos\theta \end{bmatrix} \begin{bmatrix}x\\y\\\end{bmatrix} \end{equation}

If we'd like to do a rotation through \(\pi/2\), we'd use \(\sin(\pi/2) = 1\) and \(\cos(\pi/2) = 0\). For the vector \((3,2)\):

(mv (dge 2 2 [0 1 -1 0]) (dv 3 2))
#RealBlockVector[double, n:2, offset: 0, stride:1]
[ -2.00   3.00 ]
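
If we rotate often, we can wrap this in a small helper of our own (a sketch of mine, not a Neanderthal function) that builds the rotation matrix for an arbitrary angle; remember that dge takes the entries column by column:

(defn rotation-matrix
  "Matrix of the rotation about the origin through angle theta."
  [theta]
  (dge 2 2 [(cos theta) (sin theta) (- (sin theta)) (cos theta)]))
(mv (rotation-matrix (/ pi 2)) (dv 3 2))
#RealBlockVector[double, n:2, offset: 0, stride:1]
[ -2.00   3.00 ]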

Matrix Transformations

Not only can we construct matrices that represent transformations - it turns out that every matrix defines a transformation!

According to the textbook definition: let \(A\) be an \(m\times{n}\) matrix, and \(\mathbf{x}\) an element of \(R^n\). Then \(A\) defines a matrix transformation \(T(\mathbf{x})=A\mathbf{x}\) of \(R^n\) (domain) into \(R^m\) (codomain). The resulting vector \(A\mathbf{x}\) is the image of \(\mathbf{x}\) under the transformation.

Note that the matrix dimensions \(m\) and \(n\) correspond to the dimensions of the codomain (\(m\), the number of rows) and the domain (\(n\), the number of columns).
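
This, by the way, is the easier method I promised for transformation-1 (an aside of mine, not from the textbook). Since \(T(x,y,z)=(2x,y-z)\), it is exactly the matrix transformation

\begin{equation} T\left(\begin{bmatrix}x\\y\\z\\\end{bmatrix}\right) = \begin{bmatrix} 2 & 0 & 0\\ 0 & 1 & -1 \end{bmatrix} \begin{bmatrix}x\\y\\z\\\end{bmatrix} \end{equation}

Remembering that dge fills the entries column by column:

(mv (dge 2 3 [2 0 0 1 0 -1]) (dv 1 4 -2))
#RealBlockVector[double, n:2, offset: 0, stride:1]
[  2.00   6.00 ]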

Matrix transformations have the following geometrical properties:

  • They map line segments into line segments (or points);
  • If \(A\) is invertible, they also map parallel lines into parallel lines.

Example 1 from page 248 illustrates this. Let \(T:R^2\rightarrow{R^2}\) be a transformation defined by the matrix \(A=\begin{bmatrix}4&2\\2&3\end{bmatrix}\). Determine the image of the unit square under this transformation.

The code:

(let [a (dge 2 2 [4 2 2 3])
      p (dv 1 0)
      q (dv 1 1)
      r (dv 0 1)
      o (dv 0 0)]
  [(mv a p) (mv a q) (mv a r) (mv a o)])
'(#RealBlockVector[double, n:2, offset: 0, stride:1]
[  4.00   2.00 ]
 #RealBlockVector[double, n:2, offset: 0, stride:1]
[  6.00   5.00 ]
 #RealBlockVector[double, n:2, offset: 0, stride:1]
[  2.00   3.00 ]
 #RealBlockVector[double, n:2, offset: 0, stride:1]
[  0.00   0.00 ]
)

And we got a parallelogram, since \(A\) is invertible. Check that as an exercise; a matrix is invertible if its determinant is non-zero (you can use the det function).
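
Here is one way to do that check (a sketch; I'm assuming the trf and det functions from the uncomplicate.neanderthal.linalg namespace, where det works on a previously factorized matrix). For our \(A\), \(\det A = 4\times 3 - 2\times 2 = 8 \neq 0\):

(require '[uncomplicate.neanderthal.linalg :refer [trf det]])
(det (trf (dge 2 2 [4 2 2 3])))
8.0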

Something bugs the programmer in me, though: what if we wanted to transform many points (vectors)? Do we use this pedestrian way, or do we put those points in some sequence and use good old Clojure higher-order functions such as map, reduce, filter, etc.? Let's try that.

(let [a (dge 2 2 [4 2 2 3])
      points [(dv 1 0) (dv 1 1) (dv 0 1) (dv 0 0)]]
  (map (partial mv a) points))
'(#RealBlockVector[double, n:2, offset: 0, stride:1]
[  4.00   2.00 ]
 #RealBlockVector[double, n:2, offset: 0, stride:1]
[  6.00   5.00 ]
 #RealBlockVector[double, n:2, offset: 0, stride:1]
[  2.00   3.00 ]
 #RealBlockVector[double, n:2, offset: 0, stride:1]
[  0.00   0.00 ]
)

I could be pleased with this code. But I am not. I am not, because we only picked up the low-hanging fruit, and left a lot of simplicity and performance on the table. Consider this:

(let [a (dge 2 2 [4 2 2 3])
      square (dge 2 4 [1 0 1 1 0 1 0 0])]
  (mm a square))
#RealGEMatrix[double, mxn:2x4, order:column, offset:0, ld:2]

  4.00   6.00   2.00   0.00
  2.00   5.00   3.00   0.00

By multiplying matrix \(A\) with the matrix \((\mathbf{p},\mathbf{q},\mathbf{r},\mathbf{o})\), whose columns are the four points, we did the same operation as transforming each vector separately.

I like this approach much more.

  • It's simpler. Instead of maintaining disparate points of a unit square, we can treat them as one entity. If we still want access to individual points, we just call the col function (see the sketch right after this list).
  • It's faster. Instead of calling mv four times, we call mm once. In addition to that, our data is concentrated next to each other, in a structure that is cache-friendly. This can give huge performance yields when we work with large matrices.
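
For instance, to pull out the image of \(\mathbf{q}\) from the result (a sketch; it assumes col is also referred from uncomplicate.neanderthal.core):

(let [a (dge 2 2 [4 2 2 3])
      square (dge 2 4 [1 0 1 1 0 1 0 0])]
  (col (mm a square) 1))

which returns the second column, \((6,5)\), as a view, without copying any data.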

This might be obvious in graphics programming, but I've seen so much data-crunching code that uses matrices and vectors as dumb data structures that I think this point is worth reiterating again and again.

Composition of Transformations

I hope that every Clojure programmer will immediately understand function composition with the comp function:

(let [report+ (comp (partial format "The result is: %f") sqrt +)]
  (report+ 2 3))
The result is: 2.236068

By composing the +, sqrt, and format functions, we got a single function that applies all three in turn. Or, more formally, \((f\circ{g}\circ{h})(x)=f(g(h(x)))\).

Matrices are transformations just like functions are transformations; it's no surprise that matrices (as transformations) can also be composed!

Let's consider \(T_1(\mathbf{x}) = A_1\mathbf{x}\) and \(T_2(\mathbf{x}) = A_2\mathbf{x}\). The composite transformation \(T=T_2\circ{T_1}\) is given by \(T(\mathbf{x})=T_2(T_1(\mathbf{x}))=T_2(A_1\mathbf{x})=A_2(A_1\mathbf{x})=(A_2 A_1)\mathbf{x}\).

Thus, the composed transformation \(T_2\circ{T_1}\) is defined by the matrix product \(A_2{A_1}\). This extends naturally to a composition of \(n\) matrix transformations: \(A_n\circ{A_{n-1}}\circ\dots\circ{A_1}\) is defined by \(A_n A_{n-1}\dots A_1\).
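
A quick REPL check of that claim, reusing the reflection and dilation matrices from above (my own sketch): applying the transformations one after another gives the same image as applying their product once.

(let [a1 (dge 2 2 [1 0 0 -1])
      a2 (dge 2 2 [3 0 0 3])
      x (dv 3 2)]
  [(mv a2 (mv a1 x)) (mv (mm a2 a1) x)])

Both elements of the resulting pair are \((9,-6)\).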

In Clojure, I can compose matrix transformations with the mm function. Not only does mm support two, three, or more arguments; it also takes care of multiplying them in an optimal order.

Take a look at the Clojure code for example 2 (page 250) from the textbook. Three matrices are given: one defines a reflection, another a rotation, and the third a dilation. By composing them, we get one complex transformation that does all three combined.

(let [pi2 (/ pi 2)
      reflexion (dge 2 2 [1 0 0 -1])
      rotation (dge 2 2 [(cos pi2) (sin pi2) (- (sin pi2)) (cos pi2)])
      dilation (dge 2 2 [3 0 0 3])]
  (def matrix-t (mm dilation rotation reflexion)))
matrix-t
#RealGEMatrix[double, mxn:2x2, order:column, offset:0, ld:2]

  0.00   3.00
  3.00  -0.00

The image of the point \((1,2)\) is:

(mv matrix-t (dv 1 2))
#RealBlockVector[double, n:2, offset: 0, stride:1]
[  6.00   3.00 ]

Orthogonal Transformations

Recall from Clojure Linear Algebra Refresher (1) - Vector Spaces that an orthogonal matrix is an invertible matrix for which \(A^{-1} = A^t\). An orthogonal transformation is a matrix transformation \(T(\mathbf{x})=A\mathbf{x}\) where \(A\) is orthogonal. Orthogonal transformations preserve norms, angles, and distances; they preserve the shapes of rigid bodies.

I'll illustrate this with example 3 (page 252):

(let [sqrt12 (sqrt 0.5)
      a (dge 2 2 [sqrt12 (- sqrt12) sqrt12 sqrt12])
      u (dv 2 0)
      v (dv 3 4)
      tu (mv a u)
      tv (mv a v)]
  [[(nrm2 u) (nrm2 tu)]
   [(/ (dot u v) (* (nrm2 u) (nrm2 v))) (/ (dot tu tv) (* (nrm2 tu) (nrm2 tv)))]
   [(nrm2 (axpy -1 u v)) (nrm2 (axpy -1 tu tv))]])
[[2.0 2.0]
 [0.6 0.6000000000000001]
 [4.123105625617661 4.1231056256176615]]

We can see that norms, angles, and distances are indeed preserved (with rather tiny differences due to rounding errors in floating-point operations).

Translation and Affine Transformation

There are transformations in vector spaces that are not truly matrix transformations, but are useful nevertheless. I'll show you why these transformations are not linear transformations in the next post; for now, let's look at how they are done.

Translation is a transformation \(T:R^n \rightarrow R^n\) defined by \(T(\mathbf{u}) = \mathbf{u} + \mathbf{v}\). This transformation slides points in the direction of the vector \(\mathbf{v}\).

In Clojure, translation is done with the good old axpy function.

(axpy (dv 1 2) (dv 4 2))
#RealBlockVector[double, n:2, offset: 0, stride:1]
[  5.00   4.00 ]

An affine transformation is a transformation \(T:R^n \rightarrow R^n\) defined by \(T(\mathbf{u}) = A\mathbf{u} + \mathbf{v}\). It can be interpreted as a matrix transformation followed by a translation. See the textbook for more details; a Clojure implementation is the customary trivial exercise left to the reader. Hint: the mv function can do affine transformations.
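
Here is one way to sketch it with the functions we already required (the matrix and the vectors are made up for illustration): compute \(A\mathbf{u}\) with mv, then translate with axpy.

(let [a (dge 2 2 [4 2 2 3])
      u (dv 1 2)
      v (dv 1 1)]
  (axpy (mv a u) v))
#RealBlockVector[double, n:2, offset: 0, stride:1]
[  9.00   9.00 ]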

To be continued…

These were the basics of matrix transformations. In the next post, we will explore linear transformations in more depth. I hope this was interesting and useful to you. If you have any suggestions, feel free to send feedback - that can make the next guides better.

Happy hacking!
