Consider a single linear layer in a neural network as a function \(f : \mathbb{R}^n \longrightarrow \mathbb{R}^m\) parameterized by a weight matrix \(W \in \mathbb{R}^{m \times n}\):

\(\displaystyle f (x) = W x\)

We wish to optimize the weight matrix to minimize some loss function \(\mathcal{L} (W)\). We can modify the standard gradient descent approach with a preconditioning matrix \(P \in \mathbb{R}^{mn \times mn}\):

\(\displaystyle \operatorname{vec} (W^{(t + 1)}) = \operatorname{vec} (W^{(t)}) - P \operatorname{vec} \left( \frac{\partial \mathcal{L}}{\partial W^{(t)}} \right)\)  (1)

The \(\operatorname{vec} (\cdot)\) operator simply stacks the columns of a matrix into a single vector, which we must do to the gradient before multiplying by the preconditioning matrix.
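As a concrete sketch of the column-stacking convention (the matrix values below are arbitrary; NumPy's Fortran, i.e. column-major, order matches it directly):

```python
import numpy as np

# vec(.) stacks the columns of a matrix into a vector. Reshaping with
# order="F" (column-major) reads down each column in turn, which is
# exactly the column-stacking convention used in the text.
def vec(M):
    return M.reshape(-1, order="F")

W = np.array([[1, 2],
              [3, 4],
              [5, 6]])  # shape (3, 2), columns [1,3,5] and [2,4,6]

print(vec(W))  # columns stacked: [1 3 5 2 4 6]
```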

Notice that \(P\) has \((mn)^2\) entries, which for a modern neural network layer can be quite large. There are various approaches to approximating the preconditioning matrix. Here we will focus on one such approximation that “implicitly” implements preconditioning by reparameterizing the layer. In particular, we will construct the weight matrix as the product of three matrices:

\(\displaystyle W^{(t)} = T U^{(t)} V\)

Notice that only \(U^{(t)}\) has the superscript indexing the optimization step, because \(U^{(t)}\) is the only parameter we are optimizing here; the matrices \(T\) and \(V\) may be estimated from training statistics [1] or learned in the outer loop of some meta-learning procedure [2]. \(W^{(t)}\) has a superscript only because updates to \(U^{(t)}\) also change \(W^{(t)}\), but \(W^{(t)}\) is not itself a parameter now.

We will see that \(T\) and \(V\) *implicitly* act as preconditioning for our gradient descent procedure. Recall that we are minimizing some loss \(\mathcal{L} (W)\); we can get the derivative with respect to \(U\) in terms of \(\frac{\partial \mathcal{L}}{\partial W}\) (see below for details):

\(\displaystyle \frac{\partial \mathcal{L}}{\partial U} = T^T \frac{\partial \mathcal{L}}{\partial W} V^T\)

Now consider running **standard** gradient descent on \(U\) (no preconditioning). For simplicity, assume the learning rate is \(1\). The \(U\) update is:

\(\displaystyle U^{(t + 1)} = U^{(t)} - \frac{\partial \mathcal{L}}{\partial U^{(t)}} = U^{(t)} - T^T \frac{\partial \mathcal{L}}{\partial W^{(t)}} V^T\)

The update to \(U\) gives us the update to \(W\):

\(\displaystyle W^{(t + 1)} = T U^{(t + 1)} V = T \left( U^{(t)} - T^T \frac{\partial \mathcal{L}}{\partial W^{(t)}} V^T \right) V = W^{(t)} - T T^T \frac{\partial \mathcal{L}}{\partial W^{(t)}} V^T V\)  (2)

So we can see that \(W\) is still updated by its gradient, but multiplied on the left by \(T T^T\) and on the right by \(V^T V\).
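This matrix-form update is easy to check numerically. A minimal sketch, with arbitrary dimensions and a random matrix `G` standing in for \(\frac{\partial \mathcal{L}}{\partial W}\):

```python
import numpy as np

# Verify: with W = T @ U @ V, one step of plain gradient descent on U
# (learning rate 1, dL/dU = T^T G V^T) changes W by T T^T G V^T V.
rng = np.random.default_rng(0)
m, r, s, n = 4, 3, 5, 6                  # arbitrary toy dimensions
T = rng.normal(size=(m, r))
U = rng.normal(size=(r, s))
V = rng.normal(size=(s, n))
G = rng.normal(size=(m, n))              # stands in for dL/dW

dL_dU = T.T @ G @ V.T                    # gradient w.r.t. U
U_new = U - dL_dU                        # plain GD step on U
W_new = T @ U_new @ V                    # resulting new W

W_expected = T @ U @ V - T @ T.T @ G @ V.T @ V
print(np.allclose(W_new, W_expected))    # True
```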

Now we want to vectorize this update to compare with Eq. 1. We can do this using the Kronecker product of matrices, denoted “\(\otimes\)”. For matrices \(A, X, B\) with appropriate dimensions:

\(\displaystyle \operatorname{vec} (A X B) = (B^T \otimes A) \operatorname{vec} (X)\)

Applying the above trick to the update in Eq. 2 (with \(A = T T^T\), \(B = V^T V\), and using the symmetry of \(V^T V\)), we obtain:

\(\displaystyle \operatorname{vec} (W^{(t + 1)}) = \operatorname{vec} (W^{(t)}) - \left( (V^T V) \otimes (T T^T) \right) \operatorname{vec} \left( \frac{\partial \mathcal{L}}{\partial W^{(t)}} \right)\)

Comparing this with Eq. 1, we see that this is a special case of preconditioned gradient descent with a Kronecker-factored preconditioning matrix:

\(\displaystyle P = (V^T V) \otimes (T T^T)\)

### Derivative of \(\mathcal{L}\) with respect to \(U\)

For some loss function \(\mathcal{L} (W)\) where the weights are reparameterized as \(W (U) = T U V\), we obtain \(\frac{\partial \mathcal{L}}{\partial U}\) in terms of \(\frac{\partial \mathcal{L}}{\partial W}\) by inspecting the components. For simplicity we will use Einstein notation (summation over repeated indices) throughout. First, we look at the derivative with respect to the \((s, p)\)'th entry of \(U\) using the chain rule:

\(\displaystyle \frac{\partial \mathcal{L}}{\partial U_{s p}} = \frac{\partial \mathcal{L}}{\partial W_{i j}} \frac{\partial W_{i j}}{\partial U_{s p}}\)

From the definition \(W = T U V\), i.e. \(W_{i j} = T_{i k} U_{k l} V_{l j}\), we have that:

\(\displaystyle \frac{\partial W_{i j}}{\partial U_{s p}} = T_{i s} V_{p j}\)

Substituting this back into the previous expression:

\(\displaystyle \frac{\partial \mathcal{L}}{\partial U_{s p}} = \frac{\partial \mathcal{L}}{\partial W_{i j}} T_{i s} V_{p j} = (T^T)_{s i} \frac{\partial \mathcal{L}}{\partial W_{i j}} (V^T)_{j p}\)

By observing that \(\left( \frac{\partial \mathcal{L}}{\partial W} \right)_{i j} = \frac{\partial \mathcal{L}}{\partial W_{i j}}\) by definition, we realize the above is simply the component-wise way of writing:

\(\displaystyle \frac{\partial \mathcal{L}}{\partial U} = T^T \frac{\partial \mathcal{L}}{\partial W} V^T\)

## Bibliography

**[1]** Guillaume Desjardins, Karen Simonyan, Razvan Pascanu, et al. Natural neural networks. In *Advances in Neural Information Processing Systems*, pages 2071–2079, 2015.

**[2]** Yoonho Lee and Seungjin Choi. Gradient-based meta-learning with learned layerwise metric and subspace. *arXiv preprint arXiv:1801.05558*, 2018.