SOAP: IMPROVING AND STABILIZING SHAMPOO USING ADAM
Introduction
The success of Shampoo has drawn increasing attention from the deep learning community. Several works have explored ways to scale Shampoo by improving its memory and compute efficiency (Wang et al., 2024; Anil et al., 2020; Shi et al., 2023). Other research (Morwani et al., 2024) has examined the theoretical foundations of Shampoo and proposed minor adjustments (such as using power 1/2 rather than 1/4) that align with prior empirical findings (Anil et al., 2020).
We study SOAP (ShampoO with Adam in the Preconditioner's eigenbasis), an algorithm that runs AdamW in the eigenbasis provided by Shampoo. Our main contributions are as follows:
- We make a formal connection between the Shampoo and Adafactor algorithms. This insight leads us to consider the SOAP algorithm, which runs AdamW in the preconditioned space provided by Shampoo.
- SOAP outperforms both Shampoo and Adam in language model pre-training with 360M and 660M parameter models, even after extensive hyperparameter tuning of Shampoo.
- SOAP reduces the number of hyperparameters compared to Shampoo, resulting in only one additional hyperparameter compared to AdamW: preconditioning frequency.
- SOAP demonstrates greater robustness to large preconditioning frequencies than Shampoo on language model pre-training tasks.
Notation and background
We denote the weight matrix of a neural network layer by $W \in \mathbb{R}^{m \times n}$, and the corresponding gradient by $G \in \mathbb{R}^{m \times n}$. At a given time step $t$, these are denoted as $W_t$ and $G_t$, respectively. For a batch of inputs at time $t$, denoted by $B_t$, the loss and its gradient evaluated at $W_t$ are represented as $\phi^{B_t}(W_t)$ and $\nabla_W \phi^{B_t}(W_t)$, respectively.
Adam (Kingma & Ba, 2015), a widely used first-order optimization algorithm in deep learning, is a diagonal approximation of Adagrad. It maintains an exponential moving average of the gradients (denoted as $M_t$) and of the element-wise squared gradients (denoted as $V_t$) for a given weight matrix $W$. Its update rule with learning rate $\eta$ is given by
$$W_t = W_{t-1} - \eta\, \frac{M_t}{\sqrt{V_t}},$$
where the division (and the square root) is performed element-wise.
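As a concrete illustration, the following is a minimal PyTorch-style sketch of this update for a single weight matrix; the function name, hyperparameter defaults, and the omission of bias correction are our choices for brevity, not details from the paper.

```python
import torch

def adam_step(W, G, M, V, lr=3e-4, beta1=0.9, beta2=0.95, eps=1e-8):
    """One Adam step (without bias correction) on weight matrix W given gradient G.

    M and V are the exponential moving averages of the gradients and of the
    element-wise squared gradients; all tensors are updated in place.
    """
    M.mul_(beta1).add_(G, alpha=1 - beta1)          # M_t = beta1 * M_{t-1} + (1 - beta1) * G_t
    V.mul_(beta2).addcmul_(G, G, value=1 - beta2)   # V_t = beta2 * V_{t-1} + (1 - beta2) * G_t**2
    W.sub_(lr * M / (V.sqrt() + eps))               # element-wise division and square root
    return W, M, V
```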
Adafactor (Shazeer & Stern, 2018; Zhai et al., 2022), a variant of Adam, replaces $V_t$ with its best rank-1 approximation $\hat{V}_t$ to reduce memory usage. While the original Adafactor paper (Shazeer & Stern, 2018) proposed additional modifications, such as changes to the learning rate schedule, we focus on the version of Adafactor proposed in recent works (Zhai et al., 2022; Zhao et al., 2024c), whose update with learning rate $\eta$ is given by
$$\hat{V}_t = \frac{\left(V_t \mathbf{1}_n\right)\left(\mathbf{1}_m^\top V_t\right)}{\mathbf{1}_m^\top V_t \mathbf{1}_n}, \qquad W_t = W_{t-1} - \eta\, \frac{M_t}{\sqrt{\hat{V}_t}}.$$
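For concreteness, here is a minimal sketch of the factored second moment; the variable names (R and C for the row and column sums of $V_t$) and the placement of the exponential moving averages on those sums are our illustrative choices, consistent with but not copied from the paper.

```python
import torch

def adafactor_step(W, G, M, R, C, lr=3e-4, beta1=0.9, beta2=0.95, eps=1e-30):
    """One step of the simplified Adafactor variant described above.

    Instead of the full m x n second-moment matrix V, only its row sums R (shape m)
    and column sums C (shape n) are stored; V is replaced by its rank-1
    reconstruction outer(R, C) / sum(R).
    """
    M.mul_(beta1).add_(G, alpha=1 - beta1)
    V_batch = G * G                                            # formed only for clarity; never stored
    R.mul_(beta2).add_(V_batch.sum(dim=1), alpha=1 - beta2)    # row sums of V_t
    C.mul_(beta2).add_(V_batch.sum(dim=0), alpha=1 - beta2)    # column sums of V_t
    V_hat = torch.outer(R, C) / R.sum().clamp_min(eps)         # rank-1 approximation of V_t
    W.sub_(lr * M / (V_hat.sqrt() + eps))
    return W, M, R, C
```

Note that only m + n second-moment statistics are kept per weight matrix, which is where Adafactor's memory saving over Adam comes from.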
Shampoo (Gupta et al., 2018b) is a second-order optimization algorithm that approximates Adagrad and maintains two preconditioners, $L_t \in \mathbb{R}^{m \times m}$ and $R_t \in \mathbb{R}^{n \times n}$, for a given weight matrix $W \in \mathbb{R}^{m \times n}$. The updates for the preconditioners and the weights with learning rate $\eta$ are as follows:
$$L_t = \beta_2 L_{t-1} + (1-\beta_2)\, G_t G_t^\top, \qquad R_t = \beta_2 R_{t-1} + (1-\beta_2)\, G_t^\top G_t, \qquad W_t = W_{t-1} - \eta\, L_t^{-1/4}\, G_t\, R_t^{-1/4}.$$
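A minimal sketch of these updates is shown below, assuming the inverse matrix roots are computed via an eigendecomposition; the helper name, hyperparameter defaults, and the eps damping on the eigenvalues are implementation choices of ours rather than details from the paper.

```python
import torch

def matrix_power(S, p, eps=1e-12):
    """Compute S**p for a symmetric PSD matrix S via its eigendecomposition."""
    eigvals, eigvecs = torch.linalg.eigh(S)
    return eigvecs @ torch.diag(eigvals.clamp_min(eps) ** p) @ eigvecs.T

def shampoo_step(W, G, L, R, lr=3e-4, beta2=0.95):
    """One Shampoo step on weight matrix W (m x n) with preconditioners L (m x m) and R (n x n)."""
    L.mul_(beta2).add_(G @ G.T, alpha=1 - beta2)   # L_t = beta2 * L_{t-1} + (1 - beta2) * G_t G_t^T
    R.mul_(beta2).add_(G.T @ G, alpha=1 - beta2)   # R_t = beta2 * R_{t-1} + (1 - beta2) * G_t^T G_t
    W.sub_(lr * matrix_power(L, -0.25) @ G @ matrix_power(R, -0.25))
    return W, L, R
```

In practice the eigendecompositions (or inverse roots) are recomputed only every few steps, which is the source of the preconditioning-frequency hyperparameter discussed in the contributions above.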
Algorithms
We begin by describing an equivalence between Shampoo and running Adafactor in the eigenbasis of the Shampoo preconditioner. For simplicity, we omit momentum, but the equivalence also holds with momentum. For this equivalence, we use Shampoo with the following modifications from the original Shampoo optimizer (Gupta et al., 2018b):
- We use power 1/2 instead of power 1/4. This was already recommended in practical implementations (Anil et al., 2020; Shi et al., 2023), and a theoretical connection between the optimal Kronecker approximation of the Adagrad (Duchi et al., 2011b) preconditioner and Shampoo with power 1/2 was established in Morwani et al. (2024).
- We also use the scalar correction to per-layer learning rates described in Ren & Goldfarb (2021) and Morwani et al. (2024).
- Instead of the running averages of $G_t G_t^\top$ and $G_t^\top G_t$ across time steps, we use the dataset averages $\mathbb{E}_B\!\left[G G^\top\right]$ and $\mathbb{E}_B\!\left[G^\top G\right]$.
Algorithm 1 Single step of idealized Shampoo with power 1/2
- Sample batch $B_t$.
- $G_t \leftarrow \nabla_W \phi^{B_t}(W_t)$
- $W_{t+1} \leftarrow W_t - \eta \left(\mathbb{E}\!\left[\|G\|_F^2\right]\right)^{1/2} \left(\mathbb{E}\!\left[G G^\top\right]\right)^{-1/2} G_t \left(\mathbb{E}\!\left[G^\top G\right]\right)^{-1/2}$ (where the expectation is over a random batch $B$.)
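Assuming the dataset averages $\mathbb{E}[G G^\top]$ and $\mathbb{E}[G^\top G]$ have already been estimated (e.g., by averaging over many batches), a sketch of one such idealized step could read as follows; the function and argument names are ours, and the placement of the trace-based scalar factor reflects our reading of the per-layer learning-rate correction listed above.

```python
import torch

def idealized_shampoo_half_step(W, G, EGGt, EGtG, lr=3e-4, eps=1e-12):
    """One step of idealized Shampoo with power 1/2.

    EGGt and EGtG are (estimates of) the dataset averages E[G G^T] and E[G^T G].
    The eps damping is an implementation detail added for numerical stability.
    """
    def inv_sqrt(S):
        # S^{-1/2} for a symmetric PSD matrix via eigendecomposition
        eigvals, eigvecs = torch.linalg.eigh(S)
        return eigvecs @ torch.diag(eigvals.clamp_min(eps) ** -0.5) @ eigvecs.T

    scale = torch.trace(EGGt).clamp_min(eps).sqrt()   # (E[||G||_F^2])^{1/2} scalar correction
    return W - lr * scale * inv_sqrt(EGGt) @ G @ inv_sqrt(EGtG)
```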