Sophia: A Scalable Stochastic Second-order Optimizer for Language Model Pre-training
Introduction
Large language models (LLMs) have achieved remarkable performance as their scale increases, but their pre-training process is extremely expensive in terms of time, compute, and cost. For example, training models like PaLM can take months and cost millions of dollars. The speed of the optimization algorithm is therefore a critical bottleneck in scaling LLMs.
Currently, Adam and its variants dominate LLM training due to their robustness and effectiveness. However, designing faster optimizers is challenging for two main reasons:
- The theoretical understanding of Adam's preconditioning mechanism is still limited.
- More advanced second-order methods (Hessian-based) are typically too computationally expensive for large-scale models.
To address this, the paper introduces Sophia (Second-order Clipped Stochastic Optimization), a scalable second-order optimizer with the following key ideas:
- It uses a lightweight diagonal Hessian estimate as a preconditioner.
- It applies element-wise clipping to control update magnitudes and improve stability.
- It computes Hessian estimates infrequently (every k steps) to reduce overhead.
Empirically, Sophia achieves significant improvements:
- 2x speedup over Adam in terms of training steps, total compute, and wall-clock time.
- Shows better scaling behavior as model size increases.
Conceptually, Sophia improves optimization by:
- Better adapting to heterogeneous curvature across parameters (sharp vs flat directions).
- Penalizing updates more in sharp directions and less in flat ones.
- Avoiding instability from non-convexity via clipping.
Theoretically, the authors show that Sophia can achieve convergence rates independent of the condition number, highlighting its advantage over traditional first-order methods.
Method
We first instantiate gradient descent (GD) and Adam on a simplified 2D problem and motivate the use of second-order information and per-coordinate clipping. Then, we present Sophia in detail, with pseudo-code in Algorithm 3. We introduce two estimators of the diagonal Hessian used in Sophia.
Motivation
Heterogeneous curvatures. The loss functions of modern deep learning problems often have very different curvatures across parameter dimensions. For example, on a 125M-parameter GPT-2 model, the positive diagonal entries of the Hessian are widely dispersed.
We demonstrate the limitations of Adam and GD on heterogeneous landscapes by considering a two-dimensional loss function $L(\theta) = L_1(\theta_1) + L_2(\theta_2)$, where $L_1$ is much sharper than $L_2$.[^1]
[^1]: Concretely, $L_1(\theta_1) = 8(\theta_1 - 1)^2(1.3\theta_1^2 + 2\theta_1 + 1)$ and $L_2(\theta_2) = \frac{1}{2}(\theta_2 - 4)^2$.

Recall that GD's update in this setting is $\theta_{t+1} = \theta_t - \eta \nabla L(\theta_t)$.
A common simplification of Adam that is more amenable to analysis is SignGD, where Adam's update is simplified to $\theta_{t+1} = \theta_t - \eta \cdot \mathrm{sign}(\nabla L(\theta_t))$ (where all operations are entry-wise). Applying the update rule to our setting gives $\theta_{1,t+1} = \theta_{1,t} - \eta \cdot \mathrm{sign}(L_1'(\theta_{1,t}))$ and $\theta_{2,t+1} = \theta_{2,t} - \eta \cdot \mathrm{sign}(L_2'(\theta_{2,t}))$.
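The two update rules above can be sketched numerically. This is a minimal sketch assuming the concrete toy loss from the footnote; the function names (`gd_step`, `signgd_step`) are illustrative, not from the paper's code.

```python
import numpy as np

# Toy 2D loss (assumed from the footnote): L(theta) = L1(theta_1) + L2(theta_2),
# with L1 sharp near its minimum and L2 flat.
def grad(theta):
    t1, t2 = theta
    # L1(t1) = 8*(t1 - 1)^2 * (1.3*t1^2 + 2*t1 + 1), differentiated via the product rule
    g1 = 16 * (t1 - 1) * (1.3 * t1**2 + 2 * t1 + 1) + 8 * (t1 - 1)**2 * (2.6 * t1 + 2)
    # L2(t2) = 0.5*(t2 - 4)^2
    g2 = t2 - 4.0
    return np.array([g1, g2])

def gd_step(theta, lr):
    # Gradient descent: theta <- theta - eta * grad(theta)
    return theta - lr * grad(theta)

def signgd_step(theta, lr):
    # SignGD: every coordinate moves by exactly lr, regardless of curvature
    return theta - lr * np.sign(grad(theta))
```

Running a few `signgd_step` iterations shows the behavior discussed next: identical step sizes in both dimensions, whatever their curvature.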
Limitation of GD and SignGD (Adam)
The optimal learning rate should be inversely proportional to the curvature. Let $h_1 = L_1''(\theta_1^\star)$ and $h_2 = L_2''(\theta_2^\star)$ be the curvatures of $L_1$ and $L_2$ at the optimum.
If $h_1 \gg h_2 > 0$ (sharp vs. flat dimension):
- The optimal learning rate for $\theta_1$ is $\approx 1/h_1$ (small).
- The optimal learning rate for $\theta_2$ is $\approx 1/h_2$ (much larger).
GD must choose a small step size to remain stable in sharp directions. This leads to very slow convergence in flat dimensions.
The update size of SignGD is exactly the learning rate $\eta$ in all dimensions.
This causes two problems:
- Flat directions: progress is slow because each step reduces the loss only slightly.
- Sharp directions: updates may overshoot and cause oscillations around the minimum.
To stabilize the sharp direction, the learning rate must decay toward zero, which further slows convergence in flat directions. The trajectory of Adam in this example is indeed similar to that of SignGD.
The behavior of SignGD and Adam above indicates that a more aggressive pre-conditioning is needed — sharp dimensions should have relatively smaller updates than flat dimensions so that the decrease of loss is equalized in all dimensions. As suggested by well-established literature on second-order optimization (Boyd & Vandenberghe, 2004) for convex functions, the optimal preconditioner should be the Hessian, which captures the curvature on each dimension; as in Newton's method, the update is the gradient divided by the Hessian in each dimension: $\theta_{t+1} = \theta_t - \eta \nabla L(\theta_t) / \nabla^2 L(\theta_t)$, with the division applied entry-wise over the diagonal of the Hessian.
Limitations of Newton's Method
For non-convex functions, vanilla Newton's method could converge to a local maximum when the local curvature is negative. In other cases it quickly converges to a saddle point instead of a local minimum. The curvature might also change rapidly along the trajectory, making the second-order information unreliable.
To address these limitations, we propose considering only pre-conditioners that capture positive curvature, and introduce a per-coordinate clipping mechanism to mitigate the rapid change of Hessian (more detail in Section 2.2). Applying our algorithm on the toy case results in the following update: $\theta_{t+1} = \theta_t - \eta \cdot \mathrm{clip}\!\big(\nabla L(\theta_t) / \max\{\nabla^2 L(\theta_t), \epsilon\},\, \rho\big)$,
where $\rho > 0$ is a constant to control the worst-case update size, $\epsilon > 0$ is a small constant to prevent division by zero, and $\mathrm{clip}$ is applied entry-wise. When the curvature of some dimension is rapidly changing or negative, and thus the second-order information is misleading and would lead to a huge update before clipping, the clipping mechanism kicks in and the optimizer defaults to SignGD.
As shown by the results, the update initially behaves like SignGD because clipping is active in the non-convex region, so the iterate descends rather than converging to a local maximum. Then, in the convex valley, it converges to the global minimum within a few steps.
Compared with SignGD and Adam, it makes much faster progress in the flat dimension $\theta_2$ (because the update is larger there), while avoiding oscillation in the sharp dimension $\theta_1$ (because the update is significantly shrunk there).
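The clipped, Hessian-preconditioned toy update can be sketched as follows; the helper names and hyperparameter defaults are illustrative, not from the paper's released code.

```python
import numpy as np

def clip(z, rho):
    # Entry-wise clipping of z to the interval [-rho, rho]
    return np.clip(z, -rho, rho)

def clipped_newton_step(theta, grad, hess_diag, lr=0.1, rho=1.0, eps=1e-12):
    # Per-coordinate Newton step that only trusts positive curvature.
    # max(h, eps) discards negative curvature; when curvature is negative or
    # tiny, the pre-clipping ratio blows up, clipping saturates at rho, and
    # the step degenerates to SignGD-like behavior of size lr * rho.
    update = clip(grad / np.maximum(hess_diag, eps), rho)
    return theta - lr * update
```

Note how a negative diagonal Hessian entry never produces an ascent step: the ratio is clipped to $\pm\rho$, so the sign of the gradient still dictates the direction.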
Sophia: Second-order Clipped Stochastic Optimization
The motivation section demonstrates that Adam does not sufficiently adapt to the heterogeneous curvatures. On the other hand, vanilla Newton's method has a pre-conditioner optimal for convex functions, but is vulnerable to negative curvature and rapid change of Hessian. With these insights, we design a new optimizer, Sophia, which is more adaptive to heterogeneous curvatures than Adam, more resistant to non-convexity and rapid change of Hessian than Newton's method, and also uses a low-cost pre-conditioner.
We use $\theta_t$ to denote the parameter at time step $t$. At each step, we sample a mini-batch from the data distribution and calculate the mini-batch loss, denoted by $L_t(\theta_t)$. We denote by $g_t$ the gradient of $L_t(\theta_t)$, i.e. $g_t = \nabla L_t(\theta_t)$. Let $m_t$ be the EMA of gradients, $m_t = \beta_1 m_{t-1} + (1 - \beta_1) g_t$, which is the numerator of the update.
EMA of diagonal Hessian estimates. Similar to the gradient of the mini-batch loss function, the estimated diagonal Hessian $\hat{h}_t$ can also have large noise. Inspired by the EMA of moments of gradients in Adam, we also denoise the diagonal Hessian estimates with EMA across iterations. We update the EMA every $k$ steps, resulting in the following update rule for the diagonal Hessian estimate: $h_t = \beta_2 h_{t-k} + (1 - \beta_2) \hat{h}_t$ if $t \bmod k = 1$, and $h_t = h_{t-1}$ otherwise.
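A minimal sketch of this infrequent EMA update; the function name and default values are illustrative.

```python
def update_hessian_ema(h_prev, hess_estimate, t, k=10, beta2=0.99):
    # Refresh the EMA of the diagonal Hessian only every k steps
    # (when t mod k == 1); otherwise carry the previous value forward.
    # hess_estimate is a fresh diagonal Hessian estimate at step t.
    if t % k == 1:
        return beta2 * h_prev + (1 - beta2) * hess_estimate
    return h_prev
```

Because the estimate is refreshed only every $k$ steps, the amortized cost of second-order information stays a small fraction of the total compute.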
Per-coordinate clipping. As discussed in Section 2.1, on nonconvex functions, vanilla Newton's method, which uses the Hessian as the pre-conditioner, may converge to local maxima instead of local minima. In addition, the inaccuracy of Hessian estimates and the change of Hessian along the trajectory can make the second-order information unreliable. To this end, we (1) only consider the positive entries of the diagonal Hessian and (2) introduce per-coordinate clipping to the update. For a clipping threshold $\rho > 0$, let the clipping function be $\mathrm{clip}(z, \rho) = \max\{\min\{z, \rho\}, -\rho\}$, where all operations are applied entry-wise. The update rule is as follows: $\theta_{t+1} = \theta_t - \eta_t \cdot \mathrm{clip}\!\big(m_t / \max\{h_t, \epsilon\},\, \rho\big)$.
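Putting the pieces together, one parameter update might be sketched as below. This is a sketch of the update rule as described above, not the paper's reference implementation; the function name and hyperparameter defaults are illustrative, and the diagonal Hessian EMA `h` is assumed to be maintained separately every $k$ steps.

```python
import numpy as np

def sophia_step(theta, m_prev, h, g, lr=1e-4, beta1=0.9, rho=0.05, eps=1e-12):
    # EMA of gradients (the numerator of the update)
    m = beta1 * m_prev + (1 - beta1) * g
    # Precondition by the positive part of the diagonal Hessian EMA h,
    # then clip per coordinate so no entry moves more than lr * rho.
    update = np.clip(m / np.maximum(h, eps), -rho, rho)
    return theta - lr * update, m
```

Coordinates with reliable positive curvature get Newton-like steps, while coordinates whose ratio exceeds $\rho$ fall back to sign-like steps of size `lr * rho`.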