Dimension-adapted Momentum Outscales SGD

Tags: Google DeepMind
arXiv ID: 2505.16098

Abstract Summary

The research studies scaling laws for stochastic momentum algorithms with small batches on the power-law random features model, examining the impact of data complexity, target complexity, and model size.
The analysis shows that dimension-adapted Nesterov acceleration (DANA) outscales traditional stochastic gradient descent with momentum (SGD-M): by adjusting its momentum hyperparameters based on model size and data complexity, DANA improves the scaling-law exponents and achieves better compute-optimal scaling behavior.
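
To make the mechanism concrete, here is a minimal Python sketch of a small-batch momentum loop in which the momentum parameter is scheduled using the model size d and the data-complexity exponent alpha, rather than held fixed as in SGD-M. The schedule `beta_k = k / (k + d**alpha)` is a hypothetical illustration of dimension-adapted momentum, not the paper's exact DANA parameterization.

```python
import numpy as np

def momentum_sgd(grad_fn, w0, steps=1000, lr=0.01, d=None, alpha=None, seed=0):
    """Small-batch momentum loop.

    If `d` and `alpha` are given, the momentum follows a hypothetical
    dimension-adapted schedule (DANA-style: it drifts toward 1 on a time
    scale set by model size d and data exponent alpha). Otherwise a fixed
    momentum of 0.9 is used, as in standard SGD-M. Illustrative only.
    """
    rng = np.random.default_rng(seed)
    w = np.array(w0, dtype=float)
    v = np.zeros_like(w)
    for k in range(1, steps + 1):
        g = grad_fn(w, rng)  # one small-batch stochastic gradient
        if d is not None and alpha is not None:
            # Hypothetical dimension-adapted momentum: approaches 1 over
            # roughly d**alpha iterations.
            beta_k = k / (k + d ** alpha)
        else:
            beta_k = 0.9  # fixed momentum, as in SGD-M
        v = beta_k * v - lr * g
        w = w + v
    return w
```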

Abstract

We investigate scaling laws for stochastic momentum algorithms with small batches on the power-law random features model, parameterized by data complexity, target complexity, and model size. Our analysis reveals four distinct loss curve shapes, determined by the data-target complexities, when the model is trained with a stochastic momentum algorithm. While traditional stochastic gradient descent with momentum (SGD-M) yields identical scaling law exponents to SGD, dimension-adapted Nesterov acceleration (DANA) improves these exponents by scaling momentum hyperparameters based on model size and data complexity. This outscaling phenomenon, which also improves compute-optimal scaling behavior, is achieved by DANA across a broad range of data and target complexities, while traditional methods fall short. Extensive experiments on high-dimensional synthetic quadratics validate our theoretical predictions, and large-scale text experiments with LSTMs show that DANA's improved loss exponents over SGD hold in a practical setting.
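
For readers who want to experiment with the synthetic setting, the sketch below builds a power-law random features quadratic under illustrative assumptions: data covariance eigenvalues decaying as j^(-2*alpha) (data complexity), target coefficients decaying as j^(-beta) (target complexity), and a Gaussian map into d random features (model size). The function names and the exact parameterization are placeholders and may differ from the paper's construction.

```python
import numpy as np

def make_plrf(v=2000, d=200, alpha=1.2, beta=1.5, seed=0):
    """Power-law random features quadratic (illustrative parameterization).

    Data covariance eigenvalues decay as j**(-2*alpha), target coefficients
    decay as j**(-beta), and a Gaussian matrix W maps the v-dimensional
    inputs to d random features (model size).
    """
    rng = np.random.default_rng(seed)
    j = np.arange(1, v + 1)
    cov_sqrt = j ** (-alpha)      # square root of the covariance spectrum
    target = j ** (-beta)         # power-law target vector
    W = rng.standard_normal((v, d)) / np.sqrt(v)

    def grad_fn(theta, rng, batch=1):
        # Stochastic gradient of 0.5 * E[(x @ W @ theta - x @ target)**2]
        x = cov_sqrt * rng.standard_normal((batch, v))
        resid = x @ (W @ theta) - x @ target
        return (x @ W).T @ resid / batch

    return grad_fn, d

# Example (using the momentum loop sketched above):
# grad_fn, d = make_plrf()
# w_dana = momentum_sgd(grad_fn, np.zeros(d), steps=5000, d=d, alpha=1.2)
```

On such quadratics, sweeping d and alpha and comparing loss-versus-compute curves for the fixed-momentum and dimension-adapted runs is a natural way to probe whether the adapted schedule changes the scaling exponent, in the spirit of the paper's synthetic experiments.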