Benchmarking Optimizers for Large Language Model Pretraining
Abstract:
The recent development of Large Language Models (LLMs) has been accompanied by a surge of novel ideas and methods aimed at better optimizing the loss of deep learning models. These methods make myriad claims, from faster convergence to removing the reliance on certain hyperparameters. However, the diverse experimental protocols used to validate these claims make direct comparisons between methods challenging. This study presents a comprehensive evaluation of recent optimization techniques across