Scaling Large Language Model Training on Frontier with Low-Bandwidth Partitioning
By Lang Xu, Quentin Anthony...

Abstract:

Scaling up Large Language Model (LLM) training involves fitting a tremendous number of parameters across a limited number of workers. However, methods like ZeRO-3, which drastically reduce GPU memory pressure, often incur heavy communication to ensure global synchronization and consistency. Est...
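The memory/communication trade-off the abstract describes can be illustrated with a toy, single-process sketch of ZeRO-3-style parameter partitioning (a hypothetical simulation, not the paper's implementation or DeepSpeed's API): each worker stores only its shard of the parameters, and the full copy is reconstructed on demand via an all-gather, which is exactly the communication cost paid for the memory savings.

```python
# Toy single-process simulation of ZeRO-3-style parameter sharding.
# Hypothetical illustration only: real ZeRO-3 shards tensors across
# ranks and issues collective all-gathers over the interconnect.

def shard(params, num_workers):
    """Split a flat parameter list into near-equal contiguous shards,
    one per worker."""
    base, rem = divmod(len(params), num_workers)
    shards, start = [], 0
    for rank in range(num_workers):
        size = base + (1 if rank < rem else 0)
        shards.append(params[start:start + size])
        start += size
    return shards

def all_gather(shards):
    """Reconstruct the full parameter list from all shards; models the
    all-gather each worker performs before using a layer's weights."""
    full = []
    for s in shards:
        full.extend(s)
    return full

params = list(range(10))       # ten "parameters"
shards = shard(params, 4)      # each worker holds only its slice
assert all_gather(shards) == params
print([len(s) for s in shards])  # per-worker memory: [3, 3, 2, 2]
```

Per-worker storage drops from `len(params)` to roughly `len(params)/num_workers`, but every use of the full weights requires communicating the remaining shards, which is why low-bandwidth links make this communication the bottleneck.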

Key points:

  • Research on large language models
  • Engineering application

Source: Scaling Large Language Model Training on Frontier with Low-Bandwidth Partitioning