Intro
Every ML engineer working on LLM training has faced the question from a manager or product owner: "How long will it take to train this LLM?"
When I first tried to find an answer online, I was met with many articles covering generic topics: training techniques, model evaluation, and the like. But none of them addressed the core question I had: how do I actually estimate the time required for training?
Frustrated by the lack of clear, practical guidance, I decided to create my own. In this article, I'll walk you through a simple, back-of-the-envelope method to quickly estimate how long it will take to train your LLM based on its size, data volume, and available GPU compute.
Approach
The goal is to quantify the computational requirements for processing the data and updating the model parameters during training in terms of FLOPs (floating-point operations). Next, we estimate the system's throughput in FLOPS (floating-point operations per second) based on the type and number of GPUs chosen. Once everything is expressed on the same scale, we can easily calculate the time required to train the model.
So the final formula is pretty straightforward:

    training time ≈ total training FLOPs / infrastructure throughput in FLOPS

Let's dive into how to estimate all these variables.
FLOPs for Data and Model
The number of add-multiply operations per token in the forward pass of a Transformer-based LLM amounts to roughly the following number of FLOPs:

    FLOPs_forward ≈ 2 × N

where N is the number of model parameters, and the factor of 2 comes from the multiply-accumulate operation used in matrix multiplication.
The backward pass requires roughly twice the compute of the forward pass. This is because, during backpropagation, we need to compute gradients for each weight in the model as well as gradients with respect to the intermediate activations, specifically the activations of each layer.
With this in mind, the floating-point operations per training token can be estimated as:

    FLOPs_per_token ≈ 2 × N + 2 × (2 × N) = 6 × N
More detailed math for deriving these estimates can be found in the Scaling Laws paper by Kaplan et al. [1].
To sum up, the training FLOPs for a transformer model with N parameters and a dataset of P tokens can be estimated as:

    total training FLOPs ≈ 6 × N × P
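For quick experimentation, this rule of thumb fits in a few lines of Python (the function and variable names here are mine, not from any library):

```python
def training_flops(num_params: float, num_tokens: float) -> float:
    """Estimate total training FLOPs for a transformer LLM.

    Assumes ~2N FLOPs per token for the forward pass and ~4N for the
    backward pass, giving the standard 6 * N * P approximation.
    """
    return 6 * num_params * num_tokens

# Example: a 7B-parameter model trained on 1 trillion tokens
print(f"{training_flops(7e9, 1e12):.1e}")  # ~4.2e+22 FLOPs
```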
FLOPS of the Training Infrastructure
Today, most LLMs are trained on GPU accelerators. Each GPU model (such as NVIDIA's H100, A100, or V100) has its own FLOPS performance, which varies depending on the data type (precision) being used. For instance, operations in FP64 are slower than those in FP32, and so on. The peak theoretical FLOPS for a particular GPU can usually be found on its product specification page (e.g., here for the H100).
However, the theoretical maximum FLOPS of a GPU is often less relevant in practice when training large language models. That's because these models are typically trained on thousands of interconnected GPUs, where the efficiency of network communication becomes crucial. If communication between devices becomes a bottleneck, it can drastically reduce overall speed, making the system's actual FLOPS much lower than expected.
To account for this, it's important to track a metric called model FLOPS utilization (MFU): the ratio of the observed throughput to the theoretical maximum throughput, assuming the hardware is operating at peak efficiency with no memory or communication overhead. In practice, as the number of GPUs involved in training increases, MFU tends to decrease. Achieving an MFU above 50% is difficult with current setups.
For example, the authors of the Llama 3 paper reported an MFU of 38%, or 380 teraflops of throughput per GPU, when training with 16,000 GPUs.
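As a sanity check, the reported figure lines up with the H100 spec sheet (the ~989 TFLOPS dense BF16 peak is my assumption, taken from NVIDIA's published datasheet for the SXM variant):

```python
observed = 380e12    # reported throughput per GPU, in FLOPS
peak_bf16 = 989e12   # H100 SXM dense BF16 peak, per NVIDIA's datasheet
mfu = observed / peak_bf16
print(f"MFU = {mfu:.0%}")  # prints "MFU = 38%"
```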
To summarize, when performing a back-of-the-envelope calculation for model training, follow these steps:
- Identify the theoretical peak FLOPS for the data type your chosen GPU supports.
- Estimate the MFU (model FLOPS utilization) based on the number of GPUs and the network topology, either through benchmarking or by referencing open-source data, such as reports from Meta engineers (as shown in the table above).
- Multiply the theoretical peak FLOPS by the MFU to get the average throughput per GPU.
- Multiply the result from step 3 by the total number of GPUs involved in training.
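The steps above can be sketched as a small helper (the peak, MFU, and GPU-count values in the example are illustrative placeholders, not measurements):

```python
def cluster_throughput(peak_flops_per_gpu: float, mfu: float, num_gpus: int) -> float:
    """Effective training throughput of the whole cluster, in FLOPS."""
    per_gpu = peak_flops_per_gpu * mfu   # steps 1-3: de-rate peak by MFU
    return per_gpu * num_gpus            # step 4: scale by GPU count

# Example: ~1e15 FLOPS peak per GPU, 40% MFU, 16,000 GPUs
print(f"{cluster_throughput(1e15, 0.40, 16_000):.1e}")  # ~6.4e+18 FLOPS
```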
Case study with Llama 3.1 405B
Now, let's put our back-of-the-envelope calculations to work and estimate how long it takes to train a 405B-parameter model.
Llama 3.1 405B was trained on 15.6 trillion tokens, a massive dataset. The total FLOPs required to train a model of this size can be calculated as follows:

    6 × (405 × 10^9) × (15.6 × 10^12) ≈ 3.8 × 10^25 FLOPs
The authors used 16,000 H100 GPUs for training. According to the paper, the average throughput was 400 teraflops per GPU. This means the training infrastructure can deliver a total throughput of:

    16,000 × (400 × 10^12) = 6.4 × 10^18 FLOPS
Finally, dividing the total required FLOPs by the available throughput and converting the result into days (since what we really care about is the number of training days), we get:

    (3.8 × 10^25) / (6.4 × 10^18) ≈ 5.9 × 10^6 seconds ≈ 69 days
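Putting the numbers from this case study into code (assuming, as above, 6 FLOPs per parameter per token and 400 TFLOPS of delivered throughput per GPU):

```python
total_flops = 6 * 405e9 * 15.6e12   # ~3.8e25 FLOPs: 405B params, 15.6T tokens
throughput = 16_000 * 400e12        # 16,000 GPUs at 400 TFLOPS each
days = total_flops / throughput / (60 * 60 * 24)
print(f"{days:.0f} days")           # prints "69 days"
```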
Bonus: how much does it cost to train Llama 3.1 405B?
Once you know the FLOPS per GPU in your training setup, you can calculate the total GPU-hours required to train a model of a given size on a given dataset. You can then multiply this number by the cost per GPU-hour from your cloud provider (or by your own cost per GPU-hour).
For example, if one H100 GPU costs roughly $2 per hour, the total cost to train this model would be around $52 million! The formula below shows how this number is derived:

    GPU-hours = (3.8 × 10^25) / (400 × 10^12 × 3600) ≈ 26 million
    cost ≈ 26,000,000 × $2 ≈ $52 million
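The same arithmetic in code, under the assumptions above plus an illustrative $2 per GPU-hour rate:

```python
total_flops = 6 * 405e9 * 15.6e12   # ~3.8e25 FLOPs
flops_per_gpu = 400e12              # delivered throughput per H100
price_per_gpu_hour = 2.0            # assumed cloud price in USD

gpu_hours = total_flops / (flops_per_gpu * 3600)
cost = gpu_hours * price_per_gpu_hour
print(f"{gpu_hours / 1e6:.1f}M GPU-hours, cost on the order of $52M")
```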
References
[1] Scaling Laws for Neural Language Models, Jared Kaplan et al.
[2] The Llama 3 Herd of Models, Llama Team, AI @ Meta