Better & Faster Large Language Models via Multi-token Prediction

Fabian Gloeckle * 1 2 Badr Youbi Idrissi * 1 3 Baptiste Rozière 1 David Lopez-Paz + 1 Gabriel Synnaeve + 1
Figure 2: Order of the forward/backward in an n-token prediction model with n = 2 heads. By performing the forward/backward on the heads in sequential order, we avoid materializing all unembedding layer gradients in memory simultaneously and reduce peak GPU memory usage.
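To make the memory-saving order in Figure 2 concrete, here is a minimal PyTorch-style sketch of the sequential forward/backward over the prediction heads. The module and tensor names (trunk, heads, unembed) are placeholders for illustration, not the paper's implementation:

```python
import torch
import torch.nn.functional as F

def n_token_loss_and_backward(trunk, heads, unembed, tokens):
    """Sequential forward/backward over n prediction heads (sketch).

    trunk and unembed are shared modules, heads is a list of n head modules,
    tokens is a LongTensor of shape (batch, seq_len). All names are hypothetical.
    """
    z_trunk = trunk(tokens)                      # shared trunk forward pass
    z = z_trunk.detach().requires_grad_(True)    # cut the graph: heads backprop into z.grad only
    total_loss = 0.0
    for i, head in enumerate(heads):
        logits = unembed(head(z))                # only one head's unembedding activations/gradients live at a time
        targets = tokens[:, i + 1:]              # head i predicts the (i + 1)-th future token
        loss = F.cross_entropy(
            logits[:, : targets.shape[1]].reshape(-1, logits.shape[-1]),
            targets.reshape(-1),
        )
        loss.backward()                          # frees this head's graph before moving to the next
        total_loss += loss.item()
    z_trunk.backward(z.grad)                     # one backward through the trunk with the accumulated gradient
    return total_loss / len(heads)
```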
2023) without the need for an additional draft model, and speculative decoding with Medusa-like tree attention (Cai et al., 2024).

[Figure 3: pass@1, pass@10 and pass@100 bar plots by model size (0.3B–13B); see the caption below.]
Figure 3: Results of n-token prediction models on MBPP by model size. We train models of six sizes in the range of 300M to 13B total parameters on code, and evaluate pass@1, pass@10 and pass@100 on the MBPP (Austin et al., 2021) and HumanEval (Chen et al., 2021) benchmarks with 1000 samples. Multi-token prediction models are worse than the baseline for small model sizes, but outperform the baseline at scale. Error bars are 90% confidence intervals computed by bootstrapping over dataset samples.

3. Experiments on real data

We demonstrate the efficacy of multi-token prediction losses in seven large-scale experiments. Section 3.1 shows that multi-token prediction is increasingly useful as model size grows. Section 3.2 shows how the additional prediction heads can speed up inference by a factor of 3× using speculative decoding. Section 3.3 demonstrates how multi-token prediction promotes learning longer-term patterns, a fact most apparent in the extreme case of byte-level tokenization. Section 3.4 shows that a 4-token predictor leads to strong gains with a tokenizer of size 32k. Section 3.5 illustrates that the benefits of multi-token prediction remain for training runs with multiple epochs. Section 3.6 showcases the rich representations promoted by pretraining with multi-token prediction losses by finetuning on the CodeContests dataset (Li et al., 2022). Section 3.7 shows that the benefits of multi-token prediction carry over to natural language models, improving generative evaluations such as summarization while not regressing significantly on standard benchmarks based on multiple-choice questions and negative log-likelihoods.

To allow fair comparisons between next-token predictors and n-token predictors, the experiments that follow always compare models with an equal number of parameters. That is, when we add n − 1 layers in future prediction heads, we remove n − 1 layers from the shared model trunk. Please refer to Table S14 for the model architectures and to Table S13 for an overview of the hyperparameters we use in our experiments.

3.1. Benefits scale with model size

To study this phenomenon, we train models of six sizes in the range 300M to 13B parameters from scratch on at least 91B tokens of code. The evaluation results in Figure 3 for MBPP (Austin et al., 2021) and HumanEval (Chen et al., 2021) show that it is possible, with the exact same computational budget, to squeeze much more performance out of large language models given a fixed dataset using multi-token prediction.

We believe this usefulness only at scale to be a likely reason why multi-token prediction has so far been largely overlooked as a promising training loss for large language model training.

3.2. Faster inference
Table 1: Multi-token prediction improves performance and unlocks efficient byte-level training. We compare models with 7B parameters trained from scratch on 200B tokens and on 314B bytes of code on the MBPP (Austin et al., 2021), HumanEval (Chen et al., 2021) and APPS (Hendrycks et al., 2021) benchmarks. Multi-token prediction largely outperforms next-token prediction in these settings. All numbers were calculated using the estimator from Chen et al. (2021) based on 200 samples per problem. The temperatures were chosen optimally (based on test scores, i.e. these are oracle temperatures) for each model, dataset and pass@k, and are reported in Table S12.
We implement greedy self-speculative decoding (Stern et al., 2018) with heterogeneous batch sizes using xFormers (Lefaudeux et al., 2022) and measure decoding speeds of our best 4-token prediction model with 7B parameters on completing prompts taken from a test dataset of code and natural language (Table S2) not seen during training. We observe a speedup of 3.0× on code, with an average of 2.5 accepted tokens out of 3 suggestions, and of 2.7× on text. On an 8-byte prediction model, the inference speedup is 6.4× (Table S3). Pretraining with multi-token prediction allows the additional heads to be much more accurate than a simple finetuning of a next-token prediction model, thus allowing our models to unlock self-speculative decoding's full potential.

3.3. Learning global patterns with multi-byte prediction

To show that the next-token prediction task latches onto local patterns, we went to the extreme case of byte-level tokenization by training a 7B parameter byte-level transformer on 314B bytes, which is equivalent to around 116B tokens. The 8-byte prediction model achieves astounding improvements compared to next-byte prediction, solving 67% more problems on MBPP pass@1 and 20% more problems on HumanEval pass@1. Al-Rfou et al. (2019) also show that multi-target prediction has a positive effect on character-level language modeling.

Multi-byte prediction is therefore a very promising avenue to unlock efficient training of byte-level models. Self-speculative decoding can achieve speedups of 6 times for the 8-byte prediction model, which would allow us to fully compensate for the cost of longer byte-level sequences at inference time and even be faster than a next-token prediction model by nearly two times. The 8-byte prediction model is a strong byte-based model, approaching the performance of token-based models despite having been trained on 1.7× less data.

3.4. Searching for the optimal n

To better understand the effect of the number of predicted tokens, we ran comprehensive ablations on 7B models trained on 200B tokens of code, trying n = 1, 2, 4, 6 and 8 in this setting. Results in Table 1 show that training with 4 future tokens outperforms all the other models consistently on HumanEval and MBPP for the pass@1, pass@10 and pass@100 metrics: +3.8%, +2.1% and +3.2% for MBPP and +1.2%, +3.7% and +4.1% for HumanEval. Interestingly, for APPS/Intro, n = 6 takes the lead with +0.7%, +3.0% and +5.3%. It is very likely that the optimal window size depends on the input data distribution. For the byte-level models, the optimal window size is more consistent (8 bytes) across these benchmarks.

3.5. Training for multiple epochs

Multi-token training still maintains an edge over next-token prediction when trained on multiple epochs of the same data. The improvements diminish, but we still observe a +2.4% increase on MBPP pass@1 and a +3.2% increase on HumanEval pass@100, with similar performance on the rest. As for APPS/Intro, a window size of 4 was already not optimal with 200B tokens of training.
perform next-token models for use in finetunings. We evaluate this by finetuning 7B parameter models from Section 3.3 on the CodeContests dataset (Li et al., 2022). We compare the 4-token prediction model with the next-token prediction baseline, and include a setting where the 4-token prediction model is stripped of its additional prediction heads and finetuned using the classical next-token prediction target.

According to the results in Figure 4, both ways of finetuning the 4-token prediction model outperform the next-token prediction model on pass@k across k. This means the models are both better at understanding and solving the task and at generating diverse answers. Note that CodeContests is the most challenging coding benchmark we evaluate in this study. Next-token prediction finetuning on top of 4-token prediction pretraining appears to be the best method overall, in line with the classical paradigm of pretraining with auxiliary tasks followed by task-specific finetuning. Please refer to Appendix F for details.

[Figure 4 plot residue: pass@k (%) curves; see discussion above.]

Figure 5: Multi-token training with 7B models doesn't improve performance on choice tasks. This figure shows the evolution of the average accuracy on 6 standard NLP benchmarks (detailed results in Appendix G) for 7B models trained on 200B tokens of language data. The 2-future-token model has the same performance as the baseline and the 4-future-token model regresses a bit. Larger model sizes might be necessary to see improvements on these tasks.

token prediction loss, respectively. In Figure S12, we evaluate the resulting checkpoints on 6 standard NLP benchmarks.
[Plots: ROUGE-L F1 vs. training tokens (B); induction success vs. parameters (M) for n = 1 (baseline) and n = 2 (ours).]

Figure 6: Performance on abstractive text summarization. Average ROUGE-L (longest common subsequence overlap) F1 score for 7B models trained on 200B and 500B tokens of natural language on eight summarization benchmarks. We finetune the respective models on each task's training data separately for three epochs and select the checkpoints with the highest ROUGE-L F1 validation score. Both n = 2 and n = 4 multi-token prediction models have an advantage over next-token prediction models. Individual scores per dataset and more details can be found in Appendix H.

Figure 7: Induction capability of n-token prediction models. Shown is accuracy on the second token of two-token names that have already been mentioned previously. Shown are numbers for models trained with a next-token and a 2-token prediction loss, respectively, with two independent runs each. The lines denote per-loss averages. For small model sizes, next-token prediction models learn practically no or significantly worse induction capability than 2-token prediction models, with their disadvantage disappearing at the size of 100M non-embedding parameters.
effects of such a loss during pretraining. Pal et al. (2023) use probing methods to show that next-token prediction models are able to predict additional consecutive tokens to a certain extent, but less so than our models, which are specifically trained for this task. Jianyu Zhang (2024) observe improvements in language modelling tasks with multi-label binary classification over the occurrence of vocabulary words in the future as an auxiliary learning task.

Self-speculative decoding Stern et al. (2018) are, to the best of our knowledge, the first to suggest a speculative decoding scheme for faster inference. Our architecture replaces their linear prediction heads by transformer layers, but is otherwise similar. By reorganizing the order of the forward/backward, we can use all loss terms instead of stochastically picking one head for loss computation. Cai et al. (2024) present a more elaborate self-speculative decoding scheme that uses the top-k predictions of each head instead of the best one only; it can be used with the multi-token prediction models we train. Santilli et al. (2023) propose an alternative parallel decoding algorithm for encoder/decoder architectures where the decoded block is refined iteratively.

Multi-target prediction Multi-task learning is the paradigm of training neural networks jointly on several tasks to improve performance on the tasks of interest (Caruana, 1997). Learning with such auxiliary tasks allows models to exploit dependencies between target variables and can even be preferable in the case of independent targets (Waegeman et al., 2019). While more specifically tailored architectures for multi-target prediction are conceivable (Spyromitros-Xioufis et al., 2016; Read et al., 2021), modern deep learning approaches usually rely on large shared model trunks with separate prediction heads for the respective tasks (Caruana, 1997; Silver et al., 2016; Lample et al., 2022), as we do. Multi-target prediction has been shown to be a successful strategy in various domains, e.g. for learning time series prediction with more distant time steps in the future as auxiliary targets (Vapnik and Vashist, 2009), or for learning from videos with several future frames (Mathieu et al., 2016; Srivastava et al., 2016) or representations of future frames (Vondrick et al., 2016) as auxiliary targets.

7. Conclusion

We have proposed multi-token prediction as an improvement over next-token prediction in training language models for generative or reasoning tasks. Our experiments (up to 7B parameters and 1T tokens) show that this is increasingly useful for larger models and in particular show strong improvements for code tasks. We posit that our method reduces distribution mismatch between teacher-forced training and autoregressive generation. When used with speculative decoding, exact inference gets 3 times faster.

In future work we would like to better understand how to automatically choose n in multi-token prediction losses. One possibility to do so is to use loss scales and loss balancing (Défossez et al., 2022). Also, optimal vocabulary sizes for multi-token prediction are likely different from those for next-token prediction, and tuning them could lead to better results, as well as improved trade-offs between compressed sequence length and compute-per-byte expenses. Finally, we would like to develop improved auxiliary prediction losses that operate in embedding spaces (LeCun, 2022).

Impact statement

The goal of this paper is to make language models more compute and data efficient. While this may in principle reduce the ecological impact of training LLMs, we shall be careful about rebound effects. All societal advantages, as well as risks, of LLMs should be considered while using this work.

Environmental impact

In aggregate, training all models reported in the paper required around 500K GPU hours of computation on hardware of type A100-80GB and H100. Estimated total emissions were around 50 tCO2eq, 100% of which were offset by Meta's sustainability program.

Acknowledgements

We thank Jianyu Zhang, Léon Bottou, Emmanuel Dupoux, Pierre-Emmanuel Mazaré, Yann LeCun, Quentin Garrido, Megi Dervishi, Mathurin Videau and Timothée Darcet and other FAIR PhD students and CodeGen team members for helpful discussions. We thank Jonas Gehring for his technical expertise and the original Llama team and xFormers team for enabling this kind of research.
Jacob Austin, Augustus Odena, Maxwell Nye, Maarten Bosma, Henryk Michalewski, David Dohan, Ellen Jiang, Carrie Cai, Michael Terry, Quoc Le, et al. Program synthesis with large language models. arXiv preprint arXiv:2108.07732, 2021.

Gregor Bachmann and Vaishnavh Nagarajan. The pitfalls of next-token prediction, 2024.

Alexander R. Fabbri, Irene Li, Tianwei She, Suyi Li, and Dragomir R. Radev. Multi-news: a large-scale multi-document summarization dataset and abstractive hierarchical model, 2019.

Mehrdad Farahani. Summarization using bert2bert model on wikisummary dataset. [Link], 2020.

Diederik Kingma and Jimmy Ba. Adam: A method for stochastic optimization. ICLR, 2015.

Ryan Koo, Minhwa Lee, Vipul Raheja, Jong Inn Park, Zae Myung Kim, and Dongyeop Kang. Benchmarking cognitive biases in large language models as evaluators. arXiv preprint arXiv:2309.17012, 2023.

Tom Kwiatkowski, Jennimaria Palomaki, Olivia Redfield, Michael Collins, Ankur Parikh, Chris Alberti, Danielle Epstein, Illia Polosukhin, Matthew Kelcey, Jacob Devlin, Kenton Lee, Kristina N. Toutanova, Llion Jones, Ming-Wei Chang, Andrew Dai, Jakob Uszkoreit, Quoc Le, and Slav Petrov. Natural questions: a benchmark for question answering research. Transactions of the Association of Computational Linguistics, 2019.

Guillaume Lample, Marie-Anne Lachaux, Thibaut Lavril, Xavier Martinet, Amaury Hayat, Gabriel Ebner, Aurélien Rodriguez, and Timothée Lacroix. Hypertree proof search for neural theorem proving, 2022.

Yann LeCun. A path towards autonomous machine intelligence, version 0.9.2, 2022-06-27. Open Review, 62(1), 2022.

Benjamin Lefaudeux, Francisco Massa, Diana Liskovich, Wenhan Xiong, Vittorio Caggiano, Sean Naren, Min Xu, Jieru Hu, Marta Tintore, Susan Zhang, Patrick Labatut, and Daniel Haziza. xformers: A modular and hackable transformer modelling library. https://github.com/facebookresearch/xformers, 2022.

Yaniv Leviathan, Matan Kalman, and Yossi Matias. Fast inference from transformers via speculative decoding, 2023.

Yujia Li, David Choi, Junyoung Chung, Nate Kushman, Julian Schrittwieser, Rémi Leblond, Tom Eccles, James Keeling, Felix Gimeno, Agustin Dal Lago, et al. Competition-level code generation with alphacode. Science, 378(6624):1092–1097, 2022.

Chin-Yew Lin. ROUGE: A package for automatic evaluation of summaries. In Text Summarization Branches Out, pages 74–81, Barcelona, Spain, July 2004. Association for Computational Linguistics. URL https://[Link]/W04-1013.

Zhenghao Lin, Zhibin Gou, Yeyun Gong, Xiao Liu, Yelong Shen, Ruochen Xu, Chen Lin, Yujiu Yang, Jian Jiao, Nan Duan, and Weizhu Chen. Rho-1: Not all tokens are what you need, 2024.

Ilya Loshchilov and Frank Hutter. Sgdr: Stochastic gradient descent with warm restarts, 2017.

Ilya Loshchilov and Frank Hutter. Decoupled weight decay regularization, 2019.

Michael Mathieu, Camille Couprie, and Yann LeCun. Deep multi-scale video prediction beyond mean square error, 2016.

Ramesh Nallapati, Bowen Zhou, Cicero Nogueira dos Santos, Caglar Gulcehre, and Bing Xiang. Abstractive text summarization using sequence-to-sequence rnns and beyond, 2016.

Shashi Narayan, Shay B. Cohen, and Mirella Lapata. Don't give me the details, just the summary! Topic-aware convolutional neural networks for extreme summarization, 2018.

Catherine Olsson, Nelson Elhage, Neel Nanda, Nicholas Joseph, Nova DasSarma, Tom Henighan, Ben Mann, Amanda Askell, Yuntao Bai, Anna Chen, Tom Conerly, Dawn Drain, Deep Ganguli, Zac Hatfield-Dodds, Danny Hernandez, Scott Johnston, Andy Jones, Jackson Kernion, Liane Lovitt, Kamal Ndousse, Dario Amodei, Tom Brown, Jack Clark, Jared Kaplan, Sam McCandlish, and Chris Olah. In-context learning and induction heads. Transformer Circuits Thread, 2022. [Link].

OpenAI. Gpt-4 technical report, 2023.

Long Ouyang, Jeff Wu, Xu Jiang, Diogo Almeida, Carroll L. Wainwright, Pamela Mishkin, Chong Zhang, Sandhini Agarwal, Katarina Slama, Alex Ray, John Schulman, Jacob Hilton, Fraser Kelton, Luke Miller, Maddie Simens, Amanda Askell, Peter Welinder, Paul Christiano, Jan Leike, and Ryan Lowe. Training language models to follow instructions with human feedback, 2022.

Koyena Pal, Jiuding Sun, Andrew Yuan, Byron C. Wallace, and David Bau. Future lens: Anticipating subsequent tokens from a single hidden state, 2023.

Weizhen Qi, Yu Yan, Yeyun Gong, Dayiheng Liu, Nan Duan, Jiusheng Chen, Ruofei Zhang, and Ming Zhou. Prophetnet: Predicting future n-gram for sequence-to-sequence pre-training, 2020.

Jesse Read, Bernhard Pfahringer, Geoffrey Holmes, and Eibe Frank. Classifier chains: A review and perspectives. Journal of Artificial Intelligence Research, 70:683–718, 2021.

Melissa Roemmele, Cosmin Adrian Bejan, and Andrew S Gordon. Choice of plausible alternatives: An evaluation of commonsense causal reasoning. In 2011 AAAI Spring Symposium Series, 2011.

Andrea Santilli, Silvio Severino, Emilian Postolache, Valentino Maiorca, Michele Mancusi, Riccardo Marin, and Emanuele Rodolà. Accelerating transformer inference for translation via parallel decoding. arXiv preprint arXiv:2305.10427, 2023.

Maarten Sap, Hannah Rashkin, Derek Chen, Ronan LeBras, and Yejin Choi. Socialiqa: Commonsense reasoning about social interactions, 2019.

Zhilin Yang, Zihang Dai, Yiming Yang, Jaime Carbonell, Russ R Salakhutdinov, and Quoc V Le. Xlnet: Generalized autoregressive pretraining for language understanding. In Advances in neural information processing systems, pages 5753–5763, 2019.

Rowan Zellers, Ari Holtzman, Yonatan Bisk, Ali Farhadi, and Yejin Choi. Hellaswag: Can a machine really finish your sentence?, 2019.
Figure S10: Decoding speeds and latencies with self-speculative decoding relative to standard autoregressive decoding. We use k heads of a 4-token prediction model and evaluate decoding speeds of a code model as explained in Table S2. All numbers are relative to the autoregressive (k = 1) baseline with the same batch size. [Two panels: relative throughput and relative latency vs. batch size, for k = 1, 2, 3, 4.]
Table S2: Relative speedups with self-speculative decoding. For Wikipedia and books we prompt a 7B parameter model
trained on 500B tokens, and for code we prompt a 7B parameter model trained on 1T tokens of code on 4200 sequences of
512 tokens from a test dataset not seen during training, and generate completions consisting of 512 tokens using greedy
self-speculative decoding (Stern et al., 2018) using the indicated number of heads from a 4-token prediction model. Note
that the maximal speedup that can be obtained with self-speculative decoding using k heads is k. The last column shows the
average number of tokens retrieved from a forward containing this sequence (both verification and prediction). The speedup
was evaluated at the maximal batch size of 42, but is constant across batch sizes (Figure S10).
Table S3: Relative speedups with self-speculative decoding with byte-level models on code. We prompt the 7B parameter
models from Section 3.3 on 4096 sequences of 1024 bytes of code not seen during training, and generate completions
consisting of 1024 bytes using greedy self-speculative decoding (Stern et al., 2018) as in Table S2. The speedup was
evaluated at a batch size of 16.
n = 8 | n = 16 | n = 32
# Heads used  Rel. speedup  Tokens / forward  Rel. speedup  Tokens / forward  Rel. speedup  Tokens / forward
1 1.00 1.00 1.00 1.00 1.00 1.00
2 1.94 1.98 1.94 1.98 1.93 1.97
4 3.67 3.84 3.63 3.81 3.62 3.80
8 6.39 7.04 6.25 6.92 6.22 6.89
12 − − 8.07 9.36 8.01 9.30
16 − − 9.24 11.20 9.15 11.15
20 − − − − 9.83 12.61
24 − − − − 10.34 13.67
28 − − − − 10.55 14.58
32 − − − − 10.84 15.35
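To illustrate how these speedups arise, the following is a simplified sketch of greedy self-speculative decoding (Stern et al., 2018) with a k-head multi-token predictor. The model interface (logits of shape (batch, seq, k, vocab)) and all names are assumptions for illustration, not the paper's code:

```python
import torch

@torch.no_grad()
def self_speculative_decode(model, prompt_ids, k, max_new_tokens):
    """Greedy draft-and-verify decoding with k prediction heads (simplified sketch)."""
    ids = list(prompt_ids)
    produced = 0
    while produced < max_new_tokens:
        logits = model(torch.tensor([ids]))[0]          # (seq, k, vocab); head j at position t predicts token t + j + 1
        draft = [int(logits[-1, j].argmax()) for j in range(k)]
        ids.append(draft[0])                             # the head-0 token is the ordinary greedy next token
        produced += 1
        # verify the k - 1 speculative tokens with one forward pass over the extended sequence
        verify = model(torch.tensor([ids + draft[1:]]))[0]
        base = len(ids) - 1                              # index of the last committed token
        for j in range(1, k):
            if int(verify[base + j - 1, 0].argmax()) != draft[j]:
                break                                    # first mismatch: discard the remaining drafts
            ids.append(draft[j])
            produced += 1
    return ids
```

A production implementation would reuse the verification forward pass to draft the next block (and keep the corrected token at the first mismatch), which is what turns the accepted-tokens-per-forward numbers above into wall-clock speedups.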
B. Alternative architectures
Table S4: Alternative architectures improve on the baseline, but not as consistently. Alternative architectures for multi-token prediction are worth exploring to improve efficiency. Here we tried anticausal, causal and linear head variants and found no significant improvement over the parallel architecture.
The architecture described in Section 2 is not the only sensible option, but proved technically viable and well-performing in
our experiments. We describe and compare alternative architectures in this section.
Replicated unembeddings Replicating the unembedding matrix n times is a simple method for implementing multi-token
prediction architectures. However, it requires matrices with shapes (d, nV ) in the notation of Section 2, which is prohibitive
for large-scale trainings.
Linear heads Apart from using a single transformer layer for the heads Hi , other architectures are conceivable. We
experimented with a single linear layer without any nonlinearity as heads, amounting to linear probing of the model’s
residual representation z. Architectures with more than one layer per head are also possible, but we did not pursue this
direction further.
Causal and anticausal variant Instead of making the prediction heads Pi (xt+i | zt:1 ) architecturally independent of each other, we can also allow them to rely on other heads' (pre-unembedding) outputs. In a causal variant, later prediction heads are applied on top of the previous ones, i.e. the i-th prediction head Pi is given by

Pθ (xt+i | ·) = softmax ◦ fu ◦ fhi ◦ fhi−1 ◦ · · · ◦ fh1 ◦ fs .

In another, anticausal variant, the network starts by predicting the most distant tokens before gradually refining up to the following token:

Pθ (xt+i | ·) = softmax ◦ fu ◦ fhi ◦ fhi+1 ◦ · · · ◦ fhn ◦ fs .
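As a sketch of how the parallel, causal and anticausal variants differ only in wiring, here is a compact module with hypothetical names; make_head would construct, e.g., a transformer layer (or nn.Identity for linear heads). This is illustrative, not the paper's code:

```python
import torch.nn as nn

class MultiTokenHeads(nn.Module):
    """Sketch of parallel / causal / anticausal prediction-head wiring."""

    def __init__(self, d_model, vocab_size, n_heads, make_head, variant="parallel"):
        super().__init__()
        self.heads = nn.ModuleList([make_head() for _ in range(n_heads)])  # f_{h_1..n}
        self.unembed = nn.Linear(d_model, vocab_size, bias=False)          # shared f_u
        self.variant = variant

    def forward(self, z):
        """z: trunk output f_s(x). Returns one logit tensor per future offset."""
        n = len(self.heads)
        if self.variant == "parallel":          # each head reads the trunk output directly
            feats = [head(z) for head in self.heads]
        elif self.variant == "causal":          # head i stacks on heads 1..i-1
            feats, h = [], z
            for head in self.heads:
                h = head(h)
                feats.append(h)
        else:                                   # anticausal: head i stacks on heads n..i+1
            feats, h = [None] * n, z
            for i in reversed(range(n)):
                h = self.heads[i](h)
                feats[i] = h
        return [self.unembed(f) for f in feats]
```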
These architectures likewise allow a sequential forward/backward order as the parallel architecture from Section 2. This is
described in Figure S11.
[Diagram: input, trunk, Head 1 / Loss 1 and Head 2 / Loss 2, with the forward/backward order indicated by numbers 1–10.]

Figure S11: Order of the forward/backward in a causal n-token prediction model with n = 2 heads. Like in the forward/backward depicted for parallel prediction heads in Figure 2, we avoid materializing all unembedding layer gradients in memory simultaneously and reduce peak GPU memory usage significantly. The iteration over the heads starts with the one furthest from the trunk. At each head, a gradient from the succeeding prediction heads and from the head's own loss are accumulated for both the head's output and its weights.
C. Training speeds
Table S5: Training time relative to next-token prediction training. The slight overhead when using multi-token prediction is explained by a suboptimal use of Fully Sharded Data Parallel: in our implementation, doing separate backward passes for each head loses the overlap of layer weight communication and computation, which incurs a very slight overhead that could be removed with a more careful implementation.
D. Finetuning
Table S6: Finetuning Llama 2 with multi-token prediction does not significantly improve performance. We tried to finetune Llama 2 with 4-token prediction, but this did not yield significant improvements compared to the baseline. We suppose that the new loss changes the objective too abruptly relative to the pretrained initialization and the model never fully recovers. We still see some improvements, for example on MBPP pass@1. All runs use 200B tokens of code.
Table S7: Scaling model size. Full results of scaling model size with n = 1, 2 and 4.

Model Size  Fut  MBPP@1  MBPP@10  MBPP@100  HumanEval@1  HumanEval@10  HumanEval@100
0.3B  1  1.8  10.4  29.9  1.9  5.0  10.9
0.3B  2  1.7  10.1  27.2  1.5  4.4  10.3
0.3B  4  1.0  6.3  20.1  1.2  4.0  8.6
0.6B  1  4.7  21.0  45.2  2.9  8.5  16.7
0.6B  2  4.6  21.0  44.7  3.2  8.9  16.2
0.6B  4  3.0  15.6  38.0  2.7  7.7  15.5
1.3B  1  6.8  27.0  51.0  4.6  13.1  24.3
1.3B  2  7.3  27.5  51.7  5.4  13.6  23.3
1.3B  4  7.4  27.6  50.1  4.8  12.3  22.5
3B  1  11.1  36.4  60.4  7.2  17.2  29.8
3B  2  11.8  37.2  60.5  8.0  18.2  31.2
3B  4  12.7  37.6  61.1  7.2  18.5  33.3
6.7B  1  23.9  54.2  74.7  12.8  29.3  51.7
6.7B  2  24.7  54.8  76.4  13.2  32.2  53.9
6.7B  4  26.0  55.8  76.0  13.8  33.2  58.5
13B  1  26.0  57.1  77.0  14.1  33.6  56.0
13B  2  30.5  60.5  79.4  15.2  36.9  60.0
13B  4  30.5  61.0  79.2  15.8  38.6  63.5
[Plots: per-benchmark accuracy over training steps (nq, piqa, siqa, tqa, ...) for n = 1, 2, 4.]

Figure S12: Multiple-token training with 7B models doesn't improve performance on choice tasks. This figure shows the evolution of average accuracy on standard NLP benchmarks (ARC Challenge, COPA, HellaSwag, MMLU, Natural Questions, PIQA, SIQA and TriviaQA). For the 7B models trained on 200B tokens of language data, the 2-future-token model has the same performance as the baseline and the 4-future-token model regresses a bit. Larger model sizes might be necessary to see improvements on these tasks.
Table S8: Comprehensive evaluation on abstractive text summarization. ROUGE-n (n-gram overlap) and ROUGE-L
(longest common subsequence overlap) F1 scores for 7B models trained on 200B and 500B tokens of natural language,
respectively. The last three columns correspond to models trained on 500B tokens, the previous three to models trained on
200B tokens. Shown are numbers of the n = 1 baseline and the absolute difference of n = 2 and n = 4 models trained
on the same number of tokens. Summary-level ROUGE-L (“ROUGE-Lsum ”) is reported where it differs from ROUGE-L.
Model checkpoints with maximal validation ROUGE-L F1 are selected separately for each dataset and model type
and reported in the first row corresponding to each dataset. Boldface for numbers within 0.05 difference to the best one for
each dataset size separately.
Task Metric Baseline 200B ∆n=2 ∆n=4 Baseline 500B ∆n=2 ∆n=4
CNN/Dailymail (Nallapati et al., 2016)
evaluation epoch 2 2 2 2 2 2
ROUGE-1 42.88 +0.74 +0.74 43.77 +0.55 +0.50
ROUGE-2 19.56 +0.52 +0.53 20.34 +0.52 +0.34
ROUGE-3 11.11 +0.39 +0.35 11.69 +0.36 +0.19
ROUGE-L 29.72 +0.66 +0.49 30.51 +0.48 +0.37
ROUGE-Lsum 40.18 +0.72 +0.68 41.02 +0.56 +0.52

Multi-News (Fabbri et al., 2019)
evaluation epoch 1 3 3 2 3 2
ROUGE-1 44.48 +1.70 +1.72 45.87 +1.05 +0.69
ROUGE-2 16.88 +0.44 +0.70 17.56 +0.42 +0.40
ROUGE-3 9.63 -0.06 +0.17 9.91 +0.22 +0.18
ROUGE-L 23.82 +0.17 +0.40 24.22 +0.20 +0.26

OrangeSum (Eddine et al., 2021)
evaluation epoch 2 2 3 2 1 3
ROUGE-1 32.95 +0.41 +0.35 33.37 +0.32 +0.78
ROUGE-2 13.90 +0.31 +0.36 14.22 +0.25 +0.53
ROUGE-3 8.01 +0.19 +0.21 8.12 +0.22 +0.48
ROUGE-L 23.62 +0.36 +0.51 23.91 +0.23 +0.66

pn-summary (Farahani et al., 2021)
evaluation epoch 1 1 1 1 2 3
ROUGE-1 1.03 +0.02 0.00 0.92 +0.09 +0.05
ROUGE-2 0.13 +0.02 +0.03 0.15 0.00 0.00
ROUGE-3 0.02 0.00 +0.02 0.02 0.00 +0.02
ROUGE-L 1.02 +0.03 +0.01 0.91 +0.09 +0.05

SAMSum (Gliwa et al., 2019)
evaluation epoch 3 3 3 3 3 3
ROUGE-1 51.39 +0.70 +0.63 52.54 -0.24 +0.69
ROUGE-2 26.46 +0.76 +0.30 27.74 -0.20 +0.82
ROUGE-3 16.40 +0.91 +0.28 17.56 -0.30 +0.71
ROUGE-L 42.59 +0.90 +0.51 43.92 -0.10 +0.63

ThaiSum (Chumpolsathien, 2020)
evaluation epoch 2 3 3 3 3 3
ROUGE-1 45.08 +0.63 +1.12 45.48 +0.77 +0.91
ROUGE-2 27.85 +0.30 +0.73 28.07 +0.74 +0.64
ROUGE-3 15.73 +0.04 +0.43 15.82 +0.50 +0.30
ROUGE-L 44.92 +0.64 +1.12 45.31 +0.76 +0.89

WikiSummary (Farahani, 2020)
evaluation epoch 3 3 3 3 3 3
ROUGE-1 10.16 +0.67 -0.23 12.80 -0.17 -0.99
ROUGE-2 4.46 -0.03 -0.09 6.17 -0.11 -0.69
ROUGE-3 1.31 +0.21 +0.13 1.98 -0.08 -0.33
ROUGE-L 10.11 +0.65 -0.28 12.69 -0.17 -0.99

XSum (Narayan et al., 2018)
evaluation epoch 2 2 3 2 2 3
ROUGE-1 42.16 +0.71 +1.07 43.42 +0.78 +0.67
ROUGE-2 19.19 +0.54 +0.55 20.32 +0.68 +0.34
ROUGE-3 10.43 +0.38 +0.28 11.23 +0.48 +0.20
ROUGE-L 34.03 +0.67 +0.92 35.18 +0.79 +0.63
Table S9: Performance on abstractive text summarization. ROUGE-L (longest common subsequence overlap) F1 score
for 7B models trained on 200B and 500B tokens of natural language. We finetune the respective models on each task’s
training data separately for a given number of epochs and select the checkpoints with maximal ROUGE-L F1 on the
validation dataset. The second and fifth columns report the numbers for a next-token prediction model, while the third, fourth, sixth and seventh columns report the absolute improvements for 2-token and 4-token prediction models trained on the same amount of data, respectively. Boldface for numbers within 0.05 of the best one for each dataset size separately.
Table S10: Summary statistics for abstractive text summarization evaluations. Reported are averages for ROUGE-n and
ROUGE-L metrics across all datasets from Table S8, separately for precision, recall and F1 score. Both 2-token and 4-token
prediction models outperform the next-token prediction baseline. Trained on 500B tokens, 4-token prediction models appear
better at recall metrics while 2-token prediction models appear better at precision metrics. Model checkpoints are selected
as described in Table S8. Boldface for numbers within 0.05 difference to the best one for each dataset size separately.
Metric Aspect Baseline 200B ∆n=2 ∆n=4 Baseline 500B ∆n=2 ∆n=4
ROUGE-1 F1 33.77 +0.70 +0.68 34.77 +0.39 +0.41
ROUGE-1 precision 35.76 +0.88 +0.83 37.03 +0.42 -0.04
ROUGE-1 recall 34.37 +0.45 +0.45 35.14 +0.35 +0.68
ROUGE-2 F1 16.06 +0.36 +0.39 16.82 +0.29 +0.30
ROUGE-2 precision 16.97 +0.40 +0.43 17.91 +0.29 +0.03
ROUGE-2 recall 16.34 +0.28 +0.35 16.99 +0.32 +0.48
ROUGE-3 F1 9.08 +0.26 +0.23 9.54 +0.18 +0.22
ROUGE-3 precision 9.59 +0.29 +0.28 10.17 +0.18 +0.05
ROUGE-3 recall 9.26 +0.21 +0.20 9.65 +0.21 +0.35
ROUGE-L F1 26.23 +0.51 +0.46 27.08 +0.28 +0.31
ROUGE-L precision 27.79 +0.62 +0.55 28.85 +0.28 -0.09
ROUGE-L recall 26.71 +0.37 +0.32 27.40 +0.28 +0.57
ROUGE-Lsum F1 27.53 +0.52 +0.48 28.40 +0.29 +0.33
ROUGE-Lsum precision 29.07 +0.64 +0.58 30.15 +0.29 -0.08
ROUGE-Lsum recall 28.13 +0.35 +0.33 28.81 +0.29 +0.60
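For reference, the ROUGE-L scores above are based on the longest common subsequence (LCS) between candidate and reference; a minimal self-contained sketch of the F1 variant (token lists in, no stemming, no bootstrapping):

```python
def rouge_l_f1(candidate, reference):
    """ROUGE-L F1: LCS-based overlap between two token lists (sketch)."""
    m, n = len(candidate), len(reference)
    # dynamic-programming table for longest common subsequence length
    dp = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m):
        for j in range(n):
            if candidate[i] == reference[j]:
                dp[i + 1][j + 1] = dp[i][j] + 1
            else:
                dp[i + 1][j + 1] = max(dp[i][j + 1], dp[i + 1][j])
    lcs = dp[m][n]
    if lcs == 0:
        return 0.0
    precision = lcs / m
    recall = lcs / n
    return 2 * precision * recall / (precision + recall)

# example: rouge_l_f1("the cat sat".split(), "the cat was sitting".split())
```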
[Plots: GSM8K pass@10 (%) and pass@100 (%) vs. sampling temperature.]

Figure S13: Performance on the mathematical reasoning benchmark GSM8K (Cobbe et al., 2021). We evaluate pretrained next-token and multi-token prediction models trained on 200B and 500B tokens of natural language in 8-shot mode using nucleus sampling (Holtzman et al., 2020) with probability mass 0.95 and various sampling temperatures. Reported are the frequencies of the correct final answer to appear among k samples, for k = 1, 10, 100, estimated from 200 samples like in code generation benchmarks (Chen et al., 2021). After 200B tokens, the 2-token prediction model has a clear advantage over the next-token baseline, but the order reverses after 500B tokens. The 4-token prediction model is worse throughout. We interpret this similarly to the findings in Section 4.1: the follow-your-nose chains-of-thought required for GSM8K may be difficult to learn from a limited amount of data, attesting to the data efficiency of multi-token prediction training. Once the correct circuits for correct autoregressive chains-of-thought in this domain have formed, however, multi-token prediction comes at a cost.
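The pass@k values above and in the code benchmarks use the unbiased estimator of Chen et al. (2021); for reference:

```python
from math import comb

def pass_at_k(n: int, c: int, k: int) -> float:
    """Unbiased pass@k estimate from n samples of which c are correct (Chen et al., 2021)."""
    if n - c < k:
        return 1.0
    return 1.0 - comb(n - c, k) / comb(n, k)

# example: 200 samples per problem, 23 correct -> pass_at_k(200, 23, 10)
```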
[Plot: induction success vs. parameters (M) for n = 1 (baseline) and n = 2 (ours).]

Figure S14: Induction capability of n-token prediction models trained on higher-quality data. Shown is accuracy on the second token of two-token names that have already been mentioned previously. Training on a 9:1 mix of a books dataset and the children stories dataset, we observe that induction capability forms significantly earlier in training (not shown here) and to a higher degree. We believe that this is explained both because our evaluation dataset no longer contains out-of-distribution tokens (Section 4.1) and because the higher-quality data contained in the books dataset makes induction necessary earlier on (especially for small models, cf. Singh et al. (2023)). In particular, by enforcing the formation of induction capability in the model by means of the dataset – instead of the loss – the advantage of 2-token prediction models on this task disappears except for the smallest models: feature learning converts the task into a pure next-token prediction task.
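A sketch of how such an induction score can be computed; the model interface and data layout here are assumptions for illustration, not the paper's evaluation code:

```python
def induction_accuracy(predict_next, sequences, second_occurrence_positions):
    """Accuracy on the second token of a two-token name seen earlier in the sequence (sketch).

    predict_next(prefix) -> predicted next token id (hypothetical greedy interface).
    second_occurrence_positions[i]: positions t in sequences[i] where a name's first
    token reappears, so the correct continuation is sequences[i][t + 1].
    """
    correct = total = 0
    for seq, positions in zip(sequences, second_occurrence_positions):
        for t in positions:
            total += 1
            correct += int(predict_next(seq[: t + 1]) == seq[t + 1])
    return correct / max(total, 1)
```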
The prediction difficulty of different tokens in natural text varies greatly. Some tokens may be the continuations of partial words that are uniquely determined from their preceding context without any effort, while others may require predicting theorem names in difficult mathematical proofs or the correct answer to an exam question. Language models with residual connections have been shown to refine their output token distribution with each successive layer, and can be trained with early exit strategies that spend variable amounts of computational resources per token position. Multi-token prediction losses explicitly encourage information-sharing between adjacent token positions and can thus be viewed as a method to learn to allocate computational resources in language models more efficiently to the tokens that benefit most from it.
To check the truth of this hypothesis, we augment the polynomial arithmetic task from Section 4.2 with a varying number of
pause tokens (Goyal et al., 2023) inserted between the question and a token that denotes the beginning of the answer. Pause
tokens introduce additional computational resources that can be expended for computations that are expected to be useful
later on in the sequence, in other words: to start thinking about the answer. According to the computation-sharing hypothesis,
multi-token prediction models learn information-sharing and thus computation-sharing between token positions more easily,
and may be better at making use of these additional computational resources than next-token prediction models are. In
Figure S15, we show the evaluation results on the polynomial arithmetic task with a fixed number of pause tokens inserted
both at training and evaluation time. Multi-token prediction models likewise outperform next-token prediction models
on these task variants across task difficulties and model sizes. However, we do not see strong evidence of a widening or shrinking of this gap, i.e. these experiments do not allow us to conclude on the veracity of the computation-sharing hypothesis.
In Table S11, we report results from another experiment in the same spirit: by adding spaces and newlines to HumanEval
and MBPP prompts, we add “pause tokens” in a somewhat natural way. According to these results, multi-token prediction
models have a slight advantage at using this additionally provided compute, but the effect is marginal.
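The task modification itself is simple to state in code; a sketch with hypothetical token ids:

```python
def with_pause_tokens(question_ids, answer_ids, num_pauses, pause_id, equals_id):
    """Insert a fixed number of pause tokens between the question and the answer marker (sketch)."""
    return question_ids + [pause_id] * num_pauses + [equals_id] + answer_ids

# used identically at training and evaluation time, e.g. with num_pauses = 5 or 10
```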
[Plots: accuracy (%) vs. number of operations, in-domain and out-of-domain, for (a) 5 pause tokens and (b) 10 pause tokens.]

Figure S15: Accuracy on a polynomial arithmetic task with varying number of operations per expression and pause tokens. We train and evaluate models on the polynomial arithmetic task described in Section 4.2, modified by the addition of pause tokens (Goyal et al., 2023): between the question and the equality sign that indicates the beginning of the answer, we add a constant number of pause tokens both in training and evaluation. For both a variant with five and with ten pause tokens, respectively, we observe improvements from using multi-token prediction comparable to the ones obtained in the case without pause tokens (Figure 8).
[Plot: accuracy (%) vs. number of operations, in-domain and out-of-domain, for 30M and 100M models with n = 1, 2, 4.]

Figure S16: Accuracy on a polynomial arithmetic task for two model sizes. We train and evaluate models with 30M and 100M parameters on the polynomial arithmetic task described in Section 4.2. Tripling the model size has a smaller effect on performance than replacing the next-token prediction loss by multi-token prediction. Shown are two independent runs per configuration and their means, the 100M parameter models being identical to the ones in Figure 8.
Let us explain each of the terms. The entropy terms denote the uncertainty contained in the ground-truth random variables X and Y.² The term H(Y | X) is a classical next-token entropy for the prefix (C, X). The conditional entropy H(X | Y) is a more theoretical entity not modelled by causal models. It describes the uncertainty about X given the prefix C and suffix Y, and therefore captures the local variations of X that do not affect the continuation of the text Y. The mutual information I(X; Y), on the other hand, describes the information about Y contained in X (and vice versa) and therefore captures the variations of X which constrain the continuation of the text.

² In particular, they do not refer to model predictions.

However, the argument given in Section 5.2 relies on the assumption that multi-token prediction losses obey a similar decomposition as the sum of the ground-truth entropies themselves. Let us make this rigorous. Denote by p(x, y) the joint distribution of X and Y, by p(x) (short for p_X(x)) the marginal distribution of X and by p(y) the one of Y. Denote the densities of the model's predictions by q(x, y), q(x) and q(y), respectively, conditional distributions by p(x | y), the Kullback-Leibler divergence from q to p by D(p ∥ q) and the cross-entropy from q to p by H(p, q).

Definition L.1. The conditional cross-entropy H(p_{X|Y}, q_{X|Y}) of X conditioned on Y from q to p is defined as the expectation under y of the cross-entropy between the distributions p_X and q_X conditioned on y, in formulas:

H(p_{X|Y}, q_{X|Y}) = Σ_y p(y) H(p_{X|y}, q_{X|y}) = −Σ_{x,y} p(x, y) log q(x | y).
Definition L.2. The relative mutual information I_{p∥q}(X; Y) of X and Y from q relative to p is defined by

I_{p∥q}(X; Y) = D(p ∥ q_X ⊗ q_Y) − D(p ∥ q).

We have I_{p∥q}(X; Y) = H(p_X, q_X) + H(p_Y, q_Y) − H(p, q); I_{p∥p}(X; Y) = I_p(X; Y) reduces to standard mutual information under the distribution p; and I_{p∥q}(X; Y) is symmetric in X and Y but can be negative.

We have the following relative version of the decomposition H(X) = H(X | Y) + I(X; Y).

Lemma L.3. H(p_X, q_X) = H(p_{X|Y}, q_{X|Y}) + I_{p∥q}(X; Y).
Proof. We calculate

H(p_X, q_X) = −Σ_x p(x) log q(x)
            = −Σ_{x,y} p(x, y) log q(x)
            = −Σ_{x,y} p(x, y) log [ (q(x) q(y) / p(x, y)) · (p(x, y) / q(x, y)) · (q(x, y) / q(y)) ]
            = D(p ∥ q_X ⊗ q_Y) − D(p ∥ q) − Σ_{x,y} p(y) p(x | y) log q(x | y)
            = I_{p∥q}(X; Y) + Σ_y p(y) H(p_{X|y}, q_{X|y}).
Symmetrizing, we get the desired relative version of H(X) + H(Y) = H(X | Y) + 2 I(X; Y) + H(Y | X):

H(p_X, q_X) + H(p_Y, q_Y) = H(p_{X|Y}, q_{X|Y}) + 2 I_{p∥q}(X; Y) + H(p_{Y|X}, q_{Y|X}).

Setting p to be the empirical distribution of the training data, the left-hand side describes the cross-entropy loss used to train 2-token prediction models. The right-hand side gives the decomposition into a local cross-entropy term, a mutual information term with weight two and a shifted next-token cross-entropy term. We interpret this as follows: by adding the term H(p_Y, q_Y) to the loss, 2-token prediction incentivizes models to precompute features which will become useful for predicting Y in the next step and increases the weight of the relative mutual information term in the loss. What does relative mutual information actually mean? By interpreting the Kullback-Leibler divergence D(p ∥ q) as the average number of additional bits needed to send data from p with a code optimized for q instead of p, we see that minimizing

I_{p∥q}(X; Y) = D(p ∥ q_X ⊗ q_Y) − D(p ∥ q)

means minimizing the average number of additional bits needed to send data from p with a code optimized for q that treats X and Y as independent compared to one that does not. If this number is small, q managed to exploit the mutual information of X and Y under p.
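The decomposition is easy to verify numerically; a small self-contained check of Lemma L.3 on random joint distributions (illustrative only, not part of the paper):

```python
import numpy as np

rng = np.random.default_rng(0)
p = rng.random((3, 4)); p /= p.sum()          # ground-truth joint p(x, y), x along axis 0
q = rng.random((3, 4)); q /= q.sum()          # model joint q(x, y)

def cross_entropy(a, b):                       # H(a, b) = -sum a log b
    return -(a * np.log(b)).sum()

px, py, qx, qy = p.sum(1), p.sum(0), q.sum(1), q.sum(0)
H_cond = -(p * np.log(q / qy)).sum()           # H(p_{X|Y}, q_{X|Y}) = -sum_{x,y} p(x,y) log q(x|y)
I_rel = cross_entropy(px, qx) + cross_entropy(py, qy) - cross_entropy(p, q)
assert np.isclose(cross_entropy(px, qx), H_cond + I_rel)   # Lemma L.3
```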
Figure S17: Example of a sequential prediction task with derailing. The goal is to go from the arrow to the trophy. Turning around is not allowed. Most transitions are unique, but there are two turns to be taken correctly, the consequential decisions (a) and (c). Turn (b) is an inconsequential decision: the paths join right after it. Next to transitions (a) and (b), we sketch how a 4-step prediction loss can place more emphasis on consequential transitions than inconsequential ones during teacher-forced training. Next to transition (c), we sketch how a 4-step lookahead can prevent models from taking irreversible suboptimal decisions during autoregressive decoding.
More formally, assume that the language model is deployed in a reinforcement learning setting like in reinforcement learning
from human feedback (Ouyang et al., 2022) (states are prompts followed by the partial sequence of tokens xt:1 generated so
far, actions are single tokens xt+1 to generate, rewards are external R(xt:1 )). The quantity
V_π(x_{t:1}) = E_{x_{t+i} ∼ π(x_{t+i−1:1}), i ≥ 1} [ Σ_{i ≥ 0} R(x_{t+i:1}) ]
quantifies the importance of the decision xt+1 on the value thereafter. Choice points can formally be viewed as steps t for
which σπ (xt:1 ) is large, while inconsequential points are steps where it is low. Note that for completion models, there is no
explicit reward, and our argument is merely meant to illustrate what we mean by choice points.
Derailing denotes a situation where autoregressive generation of trajectories from M at inference time results in bad
outcomes after M made a mistake on a choice point. Even if subsequently, M acts optimally given this choice, the final
outcome can be significantly worse than the outcome of the optimal trajectory.
Staying in the teacher-forced setting, we ask: What is the impact of training M with n-step prediction instead of next-
step prediction on this task? Say xt → xt+1 is a choice point in an optimal trajectory with the suboptimal choice
being xt → x̃t+1 (Figure S17 (a)). Assume that the trajectories preceding xt and succeeding xt+1 and x̃t+1 consist of
inconsequential transitions, the latter denoted by x̃t+j → x̃t+j+1 . We will compare the losses of a teacher-forced next-step
prediction model and a teacher-forced n-step prediction model on the partial trajectory (xt−n+1 , . . . xt ). For the next-step
prediction model, the predictions are (xt−n+2 , . . . , xt , x̃t+1 ) with a single wrong prediction. The predictions of an n-step
prediction model at time t − n + i, i = 1, . . . , n are (xt−n+i+1 , . . . , xt , x̃t+1 , . . . , x̃t+i ) with i wrong predictions. In other
words, an n-step prediction model receives 1 + … + n = n(n+1)/2 loss terms pertaining to such a choice point and its consequences, while each inconsequential transition (Figure S17 (b)) is only reinforced n times as often as in a next-step prediction model. In other words, choice points receive on average (n+1)/2 times more importance in the loss of n-step prediction models than in next-step prediction models.
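A quick sanity check of this counting argument (illustrative):

```python
def choice_point_loss_terms(n):
    """Loss terms that involve the choice point and its consequences for an n-step predictor."""
    return sum(range(1, n + 1))                # 1 + 2 + ... + n = n(n + 1) / 2

for n in (1, 2, 4, 8):
    inconsequential = n                        # an ordinary transition appears in n loss terms
    print(n, choice_point_loss_terms(n) / inconsequential)   # prints (n + 1) / 2: 1.0, 1.5, 2.5, 4.5
```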
As argued in Section 5.1, we believe that this model captures important features of training and inference with language
models: choice points are semantically important turning points in the generated texts, such as the final answer to a question
or a specific line of code, while inconsequential decisions can be a choice among synonyms or of variable names in code.
Apart from this training dynamics point of view, we hypothesize that n-step prediction also allows the formation of circuits
that specifically spot inconsistencies between predictions for earlier and later steps. For instance, if in an early layer of
the model, it can be predicted that a decision xt → x̃t+1 leads to suboptimal outcomes x̃t+n (Figure S17 (c)), subsequent
layers can reduce the probability of xt → x̃t+1 in the model’s next-step prediction. Such behaviors also happen in next-step
prediction models given enough capacity, but our experiments in Section 4.2 point to the fact that circuits of this kind are
formed more easily in multi-step architectures that enforce the required information x̃t+n to be available to the model when
predicting x̃t+1 . We believe that this situation appears frequently in natural language and code modelling, for instance where
an initial answer to a question contradicts the results of the chain of thought brought forward with the intention to justify it.
In more general terms, this situation arises whenever predicting first x̃t+i for some 1 < i ≤ n and then x̃t+1 based on x̃t+i is easier than predicting x̃t+1 directly. We discuss this phenomenon of factorization orders in the next section and present a
specific instance of it that frequently appears in modelling natural language.
While moving forward in time is certainly the most natural choice of factorization order, there exist cases where it is
suboptimal. In inflectional languages, for instance, agreement between related sentence parts is a frequent pattern with one
word directing the grammatical forms of others. Consider the German sentence
where "genügen" requires a dative case object and then "Seele" requires the possessive pronoun "mein" to be in female
singular dative form "meiner" and the participle "durstend" to be in female singular dative form in weak declination
"durstenden" because it follows "meiner". In other words, the factorization order
is arguably an easier one for constructing the above sentence. Humans as well as language models therefore have to perform
this factorization (which deviates from the causal order in which predictions take place!) within their latent activations, and
a 4-token prediction loss makes this easier as it explicitly encourages models to have all information about the successive 4
tokens in its latent representations.
3
roughly: How could words be enough for my thirsty soul?
M. Training hyperparameters
Table S13: Overview of all training hyperparameters used. We schedule all learning rates with a linear warmup and
cosine decay (Loshchilov and Hutter, 2017) to a fraction of the peak learning rate which is depicted in the last column
(“decay ratio”). All experiments use the Adam (Kingma and Ba, 2015) optimizer with β1 = 0.9, β2 = 0.95 and decoupled
L2 weight decay (Loshchilov and Hutter, 2019) coefficient 0.1. We clip gradients to a maximal Euclidean norm of 1.0 in all
experiments except CodeContests finetunings, where we use 0.1 instead. Summarization finetunings correspond to three
epochs on all datasets except BigPatent (1 epoch). Byte-level models use the architecture with replicated unembeddings
from Appendix B.
Model Batch size (2^20 tokens) Steps Tokens (B) Warmup steps Peak LR Context length Decay ratio
Model scaling (Section 3.1)
0.3B 8 10,850 91.0 1000 3 × 10−4 4096 0.03
0.6B 8 10,850 91.0 1000 3 × 10−4 4096 0.03
1.3B 8 10,850 91.0 1000 3 × 10−4 4096 0.03
3B 8 10,850 91.0 1000 3 × 10−4 4096 0.03
7B 8 25,000 209.7 2000 3 × 10−4 4096 0.03
13B 8 25,000 209.7 1000 3 × 10−4 4096 0.03
Code models (Section 3)
7B 200B 8 25,000 209.7 2000 3 × 10−4 4096 0.03
7B 500B 7 68,570 503.3 2000 3 × 10−4 4096 0.03
7B 1T 7 136,240 1000.0 2000 3 × 10−4 4096 0.03
Byte-level models (Section 3.3)
7B 314GB 12 25,000 314.6 2000 3 × 10−4 8192 0.03
Language models (Section 3.7)
7B 200B 8 25,000 209.7 2000 3 × 10−4 4096 0.10
7B 500B 8 60,000 503.3 2000 3 × 10−4 4096 0.10
Induction task (Section 4.1)
1M – 1B 0.25 100,000 26.2 2000 10−4 2048 0.03
1M – 1B (Appendix J) 0.5 50000 26.2 2000 10−4 2048 0.03
Arithmetic task (Section 4.2)
30M 0.25 100,000 26.2 2000 10−4 1024 0.03
100M 0.25 100,000 26.2 2000 10−4 2048 0.03
Summarization (Section 3.7)
BigPatent 0.125 76,680 10.1 100 3 × 10−5 4096 0.03
CNN/Dailymail 0.125 7,140 0.9 100 3 × 10−5 4096 0.03
Multi-News 0.125 3,330 0.4 100 3 × 10−5 4096 0.03
OrangeSum 0.125 360 0.0 100 3 × 10−5 4096 0.03
pn-summary 0.125 3,450 0.5 100 3 × 10−5 4096 0.03
SAMSum 0.125 60 0.0 100 3 × 10−5 4096 0.03
ThaiSum 0.125 23,640 3.1 100 3 × 10−5 4096 0.03
WikiSummary 0.125 2,550 0.3 100 3 × 10−5 4096 0.03
XSum 0.125 2,760 0.4 100 3 × 10−5 4096 0.03
CodeContests (Section 3.6)
7B 0.25 13,000 3.6 400 5 × 10−5 4096 0.004
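For reference, the warmup-plus-cosine schedule described in the caption can be written as follows. This is a generic sketch of such a schedule, with parameter names matching the table columns rather than any particular codebase:

```python
import math

def learning_rate(step, total_steps, warmup_steps, peak_lr, decay_ratio):
    """Linear warmup to peak_lr, then cosine decay to decay_ratio * peak_lr (sketch)."""
    if step < warmup_steps:
        return peak_lr * (step + 1) / warmup_steps
    progress = (step - warmup_steps) / max(total_steps - warmup_steps, 1)
    min_lr = decay_ratio * peak_lr
    return min_lr + 0.5 * (peak_lr - min_lr) * (1.0 + math.cos(math.pi * progress))
```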