Scaling Behavior for Large Language Models regarding Numeral Systems: An Example using Pythia
Though Large Language Models (LLMs) have shown remarkable abilities in mathematical reasoning, they still struggle to perform numeric operations such as addition and multiplication accurately. Different LLMs tokenize numbers in various ways, and the choice of tokenization affects performance on numeric operations. Currently, there are two representative schemes: 1) tokenize into $1$-digit tokens, and 2) tokenize into $1\sim 3$ digit tokens. The difference is roughly equivalent to using different numeral systems (namely base $10$ or base $10^{3}$). In light of this, we study the scaling behavior of different numeral systems in the context of transformer-based large language models. We empirically show that a base $10$ system is consistently more data-efficient than a base $10^{2}$ or $10^{3}$ system across training data scales and model sizes under from-scratch training settings, while different numeral systems yield very similar fine-tuning performance. We attribute this to the higher token frequencies of a base $10$ system. Additionally, we reveal extrapolation behavior patterns on addition and multiplication. We identify that base $10^{2}$ and base $10^{3}$ systems struggle with token-level discernment and token-level operations. We also shed light on the mechanism learnt by the models.
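The two tokenization schemes contrasted in the abstract can be made concrete with a short sketch. The Python snippet below is illustrative only and is not code from the paper: it splits an integer into single-digit tokens (base $10$) and into chunks of up to three digits (roughly base $10^{3}$). The left-to-right chunking convention is an assumption; actual LLM tokenizers (e.g., BPE-based ones) may segment numbers differently.

```python
# Minimal sketch (not the paper's code) of the two number-tokenization schemes:
# single-digit tokens (base 10) vs. chunks of up to 3 digits (roughly base 10^3).
# Chunks are grouped from the most significant digit, which is one plausible convention.

def tokenize_base10(number: int) -> list[str]:
    """Split a number into single-digit tokens, e.g. 12345 -> ['1','2','3','4','5']."""
    return list(str(number))

def tokenize_base1000(number: int) -> list[str]:
    """Split a number into chunks of up to 3 digits, e.g. 12345 -> ['12', '345']."""
    s = str(number)
    head = len(s) % 3 or 3  # size of the leading (possibly shorter) chunk
    return [s[:head]] + [s[i:i + 3] for i in range(head, len(s), 3)]

if __name__ == "__main__":
    for n in (7, 12345, 9876543):
        print(n, tokenize_base10(n), tokenize_base1000(n))
```

Under the base $10$ scheme every token is one of only ten symbols, so each token appears far more often in a fixed training corpus than any of the up-to-1000 distinct tokens of the base $10^{3}$ scheme; this is the token-frequency intuition behind the data-efficiency claim above.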