Token Usage
Tokens are the fundamental units language models use to represent text, and they are also the units used for metering and billing. A token typically corresponds to a short word, a symbol, a number, or a single character, depending on the language and the tokenizer.
For intuitive reference:
- A Chinese character usually counts as ~0.6 tokens
- An English character typically equals ~0.3 tokens
- A single English word or number often maps to 1 token
⚠️ Note: The exact token count depends on the model's tokenizer, so different models may tokenize the same input differently. The authoritative figure is always the token usage the model reports for each request.
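The per-character ratios above can be turned into a rough local estimator. The sketch below is an illustration of those heuristics only, not the model's actual tokenizer; real counts come from the usage reported per request.

```python
def estimate_tokens(text: str) -> float:
    """Rough token estimate from the per-character heuristics above.

    Assumes ~0.6 tokens per Chinese character and ~0.3 tokens per
    other (e.g. English) character. This is an approximation: the
    actual count depends on the model's tokenizer.
    """
    total = 0.0
    for ch in text:
        if "\u4e00" <= ch <= "\u9fff":  # CJK Unified Ideographs
            total += 0.6
        else:
            total += 0.3
    return total

# "hello" -> 5 English characters * 0.3 ≈ 1.5 tokens
print(estimate_tokens("hello"))
```

For billing purposes, always prefer the token counts returned by the API over any local estimate.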