Models & Usage Costs
The pricing below is calculated per 1 million tokens. A token represents the smallest unit of text the model processes—it could be a word, number, or punctuation mark. Charges are based on the total number of input tokens processed and output tokens generated per request.
Model & Pricing Details
| MODEL (1) | CONTEXT LENGTH | MAX OUTPUT TOKENS (2) | STANDARD PRICE / 1M INPUT TOKENS (CACHE MISS) | STANDARD PRICE / 1M OUTPUT TOKENS (4) |
|---|---|---|---|---|
| dmind-1 | 33K | 16K | $0.30 | $0.60 |
| dmind-1-mini | 33K | 16K | $0.20 | $0.40 |
- The `dmind-1` model currently points to the latest stable version of our Web3-native LLM.
- If `max_tokens` is not specified, the default maximum output length is 4,096 tokens. You can manually increase this with the `max_tokens` parameter.
- For advanced session handling, caching, or reuse, please refer to our Context Management section.
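As a minimal sketch of the `max_tokens` note above: the request payload below raises the output cap from the 4,096-token default. The exact endpoint, field names, and message format are assumptions (a generic chat-completion shape), not confirmed by this page.

```python
# Hypothetical chat-completion request payload; field names other than
# "max_tokens" and "model" are assumed, not taken from this page.
payload = {
    "model": "dmind-1",
    "messages": [
        {"role": "user", "content": "Explain ERC-20 token approvals."},
    ],
    # Raise the default 4,096-token output cap (model max is 16K).
    "max_tokens": 8192,
}
```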
For reasoning models, all tokens used—including intermediate reasoning and final output—are counted equally toward usage.
Deduction Rules
Token usage is billed as: cost = number of tokens × price per token (prices are quoted per 1 million tokens).
Fees are automatically deducted from your available balance, with free or promotional credits used first when available.
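Applying the formula and deduction order above, billing can be sketched as follows. Prices are taken from the table; the function names and the promo/balance split are illustrative, not part of any official SDK.

```python
def usage_cost(input_tokens: int, output_tokens: int,
               input_price: float, output_price: float) -> float:
    """cost = number of tokens x price per token; prices quoted per 1M tokens.
    For reasoning models, intermediate reasoning tokens count as output."""
    return (input_tokens * input_price + output_tokens * output_price) / 1_000_000

def deduct(cost: float, promo_credits: float, balance: float) -> tuple[float, float]:
    """Apply promotional credits first, then the paid balance."""
    from_promo = min(cost, promo_credits)
    return promo_credits - from_promo, balance - (cost - from_promo)

# Example: dmind-1 at $0.30/1M input (cache miss) and $0.60/1M output.
cost = usage_cost(200_000, 50_000, 0.3, 0.6)   # $0.06 input + $0.03 output = $0.09
promo_left, balance_left = deduct(cost, promo_credits=0.05, balance=10.0)
```

Here the $0.05 of promotional credit is consumed in full and the remaining $0.04 comes out of the paid balance.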
Pricing may be adjusted periodically. We recommend checking the pricing dashboard regularly for the latest rates.
Last updated