MiniRobotLanguage (MRL)
AIM.Set Max Token
Set the maximum token limit for Mistral AI
Intention
The AIM.Set Max Token command is used to define the maximum token limit for the Mistral AI instance.
This command provides the ability to manage the output length of the Mistral AI model, useful for controlling response brevity and managing potential usage costs when connected to the Mistral AI API.
The parameter P1 represents the maximum token limit, which should be a positive integer.
A typical token limit for Mistral AI responses is assumed to be 512 to 1024 tokens.
If P1 is empty, the command sets the limit to 15000.
The command itself does not validate or cap the value you supply.
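A minimal sketch of both variants in MRL, following the description above:
' Set an explicit limit of 1024 tokens
AIM.Set Max Token|1024
' Omit P1 - the limit falls back to the documented default of 15000
AIM.Set Max Token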
When connected to the Mistral AI API, one of the following models can be used:

| Model | Description | API Name | Notes |
|---|---|---|---|
| Tiny | Best for large batch processing tasks where cost is significant but reasoning capabilities are not crucial. | mistral-tiny | Powered by Mistral-7B-v0.2, a fine-tuning of the initial Mistral-7B. |
| Small | Offers higher reasoning capabilities and supports multiple languages and code production/reasoning. | mistral-small | Powered by Mixtral-8X7B-v0.1, a sparse mixture-of-experts model with 12B active parameters. |
| Medium | Relies on an internal prototype model. | mistral-medium | - |
Each model may have its own token limit and cost structure.
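As an illustration, the limit can be tuned to the model in use. A sketch assuming short answers suffice for batch work on mistral-tiny while mistral-medium gets more room; how the model itself is selected is outside the scope of this command:
' Batch processing on mistral-tiny: keep answers short and cheap
AIM.Set Max Token|256
' Complex reasoning on mistral-medium: allow longer answers
AIM.Set Max Token|2048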
As of the last update, usage of the Mistral AI API is not free and costs can add up depending on token usage, so it's essential to manage the maximum token limit effectively.
The token count includes not only visible words and punctuation but also whitespace characters such as spaces and newlines.
In English, a token averages around 4 characters, but this can vary with special characters or other languages.
The token limit does not guarantee a specific length of content but only sets an upper boundary to the response size.
If a prompt requires more tokens than the current limit for a satisfactory response, the AI model might produce cut-off or incomplete responses.
The token count is therefore a practical measure for controlling output length and keeping costs within budget when using the Mistral AI API.
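A rough budgeting sketch based on the 4-characters-per-token average above; the numbers are illustrative, not measured:
' A ~2000-character prompt costs roughly 500 tokens (at ~4 characters per token)
' Reserving about 1000 tokens for the answer keeps the request near a 1500-token budget
AIM.Set Max Token|1000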
Here are some additional things to keep in mind about the max_tokens parameter:
•The max_tokens parameter is an upper bound on generation: once it is reached, the model stops, so the response may end mid-sentence or mid-phrase (see the sketch below).
•The max_tokens parameter does not affect the quality of the generated text. The model will still try to generate the best possible text, even within a small token budget.
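A short sketch of the truncation effect. AIM.Ask, DBP. and the variable $$RET are used for illustration only and are assumptions about the surrounding AIM. command set:
' Deliberately small limit - the answer will very likely be cut off
AIM.Set Max Token|16
' Ask a question that needs far more than 16 tokens (illustrative command)
AIM.Ask|Explain how a jet engine works.|$$RET
' $$RET holds at most ~16 tokens of text, probably ending mid-sentence
DBP.$$RET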
Syntax
AIM.Set Max Token[|P1]
Parameter Explanation
P1 - (optional) A positive integer defining the maximum token count. Typical values are between 256 and 512.
If you omit P1, the command will use 15000.
Example
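The following sketch shows a typical call sequence. AIM.SetKey, AIM.Ask, MBX. and $$RET are illustrative assumptions about the surrounding command set; only AIM.Set Max Token itself is documented here.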
'***********************************
' AIM.Set Max Token - usage sketch
'***********************************
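' Provide your Mistral AI API key (illustrative command name)
AIM.SetKey|Your-API-Key-here
' Limit answers to 512 tokens to control cost
AIM.Set Max Token|512
' Ask a question; the answer is returned in $$RET (illustrative)
AIM.Ask|Summarize the benefits of a token limit in one paragraph.|$$RET
' Show the (possibly truncated) answer
MBX.$$RET
ENR.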
Remarks
When using the Mistral AI API, remember that increasing the maximum token count will lead to larger responses and hence higher costs.
Ensure you're aware of the pricing model for the specific Mistral AI model you're utilizing.
Monitor your usage closely to prevent unexpected charges. Consider implementing safeguards to limit high-cost outputs.
If the maximum token count is set too low, it might restrict the quality and context of responses, potentially rendering the output unhelpful or nonsensical.
Hence, select an appropriate token limit that balances cost and response quality.
Limitations:
-
See also:
•