AIU. - Artificial Intelligence Utility



AIU.SetMaxToken



MiniRobotLanguage (MRL)

 

AIU.SetMaxToken
Set the Maximum Token Limit for AI Responses

 

Intention

 

SetMaxToken Command: Define Response Length Limit
 
The SetMaxToken command sets the maximum number of tokens the AI can generate in a response, controlling output length and cost.

This helps manage resource usage and ensures responses fit your needs.

It’s part of the AIU - OpenAI API suite.

 

What is the SetMaxToken Command?

 

The SetMaxToken command configures the maximum token limit for AI responses in operations like AIU.Chat or AIU.Responses.

Tokens are units of text (words, word fragments, punctuation, etc.) processed by the AI. For example, "Hello, world!" is roughly 4 tokens. This limit caps the output, affecting both content length and processing cost.
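Exact token counts depend on each model's tokenizer, but a common rule of thumb is about four characters of English text per token. A minimal sketch of that heuristic (an approximation only, not the tokenizer OpenAI models actually use):

```python
def estimate_tokens(text: str) -> int:
    """Rough token estimate: ~4 characters of English text per token.
    Real tokenizers (e.g. OpenAI's tiktoken) give exact counts."""
    return max(1, round(len(text) / 4))

print(estimate_tokens("Hello, world!"))  # 13 chars, so about 3
```

This is close enough for budgeting a SetMaxToken value, but use the model's own tokenizer when an exact count matters.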

 

Why Do You Need It?

 

Setting a maximum token limit is essential for:

Cost Control: Limit token usage to manage API expenses (see pricing below).

Response Size: Ensure outputs are concise for specific applications.

Performance: Reduce processing time by capping lengthy responses.

 

How to Use the SetMaxToken Command?

 

Provide an integer value representing the maximum number of tokens for AI responses.

API billing counts both input and output tokens, and prices vary by model. Available models and their prices (as of March 18, 2025, from OpenAI's pricing page) include:

gpt-4o: $5.00/1M input tokens, $15.00/1M output tokens (multimodal, 128K context).

gpt-4o-mini: $0.15/1M input tokens, $0.60/1M output tokens (cost-effective, 128K context).

gpt-4-turbo: $10.00/1M input tokens, $30.00/1M output tokens (high performance, 128K context).

gpt-3.5-turbo: $0.50/1M input tokens, $1.50/1M output tokens (dialog-optimized, 16K context).

o1-preview: $15.00/1M input tokens, $60.00/1M output tokens (advanced reasoning, 128K context).

o1-mini: $3.00/1M input tokens, $12.00/1M output tokens (reasoning, cost-effective, 128K context).

For example, setting a 100-token limit with gpt-4o-mini costs approximately $0.00006 for output.
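The cost arithmetic behind figures like the one above is simple to reproduce. A sketch using the per-million-token prices from the list above:

```python
# Per-million-token prices (USD input, USD output), from the model list above.
PRICES = {
    "gpt-4o":        (5.00, 15.00),
    "gpt-4o-mini":   (0.15, 0.60),
    "gpt-4-turbo":   (10.00, 30.00),
    "gpt-3.5-turbo": (0.50, 1.50),
    "o1-preview":    (15.00, 60.00),
    "o1-mini":       (3.00, 12.00),
}

def request_cost(model: str, input_tokens: int, output_tokens: int) -> float:
    """Total cost in USD for one API call."""
    in_price, out_price = PRICES[model]
    return (input_tokens * in_price + output_tokens * out_price) / 1_000_000

# 100 output tokens on gpt-4o-mini:
print(f"{request_cost('gpt-4o-mini', 0, 100):.5f}")  # 0.00006
```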

 

Example Usage

 

AIU.SetMaxToken|50

AIU.Chat|Describe a sunset|$$RES

DBP.Short Sunset Description: $$RES

 

Limits the response to 50 tokens, ensuring a concise description.
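Under the hood, a limit set with SetMaxToken presumably ends up as the `max_tokens` field of the OpenAI chat request that AIU.Chat sends. A sketch of what that request body would look like (the `max_tokens`, `model`, and `messages` field names are from the public OpenAI API; the surrounding wiring is an assumption):

```python
import json

def build_chat_payload(prompt: str, model: str, max_tokens: int) -> dict:
    """Assumed shape of the request body after AIU.SetMaxToken|50;
    'max_tokens' caps the number of tokens the model may generate."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "max_tokens": max_tokens,
    }

payload = build_chat_payload("Describe a sunset", "gpt-4o-mini", 50)
print(json.dumps(payload, indent=2))
```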

 

Illustration

 

┌────────────────────┬─────────────┐
│ Max Tokens Set     │ 50          │
├────────────────────┼─────────────┤
│ Sample Response    │ The sun sets│
│                    │ in a blaze  │
│                    │ of orange.  │
├────────────────────┼─────────────┤
│ Token Count (est.) │ ~10         │
└────────────────────┴─────────────┘

Illustration of setting a 50-token limit and a sample constrained response.

 

Syntax

 

AIU.SetMaxToken|P1

AIU.Set_MaxToken|P1

 

Parameter Explanation

 

P1 - A positive integer specifying the maximum number of output tokens. Required.

 

Example

 

AIU.SetMaxToken|100

AIU.Chat|Explain quantum physics|$$EXP

DBP.Brief Explanation: $$EXP

ENR.

 

Remarks

 

- The limit applies to output tokens only, but total token usage (input + output) affects cost.

- Must be within the model’s context window (e.g., 128K for gpt-4o).

 

Limitations

 

- Requires exactly one parameter; omitting or adding extra parameters causes an error.

- Does not validate against the model’s maximum context length.
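Because the command does not validate the value against the model's maximum context length, an oversized limit can cause the API call itself to fail. A caller-side clamp is one way to guard against that (a sketch; context-window sizes taken from the model list above):

```python
CONTEXT_WINDOW = {  # tokens, from the model list above
    "gpt-4o": 128_000,
    "gpt-4o-mini": 128_000,
    "gpt-3.5-turbo": 16_000,
}

def safe_max_tokens(model: str, requested: int, input_tokens: int) -> int:
    """Clamp the output limit so input + output fits the context window."""
    available = CONTEXT_WINDOW[model] - input_tokens
    return max(0, min(requested, available))

print(safe_max_tokens("gpt-3.5-turbo", 20_000, 1_000))  # 15000
```

Compute the clamped value first, then pass it to AIU.SetMaxToken.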

 

See also:

 

AIU.Get_MaxToken

AIU.Chat

AIU.Responses

AIU.Get_TotalTokens

Max Token Configuration