MiniRobotLanguage (MRL)
AIG.Get MaxToken
Retrieve Current Maximum Token Limit
Intention
GetMaxToken Command: Verify Output Limit
The GetMaxToken command retrieves the current maximum output token limit set for Google Gemini API responses.
It aids in configuration verification and debugging.
It’s part of the AIG - Google Gemini API suite.
This command returns the current value of AIG_MaxToken, which caps the output tokens in AI responses (1-8192).
The result can be stored in a variable or placed on the Top of Stack (TOS).
Retrieving the output token limit is valuable for:
•Configuration Check: Ensures the desired limit is active.
•Debugging: Identifies truncation issues in responses.
•Script Control: Enables dynamic adjustments based on the limit.
Use with an optional variable to store the result; otherwise, it’s pushed to the TOS.
Defaults to 2048 unless modified by AIG.Set MaxToken.
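A minimal configuration-check sketch: it reads the current limit without setting it first, so it reports the default of 2048 unless an earlier AIG.Set MaxToken call changed it (the variable name $$CUR is arbitrary).
AIG.Get MaxToken|$$CUR
DBP.Configured output limit: $$CUR
ENR.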
Google Gemini API model token limits (March 19, 2025):
•Gemini 1.0 Pro: 32,768 input, 2,048 output.
•Gemini 1.5 Pro: 2M input (128K stable), 8,192 output.
•Gemini 1.5 Flash: 1M input, 8,192 output.
•Gemini 2.0 Flash: 2M input (experimental), 8,192 output.
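When targeting one of the models above with an 8,192-token output cap, the limit can be raised to that maximum and then verified. This is a minimal sketch using only the commands documented on this page:
AIG.Set MaxToken|8192
AIG.Get MaxToken|$$MAX
DBP.Output limit now: $$MAX
ENR.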
Latest Google Gemini API Updates
As of March 19, 2025:
•Gemini 2.0 Series: New experimental models with enhanced reasoning and 2M token support.
•Code Execution: Enabled for 1.5 models, improving math and reasoning tasks.
•Free Tier: 60 RPM for 1.5 Flash, no charge for tuning.
Example Usage
AIG.Set MaxToken|500
AIG.Get MaxToken|$$MAX
DBP.Current Limit: $$MAX
Returns 500 after the Set call above, or the default of 2048 if the limit has not been changed.
Illustration
┌───────────────┐
│ Max Tokens │
├───────────────┤
│ 500 │
└───────────────┘
Shows the current output token limit.
Syntax
AIG.GetMaxToken[|P1]
AIG.Get_MaxToken[|P1]
Parameter Explanation
P1 - (Optional) Variable that receives the current output token limit (1-8192). If omitted, the value is pushed to the Top of Stack (TOS).
Example
AIG.Set MaxToken|300
AIG.Get MaxToken|$$LIM
DBP.Max Limit: $$LIM
ENR.
Remarks
- Reflects the output limit set by AIG.Set MaxToken.
- Input context limits are model-specific and can reach 2M tokens.
Limitations
- Shows only the output limit (max 8192), not input context.
See also: