MiniRobotLanguage (MRL)
LMS.SetMaxTokens Command
Set Maximum Generation Tokens
Intention
This command sets the maximum number of "tokens" that the AI model is allowed to generate in its response. A token is roughly equivalent to a word or part of a word.
Setting this limit is crucial for controlling the length of the AI's answer. It prevents overly long or "runaway" responses, which can be slow and resource-intensive. Most models have a maximum context limit (e.g., 4096, 8192, or 32768 tokens) that includes both your prompt and the generated response.
If you do not provide a parameter (or set it to 0), the token limit is reset to the system default (2048).
Syntax
LMS.SetMaxTokens|[P1]
Parameter Explanation
P1 - (Optional) An integer number for the maximum tokens (e.g., 500, 4096). If omitted or 0, resets to the default (2048).
Example
'**********************************************
' LMS.setmaxtokens - Sample
'**********************************************
'
' Set a very short limit for a summary
LMS.setmaxtokens|50
LMS.ask|"Summarize the plot of the movie 'Inception'"|$$Summary
MBX.Info|Short Summary: $$Summary
'
' Set a much larger limit for a creative story
LMS.setmaxtokens|4000
LMS.ask|"Write a story about a robot who discovers music"|$$Story
'
' Reset to default
LMS.setmaxtokens
ENR.
Remarks
This setting is persistent and affects all subsequent LMS.ask and LMS.askex commands until it is changed again. The library starts with an initial limit of 32768 tokens; calling LMS.setmaxtokens with no parameter (or with 0) sets the limit to the default of 2048 tokens.
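The persistence described above can be illustrated with a short sketch (the prompts and variable names here are illustrative only):
'**********************************************
' Persistence sample - the limit applies until changed
'**********************************************
'
LMS.setmaxtokens|100
LMS.ask|"Explain photosynthesis"|$$AnsA
LMS.ask|"Explain gravity"|$$AnsB
' Both answers above are capped at 100 tokens.
'
' Raise the limit - only later calls are affected
LMS.setmaxtokens|2000
LMS.ask|"Explain quantum entanglement in detail"|$$AnsC
'
' Return to the 2048-token default
LMS.setmaxtokens
ENR.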
See also:
• LMS.ask
• LMS.settemp - Sets the generation temperature (creativity).
• LMS.settopk - Sets the Top-K sampling parameter.
• LMS.settopp - Sets the Top-P (nucleus) sampling parameter.