AIU. - Artificial Intelligence Utility




MiniRobotLanguage (MRL)

 

AIU.GetModel
Retrieve the Current AI Model Setting

 

Intention

 

GetModel Command: Accessing the Current AI Model
 
The GetModel command retrieves the currently configured AI model used by the AIU system for operations like chat completions or embeddings.

This allows you to verify or utilize the model setting in your scripts.

It’s part of the AIU - OpenAI API suite.

 

What is the GetModel Command?

 

The GetModel command queries the AIU system to return the name of the current AI model, as set by AIU.SetModel or initialized by default.

You can store the result in a variable; if no variable is specified, the result is placed on the Top of Stack (TOS).

 

Why Do You Need It?

 

Retrieving the current AI model is useful for:

Verification: Confirm which model is active before performing operations.

Dynamic Configuration: Adapt your script based on the model in use.

Debugging: Ensure the correct model is set for your intended task.
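For dynamic configuration, a save/switch/restore pattern can be built from the commands documented on this page alone. A minimal sketch (the token-heavy work between the two SetModel calls is omitted; as usual, $$-variables are resolved inside command parameters):

AIU.GetModel|$$OLD
AIU.SetModel|gpt-4o-mini
DBP.Temporarily using: gpt-4o-mini
AIU.SetModel|$$OLD
DBP.Restored model: $$OLD

Because AIU.GetModel captures the active model before the switch, the script can restore the previous setting afterwards, whatever it was.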

 

How to Use the GetModel Command?

 

The command can be called with an optional variable to store the model name.

If no variable is provided, the result is pushed onto the stack. The model is typically set via AIU.SetModel and defaults to an OpenAI model like gpt-3.5-turbo unless changed.

Available models and their prices (as of March 18, 2025, from OpenAI’s pricing page) include:

gpt-4o: $5.00/1M input tokens, $15.00/1M output tokens (multimodal, 128K context).

gpt-4o-mini: $0.15/1M input tokens, $0.60/1M output tokens (cost-effective, 128K context).

gpt-4-turbo: $10.00/1M input tokens, $30.00/1M output tokens (high performance, 128K context).

gpt-3.5-turbo: $0.50/1M input tokens, $1.50/1M output tokens (dialog-optimized, 16K context).

o1-preview: $15.00/1M input tokens, $60.00/1M output tokens (advanced reasoning, 128K context).

o1-mini: $3.00/1M input tokens, $12.00/1M output tokens (reasoning, cost-effective, 128K context).
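To see how these per-million-token rates translate into a concrete request cost, consider a hypothetical gpt-4o-mini call with 10,000 input tokens and 2,000 output tokens:

10,000 / 1,000,000 x $0.15 + 2,000 / 1,000,000 x $0.60 = $0.0015 + $0.0012 = $0.0027

The token counts here are purely illustrative; actual counts depend on your prompts and responses.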

 

Example Usage

 

AIU.SetModel|gpt-4o

AIU.GetModel|$$MOD

DBP.Current Model: $$MOD

AIU.GetModel

POP.$$MOD

 

This example sets the model to gpt-4o and retrieves it twice: first directly into the variable $$MOD, then via the stack, where POP. transfers the value from the Top of Stack into $$MOD.

 

Illustration

 

┌───────────────┐

│ Model         │

├───────────────┤

│ gpt-4o        │

└───────────────┘

A simple representation of retrieving the current AI model.

 

Syntax

 

AIU.GetModel[|P1]

AIU.Get_Model[|P1]

 

Parameter Explanation

 

P1 - (Optional) The variable where the current model name will be stored. If omitted, the model name is placed on the Top of Stack (TOS).

 

Example

 

AIU.SetModel|gpt-3.5-turbo

AIU.GetModel|$$MOD

DBP.Current Model: $$MOD

ENR.

 

Remarks

 

- The returned value is a string representing the model name (e.g., gpt-3.5-turbo).

- The default model is set during initialization unless overridden by AIU.SetModel.

 

Limitations

 

- The command only retrieves the model name and does not validate its availability or compatibility.

- Additional parameters beyond P1 will trigger an error.

 

See also:

 

AIU.Set_Model

Model Configuration

AIU.Chat

AIU.Embed