Navigation: 3. Script Language > AI - Artificial Intelligence Commands > AIC. - Artificial Intelligence Command > Set_Model Commands > Set Model Commands
MiniRobotLanguage (MRL)
AIC.Set_Model_Completion
Choose one of the OpenAI models that are available for the "Completion" endpoint.
The "Completion" models estimate the probability of tokens/words that complete a given text.
Intention
The AIC.Set_Model_Completion command specifies the OpenAI model you want to use for Completion-based requests.
"Completion-based" means that the model generates the answer that "completes your question" with the highest probability.
The syntax of this command is
AIC.Set_Model_Completion|<Modelname>, where <Modelname> is the name of the OpenAI model you want to use.
The default model set by this command is text-ada-001.
OpenAI offers several Completion models that can be used with the Completion endpoint; see the table below.
These models can be used in various applications, such as
•drafting emails,
•writing Python code,
•answering questions,
•creating conversational agents,
•tutoring, language translation,
•and even simulating characters for video games
•among others.
The AIC.Set_Model_Completion command selects a model which can then be used with the
AIC.Ask_Completion command.
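A minimal sketch of selecting a model by name and then asking a question (assumes your API key has already been set with AIC.SetKey; the model name is taken from the table below):

' Select a Completion model directly by its name
AIC.Set_Model_Completion|text-davinci-003
' Ask a question using the selected model; the answer is returned to $$RET
AIC.Ask_Completion|Write a haiku about robots.|$$RET
MBX.$$RET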
Instead of giving a model name, you can alternatively specify a model number, like this:
' Here we would select "text-davinci-003" (Case 3 in the table below)
AIC.Set_Model_Completion|3
Case | Model ID | Description | Token Limit (as of Sep 2021) | Cost per 1K tokens (as of July 2023)
1 | text-davinci-001 | A version of the Davinci model that can understand and generate text in various languages, including German. Likely used for complex tasks and detailed responses. | 4096 | $0.0200
2 | text-davinci-002 | Similar to text-davinci-001, but more advanced. Generates text. | 4096 | $0.0200
3 | text-davinci-003 | The most advanced version of the Davinci model and, overall, the best model for the Completion endpoint. | 4096 | $0.0200
4 | text-babbage-003 | A version of the Babbage model, which is likely less powerful than Davinci but still very capable. Used for generating text. | 4096 | $0.0005
5 | davinci | The base Davinci model, known for its ability to generate high-quality text and perform complex tasks. | 4096 | $0.0200
6 | text-curie-001 | A version of the Curie model, which is smaller than Davinci and generally used for tasks that do not require as much depth. Capable of generating text in various languages. | 2048 | $0.0020
7 | text-babbage-001 | Similar to text-babbage-003, but possibly a different version or configuration of the Babbage model. Used for generating text. | 4096 | $0.0005
8 | text-ada-001 | A version of the Ada model, which is smaller than Curie and generally used for simpler tasks. Capable of generating text. | 2048 | $0.0004
9 | babbage | The base Babbage model, smaller than Davinci but still powerful and capable of generating text. | 4096 | $0.0005
10 | text-similarity-davinci-001 | A specialized version of the Davinci model used for text-similarity analysis. It can understand and compare texts to gauge how similar they are. | 4096 | $0.0200
11 | babbage-code-search-code | A specialized version of the Babbage model, likely used for searching through code or understanding programming-related queries. | 4096 | $0.0005
Else | text-ada-001 | Like Case 8, a version of the Ada model used for generating text. | 2048 | $0.0004
Temporary models for 2023:
The following two models were announced by OpenAI in 11/2023 and are temporary, which is why we did not hard-code them.
To select one of these models, use its name directly.
Updated GPT-3.5 Turbo: "gpt-3.5-turbo-1106"
The new gpt-3.5-turbo-1106 supports a 16K context by default, so 4x longer context is available at lower prices: $0.001/1K input tokens, $0.002/1K output tokens. Fine-tuning of this 16K model is available.
Fine-tuned GPT-3.5 is much cheaper to use: input token prices decrease by 75% to $0.003/1K and output token prices by 62% to $0.006/1K.
gpt-3.5-turbo-1106 joins GPT-4 Turbo with improved function calling and reproducible outputs.
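To use this temporary model, a sketch of selecting it by name (assumes the model is available to your API key):

' Select the temporary model directly by its name, not by a number
AIC.Set_Model_Completion|gpt-3.5-turbo-1106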
Syntax
AIC.Set_Model_Completion|P1
AIC.SMC|P1
Parameter Explanation
P1 - Model name; can be a number (see table above) or directly the name of the model to use.
Example
'*****************************************************
' EXAMPLE 1: AIC.-Commands
'*****************************************************
' Set OpenAI API-Key from the saved File
AIC.SetKey|File
' Set Model
AIC.Set_Model_Completion|4
' Set Model-Temperature
AIC.Set_Temperature|0
' Set Max-Tokens (possible length of the answer; depending on the model, up to 2000 tokens, which is about ~6000 characters)
' The more Tokens you use the more you need to pay.
AIC.SetMax_Token|25
' Ask Question and receive answer to $$RET
AIC.Ask_Completion|What is a "Windows Button"?|$$RET
MBX.$$RET
:enx
ENR.
Note that the answer text is cut off at the end because I specified a maximum of 25 tokens in the script, which is too low for the complete answer.
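To receive the full answer, raise the token budget before asking. A sketch (the exact upper limit depends on the selected model, and more tokens cost more):

' Allow a longer answer (up to the model's token limit)
AIC.SetMax_Token|500
' Ask again; the complete answer is returned to $$RET
AIC.Ask_Completion|What is a "Windows Button"?|$$RET
MBX.$$RET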
Remarks
-
Limitations:
-
See also:
• Set_Key