Ask AI Commands


AIM.Ask History Clipboard



MiniRobotLanguage (MRL)

 

AIM.Ask History Clipboard
Ask or instruct the most advanced Mistral models using a history array; the answer is also returned in the Clipboard.

 


In the web UI, Mistral AI uses the history of previous answers to generate a new answer.

Using this command, you can do exactly the same from a script. In the example below, we have taught the AI that its name is "Paul".

 

 

 

Intention

 

Generally, this command is identical to AIM.Ask_with_History, but it also returns the result in the Clipboard.

This is important if you expect Unicode characters, such as emojis, in the result.

 

This command allows you to add a complete "chat history" to the prompt, just as if you had been chatting with the AI in the chat window.

This way you can reconstruct a conversation and get the same answers you may have gotten while chatting with the AI (at low temperature values; otherwise there is a high random factor).

 

The AIM.Ask_with_History command is the most advanced AI command for sending a question or instruction to the most advanced Mistral models and receiving an answer.

Using this command, you can achieve the same results as if you were chatting with the AI in a browser window.

You can attach a "chat history" to the prompt, which is done using the new Array commands.
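For example, a minimal history might look like this (a sketch; it assumes the API key and model have already been set, and the Array number 1 is an arbitrary choice):

' Each Array element holds <Role>:<Content>
ARR.Set|1|0|system:You are a helpful assistant
ARR.Set|1|1|user:Say hello in one short sentence
' P1 = Array number, P2 = variable that receives the answer (also copied to the Clipboard)
AIM.Ask History Clipboard|1|$$RET
PRT.$$RET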

 
You will need to get an API key (see: AI - Artificial Intelligence Commands) before you can use this command.

 

Use the AIM.Set_Model command to specify the Mistral model you want to use for chat-based conversations.

There are multiple other commands that can be used to change the environment for the AI.

 

Once all preconditions are set, the usage of this command is as simple as:

 

 

'#SPI:NoAPIKey

 

' Set Mistral API-Key from the saved File

AIM.SetKey|File

' Set Model

AIM.SetModel|mistral-medium

' Set Model-Temperature

AIM.Set_Temperature|0.6

AIM.SetMax_Token|300

 

' Using the ARR.-Commands we build a chat history

' Each Array element must consist of 2 parts:

'<Role>:<Content>

 

' You can use as many Array elements as you like; the Array is "Auto-Dim"

'

'     <Array-No.>|<Array-Element No.>|<Text to assign to the Array Element>

'  

' German sample text, roughly: "What they are up to again is complete nonsense"
$$TXT=Was die schon wieder machen ist der größte Käse

$$PRO=Make a professional rhyme in German about "$$TXT" $crlf$

$$PRO+ Only return the rhyme in German, do not return any explanations.

$$PRO+ Add emojis to the final text.

 

ARR.Set|1|0|system:You are a helpful assistant

ARR.Set|1|1|assistant:Hello, I am Paul, your helpful assistant.

ARR.Set|1|2|user:Mache einen Reim zum Thema Baum

GSB.Rhb

ARR.Set|1|3|ass:$$PRA

GSB.Rha

ARR.Set|1|4|user:Mache einen Reim zum Thema Ausgleich

ARR.Set|1|5|ass:$$PRA

ARR.Set|1|6|user:$$PRO

 

' Now we send the Array to the Model.

'  The first Parameter is the Array Number

AIM.Ask History Clipboard|1|$$RET

PRT.$$RET

 

'AIC.Show Error

AIC.Get Raw Output|$$RET

DBP.RAW:$$RET

MBX.!

ENR.

'===========================================================

 

' Subroutine: provides a canned German rhyme ("Ausgleich") as a sample assistant answer
:Rha

$$PRA=Ausgleich$crlf$

$$PRA+ So mancher hat sich wohl die Welt$crlf$

$$PRA+ Bedeutend besser vorgestellt - $crlf$

$$PRA+ Getrost! Gewiß hat sich auch oft $crlf$

$$PRA+ Die Welt viel mehr von ihm erhofft!$crlf$

RET.

 

' Subroutine: provides a canned German rhyme ("Der Baum") as a sample assistant answer
:Rhb

$$PRA=Der Baum$crlf$

$$PRA+ Zu fällen einen schönen Baum,$crlf$

$$PRA+ braucht’s eine Viertelstunde kaum.$crlf$

$$PRA+ Zu wachsen, bis man ihn bewundert,$crlf$

$$PRA+ braucht er, bedenkt es, ein Jahrhundert.$crlf$

RET.

'-----------------------------------------------------------
 
 

This is the result of the above script. The AI apologizes for a mistake that we suggested, via the chat history, it may have made.

 

 

 

Using the "AIM.Ask History Clipboard" command is equivalent to chatting with the Mistral AI via the browser GUI.

 
Before we can discuss the details, you need to know the concept of Tokens. In natural language processing (NLP), a "token" typically refers to a unit of text.

In the simplest sense, tokens can be thought of as words.

For example, the sentence "I love AI" can be broken down into three tokens: "I", "love", and "AI".

However, tokens can also represent smaller units such as characters or subwords, or larger units like sentences, depending on the context.

 

When it comes to AI LLM models, tokens play a crucial role in how text is processed.

These models read text in chunks called tokens.

Managing tokens is an important aspect of using chat models. The total number of tokens in an API call affects the cost, the time taken, and whether the API call works at all.
For Mistral, the maximum token limit is presumably 4096 tokens (this is an assumption; we need to wait until it is made public). Both input and output tokens count toward this limit.
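As a worked example (assuming the 4096-token limit): if your history array plus question already consumes 3,800 tokens, at most 4096 - 3800 = 296 tokens remain for the answer, so setting AIM.SetMax_Token higher than that cannot help.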

 

Now, let's talk about Mistral's language models and tokens.

Mistral's models, like "mistral-medium", are not limited to English; they can process text in multiple languages.

Additionally, a single token can represent a whole word, a part of a word, or even a single character, depending on the language and context.

For example, the word "chatbot" might be a single token, but in some languages or contexts, it might be split into multiple tokens like "chat" and "bot".

 

There's also a concept of "maximum available tokens" for Mistral models.

Do you remember the times when the first computers had a maximum capacity of 4 KB?

This is roughly where we are in terms of AI now.

The "maximum available tokens" is the maximum number of tokens that a model can process in a single request or operation.

 

This means that if you want "mistral-medium" to process a text, the total number of tokens in that text must not exceed 4096.

The token limit includes both the input and the output tokens.

 

If the text is too long, you will need to truncate or shorten it to fit within this limit.

Otherwise, the model will forget the start of the text by the time it reads the end.

 


If you want to experiment with Tokens, you can use the Mistral Online Tokenizer.

 

It's important to note that token limits are not necessarily fixed and may change over time as models are updated or new models are released.

Additionally, different models may have different token limits.

 

 

What is the difference between using the SPR and using the Mistral web interface?

 

Using the web GUI, the whole dialog is used as input/context for each subsequent answer,

until the maximum number of usable tokens has been exceeded. If that happens, the AI will forget the start of the dialog and may even be completely lost.

If that happens in the web GUI, you will notice that you get surprisingly wrong answers.

 

Using Mistral via the SPR, this is generally NOT the case.
First, you can set the maximum number of tokens to use with the command AIM.SetMax_Token.

Second, every AIM.Ask_with_History command is a completely new request and does not, by itself, remember anything that came before.

This saves you a lot of tokens.

To overcome this limitation, this command lets you "attach" a previous dialog or a full history to the prompt.
This is exactly what happens in the web GUI as well.

 

Using the SPR, you can use more tokens, because every chat is generally "NEW" and starts with the full number of tokens that are available,

limited only by the command AIM.SetMax_Token and the maximum tokens of the model used.

Therefore you can choose which parts of the history you really need, and attach only those parts to the history array.

 

You can access the chat history using the AIM.Get History command and the other AI History commands, and in this way

include parts or all of earlier chats in the current prompt. In most cases, however, this is not needed here.
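For example, here is a sketch that carries only the last answer over into a new request (it assumes Array 2 is unused and the follow-up question is made up for illustration):

' Fetch the last answer and attach only that part to a fresh history
AIM.Get_Last_Answer|$$OLD
ARR.Set|2|0|system:You are a helpful assistant
ARR.Set|2|1|assistant:$$OLD
ARR.Set|2|2|user:Please shorten your last answer to one sentence.
AIM.Ask History Clipboard|2|$$RET
PRT.$$RET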

 

However, the best rule here is:

 

Include all needed instructions and samples in the current prompt.

 

Using the history array, you can also provide sample answers to the AI and thereby get your final answer in the form you want.
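A sketch of this technique (the sample question and answer pair is made up for illustration; Array number 3 is an arbitrary choice):

' Provide one sample answer so the model imitates its style and length
ARR.Set|3|0|system:You answer with exactly one short sentence.
ARR.Set|3|1|user:What is RAM?
ARR.Set|3|2|assistant:RAM is the fast, volatile working memory of a computer.
ARR.Set|3|3|user:What is a CPU?
AIM.Ask History Clipboard|3|$$RET
PRT.$$RET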

 
You can get the history of the chat, the last question, or the last answer using these commands:

AIM.Get_History|$$HIS

AIM.Get_Last_Question|$$QUE
AIM.Get_Last_Answer|$$ANS

 

Mistral currently offers 3 chat models that can be used, namely:

Model: Tiny (1)
P1-Name: mistral-tiny
Description: Best for large batch processing tasks where cost is significant but reasoning capabilities are not crucial.
Notes: Powered by Mistral-7B-v0.2, a fine-tuning of the initial Mistral-7B.

Model: Small (2)
P1-Name: mistral-small
Description: Offers higher reasoning capabilities and supports multiple languages and code production/reasoning.
Notes: Powered by Mixtral-8X7B-v0.1, a sparse mixture-of-experts model with 12B active parameters.

Model: Medium (3)
P1-Name: mistral-medium
Description: Relies on an internal prototype model.
Notes: -

 

 

 

As an alternative to giving a model name, you can specify a model using a number, like this:

 

' Here we would specify "mistral-small"

AIM.Set_Model|2

 

Here is a test script that you can use to see the answers of the models to a problem.
Due to the complexity, we have increased the maximum number of tokens.

 

' Set Mistral API-Key from the saved File

AIM.SetKey|File

 

' Set Model

AIM.SetModel|3

 

' Set Model-Temperature

AIM.Set_Temperature|0.2

 

' Set Max-Tokens (possible length of the answer; depending on the model, up to 2000 tokens, which is about ~6000 characters)

' The more tokens you use, the more you need to pay, but the longer the input and output can be.

AIM.SetMax_Token|300

 

ARR.Set|1|0|system:You are a technical advisor. Your name is Paul

ARR.Set|1|1|user:What is a PowerPlug?

ARR.Set|1|2|assistant:Hello, I am Sidney. A PowerPlug is ...

ARR.Set|1|3|user:How about a mechanical valve?

ARR.Set|1|4|assistant:Hello, I am Sidney. A mechanical valve is ...

ARR.Set|1|5|user:Tell me about this formula: $z^n = r^n(\cos(n\theta) + i\sin(n\theta))$

ARR.Send to AI|1

AIM.Ask History Clipboard|1|$$RET

' The Get Raw Output command must be borrowed from "AIC."

AIC.gro|$$REA

DBP.$$RET

DBP.$crlf$$crlf$

DBP.JSON Output

DBP.$$REA

ENR.

 


Using the script above, we tell the AI that its name is Paul and that it is a technical advisor.

Picture from the official Mistral page comparing the abilities of the Mistral models.

All Mistral models can be used for most Text-Tasks.

 

Tokens:

Language models read text in chunks called tokens. A token can be as short as one character or as long as one word.

Both input and output tokens count toward the total tokens used in an API call.
The total number of tokens affects the cost, the time taken, and whether the API call works at all.

 

Pricing (as of 27.01.2024 - prices are subject to change at any time):

Chat Completions API

Model                   Input                   Output

mistral-tiny            0.14€ / 1M tokens       0.42€ / 1M tokens

mistral-small           0.6€ / 1M tokens        1.8€ / 1M tokens

mistral-medium          2.5€ / 1M tokens        7.5€ / 1M tokens

 

Embeddings API

Model                        Input

mistral-embed                0.1€ / 1M tokens
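As a worked example using the prices above: a mistral-small call with 2,000 input tokens and 500 output tokens costs 2,000/1,000,000 × 0.6€ + 500/1,000,000 × 1.8€ = 0.0012€ + 0.0009€ = 0.0021€.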

 

All 3 models are powerful tools for natural language processing and can be used for a wide range of applications.

"Mistral-Medium" is the newest model and is larger compared to "mistral-tiny".

However, "mistral-tiny" is Ok for most tasks and much more cost-effective, especially for applications that don't require the absolute cutting edge in language model performance.

 
 

Syntax

 

 

AIM.Ask History Clipboard[|P1][|P2][|P3]

 

 

Parameter Explanation

P1 -  <value 0-32>: The number of the Array that contains the "chat history".

P2 - opt. Variable to return the result/answer from the AI.

P3 - opt.  0/1 flag: This optional flag specifies how results are returned when multiple results are expected. If you have set the number of expected results to a value higher than 1 using AIM.Set Number, this flag determines how they are returned: if set to "1", only the last result is returned; if set to "0" (the default), all results are returned.
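A sketch of P3 in use (assuming AIM.Set Number behaves as described above; the history in Array 1 is assumed to be filled already):

' Request 3 completions, but return only the last one (P3 = 1)
AIM.Set Number|3
AIM.Ask History Clipboard|1|$$RET|1
PRT.$$RET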

 

 

 

Example

 

'*****************************************************

' EXAMPLE 1: AIM.-Commands

' Here we let the AI calculate x for the formula "5*x^3=1450"

'

'*****************************************************

' Set Mistral API-Key from the saved File

AIM.SetKey|File

FOR.$$LEE|1|3

 

' Set Model

  AIM.SetModel_Chat|$$LEE

 

' Set Model-Temperature

  AIM.Set_Temperature|0

 

' Set Max-Tokens (possible length of the answer; depending on the model, up to 2000 tokens, which is about ~6000 characters)

' The more tokens you use, the more you need to pay, but the longer the input and output can be.

  AIM.SetMax_Token|1000

 

' Ask a question and receive the answer in $$RET

  $$QUE=Act as a mathematician. Calculate x for the formula "5*x^3=1450". Do it step-by-step

' This command expects a history-Array number as P1, so we wrap the question into a one-element history
  ARR.Set|1|0|user:$$QUE
  AIM.Ask History Clipboard|1|$$RET

  CLP.$$RET

  MBX.Model: $$LEE $crlf$$$RET

NEX.

:enx

ENR.

 

 

Note that the answer text is cut off at the end if you have specified too small a maximum number of tokens in the script.
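If that happens, a simple remedy is to raise the token budget and repeat the request (a sketch; the value 1000 is an arbitrary choice, and the history in Array 1 is assumed to still be filled):

' Raise the maximum answer length, then ask again
AIM.SetMax_Token|1000
AIM.Ask History Clipboard|1|$$RET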

 

 

Remarks

In your Prompts, ensure Clarity and Precision: Articulate your prompt in a way that unambiguously communicates the desired output from the model. Refrain from using vague or open-ended language, as this can yield unpredictable outcomes.

Incorporate Pertinent Keywords: Embed keywords in the prompt that are directly associated with the subject matter. This guides the model in grasping the context and subsequently producing more precise content.

Supply Contextual Information: Should it be necessary, furnish the model with background information or context. This equips the model to formulate more informed and contextually relevant responses.

Engage in Iterative Refinement: Embrace the process of experimentation with a variety of prompts to ascertain which is most effective. Continuously refine your prompts in response to the output generated, making adjustments until the desired results are achieved.
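A sketch that applies these rules in a single prompt (the variable $$LOG and the wording are made up for illustration):

' Clear instruction + context + strict output format
' $$LOG is assumed to contain the log text (hypothetical variable)
$$PRO=Summarize the following error log in exactly 3 bullet points.$crlf$
$$PRO+ Context: the log comes from a Windows service that restarts at night.$crlf$
$$PRO+ Log: $$LOG
ARR.Set|1|0|user:$$PRO
AIM.Ask History Clipboard|1|$$RET
PRT.$$RET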

 

 

Limitations:

-

 

 

See also: