AIL. - AI-Local Systems (GPT4All)


MiniRobotLanguage (MRL)

 

AIL.Set Top P

Sets the "top_p" (nucleus) sampling parameter of the GPT4All AI model

 


 

Intention

 

The "AIL.Set Top P" command lets you control the "top_p" or nucleus sampling parameter, a powerful tool that influences the randomness and diversity of the GPT4All AI model's outputs.

 

The parameter P1 is a floating-point number that sets the "top_p" value.
This parameter guides the model to consider, at each generation step, only the most probable tokens whose combined probabilities reach up to the "top_p" value.

 

A P1 value closer to 1 allows the model to consider a wider range of tokens, leading to more diverse and potentially less predictable outputs.

A P1 value closer to 0 constrains the model to only consider the most probable tokens, leading to safer and more predictable responses.

However, it might restrict the creativity and diversity of the outputs.

 

If P1 is missing or set to "0", it will default to considering all tokens, effectively turning off the "top_p" sampling strategy.
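The selection rule described above can be sketched in a few lines of Python. This is a general illustration of nucleus sampling, not the actual GPT4All implementation, and the token probabilities are made up:

```python
def top_p_filter(probs, top_p):
    """Return the nucleus: the smallest set of tokens whose
    cumulative probability reaches top_p (all tokens if top_p
    is never reached)."""
    # Rank tokens from most to least probable.
    ranked = sorted(probs.items(), key=lambda kv: kv[1], reverse=True)
    nucleus, cumulative = [], 0.0
    for token, p in ranked:
        nucleus.append(token)
        cumulative += p
        if cumulative >= top_p:
            break
    return nucleus

# Hypothetical next-token distribution:
probs = {"the": 0.5, "a": 0.3, "an": 0.15, "zebra": 0.05}

print(top_p_filter(probs, 0.2))  # ['the'] - only the single most likely token
print(top_p_filter(probs, 0.9))  # ['the', 'a', 'an'] - a wider candidate set
```

With a low "top_p" only the head of the distribution survives; raising it lets progressively less likely tokens into the candidate set, which is exactly the predictability/diversity trade-off described above.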

 

USAGE EXAMPLES

 

' Set "top_p" to a low value for more predictable outputs:

' This command will make the AI lean towards the most likely tokens, resulting in more predictable outputs.

AIL.Set Top P|0.2

 

 

' Set "top_p" to a higher value for more diverse outputs:

' This command increases the AI's diversity and unpredictability in the outputs.

AIL.Set Top P|0.9

 

 

In practice, the top-p parameter controls how adventurous the generated text is. A lower value keeps the output close to the model's most likely phrasing, while a higher value admits less common word choices and therefore more varied text.

 

The top-p parameter works by restricting sampling to a "nucleus" of tokens. At each step, the model ranks all candidate tokens by probability and keeps only the most likely ones until their combined probability reaches the top-p value; everything outside this nucleus is discarded. For example, with top-p set to 0.9, the model samples only from the smallest set of tokens that together account for 90% of the probability mass. This cuts off the long tail of unlikely tokens, which are often the ones that would be out of place or grammatically incorrect.

 

The range of the top-p parameter in GPT-4All is from 0 to 1.0. A value close to 0 restricts sampling to the single most probable token, while a value of 1.0 imposes no restriction at all, leaving every token a candidate regardless of its probability.

 

The default value of the top-p parameter is 0.8, meaning that sampling is restricted to the most probable tokens whose combined probability reaches 80%. You can adjust the value to achieve the desired balance between predictability and variety in your generated text.

 

For example, if you want text that stays close to the most likely, conventional phrasing, set top-p to a lower value, such as 0.6 or 0.7. The model will then rarely pick tokens that are out of place or grammatically incorrect.

 

On the other hand, if you want text that is more creative and varied, set top-p to a higher value, such as 0.95 or 0.99. This lets less probable tokens into the candidate set, at the cost of occasionally less predictable wording.

 

The best way to determine the optimal value of the top-p parameter for your needs is to experiment with different values and see what works best for you.
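Such an experiment can also be simulated outside the robot. The sketch below (plain Python with made-up probabilities, not GPT4All code) samples repeatedly from the same toy distribution at two "top_p" settings and shows how the variety of outputs changes:

```python
import random

def sample_top_p(probs, top_p, rng):
    """Sample one token after restricting to the top-p nucleus
    and renormalizing the surviving probabilities."""
    ranked = sorted(probs.items(), key=lambda kv: kv[1], reverse=True)
    nucleus, cumulative = [], 0.0
    for token, p in ranked:
        nucleus.append((token, p))
        cumulative += p
        if cumulative >= top_p:
            break
    tokens = [t for t, _ in nucleus]
    weights = [p / cumulative for _, p in nucleus]  # renormalize within nucleus
    return rng.choices(tokens, weights=weights, k=1)[0]

# Hypothetical next-token distribution:
probs = {"the": 0.5, "a": 0.3, "an": 0.15, "zebra": 0.05}
rng = random.Random(0)  # fixed seed so the run is repeatable

# A tight nucleus always yields the single most likely token:
print({sample_top_p(probs, 0.2, rng) for _ in range(100)})   # {'the'}
# A wide nucleus yields several different tokens across runs:
print({sample_top_p(probs, 0.95, rng) for _ in range(100)})
```

Note that even at "top_p" 0.95 the rare token "zebra" never appears, because it falls outside the nucleus; raising the value to 1.0 would make it eligible again.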

 

Here are some additional things to keep in mind about the top-p parameter:

 

The top-p parameter does not change how the model scores tokens; it only filters which tokens are eligible to be sampled, so the model still tries to generate the best possible text within that restriction.

The effect of the top-p parameter on latency is minor. The model computes the full probability distribution at every step regardless; the nucleus filter only adds a cheap sort-and-truncate step on top.

The effect on memory usage is likewise negligible, since the candidate list exists only for the duration of a single sampling step.

 

 

Syntax

 

 

AIL.Set Top P[|P1]

 

 

Parameter Explanation

 

P1 - (optional) This is a floating-point number representing the "top_p" sampling value.
         If P1 is missing or set to "0", the model will consider all tokens in each decision, effectively disabling "top_p" sampling.

 

 

 

 

Example

 

'***********************************
' Use a moderately high "top_p" for a
' balance of coherence and variety
'***********************************
AIL.Set Top P|0.9

 

 

 

Remarks

 

-

 

 

Limitations:

 

-

 

 

See also:

 

    1.6.1. Program Flow Control

    ! Smart Package Robot 's Parallel Robot Operations

    1.5. Features and Hints