MiniRobotLanguage (MRL)
AIG.SetSafetySettings
Set Content Safety Filter Settings
Intention
Configure the thresholds for blocking potentially harmful content generated by the AI model or present in prompts. This allows you to adjust the safety level for different categories like harassment, hate speech, sexually explicit content, and dangerous content.
This Command sets the AIG_SafetySettings global variable. This variable should contain a string representing one or more JSON objects, separated by commas, defining the safety settings according to the Google AI API schema. Each object specifies a `category` and a `threshold`.
If you provide an empty string ("") or omit the parameter, the safety settings are reset to the library's default, which typically applies no blocking to any of the standard categories (built from the pre-defined `$AIG_Safety_*` constants).
Valid Categories:
• HARM_CATEGORY_HARASSMENT
• HARM_CATEGORY_HATE_SPEECH
• HARM_CATEGORY_SEXUALLY_EXPLICIT
• HARM_CATEGORY_DANGEROUS_CONTENT
Valid Thresholds:
• BLOCK_NONE: Always show content, regardless of the probability of harm.
• BLOCK_LOW_AND_ABOVE: Block content with low, medium, or high probability of harm.
• BLOCK_MEDIUM_AND_ABOVE: Block content with medium or high probability of harm.
• BLOCK_ONLY_HIGH: Block content with high probability of harm.
Use this Command to:
• Adjust Safety Levels: Increase or decrease the strictness of content filtering based on your application's needs.
• Allow Specific Content: Set thresholds to `BLOCK_NONE` for categories where you need less filtering (use with caution).
• Ensure Compliance: Configure stricter settings if required for specific audiences or regulations.
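For orientation, the Google Gemini REST API receives these settings as a `safetySettings` array in the request body. The following Python sketch shows how the comma-separated JSON objects from P1 map into that array; the payload shape follows the public Gemini API schema, while `parse_safety_settings` is a hypothetical helper name, not part of this library:

```python
import json

def parse_safety_settings(p1: str) -> list:
    """Turn the comma-separated JSON objects from P1 into the
    safetySettings list expected by the Gemini REST API."""
    if not p1:
        return []  # empty string -> library default (no explicit settings)
    # Wrapping the string in [] turns "obj,obj" into a valid JSON array.
    return json.loads("[" + p1 + "]")

p1 = ('{"category": "HARM_CATEGORY_HATE_SPEECH", "threshold": "BLOCK_MEDIUM_AND_ABOVE"},'
      '{"category": "HARM_CATEGORY_DANGEROUS_CONTENT", "threshold": "BLOCK_ONLY_HIGH"}')

# Fragment of a Gemini generateContent request body using these settings.
payload = {
    "contents": [{"parts": [{"text": "Hello"}]}],
    "safetySettings": parse_safety_settings(p1),
}
print(payload["safetySettings"][0]["threshold"])  # BLOCK_MEDIUM_AND_ABOVE
```

Note that the whole point of the `[` + `]` wrapping is that P1 itself is not a complete JSON document, only a comma-separated list of objects.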
Provide a string (P1) containing the valid JSON object(s) for the safety settings. Remember that string values within the JSON (like category and threshold names) need to be enclosed in double quotes (represented by $BC in the constants).
Example Usage
// Example: Block medium/high hate speech, block only high dangerous content
// Construct the string carefully. Using variables might be easier.
$$Setting1 = $"""{"category": "HARM_CATEGORY_HATE_SPEECH", "threshold": "BLOCK_MEDIUM_AND_ABOVE"}"""
$$Setting2 = $"""{"category": "HARM_CATEGORY_DANGEROUS_CONTENT", "threshold": "BLOCK_ONLY_HIGH"}"""
AIG.SetSafetySettings|@$$Setting1,@$$Setting2
DBP. Custom safety settings applied.
// Reset to library default (likely BLOCK_NONE for all)
AIG.SetSafetySettings|""
DBP. Safety settings reset to default.
Syntax
AIG.SetSafetySettings|P1
AIG.sss|P1
Parameter Explanation
P1 - (Optional) A string containing the JSON definitions for the safety settings.
     - Format: `{"category": "CATEGORY_NAME", "threshold": "THRESHOLD_NAME"}`
     - Multiple objects are separated by commas.
     - Example: `{"category": "HARM_CATEGORY_HATE_SPEECH", "threshold": "BLOCK_MEDIUM_AND_ABOVE"},{"category": "HARM_CATEGORY_DANGEROUS_CONTENT", "threshold": "BLOCK_ONLY_HIGH"}`
     - If an empty string (`""`) is provided or the parameter is omitted, settings reset to the library default.
Remarks
- You must provide the JSON structure as a single string parameter P1, including the necessary double quotes within the JSON itself.
- The library does not validate the correctness of the JSON structure provided; errors will be reported by the Google API if the format is invalid.
- Refer to the official Google Gemini API documentation for the most up-to-date list of categories and thresholds.
- Use AIG.GetSafetySettings to retrieve the currently configured settings string.
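Because the library passes P1 through without validation, you may want to pre-check the string in your own tooling before calling the command. A minimal Python sketch, where the `VALID_CATEGORIES` and `VALID_THRESHOLDS` sets mirror the lists above and `validate_safety_settings` is a hypothetical helper, not part of the library:

```python
import json

# These sets mirror the category and threshold lists documented above.
VALID_CATEGORIES = {
    "HARM_CATEGORY_HARASSMENT",
    "HARM_CATEGORY_HATE_SPEECH",
    "HARM_CATEGORY_SEXUALLY_EXPLICIT",
    "HARM_CATEGORY_DANGEROUS_CONTENT",
}
VALID_THRESHOLDS = {
    "BLOCK_NONE",
    "BLOCK_LOW_AND_ABOVE",
    "BLOCK_MEDIUM_AND_ABOVE",
    "BLOCK_ONLY_HIGH",
}

def validate_safety_settings(p1: str) -> bool:
    """Return True if P1 is empty (reset to default) or parses as
    JSON objects with known category/threshold values."""
    if p1 == "":
        return True
    try:
        items = json.loads("[" + p1 + "]")
    except json.JSONDecodeError:
        return False
    return all(
        isinstance(obj, dict)
        and obj.get("category") in VALID_CATEGORIES
        and obj.get("threshold") in VALID_THRESHOLDS
        for obj in items
    )

good = '{"category": "HARM_CATEGORY_HARASSMENT", "threshold": "BLOCK_NONE"}'
bad = '{"category": "HARM_CATEGORY_HARASSMENT", "threshold": "BLOCK_EVERYTHING"}'
print(validate_safety_settings(good))  # True
print(validate_safety_settings(bad))   # False
```

Catching a malformed string this way is faster than waiting for the Google API to reject the request at run time.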
See also: