AIU. - Artificial Intelligence Utility



MiniRobotLanguage (MRL)

 

AIU.SetStream
Set the Streaming Mode for AI Responses

 

Intention

 

SetStream Command: Enable Real-Time Response Delivery
 
The SetStream command toggles streaming mode, allowing AI responses to be delivered incrementally as they are generated, rather than all at once.

This enhances interactivity and reduces perceived latency in applications.

It’s part of the AIU - OpenAI API suite.

 

What is the SetStream Command?

 

The SetStream command enables or disables streaming mode for AI operations like AIU.Chat.

In streaming mode (set to 1), responses are sent in chunks as tokens are generated, ideal for real-time applications. When disabled (set to 0), the full response is delivered only after completion. Tokens are text units (e.g., words or punctuation), and streaming does not alter total token usage or cost.
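The current mode can be read back at runtime with AIU.Get_Stream (listed under See also). The return-variable syntax in this sketch is an assumption modeled on the other AIU commands and may differ:

```
AIU.SetStream|1
AIU.Get_Stream|$$MOD
DBP.Current Stream Mode: $$MOD
```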

 

Why Do You Need It?

 

Streaming mode is valuable for:

- Interactivity: Display text as it’s generated for a responsive user experience.

- Efficiency: Process partial responses without waiting for the full output.

- Flexibility: Toggle between streaming and batch modes based on use case.
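The flexibility point can be sketched as a single script that switches modes between calls; the command names are as documented here, and the prompts are placeholders:

```
AIU.SetStream|1
AIU.Chat|Tell me a long story|$$STO
DBP.Streamed Story: $$STO
AIU.SetStream|0
AIU.Chat|Give me a one-line summary|$$SUM
DBP.Batch Summary: $$SUM
ENR.
```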

 

How to Use the SetStream Command?

 

Set the parameter to 1 to enable streaming or 0 to disable it.

Streaming affects how responses are delivered, not their content or token count. Costs are based on total tokens (input + output), as shown below (prices as of March 18, 2025, from OpenAI’s pricing page):

gpt-4o: $5.00/1M input tokens, $15.00/1M output tokens (multimodal, 128K context).

gpt-4o-mini: $0.15/1M input tokens, $0.60/1M output tokens (cost-effective, 128K context).

gpt-3.5-turbo: $0.50/1M input tokens, $1.50/1M output tokens (dialog-optimized, 16K context).

Streaming incurs the same per-token cost but delivers output incrementally.
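As a worked example of the per-token pricing above (streaming on or off yields the same figure):

```
gpt-4o-mini, 1,000 input + 2,000 output tokens:
  1,000 × $0.15 / 1,000,000 = $0.00015
  2,000 × $0.60 / 1,000,000 = $0.00120
  Total                     ≈ $0.00135
```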

 

Example Usage

 

AIU.SetStream|1

AIU.Chat|Tell me a long story|$$STO

DBP.Streaming Story: $$STO

 

Enables streaming, delivering the story in real-time chunks.

 

Illustration

 

┌──────────────┬────────────────────┐
│ Stream Mode  │ Response Delivery  │
├──────────────┼────────────────────┤
│ 0 (Off)      │ [Full Response]    │
├──────────────┼────────────────────┤
│ 1 (On)       │ [Chunk 1][Chunk 2] │
└──────────────┴────────────────────┘

Illustration of streaming mode’s effect on response delivery.

 

Syntax

 

AIU.SetStream|P1

AIU.Set_Stream|P1

 

Parameter Explanation

 

P1 - An integer: 1 to enable streaming, 0 to disable it. Required.

 

Example

 

AIU.SetStream|0

AIU.Chat|What’s the news today?|$$NEW

DBP.Full News Response: $$NEW

ENR.

 

Remarks

 

- Streaming mode requires compatible operations (e.g., AIU.Chat).

- Does not affect token count or pricing, only delivery timing.

 

Limitations

 

- Requires exactly one parameter; omitting or adding extra parameters causes an error.

- Not all models or endpoints may support streaming.

 

See also:

 

AIU.Get_Stream

AIU.Chat

AIU.Set_ChatEndpoint

AIU.Get_TotalTokens

Stream Configuration