MiniRobotLanguage (MRL)
AID.Ask
Query the DeepSeek API with Structured JSON Response
Intention
Ask Command: Structured AI Interaction
The AID.Ask command sends a user-defined prompt to the DeepSeek API and retrieves a response in JSON format, ideal for applications requiring structured data processing.
It leverages the DeepSeek API’s cost-effective pricing (e.g., $0.27/1M input tokens and $1.10/1M output tokens for DeepSeek Chat as of February 2025) to enable efficient AI-driven automation.
It’s part of the AID - DeepSeek API suite, designed for seamless integration into MiniRobotLanguage scripts.
The AID.Ask command interacts with the DeepSeek API by submitting a prompt, forcing the response format to JSON via AID_SetRespFormat "json", and returning a structured response parsed by HTP_AnalyzeDeepSeekResponse.
It supports storing the response in a variable and optionally copying it to the clipboard, making it suitable for workflows needing data extraction or further processing.
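Conceptually, the command amounts to a chat-completion request with the response format forced to a JSON object. A minimal Python sketch of such a request against DeepSeek's OpenAI-compatible HTTP API (the endpoint and field names follow DeepSeek's public API; the function names `build_payload` and `ask` are illustrative, and AID.Ask's actual implementation, including HTP_AnalyzeDeepSeekResponse, is internal to MRL):

```python
import json
import urllib.request

# DeepSeek's OpenAI-compatible chat-completion endpoint
API_URL = "https://api.deepseek.com/chat/completions"

def build_payload(prompt, model="deepseek-chat"):
    """Build the request body AID.Ask conceptually sends:
    one user message, with the response forced to a JSON object."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "response_format": {"type": "json_object"},
    }

def ask(prompt, api_key, model="deepseek-chat"):
    """Send the request and return the model's JSON answer text.
    Requires a valid API key; performs a real network call."""
    req = urllib.request.Request(
        API_URL,
        data=json.dumps(build_payload(prompt, model)).encode("utf-8"),
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["choices"][0]["message"]["content"]
```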
The AID.Ask command is critical for:
•Data Processing: JSON responses allow easy parsing for structured data (e.g., extracting specific fields like "content").
•Cost Efficiency: At $1.10/1M output tokens, it’s significantly cheaper than competitors like OpenAI’s GPT-4o ($5.00/1M input, $15.00/1M output).
•Automation: Ideal for scripts needing AI-generated data in a machine-readable format.
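Because the response is JSON, any host environment can extract individual fields from it. A minimal Python sketch, assuming a payload shaped like the examples on this page ({"content": ...} is an illustrative shape taken from those examples, not a guaranteed schema):

```python
import json

# Illustrative response text, shaped like the examples on this page;
# the real payload may carry additional fields (e.g., token usage).
response_text = '{"content": "13.96 million"}'

data = json.loads(response_text)   # parse the JSON string into a dict
content = data["content"]          # pull out the answer field
print(content)
```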
Provide a prompt (P1), an optional variable to store the JSON response (P2), and an optional flag to copy it to the clipboard (P3).
The DeepSeek API supports models like 'deepseek-chat' (general-purpose) and 'deepseek-reasoner' (reasoning-focused); per-token pricing for both is given under Cost Model and Usage below.
Use this command when you need structured output for programmatic use rather than human-readable text.
Cost Model and Usage
The DeepSeek API bills based on token usage. For 'deepseek-chat', costs are $0.27/1M input tokens and $1.10/1M output tokens; for 'deepseek-reasoner', $0.55/1M input and $2.19/1M output (February 2025 rates).
A typical query (e.g., 100 input tokens, 200 output tokens) costs ~$0.00025 for Chat, making it ideal for high-volume applications like chatbots or data extraction.
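The per-query estimate above follows directly from the per-token rates. A small sketch of the arithmetic (rates hard-coded from the February 2025 figures quoted in this section; `query_cost` is an illustrative helper, not part of MRL):

```python
# USD per 1M tokens, February 2025 rates quoted above: (input, output)
RATES = {
    "deepseek-chat":     (0.27, 1.10),
    "deepseek-reasoner": (0.55, 2.19),
}

def query_cost(model, input_tokens, output_tokens):
    """Estimated cost in USD of a single API call."""
    in_rate, out_rate = RATES[model]
    return (input_tokens * in_rate + output_tokens * out_rate) / 1_000_000

# The "typical query" from the text: 100 input + 200 output tokens on Chat
print(f"${query_cost('deepseek-chat', 100, 200):.6f}")  # -> $0.000247, i.e. ~$0.00025
```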
Example Usage
AID.Set Model|deepseek-chat
AID.Set Temperature|0.5
AID.Ask|What is the population of Tokyo?|$$POP|CLIP
DBP.JSON Response: $$POP
Sets the model and temperature, queries the population, stores the JSON response (e.g., {"content": "13.96 million"}) in $$POP, and copies it to the clipboard.
AID.Ask|List 5 planets|$$PLN
DBP.Planets: $$PLN
Stores a JSON list (e.g., {"content": "Mercury, Venus, Earth, Mars, Jupiter"}) in $$PLN.
Illustration
┌────────────────────┐
│ Prompt: "What is │
│ the capital?" │
├──────┬─────────────┤
│ JSON │ {"content": │
│ Resp │ "Paris"} │
├──────┼─────────────┤
│ Cost │ ~$0.0003 │
└──────┴─────────────┘
Flow of a prompt yielding a JSON response with estimated cost.
Syntax
AID.Ask|P1[|P2][|P3]
Parameter Explanation
P1 - Mandatory prompt sent to the DeepSeek API.
P2 - Optional variable to store the JSON response.
P3 - Optional flag (e.g., "CLIP") to copy the response to the clipboard.
Example
AID.Set Model|deepseek-reasoner
AID.Ask|Solve 2x + 3 = 7|$$SOL|CLIP
DBP.Solution: $$SOL
ENR.
Uses 'deepseek-reasoner' to solve an equation, storing the JSON response (e.g., {"content": "x = 2"}) in $$SOL.
Comparison: AID.Ask vs. AID.AskT
Use AID.Ask when you need structured JSON output for parsing or data-driven applications (e.g., extracting fields like "content" or "usage").
Choose AID.AskT for plain text output suitable for direct display or simpler workflows. AID.Ask incurs slightly higher processing overhead due to JSON formatting.
Remarks
- Returns "DeepSeek did not answer!" if the API fails.
- Response depends on prior settings (e.g., temperature, max tokens).
Limitations
- Requires a valid API key and endpoint.
- Output length is limited by the max tokens setting (default 1024).
See also:
• AID.AskT