MiniRobotLanguage (MRL)
AIN.SetModel
Set the AI Model for AnythingLLM API Operations
Intention
SetModel Command: Selecting Your AI Engine
The SetModel command enables you to specify the AI model that powers AnythingLLM API operations, supporting a wide range of text and vision tasks.
Choose from local models like Llama 3 or cloud-based options like OpenAI’s GPT-4o to tailor performance, privacy, and capabilities to your needs.
It’s a core component of the AIN - AnythingLLM AI suite, offering flexibility across use cases.
This command configures the AIN_Model global variable, determining which Large Language Model (LLM) handles API requests for text generation, vision processing, or Retrieval-Augmented Generation (RAG).
Unlike many AI frameworks, AnythingLLM has no default model ($AIN_DefModel = ""); you must set one explicitly, e.g. "llama3" for local text processing or "gemini-1.5-pro" for vision tasks. If the variable is left empty, each API call must specify a model itself.
Choosing the right model is essential for optimizing your AI experience:
•Task Specificity: Select models like "mistral-7b" for text or "gemini-1.5-flash" for vision and multimodal inputs.
•Privacy: Use local models (e.g., "phi-3-mini") to keep data offline, or cloud models (e.g., "gpt-4o") for advanced features.
•Performance: Balance speed against accuracy, e.g. "llama3" for fast local inference or "claude-3-opus" for high-quality output.
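For example, a script can switch engines between calls to match the task; this is a sketch, and the model names must match what your provider actually offers:
AIN.SetModel|phi-3-mini
AIN.Ask|Summarize this note in one sentence.|$$SUM
AIN.SetModel|gpt-4o
AIN.Ask|Summarize this note in one sentence.|$$SU2
DBP.Local: $$SUM / Cloud: $$SU2
Each AIN.SetModel call takes effect for all subsequent API operations until the model is changed again.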
Set the model prior to operations like AIN.Ask (text chat) or AIN.AskV (vision chat) to define the processing engine.
AnythingLLM supports a variety of LLMs, configurable via providers like Ollama, LM Studio, or cloud services (OpenAI, Anthropic, Google Gemini). For local setups, download models in GGUF format (e.g., from Hugging Face) and import them into AnythingLLM Desktop.
Supported models (as of March 20, 2025) include:
•Local (via Ollama): "llama3" (8B, text), "mistral-7b" (7B, text), "phi-3-mini" (3.8B, text).
•Cloud (OpenAI): "gpt-4o" (multimodal, 128K context), "gpt-3.5-turbo" (text, 16K context).
•Cloud (Google): "gemini-1.5-pro" (vision/text, 1M context), "gemini-1.5-flash" (fast multimodal).
•Cloud (Anthropic): "claude-3-opus" (text, 200K context), "claude-3.5-sonnet" (text/vision).
Example Usage
AIN.SetModel|gemini-1.5-pro
AIN.AskV|Describe this image|https://example.com/photo.jpg|$$DES
DBP.Image Description: $$DES
This configures "gemini-1.5-pro" for vision tasks, leveraging its multimodal capabilities.
Illustration
┌────────────────────┬──────────────┐
│ Model │ Capability │
├────────────────────┼──────────────┤
│ llama3 │ Text │
│ gemini-1.5-pro │ Text/Vision │
└────────────────────┴──────────────┘
Model selection impacts task support and performance.
Syntax
AIN.SetModel|P1
AIN.Set_Model|P1
Parameter Explanation
P1 - The model identifier (e.g., "llama3", "gpt-4o"). If empty, resets to no default, requiring per-call specification.
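To clear the active model and return to per-call specification, pass an empty P1:
AIN.SetModel|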
Example
AIN.SetModel|llama3
AIN.Ask|What is machine learning?|$$DEF
DBP.ML Definition: $$DEF
ENR.
Remarks
- Use AIN.GetModel to check the active model.
- Local models require sufficient hardware (e.g., 8GB RAM for "llama3"); cloud models need API keys.
- Model details and availability are documented at http://localhost:3001/api/docs for your instance.
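You can combine this with AIN.GetModel to confirm the setting took effect; a sketch, assuming AIN.GetModel returns the active model name into the given variable:
AIN.SetModel|llama3
AIN.GetModel|$$MOD
DBP.Active model: $$MOD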
Limitations
- Unsupported model names trigger API errors; validate against your provider’s offerings.
- Local model performance varies with hardware; large models (e.g., "claude-3-opus") are cloud-only.
See also:
• AIN.Ask
• AIN.AskV