# AI Models
Review the AI models available in OpenAnalyst, understand their capabilities and costs, and configure your default model and per-agent settings.
## Supported Models
OpenAnalyst integrates with multiple leading AI providers. The table below lists each available model, its provider, its primary strengths, and its context window.
| Provider | Model | Strengths | Context Window |
|---|---|---|---|
| OpenAI | GPT-4o | Balanced reasoning and speed; strong at SQL generation and explanation. | 128K tokens |
| OpenAI | o1 | Deep reasoning with chain-of-thought; best for complex analytical planning. | 200K tokens |
| OpenAI | o3-mini | Fast and cost-effective reasoning; suitable for high-volume tasks. | 128K tokens |
| Anthropic | Claude 3.5 Sonnet | Excellent code generation, data interpretation, and nuanced writing. | 200K tokens |
| Anthropic | Claude Opus | Highest-capability model for complex multi-step analysis and report writing. | 200K tokens |
| DeepSeek | DeepSeek V3 | Strong at data-intensive reasoning; competitive with frontier models at lower cost. | 128K tokens |
| DeepSeek | DeepSeek R1 | Chain-of-thought reasoning model; excellent for mathematical and statistical tasks. | 64K tokens |
| Alibaba | Qwen QwQ | Long-context reasoning with multilingual support. | 128K tokens |
| Alibaba | Qwen 2.5 | Fast general-purpose model; strong coding and multilingual capabilities. | 128K tokens |
## Selecting a Default Model
The workspace default model is used for all agents and natural language queries unless an agent-specific override is set. To change the default, navigate to Settings > AI Models and select from the dropdown. Your selection is saved per workspace and applies to all members.
Note: Personal model preferences (set in your profile settings) override the workspace default for queries and conversations you initiate, but not for scheduled agent pipelines, which always use the workspace default or the model configured on that specific agent.
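The precedence rules above can be summarized in a short sketch. The function and parameter names here are illustrative only, not part of the OpenAnalyst API; the sketch models the two documented cases (interactive queries vs. scheduled agent pipelines).

```python
def resolve_model(workspace_default, personal_preference=None,
                  agent_model=None, scheduled=False):
    """Return the model a request will use, per the precedence above.

    Illustrative helper -- not an OpenAnalyst API. Scheduled pipelines
    ignore personal preferences; interactive queries honor them.
    """
    if scheduled:
        # Scheduled pipelines use the agent's configured model if set,
        # otherwise the workspace default.
        return agent_model or workspace_default
    # Interactive queries: personal preference wins over the workspace default.
    return personal_preference or workspace_default
```

For example, a user whose personal preference is Claude 3.5 Sonnet will get that model for their own queries, but a pipeline they scheduled still runs on the workspace default unless the agent has its own model configured.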
## Model-Specific Settings
Advanced model parameters can be tuned per agent in the agent configuration panel. These settings affect the character of model responses.
- Temperature — Controls randomness. Lower values (0.0–0.3) produce more deterministic, fact-focused outputs. Higher values (0.7–1.0) produce more varied, creative outputs. For analytical tasks, temperatures in the 0.0–0.2 range are recommended.
- Max tokens — Sets the maximum length of a single response. Increase this for agents expected to write long reports. Keep it low for quick query agents to control latency and cost.
- Top-p (nucleus sampling) — An alternative to temperature for controlling output diversity. Not available on all models.
- System prompt — A persistent instruction prepended to every conversation turn. Use it to establish domain context, output format requirements, or persona constraints.
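Taken together, these settings might look like the following for a fact-focused SQL query agent. This is a sketch: the keys mirror the settings described above, but the exact field names in the OpenAnalyst agent configuration panel may differ.

```python
# Illustrative configuration for a deterministic SQL query agent.
# Field names are assumptions; match them to your configuration panel.
sql_agent_settings = {
    "model": "gpt-4o",
    "temperature": 0.1,   # low randomness for fact-focused SQL output
    "max_tokens": 1024,   # short responses keep latency and cost down
    "top_p": 1.0,         # leave nucleus sampling neutral when tuning temperature
    "system_prompt": (
        "You are a data analyst. Answer with a single SQL query "
        "followed by a one-sentence explanation."
    ),
}
```

Note that temperature and top-p both control output diversity; a common practice is to tune one and leave the other at its default.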
## Cost by Plan
Model usage is governed by AI credits, which are allocated per billing cycle based on your plan. The table below shows the credits included with each plan and which models that plan can access.
| Plan | Monthly AI Credits | Models Available |
|---|---|---|
| Free | 100 credits | GPT-4o, DeepSeek V3, Qwen 2.5 |
| Basic ($29/mo) | 500 credits | All models except Claude Opus and o1 |
| Pro ($79/mo) | 2,000 credits | All models |
| Max ($149/mo) | 10,000 credits | All models with priority throughput |
| Enterprise | Custom / unlimited options | All models, custom fine-tuning available |
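A quick way to budget is to divide a plan's credit allocation by the typical credit cost of a query. The per-query costs below are hypothetical placeholders; actual credit pricing per model is set by OpenAnalyst and may differ.

```python
def queries_per_cycle(monthly_credits, credits_per_query):
    """Estimate how many queries a plan's allocation covers per billing cycle."""
    return monthly_credits // credits_per_query

# Hypothetical example: on the Basic plan (500 credits), a model costing
# 2 credits per query would cover 250 queries per cycle.
basic_estimate = queries_per_cycle(500, 2)
```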
## Choosing the Right Model
Different tasks benefit from different model characteristics. Use this guidance to match tasks to models:
- Ad-hoc data questions — GPT-4o or Claude 3.5 Sonnet for a good balance of speed and accuracy.
- Complex analytical reports — Claude Opus or o1 for depth and coherent long-form output.
- High-volume automated pipelines — o3-mini, DeepSeek V3, or Qwen 2.5 for cost efficiency at scale.
- Statistical and mathematical tasks — DeepSeek R1 for structured step-by-step reasoning.
- Multilingual workspaces — Qwen QwQ or Qwen 2.5 for robust multilingual support.
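For automated workflows, the guidance above can be encoded as a simple routing table. The task labels and helper function are illustrative, not an OpenAnalyst feature; adapt them to your workspace's own task categories.

```python
# Routing table derived from the guidance above. Task labels are
# illustrative; the model names match the Supported Models table.
MODEL_BY_TASK = {
    "ad_hoc_question": ["GPT-4o", "Claude 3.5 Sonnet"],
    "analytical_report": ["Claude Opus", "o1"],
    "automated_pipeline": ["o3-mini", "DeepSeek V3", "Qwen 2.5"],
    "statistics": ["DeepSeek R1"],
    "multilingual": ["Qwen QwQ", "Qwen 2.5"],
}

def pick_model(task, fallback="GPT-4o"):
    """Return the first recommended model for a task, or a fallback."""
    return MODEL_BY_TASK.get(task, [fallback])[0]
```

Listing multiple candidates per task also gives you a natural fallback order if your plan does not include the first choice.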