AI Models

Review the AI models available in OpenAnalyst, understand their capabilities and costs, and configure your default model and per-agent settings.

Supported Models

OpenAnalyst integrates with multiple leading AI providers. The table below lists each available model, its provider, its primary strengths, and its context window.

Provider | Model | Strengths | Context Window
OpenAI | GPT-4o | Balanced reasoning and speed; strong at SQL generation and explanation. | 128K tokens
OpenAI | o1 | Deep reasoning with chain-of-thought; best for complex analytical planning. | 200K tokens
OpenAI | o3-mini | Fast and cost-effective reasoning; suitable for high-volume tasks. | 128K tokens
Anthropic | Claude 3.5 Sonnet | Excellent code generation, data interpretation, and nuanced writing. | 200K tokens
Anthropic | Claude Opus | Highest-capability model for complex multi-step analysis and report writing. | 200K tokens
DeepSeek | DeepSeek V3 | Strong at data-intensive reasoning; competitive with frontier models at lower cost. | 128K tokens
DeepSeek | DeepSeek R1 | Chain-of-thought reasoning model; excellent for mathematical and statistical tasks. | 64K tokens
Alibaba | Qwen QwQ | Long-context reasoning with multilingual support. | 128K tokens
Alibaba | Qwen 2.5 | Fast general-purpose model; strong coding and multilingual capabilities. | 128K tokens

Selecting a Default Model

The workspace default model is used for all agents and natural language queries unless an agent-specific override is set. To change the default, navigate to Settings > AI Models and select from the dropdown. Your selection is saved per workspace and applies to all members.

Note: Personal model preferences (set in your profile settings) override the workspace default for queries and conversations you initiate, but not for scheduled agent pipelines, which always use the workspace default or the model configured on that specific agent.
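The precedence rules above can be sketched as a small resolution function. This is an illustrative sketch only; the function and parameter names are assumptions, not part of OpenAnalyst's actual API.

```python
def resolve_model(workspace_default, agent_override=None,
                  personal_preference=None, is_scheduled_pipeline=False):
    """Return the model a request should use.

    Scheduled agent pipelines ignore personal preferences: they use the
    model configured on that agent if set, otherwise the workspace default.
    Interactive queries and conversations prefer the user's personal
    preference over the workspace default.
    """
    if is_scheduled_pipeline:
        return agent_override or workspace_default
    return personal_preference or workspace_default


# Interactive query: personal preference wins.
print(resolve_model("GPT-4o", personal_preference="Claude 3.5 Sonnet"))
# Scheduled pipeline: agent override wins; personal preference is ignored.
print(resolve_model("GPT-4o", agent_override="o1",
                    personal_preference="Claude 3.5 Sonnet",
                    is_scheduled_pipeline=True))
```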

Model-Specific Settings

Advanced model parameters can be tuned per agent in the agent configuration panel. These settings affect the character of model responses.

  • Temperature — Controls randomness. Lower values (0.0–0.3) produce more deterministic, fact-focused outputs. Higher values (0.7–1.0) produce more varied, creative outputs. For analytical tasks, temperatures in the 0.0–0.2 range are recommended.
  • Max tokens — Sets the maximum length of a single response. Increase this for agents expected to write long reports. Keep it low for quick query agents to control latency and cost.
  • Top-p (nucleus sampling) — An alternative to temperature for controlling output diversity. Not available on all models.
  • System prompt — A persistent instruction prepended to every conversation turn. Use it to establish domain context, output format requirements, or persona constraints.
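Put together, an agent configuration covering these four parameters might look like the sketch below. The key names and values are illustrative assumptions, not OpenAnalyst's actual configuration schema.

```python
# Hypothetical per-agent configuration; key names are assumptions.
agent_config = {
    "model": "Claude 3.5 Sonnet",
    # Low temperature (0.0-0.2) for deterministic, fact-focused analysis.
    "temperature": 0.1,
    # Cap response length to control latency and credit cost.
    "max_tokens": 4096,
    # Nucleus sampling; note that not all models support this parameter.
    "top_p": 0.9,
    # Persistent instruction prepended to every conversation turn.
    "system_prompt": (
        "You are a revenue analyst. Prefer SQL over prose, and state "
        "which tables each answer draws on."
    ),
}
```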

Cost by Plan

Model usage is governed by AI credits, which are allocated per billing cycle based on your plan. The table below shows the included credits and relative costs per model tier.

Plan | Monthly AI Credits | Models Available
Free | 100 credits | GPT-4o, DeepSeek V3, Qwen 2.5
Basic ($29/mo) | 500 credits | All models except Claude Opus and o1
Pro ($79/mo) | 2,000 credits | All models
Max ($149/mo) | 10,000 credits | All models with priority throughput
Enterprise | Custom / unlimited options | All models; custom fine-tuning available

Choosing the Right Model

Different tasks benefit from different model characteristics. Use this guidance to match tasks to models:

  • Ad-hoc data questions — GPT-4o or Claude 3.5 Sonnet for a good balance of speed and accuracy.
  • Complex analytical reports — Claude Opus or o1 for depth and coherent long-form output.
  • High-volume automated pipelines — o3-mini, DeepSeek V3, or Qwen 2.5 for cost efficiency at scale.
  • Statistical and mathematical tasks — DeepSeek R1 for structured step-by-step reasoning.
  • Multilingual workspaces — Qwen QwQ or Qwen 2.5 for robust multilingual support.
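The guidance above can be condensed into a simple routing table, which is sometimes useful when configuring multiple agents programmatically. The structure below is a hypothetical sketch; the task-type keys and helper function are illustrative, not an OpenAnalyst API.

```python
# Hypothetical mapping of task categories to recommended models,
# condensed from the guidance above.
TASK_MODEL_MAP = {
    "ad_hoc_query":      ["GPT-4o", "Claude 3.5 Sonnet"],
    "analytical_report": ["Claude Opus", "o1"],
    "bulk_pipeline":     ["o3-mini", "DeepSeek V3", "Qwen 2.5"],
    "statistics":        ["DeepSeek R1"],
    "multilingual":      ["Qwen QwQ", "Qwen 2.5"],
}

def suggest_models(task_type):
    """Return recommended models for a task type.

    Unknown task types fall back to the balanced ad-hoc picks.
    """
    return TASK_MODEL_MAP.get(task_type, TASK_MODEL_MAP["ad_hoc_query"])
```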