LLM Configuration

LLM configurations link a model with a credential — telling the platform which AI model to use and how to authenticate with its provider.

Models

ORQO supports models from multiple providers. Models are synced from provider APIs, so the available list stays current as providers release new models.

Supported providers include:

  • OpenAI — GPT-4o, GPT-4, o1, o3, and more.
  • Anthropic — Claude Opus, Sonnet, Haiku.
  • Google — Gemini models.
  • Additional providers — Custom models can be added manually.
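To illustrate the sync mentioned above: providers such as OpenAI expose a `GET /v1/models` endpoint that returns a list object with model entries. A minimal sketch of flattening such a response into rows, assuming that response shape (the `sync_models` name is illustrative, not ORQO's actual API):

```python
# Sketch of normalizing a provider's model-list response, assuming the
# OpenAI-style shape {"object": "list", "data": [{"id": ...}, ...]}.
# `sync_models` is a hypothetical name, not part of ORQO.

def sync_models(provider: str, payload: dict) -> list[tuple[str, str]]:
    """Flatten a model-list response into (provider, model_id) rows."""
    return [(provider, m["id"]) for m in payload.get("data", [])]

# Example payload shaped like OpenAI's /v1/models response:
sample = {"object": "list", "data": [{"id": "gpt-4o"}, {"id": "o3"}]}
print(sync_models("openai", sample))  # [('openai', 'gpt-4o'), ('openai', 'o3')]
```

Each provider's list is re-fetched periodically, so newly released models appear without manual updates.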

Platform-Provided Configurations

ORQO comes with pre-configured LLM configurations backed by platform credits. These use ORQO's own API keys via OpenRouter, giving you access to models from all major providers without needing your own API keys. Usage is deducted from your subscription's credit balance.

This means you can start building and running workflows immediately after signing up.
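For intuition on what happens behind a platform-provided configuration: OpenRouter exposes an OpenAI-compatible chat completions endpoint, and the platform supplies its own key rather than yours. A rough sketch of the kind of request that gets issued (the model slug, key, and helper name are illustrative; the request is built but not sent):

```python
# Illustrative only: shape of an OpenRouter chat completions request.
# The Authorization key would be ORQO's platform key, not the user's.
# Sending the request is omitted; this just builds it.

def build_openrouter_request(model: str, prompt: str, api_key: str) -> dict:
    return {
        "url": "https://openrouter.ai/api/v1/chat/completions",
        "headers": {"Authorization": f"Bearer {api_key}"},
        "json": {
            "model": model,  # e.g. "openai/gpt-4o" (slug illustrative)
            "messages": [{"role": "user", "content": prompt}],
        },
    }

req = build_openrouter_request("openai/gpt-4o", "Summarize this ticket.", "sk-or-...")
print(req["json"]["model"])  # openai/gpt-4o
```

Because every platform-provided model routes through one endpoint, usage from any provider can be metered against a single credit balance.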

Creating a Custom LLM Configuration

To use your own API key instead of platform credits, create a configuration under Settings → LLM Configs:

  1. Select a model from the synced list.
  2. Select the credential that holds your API key for that provider.
  3. Give it a descriptive name (e.g., "Claude Sonnet - Production").
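Conceptually, the steps above produce a record linking a model to a credential under a display name. A minimal sketch, with hypothetical field names (ORQO's actual schema may differ):

```python
# Hypothetical shape of an LLM configuration record: a model from the
# synced list, a reference to the credential holding the provider key,
# and a descriptive display name. Field names are illustrative.
from dataclasses import dataclass

@dataclass(frozen=True)
class LLMConfig:
    name: str           # descriptive label, e.g. "Claude Sonnet - Production"
    model: str          # model chosen from the synced list
    credential_id: str  # credential holding your API key for that provider

cfg = LLMConfig(
    name="Claude Sonnet - Production",
    model="claude-sonnet",
    credential_id="cred_anthropic_prod",
)
print(cfg.name)  # Claude Sonnet - Production
```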

Assigning LLM Configs

LLM configurations are assigned at two levels:

  • Team level — All agents in the team use this config by default.
  • Agent level — An individual agent can override the team's config with a different one.

This lets you mix models within a team. For example, use a fast model for research agents and a more capable model for the final review agent.
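The two-level lookup described above reduces to a simple fallback: use the agent's override when set, otherwise the team default. A sketch under that assumption (function and model names are illustrative):

```python
# Sketch of two-level config resolution: an agent-level config, when
# present, overrides the team-level default. Names are illustrative.
from typing import Optional

def resolve_config(team_config: str, agent_config: Optional[str] = None) -> str:
    """Return the agent's override if set, else the team's default config."""
    return agent_config if agent_config is not None else team_config

print(resolve_config("fast-model"))                   # fast-model
print(resolve_config("fast-model", "capable-model"))  # capable-model
```

A research agent with no override inherits the team's fast model, while the review agent's override wins without affecting its teammates.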