Configure an LLM
Set up an LLM configuration that links a model to your API credential. LLM configurations are assigned to teams (as a default) or to individual agents (as an override).
Prerequisites
- A credential for your LLM provider stored in your organization (see Add a Credential)
- Models synced from the provider (ORQO syncs models automatically from provider APIs)
Steps
1. Navigate to LLM Configurations
Open the sidebar and click Settings, then LLMs. You see a grid of LLM configuration cards, organized by provider.
2. Click "New LLM Configuration"
Click the New LLM Configuration button. The configuration form opens.
3. Select a model
Choose a model from the grouped dropdown. Models are organized by provider (OpenAI, Anthropic, Google, OpenRouter, etc.). Each model entry shows the model name and identifier.
Models are synced from provider APIs automatically. If you do not see a model you expect, verify that you have a credential for that provider and that the model is available in your account.
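To illustrate how the grouped dropdown is organized, here is a minimal sketch that groups model entries by provider. The record fields (`provider`, `name`, `id`) and the sample model list are illustrative assumptions, not ORQO's actual schema.

```python
from collections import defaultdict

# Hypothetical model records; field names are illustrative only.
models = [
    {"provider": "OpenAI", "name": "GPT-4o", "id": "gpt-4o"},
    {"provider": "Anthropic", "name": "Claude Sonnet", "id": "claude-sonnet"},
    {"provider": "OpenAI", "name": "GPT-4o mini", "id": "gpt-4o-mini"},
]

def group_by_provider(models):
    """Group model entries by provider, as the dropdown does,
    showing each entry as 'Name (identifier)'."""
    grouped = defaultdict(list)
    for m in models:
        grouped[m["provider"]].append(f'{m["name"]} ({m["id"]})')
    return dict(grouped)
```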
4. Link a credential
Select the credential that provides your API key for this provider. The dropdown shows credentials matching the provider's expected key (e.g., OPENAI_API_KEY for OpenAI models, ANTHROPIC_API_KEY for Anthropic models).
If no matching credential exists, you need to add one first.
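The filtering described above can be sketched as follows. The provider-to-key mapping uses the two examples given in this guide (OPENAI_API_KEY, ANTHROPIC_API_KEY); the credential record shape is an assumption for illustration.

```python
# Expected key name per provider, per the examples in this guide.
EXPECTED_KEY = {
    "OpenAI": "OPENAI_API_KEY",
    "Anthropic": "ANTHROPIC_API_KEY",
}

def matching_credentials(credentials, provider):
    """Return only the credentials whose key name matches the
    provider's expected key (what the dropdown shows)."""
    expected = EXPECTED_KEY.get(provider)
    return [c for c in credentials if c["key"] == expected]
```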
5. Set an optional label
Add a label to distinguish multiple configurations using the same model. For example, if you have two OpenAI GPT-4o configs with different credentials (production vs. development), the label differentiates them.
The display name follows the pattern: Provider: Model Name - Label (e.g., "OpenAI: GPT-4o - Production").
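The naming pattern can be expressed as a small formatter, assuming the label is simply omitted (along with its separator) when unset:

```python
def display_name(provider, model_name, label=None):
    """Compose 'Provider: Model Name - Label'; the label part is optional."""
    base = f"{provider}: {model_name}"
    return f"{base} - {label}" if label else base
```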
6. Save the configuration
Click Create LLM Configuration. The new config appears as a card in the grid, showing the provider, model name, label, and linked credential status.
7. Assign to a team
Open a project and navigate to a team's Builder tab. In the sidebar palette, the LLM Library section shows all organization LLM configurations.
Drag the LLM card onto the Team node (the large card at the bottom of the canvas). This sets it as the team's default LLM; all agents on the team use this model unless they have an individual override.
Every team needs a default LLM before it can execute workflows. Assign one before running any workflow with that team.
8. Assign to an individual agent (optional)
To give a specific agent a different model than the team default, drag an LLM card from the palette onto that agent node in the Team Builder. An LLM node appears in the top row, connected to the agent.
This is useful when:
- A research agent needs a model with a larger context window
- A code agent works better with a coding-specialized model
- You want to use a cheaper model for simple tasks and a premium model for complex ones
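The resolution rule described in steps 7 and 8 (agent override wins, otherwise fall back to the team default, and a team default is required) can be sketched like this. The dict shapes are assumptions for illustration, not ORQO's data model:

```python
def resolve_llm(agent, team):
    """Effective LLM for an agent: its own override if set,
    otherwise the team's default LLM."""
    if agent.get("llm"):
        return agent["llm"]
    if team.get("default_llm") is None:
        # Every team needs a default LLM before it can execute workflows.
        raise ValueError("Team has no default LLM; assign one before running workflows")
    return team["default_llm"]
```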
9. Review configuration details
Click an LLM configuration card in Settings to see:
- Provider and model identifier sent to the API
- Base URL for the provider's API endpoint
- Linked credential status
- Pricing information (if available from the provider)
- Context window size
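Summarizing the fields above as a record may help when reasoning about configurations; the field names and types here are assumptions, not ORQO's actual schema:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class LLMConfigDetails:
    provider: str
    model_id: str                  # identifier sent to the API
    base_url: str                  # provider's API endpoint
    credential_linked: bool
    context_window: int            # in tokens
    pricing: Optional[str] = None  # only if available from the provider
```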
10. Delete an LLM configuration
Click the configuration card, then click Delete. Teams and agents using this configuration will have their LLM assignment cleared (set to null). You need to assign a new LLM before those teams can execute workflows.
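The clearing behavior on delete can be sketched as follows, again with illustrative dict shapes rather than ORQO's real data model:

```python
def delete_llm_config(config_id, teams, agents):
    """Clear references to a deleted LLM configuration (set to None,
    i.e. null). Returns the teams that now lack a default LLM and
    need a new one before they can execute workflows."""
    affected = []
    for t in teams:
        if t.get("default_llm") == config_id:
            t["default_llm"] = None
            affected.append(t["name"])
    for a in agents:
        if a.get("llm") == config_id:
            a["llm"] = None
    return affected
```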
What's next
- Build a Team to assign the LLM to a team
- Configure an Agent to override the team LLM for specific agents
- Add a Credential if you need to add provider API keys