Core Concepts
These are the building blocks of ORQO. Understanding them is key to using the platform effectively.
Essentials
The fundamental building blocks you'll work with every day.
Project
A workspace that groups everything related to a particular goal: Teams, Workflows, Tools, and documents. Projects are how you organize different initiatives.
Team
A collection of Agents that work together. A team has a shared LLM configuration (which model and API key to use). Agents inherit the team's LLM configuration by default, but can override it with their own — meaning a single team can mix models from different providers. A Claude agent can collaborate with a GPT agent and a Llama agent in the same workflow.
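The inheritance rule above can be sketched as a small data model. This is a minimal illustration, not ORQO's actual API: the class names, fields, and the `effective_llm` helper are all hypothetical.

```python
from dataclasses import dataclass
from typing import Optional

# Hypothetical data model for illustration only; ORQO's real schema may differ.
@dataclass
class LLMConfig:
    model: str
    credential: str  # name of the stored API-key credential

@dataclass
class Agent:
    role: str
    llm_override: Optional[LLMConfig] = None

@dataclass
class Team:
    llm: LLMConfig
    agents: list[Agent]

    def effective_llm(self, agent: Agent) -> LLMConfig:
        # An agent's own override wins; otherwise it inherits the team default.
        return agent.llm_override or self.llm

team = Team(
    llm=LLMConfig("claude-sonnet-4-20250514", "anthropic-key"),
    agents=[
        Agent("Research Analyst"),  # inherits the team default
        Agent("Editor", llm_override=LLMConfig("gpt-4o", "openai-key")),
    ],
)
print(team.effective_llm(team.agents[0]).model)  # claude-sonnet-4-20250514
print(team.effective_llm(team.agents[1]).model)  # gpt-4o
```

Because resolution happens per agent at run time, the same team really can mix providers: here a Claude-backed analyst and a GPT-backed editor coexist.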
Agent
An individual AI worker within a team. Each agent has:
- A role — a description of what it does (e.g., "Research Analyst", "Editor", "Code Reviewer").
- Skills — the tools and capabilities it can use during execution.
- LLM configuration — inherited from the team, or overridden per agent.
- Context window settings — how aggressively the agent compacts its conversation history.
The LLM configuration is the fuel, not the identity. An agent's role, skills, and behavior stay the same — you can swap the underlying model at any time. Run a few workflows with GPT-4o, then switch to Claude Sonnet to compare results, without reconfiguring anything else.
Workflow
An ordered sequence of Stages assigned to a team. When you run a workflow, the engine executes each stage in order, passing context forward.
Stage
A single step within a workflow. Each stage has:
- Agent assignments — which agents from the team participate, in what order, and in which phase (start, default, or completion).
- Task description — what the agents should accomplish in this stage.
- Hooks — entry and exit hooks that run when the stage begins or ends.
- Outcomes and routing — define possible outcomes for a stage and route the workflow to different stages based on the result, enabling branching and looping.
Stages aren't just linear steps — with outcome routing, a workflow can branch, loop back, or skip ahead based on what the agents produce. See Workflows for details.
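Outcome routing can be sketched as a loop over a routing table. Everything here is illustrative: the stage names, outcome strings, and the `run_stage` callable stand in for the real engine, which the source does not specify.

```python
# Hypothetical sketch of outcome-based routing, not ORQO's real engine.
def execute(stages, routes, run_stage, start):
    """Run stages in order, following outcome routes; stop when a stage has no route."""
    current, history = start, []
    while current is not None:
        outcome = run_stage(stages[current])      # e.g. "approved" / "rejected"
        history.append((current, outcome))
        current = routes.get((current, outcome))  # no route ends the workflow
    return history

stages = {"draft": "Write the report", "review": "Review the draft"}
routes = {
    ("draft", "done"): "review",
    ("review", "rejected"): "draft",  # loop back for another pass
    # ("review", "approved") has no route, so the workflow ends there
}

# Stub agent behaviour: reject the first draft, approve the second.
verdicts = iter(["done", "rejected", "done", "approved"])
history = execute(stages, routes, lambda task: next(verdicts), "draft")
print(history)
# [('draft', 'done'), ('review', 'rejected'), ('draft', 'done'), ('review', 'approved')]
```

The same table expresses branching (different outcomes lead to different stages), looping (a route pointing backward), and skipping ahead (a route pointing past the next stage).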
Workflow Run
A single execution of a workflow. Each run tracks status (pending, running, completed, stopped, or error), timing, and per-stage results. You can browse run history per project or per workflow.
Resources
The tools, credentials, and configurations that power your agents.
Credential
An encrypted secret stored in your organization — API keys, OAuth tokens, SMTP passwords, SSH keys. Credentials are injected as environment variables when the engine runs a workflow, so agents and tools can authenticate with external services.
LLM Configuration
Links an LLM Model (e.g., gpt-4o, claude-sonnet-4-20250514) with a Credential (your API key for that provider). You assign LLM configs to teams so agents know which model and key to use.
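At run time, the pieces come together: the config names a model and points at a credential, and the engine resolves both into environment variables. The variable names and the `env_for_run` helper below are hypothetical, shown only to make the flow concrete.

```python
# Illustrative only: how an LLM configuration and a credential store might
# resolve to environment variables at run time. All names are hypothetical.
credentials = {"anthropic-key": "example-secret"}  # decrypted secrets at runtime

llm_config = {"model": "claude-sonnet-4-20250514", "credential": "anthropic-key"}

def env_for_run(config, store):
    # The engine injects the model choice and the decrypted secret so
    # agents and tools can authenticate with the provider.
    return {
        "ORQO_LLM_MODEL": config["model"],
        "ORQO_LLM_API_KEY": store[config["credential"]],
    }

env = env_for_run(llm_config, credentials)
```

Keeping the credential as a reference rather than an inline value means rotating an API key touches one record, not every team that uses it.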
MCP Server
A connection to an external tool server using the Model Context Protocol. ORQO discovers available tools from the server and makes them available to agents. This is how agents interact with databases, APIs, file systems, and other external systems.
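The discovery step can be pictured as: ask the server what it offers, then register those tools for agents. The `MCPClient` class below is an illustrative stand-in, not a real MCP SDK.

```python
# Hypothetical sketch of tool discovery. A real client would speak the
# Model Context Protocol over a transport; this stub only models the shape.
class MCPClient:
    def __init__(self, tools):
        self._tools = tools

    def list_tools(self):
        # A real client would send a tools/list request to the server.
        return list(self._tools)

server = MCPClient(tools=["query_database", "read_file"])

# Map each discovered tool name to the server that provides it,
# so agents can dispatch calls to the right backend.
agent_tools = {name: server for name in server.list_tools()}
```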
Skill
A self-contained capability package that gives an agent everything it needs to do something specific — tools, credentials, knowledge, and an optional runtime. Unlike prompt-only skill systems, ORQO skills carry the full stack: assign a "Slack" skill and the agent gets the Slack tools, the API credential, and the knowledge of how to use them. Skills are the unit of capability.
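The "full stack in one package" idea can be sketched as a bundle the agent carries. The field names, the example Slack tools, and the `available_tools` helper are hypothetical, chosen to mirror the description above.

```python
from dataclasses import dataclass, field

# Hypothetical packaging of a skill: tools, a credential reference, and
# knowledge travel together, so assigning the skill equips the agent fully.
@dataclass
class Skill:
    name: str
    tools: list[str]
    credential: str
    knowledge: str  # e.g. usage notes the agent reads before acting

@dataclass
class Agent:
    role: str
    skills: list[Skill] = field(default_factory=list)

    def available_tools(self):
        return [t for s in self.skills for t in s.tools]

slack = Skill(
    name="Slack",
    tools=["slack_post_message", "slack_list_channels"],
    credential="slack-bot-token",
    knowledge="Post summaries to #reports; keep messages under 500 chars.",
)
agent = Agent(role="Editor", skills=[slack])
```

Assigning `slack` to the agent grants the tools, the credential reference, and the usage knowledge in one step, which is the contrast with prompt-only skill systems.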
Connectivity
How ORQO communicates with the outside world.
App
A bidirectional connection to an external platform — Slack, email, a custom webhook endpoint. Apps handle both inbound messages (receiving events from the platform) and outbound delivery (sending agent output back to users or services).
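The two directions can be sketched as a pair of methods on one connection object. The `App` class, its method names, and the message shapes are all hypothetical, used only to show the inbound/outbound split.

```python
# Illustrative interface for a bidirectional App connection; a real App
# would authenticate with the external platform's API.
class App:
    def __init__(self, platform):
        self.platform = platform
        self.inbox = []

    def receive(self, event):
        # Inbound: normalize a platform event into a message for routing.
        msg = {"source": self.platform, "text": event["text"]}
        self.inbox.append(msg)
        return msg

    def deliver(self, text, destination):
        # Outbound: send agent output back to a user or channel.
        return {"platform": self.platform, "to": destination, "body": text}

slack = App("slack")
msg = slack.receive({"text": "What's the status of the Q3 report?"})
reply = slack.deliver("It's in review.", destination="#general")
```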
Doorkeeper
Your organization's AI front desk. Doorkeeper receives inbound messages from all connected Apps — a Slack message, an email, a webhook — and decides what to do with them: respond directly, delegate to a workflow, or route to the right contact. It maintains its own knowledge graph memory across conversations, so it learns your organization's context over time.
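The three possible actions, respond, delegate, or route, amount to a triage decision. The rules, message shape, and return values below are invented for illustration; Doorkeeper's real decision process (which also draws on its knowledge graph memory) is not shown in this sketch.

```python
# Hypothetical triage dispatcher illustrating Doorkeeper's three actions.
def triage(message, workflows, contacts):
    text = message["text"].lower()
    for name, keywords in workflows.items():
        if any(k in text for k in keywords):
            return ("delegate", name)                  # hand off to a workflow
    if message["sender"] in contacts:
        return ("route", contacts[message["sender"]])  # forward to a person
    return ("respond", "Thanks! How can I help?")      # answer directly

action = triage(
    {"text": "Please run the weekly report", "sender": "alice"},
    workflows={"weekly-report": ["weekly report"]},
    contacts={},
)
print(action)  # ('delegate', 'weekly-report')
```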