This page gives an overview of n8n's three integrated AI subsystems: the AI Workflow Builder (canvas assistant), the Chat Hub (user-facing conversational interface), and the AI Agent execution model (tool-calling loop inside workflow runs). It covers how these systems are organized across packages and how they relate to each other.
For detailed documentation on each subsystem, see the corresponding child pages.
For the LangChain node definitions that power agent execution inside workflows, see AI and LangChain Nodes.
The AI features span four packages:
| Package | Role |
|---|---|
| packages/@n8n/ai-workflow-builder.ee | LangGraph-based multi-agent workflow generator |
| packages/cli/src/controllers/ai.controller.ts | REST endpoints for the workflow builder assistant |
| packages/cli/src/modules/chat-hub/ | Chat Hub backend: sessions, models, streaming |
| packages/frontend/editor-ui/src/features/ai/chatHub/ | Chat Hub frontend: Vue 3 SPA components and store |
| packages/@n8n/api-types/src/chat-hub.ts | Shared DTOs and Zod schemas for both systems |
Sources: packages/@n8n/ai-workflow-builder.ee/src/workflow-builder-agent.ts:1-50, packages/cli/src/controllers/ai.controller.ts:1-42, packages/cli/src/modules/chat-hub/chat-hub.controller.ts:1-56, packages/frontend/editor-ui/src/features/ai/chatHub/ChatView.vue:1-98
The following diagram maps each AI subsystem to the concrete code entities that implement it.
AI subsystems and their code entities
Sources: packages/cli/src/controllers/ai.controller.ts:34-42, packages/cli/src/services/ai-workflow-builder.service.ts:1-30, packages/@n8n/ai-workflow-builder.ee/src/workflow-builder-agent.ts:152-200, packages/cli/src/modules/chat-hub/chat-hub.controller.ts:48-56, packages/cli/src/modules/chat-hub/chat-hub.service.ts:52-72, packages/frontend/editor-ui/src/features/ai/chatHub/ChatView.vue:76-97
The AI Workflow Builder lets users describe a workflow in natural language from the canvas. The backend processes requests through a LangGraph-based multi-agent system.
API surface (AiController at /rest/ai/):
| Endpoint | DTO | Description |
|---|---|---|
| POST /ai/chat | AiBuilderChatRequestDto | Send a builder message; streams response |
| POST /ai/ask | AiAskRequestDto | Ad-hoc AI question (non-builder) |
| POST /ai/apply-suggestion | AiApplySuggestionRequestDto | Apply a code suggestion |
| GET /ai/free-credits | (none) | Claim free AI credits |
| POST /ai/free-credits | AiFreeCreditsRequestDto | Claim credits with credential name |
The WorkflowBuilderAgent.chat() method is the central entry point. It dispatches to one of three backends based on feature flags in the request:
- CodeWorkflowBuilder: code-generation path (featureFlags.codeBuilder = true)
- createMultiAgentWorkflowWithSubgraphs(): legacy default
- TriageAgent: routes between assistant and builder (featureFlags.mergeAskBuild = true)

The LLM configuration is per-stage via StageLLMs:
```
StageLLMs {
  supervisor, responder, discovery,
  builder, parameterUpdater, planner
}
```
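The dispatch between the three backends can be sketched as follows. Only the flag names (codeBuilder, mergeAskBuild) and backend names come from the text above; the dispatcher function itself is a simplified stand-in, not the real WorkflowBuilderAgent code:

```typescript
// Illustrative dispatcher mirroring the three paths described above.
interface BuilderFeatureFlags {
  codeBuilder?: boolean;
  mergeAskBuild?: boolean;
}

function pickBuilderBackend(flags: BuilderFeatureFlags): string {
  if (flags.codeBuilder) return 'CodeWorkflowBuilder';
  if (flags.mergeAskBuild) return 'TriageAgent';
  return 'createMultiAgentWorkflowWithSubgraphs'; // legacy default
}
```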
Session continuity is managed by SessionManagerService.generateThreadId(workflowId, userId), which produces a stable LangGraph thread ID so conversation context persists across requests.
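The stability property matters more than the exact format. A minimal sketch of the idea, with a hypothetical ID format that may differ from what SessionManagerService actually produces:

```typescript
// Hypothetical sketch: derive a deterministic LangGraph thread ID from
// the (workflowId, userId) pair so that the same pair always resumes
// the same conversation thread across HTTP requests.
function generateThreadId(workflowId: string | undefined, userId: string): string {
  return workflowId
    ? `workflow-${workflowId}-user-${userId}`
    : `new-session-user-${userId}`; // builder opened before first save
}

// Two requests for the same workflow and user land on the same thread.
const first = generateThreadId('wf-123', 'user-9');
const second = generateThreadId('wf-123', 'user-9');
```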
Sources: packages/@n8n/ai-workflow-builder.ee/src/workflow-builder-agent.ts:62-302, packages/cli/src/controllers/ai.controller.ts:34-120, packages/cli/src/services/ai-workflow-builder.service.ts:1-50
The Chat Hub is a standalone conversational UI. Unlike the workflow builder, it is a full chat product that can connect to multiple LLM providers or invoke n8n workflows as agents.
The chatHubProviderSchema (defined in @n8n/api-types/src/chat-hub.ts) lists all supported providers:
| Category | Providers |
|---|---|
| Direct LLM | openai, anthropic, google, azureOpenAi, azureEntraId, ollama, awsBedrock, vercelAiGateway, xAiGrok, groq, openRouter, deepSeek, cohere, mistralCloud |
| n8n Workflow | n8n (any workflow with a ChatTrigger node) |
| Custom Agent | custom-agent (a saved agent configuration) |
Each LLM provider maps to a credential type via PROVIDER_CREDENTIAL_TYPE_MAP and to a LangChain node type via PROVIDER_NODE_TYPE_MAP in chat-hub.constants.ts.
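The shape of these maps can be illustrated as below. The entries are plausible examples only, not copied from chat-hub.constants.ts:

```typescript
// Illustrative excerpts in the shape of PROVIDER_CREDENTIAL_TYPE_MAP and
// PROVIDER_NODE_TYPE_MAP; example values, not the real constants.
const PROVIDER_CREDENTIAL_TYPE_MAP_SKETCH: Record<string, string> = {
  openai: 'openAiApi',
  anthropic: 'anthropicApi',
};

const PROVIDER_NODE_TYPE_MAP_SKETCH: Record<string, string> = {
  openai: '@n8n/n8n-nodes-langchain.lmChatOpenAi',
  anthropic: '@n8n/n8n-nodes-langchain.lmChatAnthropic',
};

// Resolving the chat-model node type for a provider at request time:
const nodeType = PROVIDER_NODE_TYPE_MAP_SKETCH['openai'];
```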
A conversation is a session (ChatHubSession) containing messages (ChatHubMessage) linked by previousMessageId pointers. Because several messages can share the same previousMessageId, the links form a tree, which supports branching (retry, edit, regenerate).
Key status values for ChatHubMessageStatus: running, success, error, cancelled, waiting.
The waiting status indicates that a Chat node is paused for Human-in-the-Loop input: the next human message resumes the existing execution rather than starting a new one.
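As a sketch of how such a previousMessageId structure yields a branching tree (the types are simplified stand-ins for the real ChatHubMessage entity):

```typescript
// Illustrative: rebuild the branching tree from flat message rows that
// each carry a previousMessageId pointer.
interface MessageRow {
  id: string;
  previousMessageId: string | null;
  text: string;
}

function childrenByParent(rows: MessageRow[]): Map<string | null, MessageRow[]> {
  const index = new Map<string | null, MessageRow[]>();
  for (const row of rows) {
    const siblings = index.get(row.previousMessageId) ?? [];
    siblings.push(row);
    index.set(row.previousMessageId, siblings);
  }
  return index;
}

// A retry stores a second child under the same parent: a branch point.
const rows: MessageRow[] = [
  { id: 'm1', previousMessageId: null, text: 'hi' },
  { id: 'm2', previousMessageId: 'm1', text: 'first answer' },
  { id: 'm3', previousMessageId: 'm1', text: 'regenerated answer' },
];
const tree = childrenByParent(rows);
```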
Sources: packages/@n8n/api-types/src/chat-hub.ts:17-82, packages/cli/src/modules/chat-hub/chat-hub.constants.ts:30-88, packages/cli/src/modules/chat-hub/chat-hub.service.ts:85-130, packages/cli/src/modules/chat-hub/chat-hub.types.ts:1-100
Chat Hub message send flow
Sources: packages/cli/src/modules/chat-hub/chat-hub.service.ts:362-533, packages/cli/src/modules/chat-hub/chat-hub.controller.ts:1-120, packages/frontend/editor-ui/src/features/ai/chatHub/chat.store.ts:499-568, packages/frontend/editor-ui/src/features/ai/chatHub/ChatView.vue:86-97
When the selected provider is a direct LLM (not n8n or custom-agent), ChatHubWorkflowService dynamically assembles a complete n8n workflow in memory at request time. The assembled nodes are:
| Node constant | Purpose |
|---|---|
| NODE_NAMES.CHAT_TRIGGER | Entry trigger for the chat execution |
| NODE_NAMES.REPLY_AGENT | Tools-calling AI Agent (version ≥ 2.2) |
| NODE_NAMES.CHAT_MODEL | LLM node, type from PROVIDER_NODE_TYPE_MAP |
| NODE_NAMES.MEMORY | Window buffer memory |
| NODE_NAMES.RESTORE_CHAT_MEMORY | Injects conversation history |
| NODE_NAMES.CLEAR_CHAT_MEMORY | Removes old history after restore |
| NODE_NAMES.MERGE | Synchronizes memory restore path |
This assembled workflow is persisted as an archived WorkflowEntity so it can be executed by the standard WorkflowRunner infrastructure. The session's selected tools are injected as sub-nodes connected to the agent.
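The assembly step can be sketched roughly as below. Node names mirror the table above, but the type strings and the flat shape are illustrative; the real ChatHubWorkflowService builds full INode objects with parameters, positions, and connections:

```typescript
// Illustrative sketch only, not the real ChatHubWorkflowService output.
interface AssembledNode {
  name: string;
  type: string;
}

function assembleChatWorkflow(chatModelNodeType: string): AssembledNode[] {
  return [
    { name: 'CHAT_TRIGGER', type: 'chatTrigger' },       // entry point
    { name: 'REPLY_AGENT', type: 'agent' },              // tools-calling agent
    { name: 'CHAT_MODEL', type: chatModelNodeType },     // from PROVIDER_NODE_TYPE_MAP
    { name: 'MEMORY', type: 'memoryBufferWindow' },      // window buffer memory
    { name: 'RESTORE_CHAT_MEMORY', type: 'code' },       // injects history
    { name: 'CLEAR_CHAT_MEMORY', type: 'code' },         // trims old history
    { name: 'MERGE', type: 'merge' },                    // syncs the restore path
  ];
}

const nodes = assembleChatWorkflow('lmChatOpenAi');
```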
Sources: packages/cli/src/modules/chat-hub/chat-hub-workflow.service.ts:89-165, packages/cli/src/modules/chat-hub/chat-hub.constants.ts:89-100
ChatHubController is registered at /rest/chat. All endpoints require the chatHub:message global scope.
| Method | Path | Description |
|---|---|---|
POST | /chat/models | List available models for given credentials |
GET | /chat/conversations | Paginated session list |
GET | /chat/conversations/:id | Session with full message history |
POST | /chat/conversations/:id/messages | Send a human message |
POST | /chat/conversations/:id/messages/:msgId/regenerate | Retry AI response |
PATCH | /chat/conversations/:id/messages/:msgId | Edit a human message |
DELETE | /chat/conversations/:id/messages/:msgId | Stop generation |
GET | /chat/conversations/:id/messages/:msgId/attachments/:idx | Download attachment |
PATCH | /chat/conversations/:id | Update session (title, model, tools) |
DELETE | /chat/conversations/:id | Delete session |
GET | /chat/agents | List custom agents |
POST | /chat/agents | Create custom agent |
PATCH | /chat/agents/:id | Update custom agent |
DELETE | /chat/agents/:id | Delete custom agent |
GET | /chat/tools | List configured tools |
POST | /chat/tools | Create a tool |
PATCH | /chat/tools/:id | Update a tool |
DELETE | /chat/tools/:id | Delete a tool |
Sources: packages/cli/src/modules/chat-hub/chat-hub.controller.ts:48-300
The frontend lives in packages/frontend/editor-ui/src/features/ai/chatHub/.
Chat Hub frontend component hierarchy
Sources: packages/frontend/editor-ui/src/features/ai/chatHub/ChatView.vue:730-902, packages/frontend/editor-ui/src/features/ai/chatHub/components/ChatPrompt.vue:1-50, packages/frontend/editor-ui/src/features/ai/chatHub/components/ChatMessage.vue:1-50
useChatStore (Pinia store, id STORES.CHAT_HUB) holds all client-side chat state:
| State field | Type | Purpose |
|---|---|---|
| agents | `ChatModelsResponse \| null` | Available models by provider |
| sessions | `{ byId, ids, hasMore, nextCursor }` | Paginated session list |
| conversationsBySession | `Map<ChatSessionId, ChatConversation>` | Full message trees by session |
| streaming | `ChatStreamingState \| undefined` | Active stream info |
| configuredTools | `ChatHubToolDto[]` | User's configured tools |
| settings | `Record<ChatHubLLMProvider, ChatProviderSettingsDto>` | Per-provider settings |
Messages are stored as a graph: each ChatMessage has responses[] (children) and alternatives[] (retries/edits). computeActiveChain() walks this graph to build the currently displayed message chain.
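A simplified sketch of such a walk is shown below. The real computeActiveChain() also honors which alternative the user has selected; this sketch always takes the newest alternative and the first response:

```typescript
// Simplified stand-in types for the store's message graph.
interface ChatMessageNode {
  id: string;
  responses: ChatMessageNode[];     // children: the next turn(s)
  alternatives: ChatMessageNode[];  // retries/edits of this turn
}

function activeChain(root: ChatMessageNode): string[] {
  const chain: string[] = [];
  let current: ChatMessageNode | undefined = root;
  while (current) {
    // Prefer the newest alternative of the current turn, if any.
    const active: ChatMessageNode = current.alternatives.at(-1) ?? current;
    chain.push(active.id);
    current = active.responses[0];
  }
  return chain;
}

// A retried AI turn: the displayed chain shows the retry a2, not a1.
const retry: ChatMessageNode = { id: 'a2', responses: [], alternatives: [] };
const aiTurn: ChatMessageNode = { id: 'a1', responses: [], alternatives: [retry] };
const humanTurn: ChatMessageNode = { id: 'u1', responses: [aiTurn], alternatives: [] };
```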
Push events from the backend drive streaming. The composable useChatPushHandler subscribes to the push connection and calls store methods like handleWebSocketStreamBegin, handleWebSocketStreamChunk, and handleWebSocketStreamEnd.
Sources: packages/frontend/editor-ui/src/features/ai/chatHub/chat.store.ts:85-570, packages/frontend/editor-ui/src/features/ai/chatHub/chat.types.ts:53-65
The ModelSelector.vue component builds a grouped dropdown from ChatModelsResponse. The menu is built by buildModelSelectorMenuItems() in model-selector.utils.ts. Each selectable item maps to a ChatHubConversationModel discriminated union:
- `{ provider: 'openai', model: string }` selects a direct LLM
- `{ provider: 'n8n', workflowId: string }` selects a workflow-backed agent
- `{ provider: 'custom-agent', agentId: string }` selects a saved custom agent

The selected model is stored per-user in localStorage under the key LOCAL_STORAGE_CHAT_HUB_SELECTED_MODEL(userId), serialized with chatHubConversationModelWithCachedDisplayNameSchema.
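The discriminated union narrows on the provider field. A simplified stand-in for ChatHubConversationModel (the three shapes follow the source; the helper function and its return strings are assumptions):

```typescript
type ConversationModel =
  | { provider: 'openai'; model: string }
  | { provider: 'n8n'; workflowId: string }
  | { provider: 'custom-agent'; agentId: string };

// Narrowing on `provider` gives type-safe access to the variant fields.
function describeModel(m: ConversationModel): string {
  switch (m.provider) {
    case 'openai':
      return `direct LLM: ${m.model}`;
    case 'n8n':
      return `workflow agent: ${m.workflowId}`;
    case 'custom-agent':
      return `custom agent: ${m.agentId}`;
  }
}
```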
Sources: packages/frontend/editor-ui/src/features/ai/chatHub/components/ModelSelector.vue:1-130, packages/frontend/editor-ui/src/features/ai/chatHub/ChatView.vue:153-240
Responses are delivered as server push events, not as a direct HTTP response body. The sequence of push event types is:
| Push event type | Emitted by | Purpose |
|---|---|---|
chatHubHumanMessageCreated | ChatStreamService | Confirms human message was stored |
chatHubStreamBegin | ChatStreamService | AI response message ID is assigned |
chatHubStreamChunk | ChatStreamService | Incremental text content |
chatHubStreamEnd | ChatStreamService | Response complete |
chatHubStreamError | ChatStreamService | Execution failed |
chatHubExecutionBegin | ChatStreamService | Workflow execution started |
chatHubExecutionEnd | ChatStreamService | Workflow execution finished |
The frontend useChatPushHandler composable handles reconnection. On page refresh during an active stream, chatStore.reconnectToStream() calls GET /chat/conversations/:id/reconnect which returns missed chunks from ChatStreamService.getPendingChunks().
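The idea behind getPendingChunks() can be modeled as below. This is an illustrative buffer only; the real ChatStreamService's storage and eviction behavior are not shown here:

```typescript
// Illustrative model of per-message chunk buffering for reconnects.
class PendingChunkBufferSketch {
  private chunks = new Map<string, string[]>();

  append(messageId: string, chunk: string): void {
    const list = this.chunks.get(messageId) ?? [];
    list.push(chunk);
    this.chunks.set(messageId, list);
  }

  // A reconnecting client passes how many chunks it already received.
  getPendingChunks(messageId: string, alreadyReceived = 0): string[] {
    return (this.chunks.get(messageId) ?? []).slice(alreadyReceived);
  }
}

const buffer = new PendingChunkBufferSketch();
buffer.append('msg-1', 'Hel');
buffer.append('msg-1', 'lo');
```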
Sources: packages/@n8n/api-types/src/index.ts:73-88, packages/frontend/editor-ui/src/features/ai/chatHub/ChatView.vue:441-458, packages/cli/src/modules/chat-hub/chat-hub.service.ts:752-777
When an AI Agent node (from @n8n/nodes-langchain) invokes a tool during a workflow run, it does not call the tool directly. Instead, it emits an EngineRequest that pauses the agent node, triggers the tool sub-nodes through the workflow engine, and resumes with an EngineResponse. This is the mechanism that allows standard n8n nodes to act as LLM tools.
For full detail on this protocol, see AI Agent and Tool Execution.
The shared Zod schemas in packages/@n8n/api-types/src/chat-hub.ts define the wire format for all Chat Hub interactions:
| Schema / Class | Direction | Contents |
|---|---|---|
ChatHubSendMessageRequest | client → server | messageId, sessionId, message, model, credentials, attachments, previousMessageId
ChatHubEditMessageRequest | client → server | message, messageId, model, credentials, newAttachments, keepAttachmentIndices
ChatHubRegenerateMessageRequest | client → server | model, credentials
ChatHubUpdateConversationRequest | client → server | title?, credentialId?, agent?, toolIds?
ChatHubSessionDto | server → client | Session metadata including provider, model, workflowId, agentId, toolIds
ChatHubMessageDto | server → client | Message with content: ChatMessageContentChunk[], status, linked IDs
ChatModelDto | server → client | Model descriptor with metadata.capabilities, allowFileUploads, suggestedPrompts
ChatMessageContentChunk is a discriminated union of: text, hidden, artifact-create, artifact-edit, with-buttons. The with-buttons variant enables Human-in-the-Loop approval flows within chat messages.
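As a simplified illustration of consuming such a chunk union — only the variant names come from the source; the payload fields (content, buttons) are assumptions:

```typescript
// Simplified stand-in for ChatMessageContentChunk (subset of variants).
type ContentChunkSketch =
  | { type: 'text'; content: string }
  | { type: 'hidden'; content: string }
  | { type: 'with-buttons'; content: string; buttons: string[] };

// Hidden chunks carry internal state and are skipped when rendering.
function renderVisibleText(chunks: ContentChunkSketch[]): string {
  return chunks
    .filter((c) => c.type !== 'hidden')
    .map((c) => c.content)
    .join('');
}
```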
Sources: packages/@n8n/api-types/src/chat-hub.ts:316-436, packages/@n8n/api-types/src/chat-hub.ts:227-260