This document describes the FastAPI-based backend architecture of Langflow, which serves as the runtime execution layer and API gateway. The backend is responsible for request routing, authentication, flow execution, event streaming, and data serialization.
Child pages cover specific subsystems in detail: application startup (page 4.1), services (page 4.2), API endpoints (page 4.3), flow execution (page 4.4), component loading (page 4.5), graph processing (page 4.6), event streaming (page 4.7), database (page 4.8), authentication (page 4.9), and message management (page 4.10).
For component definitions and low-level execution, see page 3 (Component System). For frontend integration, see page 5 (Frontend Architecture).
The langflow-base package bridges user requests to the lfx execution engine, handling concerns like authentication, API schemas, event streaming, and data serialization specific to web service deployment.
Package Dependency Chain:
| Package | Module root | Depends on | Role |
|---|---|---|---|
| lfx | src/lfx | (none) | Graph engine, component base, inputs |
| langflow-base | src/backend/base | lfx | FastAPI app, services, database, auth |
| langflow | src/backend/langflow | langflow-base | CLI entrypoint, frontend serving |
Backend module overview diagram:
Sources: pyproject.toml1-21 src/backend/base/pyproject.toml1-21 src/lfx/pyproject.toml1-20
The FastAPI application lifecycle is managed through an async context manager that handles service initialization and cleanup. The get_lifespan() function in src/backend/base/langflow/main.py147-280 creates a lifespan context that orchestrates startup and shutdown.
Startup Steps:
1. configure() from lfx.log.logger
2. initialize_services() in dependency order (see Service Layer section)
3. setup_llm_caching() configures LangChain caching
4. copy_profile_pictures() copies defaults to storage
5. initialize_auto_login_default_superuser() if AUTO_LOGIN=true
6. sync_flows_from_fs() if LANGFLOW_LOAD_FLOWS_PATH is set
7. init_mcp_servers() starts Model Context Protocol servers

Shutdown Steps:
1. teardown_services() — gracefully stops all services
2. cleanup_mcp_sessions() — closes MCP server sessions

Sources: src/backend/base/langflow/main.py147-280 src/backend/base/langflow/services/utils.py1-100
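The startup/shutdown orchestration above follows FastAPI's standard async-context-manager lifespan pattern. A minimal, self-contained sketch of that pattern (the stub functions below stand in for Langflow's real initialize_services()/teardown_services(); the actual logic lives in main.py):

```python
import asyncio
from contextlib import asynccontextmanager

# Stubs standing in for Langflow's real startup/shutdown hooks.
events = []

async def initialize_services():
    events.append("init")

async def teardown_services():
    events.append("teardown")

@asynccontextmanager
async def lifespan(app):
    # Startup: runs before the server accepts requests.
    await initialize_services()
    try:
        yield  # the application serves requests here
    finally:
        # Shutdown: runs even if the server exits with an error.
        await teardown_services()

async def main():
    async with lifespan(app=None):
        events.append("serving")

asyncio.run(main())
print(events)  # -> ['init', 'serving', 'teardown']
```

The try/finally ensures teardown runs even when a request handler raises, which is why shutdown steps like cleanup_mcp_sessions() can rely on being called.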
The backend is organized around FastAPI routers, each handling a specific domain of functionality. The main application is configured in src/backend/base/langflow/main.py368-500 and delegates to specialized routers.
Sources: src/backend/base/langflow/api/v1/chat.py55 src/backend/base/langflow/api/v1/endpoints.py68 src/backend/base/langflow/api/v1/flows.py46 src/backend/base/langflow/api/v1/login.py16 src/backend/base/langflow/main.py368-500
| Router | URL prefix | Primary purpose | Auth method | Location |
|---|---|---|---|---|
| ChatRouter | /api/v1/build | Flow building, vertex execution | Session or API key | chat.py55 |
| EndpointsRouter | /api/v1/run | Simplified flow execution | API key or session | endpoints.py68 |
| FlowsRouter | /api/v1/flows | Flow CRUD | Session | flows.py46 |
| LoginRouter | /api/v1/login | Auth and token management | None / Session | login.py16 |
| StoreRouter | /api/v1/store | Component marketplace | Session | store.py20 |
| ApiKeyRouter | /api/v1/api_key | API key management | Session | api_key.py14 |
Sources: src/backend/base/langflow/api/v1/chat.py55 src/backend/base/langflow/api/v1/endpoints.py68 src/backend/base/langflow/api/v1/flows.py46 src/backend/base/langflow/api/v1/login.py16 src/backend/base/langflow/api/v1/store.py20 src/backend/base/langflow/api/v1/api_key.py14
The application uses several middleware layers to handle cross-cutting concerns. Middleware is configured in src/backend/base/langflow/main.py421-478.
| Middleware class | Purpose | Key setting |
|---|---|---|
| ContentSizeLimitMiddleware | Reject oversized request bodies | max_size=100*1024*1024 (100 MB) |
| CORSMiddleware | CORS headers for cross-origin requests | LANGFLOW_CORS_ORIGINS env var |
| RequestCancelledMiddleware | Abort work on client disconnect | Polls request.is_disconnected() every 100 ms |
| JavaScriptMIMETypeMiddleware | Fix MIME type for static .js files | Checks request.url.path.endswith(".js") |
CORS defaults: allow_origins=["*"], allow_credentials=True. Set LANGFLOW_CORS_ORIGINS in production.
Sources: src/backend/base/langflow/main.py79-124 src/backend/base/langflow/main.py421-478
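The RequestCancelledMiddleware behavior can be approximated with plain asyncio: race the handler against a loop that polls a disconnect check roughly every 100 ms and cancel the handler if the client goes away. A hedged sketch of that idea (the is_disconnected callable here is a stand-in for Starlette's request.is_disconnected(); the real middleware is more involved):

```python
import asyncio

async def run_unless_disconnected(handler, is_disconnected, poll_interval=0.1):
    """Run handler(), cancelling it if is_disconnected() turns true first."""
    task = asyncio.ensure_future(handler())
    while not task.done():
        if await is_disconnected():
            task.cancel()
            break
        # Poll roughly every 100 ms, like RequestCancelledMiddleware.
        await asyncio.sleep(poll_interval)
    try:
        return await task
    except asyncio.CancelledError:
        return None  # work aborted because the client went away

async def demo():
    async def handler():
        await asyncio.sleep(0.05)
        return "done"

    async def still_connected():
        return False  # client never disconnects in this demo

    return await run_unless_disconnected(handler, still_connected, poll_interval=0.01)

print(asyncio.run(demo()))  # -> done
```

Cancelling the task propagates asyncio.CancelledError into the handler, so in-flight database work and graph builds get a chance to unwind instead of running to completion for a client that is gone.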
The backend uses a service-oriented architecture with centralized dependency injection. Services are initialized during application startup and accessed via the get_service() function or specialized getters like get_auth_service().
Service initialization order:
1. SettingsService — configuration from env vars and settings files
2. DatabaseService — connection pool, runs Alembic migrations database/service.py42
3. AuthService — JWT key pairs, password hashing (bcrypt)
4. CacheService — Redis or in-memory cache
5. StorageService — verify and create storage directories
6. VariableService — load encrypted global variables from database
7. ChatService — graph cache and session storage
8. TelemetryService — telemetry collection
9. TracingService — LangSmith / LangFuse / Opik integrations tracing/service.py
10. SessionService — conversation state management
11. JobQueueService — async job queue for build events
12. MCPComposerService — Model Context Protocol servers

Dependency injection functions (from services/deps.py):
| Function | Returns | Where used |
|---|---|---|
| get_settings_service() | SettingsService | Configuration access |
| get_auth_service() | AuthService | Token verification |
| get_chat_service() | ChatService | Graph cache |
| get_queue_service() | JobQueueService | Async job management |
| session_scope() | AsyncSession | Database transaction context |
| get_telemetry_service() | TelemetryService | Usage logging |
| get_storage_service() | StorageService | File operations |
| get_tracing_service() | TracingService | Trace spans |
Sources: src/backend/base/langflow/services/utils.py1-150 src/backend/base/langflow/services/tracing/service.py1-100 src/backend/base/langflow/services/database/service.py42-70
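The getter functions in deps.py are thin wrappers over a central registry keyed by service name. A simplified, hypothetical sketch of that pattern (real Langflow services carry far more state and are initialized in dependency order):

```python
# Minimal service-registry sketch mirroring the deps.py getter pattern.
_services = {}

def register_service(name, factory):
    # Lazily construct each service exactly once (singleton per name).
    if name not in _services:
        _services[name] = factory()
    return _services[name]

def get_service(name):
    if name not in _services:
        raise KeyError(f"service not initialized: {name}")
    return _services[name]

class SettingsService:
    """Illustrative stand-in for the real SettingsService."""
    def __init__(self):
        self.cors_origins = ["*"]

def get_settings_service():
    # Specialized getter, like the ones listed in the table above.
    return get_service("settings")

register_service("settings", SettingsService)
print(get_settings_service().cors_origins)  # -> ['*']
```

Because every getter funnels through one registry, a request handler can declare only the services it needs while startup code controls construction order in a single place.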
The backend implements two execution models: full flow builds (vertex-by-vertex with streaming events) and simplified runs (direct execution with final results).
The build flow model executes components sequentially, emitting SSE events for each vertex. This enables real-time UI updates and granular error handling.
Sources: src/backend/base/langflow/api/v1/chat.py138-203 src/backend/base/langflow/api/v1/chat.py206-221
| Class | Location | Responsibility |
|---|---|---|
| FlowDataRequest | schemas.py356-360 | Request model for flow graph data (nodes, edges) |
| InputValueRequest | lfx.schema.schema | Input values to pass to specific components |
| VertexBuildResponse | schemas.py307-332 | Response model for individual vertex execution |
| ResultDataResponse | schemas.py266-305 | Execution results with outputs, logs, artifacts |
| JobQueueService | services/job_queue/service.py | Async job management with event streaming |
Sources: src/backend/base/langflow/api/v1/schemas.py266-332 src/backend/base/langflow/api/v1/schemas.py356-360
The simplified run model executes the entire flow and returns the final results. This is used by the /run and /webhook endpoints for API-first integrations.
Sources: src/backend/base/langflow/api/v1/endpoints.py487-542 src/backend/base/langflow/api/v1/endpoints.py135-196
Before execution, the backend performs graph validation and prepares the execution order through topological sorting. This logic is handled by the Graph class from lfx, with backend-specific wrappers.
Sources: src/backend/base/langflow/api/v1/chat.py92-96 src/backend/base/langflow/api/v1/endpoints.py155-156
The Graph.prepare() method performs the following operations:
- Determines vertices_to_run based on start_component_id and stop_component_id
- Assigns a run_id for tracing and telemetry

Sources: src/backend/base/langflow/api/v1/chat.py97-98
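The execution ordering that Graph.prepare() produces boils down to a topological sort over vertex dependencies. A minimal sketch using the standard library (the dependency map below is illustrative, not Langflow's actual graph representation):

```python
from graphlib import TopologicalSorter

# Illustrative dependency map: each vertex lists the vertices it depends on.
edges = {
    "ChatOutput": {"OpenAIModel"},
    "OpenAIModel": {"Prompt", "ChatInput"},
    "Prompt": set(),
    "ChatInput": set(),
}

# static_order() yields vertices so that every dependency comes
# before its dependents; ties are resolved arbitrarily.
order = list(TopologicalSorter(edges).static_order())
print(order)
```

graphlib also raises CycleError on cyclic graphs, which corresponds to the kind of validation failure the backend must surface before execution begins.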
The backend dynamically loads component code and instantiates Python classes at runtime. This is a critical security-sensitive operation handled through the lfx eval_custom_component_code function.
Sources: src/backend/base/langflow/interface/initialize/loading.py25-52 src/backend/base/langflow/interface/initialize/loading.py54-76 src/backend/base/langflow/interface/initialize/loading.py147-155
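Mechanically, dynamic component loading amounts to compiling user-supplied source, executing it in a namespace, and extracting the class. A deliberately simplified sketch (the real eval_custom_component_code layers validation on top; running untrusted code with a bare exec like this is unsafe without those safeguards):

```python
def load_component_class(source_code: str, class_name: str):
    """Compile and execute component source, returning the named class."""
    namespace = {}
    # SECURITY: exec runs arbitrary code. This is the sensitive primitive
    # the surrounding loader must guard.
    exec(compile(source_code, "<component>", "exec"), namespace)
    return namespace[class_name]

# Illustrative component source, as a user might submit it.
code = '''
class MyComponent:
    display_name = "My Component"
    def build(self):
        return "hello"
'''

cls = load_component_class(code, "MyComponent")
print(cls().build())  # -> hello
```

Compiling with an explicit filename ("<component>" here) makes tracebacks from broken component code point at the submitted source rather than the loader.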
The backend supports two component base classes, both imported from lfx:
| Class | Module | Purpose |
|---|---|---|
| Component | lfx.custom.custom_component.component | Modern component base with build_results() method |
| CustomComponent | lfx.custom.custom_component.custom_component | Legacy component base with build() method |
The backend re-exports these from src/backend/base/langflow/custom/custom_component/component.py6-12 for backward compatibility.
Sources: src/backend/base/langflow/custom/custom_component/component.py1-25
The update_params_with_load_from_db_fields() function handles special "load from database" fields, which allow components to reference global variables:
Sources: src/backend/base/langflow/interface/initialize/loading.py111-144
The build API implements an async job queue pattern with three event delivery modes: polling, streaming (SSE), and direct (synchronous wait).
Sources: src/backend/base/langflow/api/v1/chat.py133-198 src/backend/base/langflow/api/v1/chat.py201-216
During flow execution, the backend emits the following event types:
| Event Type | Data Structure | Purpose |
|---|---|---|
| vertices_sorted | {"ids": [...], "run_id": "..."} | Execution order determined |
| vertex_starts | {"id": "...", "data": {...}} | Component execution begins |
| vertex_finishes | VertexBuildResponse | Component execution completes |
| error | {"error": "...", "traceback": "..."} | Execution error occurred |
| end | {"success": true} | Flow execution complete |
The VertexBuildResponse schema is defined in src/backend/base/langflow/api/v1/schemas.py307-332 and includes:
- id: Vertex identifier
- valid: Whether execution succeeded
- data: ResultDataResponse with outputs, results, logs, artifacts
- next_vertices_ids: Vertices now ready to execute
- inactivated_vertices: Vertices skipped due to conditional logic

Sources: src/backend/base/langflow/api/v1/schemas.py307-332 src/backend/base/langflow/api/v1/schemas.py266-305
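On the wire, each of these events is serialized as a server-sent-events frame: an optional event: line followed by a data: line carrying JSON, terminated by a blank line. A stdlib-only sketch of that encoding (field names follow the event table above; the framing follows the SSE format):

```python
import json

def sse_frame(event_type: str, payload: dict) -> str:
    """Encode one server-sent event: 'event:'/'data:' lines plus a blank-line terminator."""
    return f"event: {event_type}\ndata: {json.dumps(payload)}\n\n"

frame = sse_frame("vertices_sorted", {"ids": ["ChatInput", "ChatOutput"], "run_id": "abc123"})
print(frame)
```

The blank line is what lets a browser EventSource (or any SSE client) know one event has ended and the next may begin, which is why streaming responses must flush after each frame.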
The JobQueueService manages async jobs and event delivery:
Sources: Referenced from service layer documentation in high-level diagrams
The backend implements a sophisticated type-aware serialization system to handle complex Python objects in API responses. This is critical because component outputs can be LangChain objects, Pandas DataFrames, Pydantic models, or custom types.
Sources: src/backend/base/langflow/serialization/serialization.py253-306 src/backend/base/langflow/serialization/serialization.py189-251
| Input Type | Serialization Function | Output | Truncation |
|---|---|---|---|
| str | _serialize_str() line 42 | String | Truncate at max_length, append "..." |
| bytes | _serialize_bytes() line 57 | String (UTF-8) | Truncate at max_length |
| datetime | _serialize_datetime() line 68 | ISO 8601 string | None |
| Decimal | _serialize_decimal() line 73 | float | None |
| UUID | _serialize_uuid() line 78 | String | None |
| Document | _serialize_document() line 83 | Recursive on to_json() | Recursive limits |
| BaseModel (Pydantic) | _serialize_pydantic() line 93 | Dict (recursive) | Recursive limits |
| DataFrame | _serialize_dataframe() line 141 | List[dict] | Limit rows to max_items |
| list/tuple | _serialize_list_tuple() line 111 | List (recursive) | Truncate to max_items |
| Iterator/Generator | _serialize_iterator() line 88 | "Unconsumed Stream" | None |
Sources: src/backend/base/langflow/serialization/serialization.py42-187
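The dispatch-and-truncate pattern in the table reduces to a small recursive function. A hedged sketch covering a few of the types above (max_length and max_items mirror the configurable limits; the real serializer handles many more types and edge cases):

```python
from datetime import datetime
from uuid import UUID

def serialize(obj, max_length=20, max_items=3):
    """Type-aware serialization with truncation, loosely mirroring the table above."""
    if isinstance(obj, str):
        # Truncate long strings and mark the cut with "...".
        return obj if len(obj) <= max_length else obj[:max_length] + "..."
    if isinstance(obj, datetime):
        return obj.isoformat()
    if isinstance(obj, UUID):
        return str(obj)
    if isinstance(obj, (list, tuple)):
        # Keep at most max_items elements, serializing each recursively.
        return [serialize(x, max_length, max_items) for x in obj[:max_items]]
    if isinstance(obj, dict):
        return {k: serialize(v, max_length, max_items) for k, v in obj.items()}
    return obj  # ints, floats, bools, None pass through unchanged

print(serialize(["a" * 30, 1, 2, 3]))  # long string truncated, fourth item dropped
```

Dispatching on type before recursing is what keeps one oversized DataFrame or runaway log string from bloating an entire API response.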
Serialization limits (such as max_length for strings and max_items for collections) are configurable via the SettingsService.
The ResultDataResponse model applies these limits during serialization through its serialize_model() method (lines 288-304), ensuring all outputs, logs, and artifacts respect truncation limits.
Sources: src/backend/base/langflow/serialization/serialization.py15-39 src/backend/base/langflow/api/v1/schemas.py288-304
The backend implements a flexible authentication system supporting multiple authentication methods: session-based JWT tokens, API keys, and optional OAuth2 flows.
Authentication mechanisms:
| Method | Credential location | Verification | Use case |
|---|---|---|---|
| JWT Access Token | Authorization: Bearer <token> header | Signature + expiry | Interactive sessions |
| JWT Access Token | Cookie access_token_lf | Signature + expiry | Browser sessions |
| JWT Refresh Token | Cookie refresh_token_lf | Signature, then rotate | Token renewal |
| API Key | x-api-key header | Fernet decrypt, DB lookup | API integrations |
| API Key | ?x-api-key= query param | Fernet decrypt, DB lookup | Webhook callbacks |
JWT configuration:
- Signing algorithm: HS256 (symmetric) or RS256/RS512 (asymmetric)
- Access token lifetime: ACCESS_TOKEN_EXPIRE_SECONDS (default: 3600)
- Refresh token lifetime: REFRESH_TOKEN_EXPIRE_SECONDS (default: 604800)
- Signing secret: LANGFLOW_SECRET_KEY env var

API key security: Keys are Fernet-encrypted before storage in the database. The encryption key is derived from SECRET_KEY. Decryption happens in check_key() (src/backend/base/langflow/services/database/models/api_key/crud.py) at verification time. Keys can optionally be scoped to specific flows.
Sources: src/backend/base/langflow/services/auth/utils.py1-450 src/backend/base/langflow/api/v1/login.py27-85 src/backend/base/langflow/services/database/models/api_key/crud.py1-60
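For the symmetric (HS256) case, token verification is an HMAC-SHA256 over the header.payload portion using the secret key. A stdlib-only sketch of signing and verifying a compact JWT (illustrative only: Langflow uses a JWT library, and this omits expiry and claim validation):

```python
import base64
import hashlib
import hmac
import json

def _b64(data: bytes) -> str:
    # JWT uses unpadded URL-safe base64.
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

def sign_hs256(payload: dict, secret: bytes) -> str:
    header = _b64(json.dumps({"alg": "HS256", "typ": "JWT"}).encode())
    body = _b64(json.dumps(payload).encode())
    signing_input = f"{header}.{body}".encode()
    sig = _b64(hmac.new(secret, signing_input, hashlib.sha256).digest())
    return f"{header}.{body}.{sig}"

def verify_hs256(token: str, secret: bytes) -> bool:
    header, body, sig = token.split(".")
    signing_input = f"{header}.{body}".encode()
    expected = _b64(hmac.new(secret, signing_input, hashlib.sha256).digest())
    # Constant-time comparison to avoid timing side channels.
    return hmac.compare_digest(sig, expected)

secret = b"not-the-real-LANGFLOW_SECRET_KEY"  # illustrative value
token = sign_hs256({"sub": "user-1"}, secret)
print(verify_hs256(token, secret))           # -> True
print(verify_hs256(token, b"wrong-secret"))  # -> False
```

A real verifier must additionally reject expired tokens (the exp claim) and pin the accepted algorithm, since accepting an attacker-chosen alg header is a classic JWT pitfall.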
The backend exposes several endpoint families for different use cases. Each endpoint family has distinct authentication and response characteristics.
Endpoint Details:
| Endpoint | Method | Auth | Purpose | Response | Location |
|---|---|---|---|---|---|
| /build/{flow_id}/flow | POST | Session/API Key | Start flow build | {"job_id": "..."} | chat.py138 |
| /build/{job_id}/events | GET | Session/API Key | Get build events | SSE stream or JSON | chat.py206 |
| /build/{job_id}/cancel | POST | Session/API Key | Cancel flow build | CancelFlowResponse | chat.py228 |
| /build_public_tmp/{flow_id}/flow | POST | Cookie (client_id) | Public flow build | {"job_id": "..."} | chat.py586 |
Sources: src/backend/base/langflow/api/v1/chat.py138-203 src/backend/base/langflow/api/v1/chat.py206-221 src/backend/base/langflow/api/v1/chat.py228-259 src/backend/base/langflow/api/v1/chat.py586-658
Endpoint Comparison:
| Endpoint | Auth | Features | Response | Streaming |
|---|---|---|---|---|
| /run/{flow_id} | API Key | Basic input/output, tweaks | RunResponse | Optional |
| /run/session/{flow_id} | Session | Basic input/output, tweaks | RunResponse | Optional |
| /run/advanced/{flow_id} | API Key | Multiple inputs/outputs, session | RunResponse | Optional |
| /webhook/{flow_id} | Optional | Webhook data injection | {"status": "..."} | No (async) |
Sources: src/backend/base/langflow/api/v1/endpoints.py487-542 src/backend/base/langflow/api/v1/endpoints.py545-608 src/backend/base/langflow/api/v1/endpoints.py688-806 src/backend/base/langflow/api/v1/endpoints.py611-680
The /run endpoints use a simplified request schema designed for easy API integration:
Tweaks Structure:
Sources: src/backend/base/langflow/api/v1/schemas.py338-348 src/backend/base/langflow/api/v1/endpoints.py103-133
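A typical /run request body with tweaks might look like the following sketch. The component IDs and field names are illustrative; the point is that tweaks are keyed by component ID and override that component's input fields for this execution only:

```python
import json

# Hypothetical /api/v1/run/{flow_id} request body.
payload = {
    "input_value": "What is Langflow?",
    "input_type": "chat",
    "output_type": "chat",
    "tweaks": {
        # Override fields on specific components, keyed by component ID.
        "OpenAIModel-abc12": {"temperature": 0.2},
        "Prompt-def34": {"template": "Answer briefly: {question}"},
    },
}
print(json.dumps(payload, indent=2))
```

Because tweaks are applied per request, the stored flow definition stays unchanged; two callers can run the same flow with different model settings concurrently.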
The backend extracts global variables from HTTP headers with the prefix X-LANGFLOW-GLOBAL-VAR-*:
These variables are merged with the context parameter and passed to the graph execution, where components can access them via the component's context.
Sources: src/backend/base/langflow/api/v1/endpoints.py393-401
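Extracting X-LANGFLOW-GLOBAL-VAR-* values is essentially a prefix filter over the request headers. A minimal sketch (HTTP header names are case-insensitive, so the comparison normalizes case; the exact key shape Langflow produces may differ):

```python
PREFIX = "x-langflow-global-var-"

def extract_global_vars(headers: dict) -> dict:
    """Pull global variables out of X-LANGFLOW-GLOBAL-VAR-* headers."""
    out = {}
    for name, value in headers.items():
        lowered = name.lower()
        if lowered.startswith(PREFIX):
            # Strip the prefix; the remainder becomes the variable name.
            out[lowered[len(PREFIX):]] = value
    return out

headers = {
    "X-LANGFLOW-GLOBAL-VAR-API-BASE": "https://example.com",
    "Content-Type": "application/json",
}
print(extract_global_vars(headers))  # -> {'api-base': 'https://example.com'}
```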
The /webhook/{flow_id_or_name} endpoint has unique characteristics:
- Authentication goes through get_webhook_user(), which respects the WEBHOOK_AUTH_ENABLE setting
- Request data is injected into Webhook components via tweaks
- Returns {"status": "in progress"} immediately and executes the flow in the background

Webhook Component Injection:
Sources: src/backend/base/langflow/api/v1/endpoints.py611-680
Additional endpoint families support component discovery:
| Endpoint Family | Purpose | Key Routes |
|---|---|---|
| /store | Component marketplace | GET /components, POST /components, GET /tags |
| /starter-projects | Template flows | GET / returns list of starter flows |
Sources: src/backend/base/langflow/api/v1/store.py1-181 src/backend/base/langflow/api/v1/starter_projects.py1-76