This page describes how a flow JSON is deserialized into a Graph, how vertices are ordered and dispatched for execution, and the two distinct execution paths the engine supports. For the underlying Graph, Vertex, and Edge data structures and topological sort logic, see Graph Processing. For SSE event streaming during builds, see Event Streaming. For the API endpoints that trigger execution, see API Endpoints.
When a user runs a flow, the backend deserializes the flow JSON into a Graph object, sorts its vertices, and builds each vertex (Vertex) in order.

The engine supports two top-level paths with different orchestration models:
| Mode | Entry Point | Returns | Used By |
|---|---|---|---|
| Interactive Build | POST /build/{flow_id}/flow | job_id + SSE event stream | Playground / UI |
| Programmatic Run | POST /run/{flow_id_or_name} | RunResponse (sync or streaming) | API clients, webhooks |
Both execution paths begin by constructing a Graph from either a database-stored flow or an inline payload.
src/backend/base/langflow/api/v1/chat.py96-110
build_graph_from_db() in api/utils.py fetches the Flow record, reads flow.data, and calls Graph.from_payload().
src/backend/base/langflow/api/v1/endpoints.py163-172
simple_run_flow() calls Graph.from_payload(graph_data, flow_id=..., user_id=..., flow_name=..., context=...) directly, after applying any tweaks to the raw graph data via process_tweaks().
After construction, graph.prepare(stop_component_id, start_component_id) is called to:
- compute graph.first_layer (vertices with no unresolved predecessors)
- compute graph.vertices_to_run (the full set of vertices that will execute)

A run ID is then assigned via graph.set_run_id(run_id).
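The prepare step can be modeled as a predecessor count over the edge list. This is a simplified sketch, not langflow's actual Graph internals (which also handle cycles, start/stop components, and a RunnableVerticesManager); the function name and edge-tuple representation are illustrative assumptions.

```python
from collections import defaultdict

def prepare(vertex_ids, edges):
    """Toy model of graph.prepare(): find the first layer (vertices with
    no unresolved predecessors) and the full set of vertices to run."""
    preds = defaultdict(set)
    for src, dst in edges:
        preds[dst].add(src)
    first_layer = [v for v in vertex_ids if not preds[v]]
    vertices_to_run = set(vertex_ids)  # with no stop/start component, everything runs
    return first_layer, vertices_to_run

# A small linear flow: ChatInput -> Prompt -> ChatOutput
first, to_run = prepare(
    ["ChatInput-1", "Prompt-1", "ChatOutput-1"],
    [("ChatInput-1", "Prompt-1"), ("Prompt-1", "ChatOutput-1")],
)
print(first)  # ['ChatInput-1']
```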
Flow JSON to Graph: Key Class Relationships
Sources: src/backend/base/langflow/api/v1/chat.py96-110 src/backend/base/langflow/api/v1/endpoints.py163-172
This mode is used by the Langflow UI playground. It runs the graph one vertex at a time and streams events for each build result.
| Route | Function | Description |
|---|---|---|
| POST /build/{flow_id}/flow | build_flow() | Starts a build job, returns job_id |
| GET /build/{job_id}/events | get_build_events() | Returns SSE stream for the job |
| POST /build/{job_id}/cancel | cancel_build() | Cancels a running build |
src/backend/base/langflow/api/v1/chat.py138-221
build_flow() delegates to start_flow_build() from api/build.py. This starts a background job managed by JobQueueService and returns a job_id. The client then subscribes to GET /build/{job_id}/events to receive progress.
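The job-plus-event-stream pattern can be sketched with an in-memory queue per job. This toy stand-in for JobQueueService makes only structural assumptions (one asyncio.Queue per job_id, a terminal "end" event, SSE `data:` framing); the real service manages persistence and cancellation as well.

```python
import asyncio
import json
import uuid

class ToyJobQueue:
    """Minimal stand-in for JobQueueService: each job gets an event queue
    that a client drains, mimicking the SSE stream from /build/{job_id}/events."""

    def __init__(self):
        self.jobs = {}

    def start_build(self, vertex_ids):
        job_id = str(uuid.uuid4())
        queue = asyncio.Queue()
        self.jobs[job_id] = queue
        asyncio.ensure_future(self._run(queue, vertex_ids))  # background job
        return job_id  # client uses this to subscribe to events

    async def _run(self, queue, vertex_ids):
        for vid in vertex_ids:             # pretend to build each vertex
            await queue.put({"event": "end_vertex", "id": vid})
        await queue.put({"event": "end"})  # terminal event closes the stream

    async def events(self, job_id):
        queue = self.jobs[job_id]
        while True:
            event = await queue.get()
            yield f"data: {json.dumps(event)}\n\n"  # SSE wire format
            if event["event"] == "end":
                break

async def main():
    q = ToyJobQueue()
    job_id = q.start_build(["ChatInput-1", "ChatOutput-1"])
    return [line async for line in q.events(job_id)]

lines = asyncio.run(main())
print(len(lines))  # 3: two vertex events plus the terminal "end"
```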
Interactive Build: Sequence Diagram
Sources: src/backend/base/langflow/api/v1/chat.py138-221
The deprecated POST /build/{flow_id}/vertices/{vertex_id} endpoint builds one vertex at a time. The response is a VertexBuildResponse containing:
- next_vertices_ids: list of vertices that are now unblocked
- top_level_vertices: the top-level subset of next_vertices_ids
- inactivated_vertices: vertices deactivated as a result of a conditional branch
- valid: whether the build succeeded
- data: ResultDataResponse with outputs, logs, and timing

After each build, the ChatService cache is updated with the modified graph:
await chat_service.set_cache(flow_id_str, graph)
src/backend/base/langflow/api/v1/chat.py262-432
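The unblocking logic behind next_vertices_ids can be modeled as: successors of the just-built vertex whose predecessors have all been built. This is a sketch under that assumption, not the engine's actual bookkeeping (which lives in RunnableVerticesManager).

```python
def next_vertices(built, edges, just_built):
    """Toy model of next_vertices_ids: successors of the just-built vertex
    whose predecessors are all in the built set."""
    preds, succs = {}, {}
    for src, dst in edges:
        preds.setdefault(dst, set()).add(src)
        succs.setdefault(src, set()).add(dst)
    return sorted(
        dst for dst in succs.get(just_built, set())
        if preds[dst] <= built  # every predecessor already built
    )

edges = [("A", "C"), ("B", "C"), ("A", "D")]
print(next_vertices({"A"}, edges, "A"))       # ['D']  (C still waits on B)
print(next_vertices({"A", "B"}, edges, "B"))  # ['C']
```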
This path runs the entire flow in a single call and is used by API clients, webhooks, and test scripts.
| Route | Function | Auth |
|---|---|---|
| POST /run/{flow_id_or_name} | simplified_run_flow() | API key |
| POST /run/session/{flow_id_or_name} | simplified_run_flow_session() | Session cookie |
| POST /run/advanced/{flow_id_or_name} | experimental_run_flow() | API key |
| POST /webhook/{flow_id_or_name} | webhook_run_flow() | Configurable |
src/backend/base/langflow/api/v1/endpoints.py546-812
simple_run_flow() Logic
src/backend/base/langflow/api/v1/endpoints.py146-207
simple_run_flow(flow, input_request, stream, api_key_user, event_manager)
→ process_tweaks(graph_data, tweaks)
→ Graph.from_payload(graph_data, ...)
→ graph.set_run_id(run_id)
→ run_graph_internal(graph, flow_id, session_id, inputs, outputs, stream, event_manager)
→ RunResponse(outputs=[RunOutputs], session_id=session_id)
- inputs is constructed from input_request.input_value and input_request.input_type as an InputValueRequest.
- outputs is determined by filtering graph vertices: those where vertex.is_output == True and whose output type matches input_request.output_type.
- If stream=True, an EventManager and asyncio.Queue are used, and the function returns a StreamingResponse.

Programmatic Run: Data Flow
Sources: src/backend/base/langflow/api/v1/endpoints.py146-207 src/backend/base/langflow/api/v1/endpoints.py340-389
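A request body for the simplified /run endpoint can be assembled as below. Field names (input_value, input_type, output_type, tweaks, session_id) follow the simplified run request described above, but treat this as an illustrative sketch rather than the complete schema.

```python
import json

def make_run_payload(input_value, tweaks=None, session_id=None):
    """Sketch of a request body for POST /run/{flow_id_or_name}."""
    payload = {
        "input_value": input_value,  # becomes InputValueRequest.input_value
        "input_type": "chat",
        "output_type": "chat",       # filters which output vertices are returned
        "tweaks": tweaks or {},      # per-node field overrides (see process_tweaks)
    }
    if session_id:
        payload["session_id"] = session_id
    return json.dumps(payload)

body = make_run_payload("Hello", tweaks={"OpenAIModel-abc": {"temperature": 0.2}})
```

The node ID "OpenAIModel-abc" is hypothetical; real tweak keys come from the node IDs in the stored flow JSON.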
Regardless of which execution path is used, each vertex is executed via the same two-step mechanism in interface/initialize/loading.py.
instantiate_class()
src/backend/base/langflow/interface/initialize/loading.py25-51
instantiate_class(vertex, user_id, event_manager)
→ vertex_type = vertex.vertex_type
→ custom_params = get_params(vertex.params)
→ code = custom_params.pop("code")
→ class_object = eval_custom_component_code(code)
→ custom_component = class_object(
_user_id=user_id,
_parameters=custom_params,
_vertex=vertex,
_tracing_service=get_tracing_service(),
_id=vertex.id
)
→ returns (custom_component, custom_params)
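The code-evaluation step in the outline above can be sketched with exec over the component source. This is a simplified model of eval_custom_component_code(); the real function performs additional validation, and the class name and parameters here are illustrative.

```python
def eval_component_code(code: str):
    """Simplified sketch of eval_custom_component_code(): execute the
    component's source in a fresh namespace and return the class it defines."""
    namespace = {}
    exec(code, namespace)  # trusted, user-authored component code
    classes = [v for v in namespace.values() if isinstance(v, type)]
    return classes[-1]

source = '''
class MyComponent:
    def __init__(self, **params):
        self.params = params
    def build(self, text: str) -> str:
        return text.upper()
'''
cls = eval_component_code(source)
instance = cls(_id="MyComponent-1")  # params mirror the _-prefixed kwargs above
print(instance.build("hi"))  # HI
```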
eval_custom_component_code() dynamically evaluates the component's Python source, yielding a Component or CustomComponent class.

get_instance_results()
src/backend/base/langflow/interface/initialize/loading.py54-75
get_instance_results(custom_component, custom_params, vertex, fallback_to_env_vars, base_type)
→ update_params_with_load_from_db_fields(...) ← resolves global variables
→ if base_type == "component": build_component(...)
→ if base_type == "custom_components": build_custom_component(...)
build_component()
src/backend/base/langflow/interface/initialize/loading.py147-155
Used for modern Component-based classes (see Component Lifecycle).
build_component(params, custom_component)
→ custom_component.set_attributes(params)
→ build_results, artifacts = await custom_component.build_results()
→ returns (custom_component, build_results, artifacts)
build_custom_component()
src/backend/base/langflow/interface/initialize/loading.py158-202
Used for legacy CustomComponent-based classes (those with a build() method).
build_custom_component(params, custom_component)
→ if build() is async: await custom_component.build(**params)
→ else: custom_component.build(**params)
→ artifact = {repr, raw, type}
→ returns (custom_component, build_result, artifact)
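The async/sync dispatch in the outline above can be sketched with inspect.iscoroutinefunction. This is a minimal model of the build_custom_component() branching, assuming only what the outline shows; the artifact dict keys follow the {repr, raw, type} shape above.

```python
import asyncio
import inspect

async def call_build(component, **params):
    """Sketch of build_custom_component()'s dispatch: await build() when it
    is a coroutine function, call it directly otherwise."""
    if inspect.iscoroutinefunction(component.build):
        result = await component.build(**params)
    else:
        result = component.build(**params)
    artifact = {"repr": repr(result), "raw": result, "type": type(result).__name__}
    return result, artifact

class SyncComp:
    def build(self, x):
        return x * 2

class AsyncComp:
    async def build(self, x):
        return x + 1

async def main():
    r1, _ = await call_build(SyncComp(), x=3)
    r2, art = await call_build(AsyncComp(), x=3)
    return r1, r2, art

r1, r2, art = asyncio.run(main())
print(r1, r2, art["type"])  # 6 4 int
```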
Vertex Execution: Key Code Entities
Sources: src/backend/base/langflow/interface/initialize/loading.py25-202
Before graph construction, process_tweaks(graph_data, tweaks, stream) applies caller-supplied overrides to the flow's node configurations. Tweaks are a dict[str, dict] mapping node IDs to field overrides. This is the mechanism used by the simplified /run endpoint and the webhook endpoint to inject dynamic values without modifying the stored flow.
src/backend/base/langflow/api/v1/endpoints.py163-172
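The overlay mechanics can be sketched as a deep-copied merge of tweaks into node templates. This is a toy model of process_tweaks(); the nested node/template shape shown here is an assumption about the flow JSON layout, and the real function also handles streaming flags.

```python
import copy

def apply_tweaks(graph_data, tweaks):
    """Toy model of process_tweaks(): overlay caller-supplied field values
    onto matching nodes without mutating the stored flow data."""
    data = copy.deepcopy(graph_data)
    for node in data["nodes"]:
        overrides = tweaks.get(node["id"], {})   # tweaks: dict[node_id, dict[field, value]]
        fields = node["data"]["node"]["template"]
        for field, value in overrides.items():
            if field in fields:
                fields[field]["value"] = value
    return data

flow = {"nodes": [{"id": "OpenAIModel-abc",
                   "data": {"node": {"template": {"temperature": {"value": 0.7}}}}}]}
tweaked = apply_tweaks(flow, {"OpenAIModel-abc": {"temperature": 0.1}})
print(tweaked["nodes"][0]["data"]["node"]["template"]["temperature"]["value"])  # 0.1
```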
update_params_with_load_from_db_fields() is called during get_instance_results(). It iterates over the vertex's load_from_db_fields (fields flagged as global variable references) and resolves each one by calling custom_component.get_variable(name, field, session). If the global variable is not found and fallback_to_env_vars=True, it falls back to environment variables.
src/backend/base/langflow/interface/initialize/loading.py111-144
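The resolution order described above (global-variable store first, then environment) can be sketched as follows. The function name and store shape are illustrative, not langflow's actual signature.

```python
import os

def resolve_field(name, variables, fallback_to_env_vars):
    """Sketch of the lookup order in update_params_with_load_from_db_fields():
    try the global-variable store, then optionally fall back to os.environ."""
    if name in variables:
        return variables[name]
    if fallback_to_env_vars and name in os.environ:
        return os.environ[name]
    raise ValueError(f"Variable {name} not found")

os.environ["MY_API_KEY"] = "from-env"  # hypothetical variable for the example
print(resolve_field("MY_API_KEY", {}, fallback_to_env_vars=True))            # from-env
print(resolve_field("OTHER", {"OTHER": "from-db"}, fallback_to_env_vars=False))  # from-db
```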
TracingService wraps graph runs and component builds for observability. The standard call pattern is:
- await tracing_service.start_tracers(run_id, run_name, user_id, session_id) — begins a run trace.
- async with tracing_service.trace_component(component, trace_name, inputs) — wraps each component build.
- await tracing_service.end_tracers(outputs, error) — closes the run trace.

Supported tracer backends: LangSmith, LangWatch, LangFuse, Arize Phoenix, Opik, Traceloop, Openlayer.
src/backend/base/langflow/services/tracing/service.py242-312
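The start/trace/end pattern can be modeled with a context manager. This is a synchronous toy stand-in for the TracingService call pattern (the real methods are async and take more arguments); the recorded event tuples are purely illustrative.

```python
import contextlib

class ToyTracer:
    """Minimal stand-in for the TracingService call pattern: a run-level
    start/end pair with a per-component context manager in between."""

    def __init__(self):
        self.events = []

    def start_tracers(self, run_id):
        self.events.append(("start", run_id))

    @contextlib.contextmanager
    def trace_component(self, name):
        self.events.append(("component_start", name))
        try:
            yield
        finally:
            # runs even if the component build raises
            self.events.append(("component_end", name))

    def end_tracers(self, error=None):
        self.events.append(("end", error))

tracer = ToyTracer()
tracer.start_tracers("run-1")
with tracer.trace_component("ChatOutput-1"):
    pass  # component build happens here
tracer.end_tracers()
print(len(tracer.events))  # 4
```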
| Type | Location | Role |
|---|---|---|
| Graph | lfx.graph.graph.base | Holds vertices, edges, run state |
| Vertex | lfx.graph.vertex.base | Single node; holds params, outputs, build state |
| Edge | lfx.graph.graph.base | Directed connection between two vertices |
| RunnableVerticesManager | lfx.graph | Tracks which vertices are ready to execute |
| InputValueRequest | lfx.schema.schema | Input payload (value + type) for a run |
| RunOutputs | lfx.graph.schema | Output bundle from a single output vertex |
| RunResponse | langflow.api.v1.schemas | Top-level API response wrapping RunOutputs |
| VertexBuildResponse | langflow.api.v1.schemas | Per-vertex build result in interactive mode |
| ResultDataResponse | langflow.api.v1.schemas | Results, outputs, logs, timing for a single vertex |
| JobQueueService | langflow.services.job_queue.service | Manages background build jobs |
| ChatService | langflow.services.chat.service | Caches in-progress Graph objects |
src/backend/base/langflow/api/v1/schemas.py65-84 src/backend/base/langflow/api/v1/schemas.py260-332
During vertex builds, exceptions are caught at two levels:
- ComponentBuildError — raised by the component itself. The message and formatted_traceback are extracted and returned in a VertexBuildResponse with valid=False. format_exception_message(exc) produces the user-facing string.
- On failure, the graph cache is cleared (chat_service.clear_cache(flow_id_str)) to prevent stale state.
- If graph.stop_vertex is set (partial execution requested), the list of next_runnable_vertices is filtered down to only that vertex.
src/backend/base/langflow/api/v1/chat.py340-358 src/backend/base/langflow/api/v1/chat.py388-394
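The error-to-response mapping can be sketched as below. This is a simplified model under stated assumptions: ComponentBuildError carries a formatted_traceback, failures yield valid=False, and the cache is cleared on failure; which exception levels trigger cache clearing is simplified here.

```python
class ComponentBuildError(Exception):
    """Stand-in for langflow's ComponentBuildError (carries a traceback string)."""

    def __init__(self, message, formatted_traceback=""):
        super().__init__(message)
        self.formatted_traceback = formatted_traceback

def build_vertex(build_fn):
    """Sketch of the build error handling: component errors become an
    invalid build response, and a failed build clears the cached graph."""
    cache_cleared = False
    try:
        return {"valid": True, "data": build_fn()}, cache_cleared
    except ComponentBuildError as exc:
        response = {"valid": False, "error": str(exc),
                    "traceback": exc.formatted_traceback}
    except Exception as exc:
        response = {"valid": False, "error": str(exc), "traceback": ""}
    cache_cleared = True  # mimic chat_service.clear_cache(flow_id_str)
    return response, cache_cleared

def boom():
    raise ComponentBuildError("missing API key", "Traceback ...")

resp, cleared = build_vertex(boom)
print(resp["valid"], cleared)  # False True
```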