This document describes the output type system in Langflow and how data flows between components during execution. It covers the core output types (Message, Data, DataFrame, Tool, Embeddings), output definition structure in component metadata, the edge-based connection system, type validation, and caching mechanisms.
For information about input types and how they receive data, see Input Type System. For the complete component execution lifecycle, see Component Lifecycle.
Components in Langflow produce outputs of specific types that determine how they can connect to other components. The type system ensures type-safe data flow and enables automatic validation of component connections.
Sources: src/lfx/src/lfx/_assets/component_index.json1-100
Message is the primary type for chat interactions and LLM I/O. It is defined in the lfx package and re-exported via src/backend/base/langflow/schema/message.py7-9
The module re-exports these related types:
| Class | Role |
|---|---|
Message | Rich text message with metadata, attachments, and session tracking |
ContentBlock | Individual content unit within a message (text, code, media, tool result, error) |
MessageResponse | Read-model returned from the database for serialized messages |
ErrorMessage | Subtype indicating an error condition, used when component execution fails |
DefaultModel | Base Pydantic model used by Message-related classes |
Message Fields:
- text: Primary text content
- sender: Sender type ("User" or "Machine")
- sender_name: Display name of the sender
- session_id: Session identifier for conversation history
- context_id: Additional context grouping
- files: Optional file attachments
- timestamp: Message creation time
- content_blocks: List of ContentBlock objects for structured multi-part content

Common Producers:

- ChatInput: User messages from the playground
- Prompt: Formatted prompts
- Parser: Parsed text from documents

Example Output Definition:
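The original definition is not reproduced on this page; as a hedged sketch, a Message output entry in the component index has roughly this shape (the field names match the Output Definition Structure table later in this document, but the values here are illustrative, not copied from the real ChatInput entry):

```python
# Illustrative sketch of a Message output definition as stored in the
# component index. Values are hypothetical.
message_output = {
    "name": "message",
    "display_name": "Message",
    "method": "message_response",  # component method that produces this output
    "types": ["Message"],
    "selected": "Message",
    "cache": True,
    "tool_mode": True,
    "allows_loop": False,
    "group_outputs": False,
}
```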
Sources: src/backend/base/langflow/schema/message.py7-9
The Data type represents a single structured record with key-value pairs. It's used for individual documents, search results, or any structured data that needs to flow through the system.
Common Producers:
Example from FAISS Component:
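As a minimal sketch (not the actual FAISS code), a Data record can be modeled as text plus a metadata dict:

```python
from dataclasses import dataclass, field

@dataclass
class Data:
    """Minimal stand-in for lfx's Data type; the real class is a Pydantic model."""
    text: str = ""
    data: dict = field(default_factory=dict)

# A single search hit as a structured record (values are illustrative).
hit = Data(text="relevant chunk of text", data={"source": "doc.pdf", "score": 0.92})
```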
Sources: src/backend/base/langflow/schema/data.py6-8
The DataFrame type represents tabular data with multiple records, similar to a pandas DataFrame. It's used for bulk data processing and batch operations.
Common Producers:
- SplitText: Chunked document data
- StructuredOutput: Parsed structured data with schema

Example from FAISS Component:
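As an illustrative sketch (not the actual FAISS or SplitText code), a DataFrame-style result can be thought of as a list of Data-like rows, one per chunk; here the rows are plain dicts and the chunking logic is hypothetical:

```python
def split_text(text: str, chunk_size: int = 20) -> list[dict]:
    # Sketch of what SplitText produces: one Data-like row per chunk.
    # The chunk_size default and row shape are illustrative, not Langflow's.
    chunks = [text[i:i + chunk_size] for i in range(0, len(text), chunk_size)]
    return [{"text": c, "data": {"chunk_index": i}} for i, c in enumerate(chunks)]

rows = split_text("x" * 50)  # 50 characters yield 3 chunks of up to 20
```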
Sources: src/lfx/src/lfx/_assets/component_index.json60-71
The Tool type represents LangChain-compatible tools that can be used by agents. Tools encapsulate functionality that agents can invoke dynamically.
Common Producers:
Example from Notion Component:
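The Notion example itself is not shown here; as an assumption-laden sketch, a tool pairs a callable with a name and description so an agent can invoke it (ToolSketch stands in for LangChain's StructuredTool, and search_pages is a hypothetical helper):

```python
from dataclasses import dataclass
from typing import Any, Callable

@dataclass
class ToolSketch:
    """Stand-in for langchain's StructuredTool: a named, described callable."""
    name: str
    description: str
    func: Callable[..., Any]

    def run(self, **kwargs: Any) -> Any:
        return self.func(**kwargs)

def search_pages(query: str) -> list[str]:
    # Hypothetical helper; a real component would call the Notion API here.
    return [f"page matching {query!r}"]

tool = ToolSketch(
    name="notion_search",
    description="Search Notion pages by keyword.",
    func=search_pages,
)
```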
Sources: src/lfx/src/lfx/_assets/component_index.json328-339
The Embeddings type represents embedding models that convert text to vector representations. These are used primarily for vector store operations.
Common Producers:
- EmbeddingModel: OpenAI, Cohere, Azure embeddings

Connection Pattern:
Embedding outputs typically connect to vector store embedding or embedding_model inputs for both ingestion and retrieval operations.
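The interface that vector stores expect from an Embeddings output follows LangChain's convention of embed_documents (ingestion) and embed_query (retrieval). This toy implementation only illustrates the shape; the vector math is deliberately meaningless:

```python
class ToyEmbeddings:
    """Toy stand-in for a LangChain-style Embeddings model (fixed 4-dim vectors)."""

    def embed_documents(self, texts: list[str]) -> list[list[float]]:
        # Ingestion path: one vector per document.
        return [self.embed_query(t) for t in texts]

    def embed_query(self, text: str) -> list[float]:
        # Retrieval path: a single vector for the query.
        return [float(len(text) % 7), float(len(text) % 5), 1.0, 0.0]

emb = ToyEmbeddings()
vectors = emb.embed_documents(["hello", "world!"])
```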
Sources: src/backend/base/langflow/initial_setup/starter_projects/Vector Store RAG.json241-262
During execution, component outputs are wrapped in a hierarchy of API response schemas for transport to the frontend. These schemas handle serialization, truncation, and transport.
Schema hierarchy:
Sources: src/backend/base/langflow/api/v1/schemas.py65-331 src/backend/base/langflow/schema/schema.py2-24
OutputValue is re-exported from lfx.schema.schema via src/backend/base/langflow/schema/schema.py8 It wraps the value and type string for a single named output port. A ResultDataResponse.outputs dict contains one OutputValue per output port name.
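A sketch of that outputs mapping, using a dataclass stand-in for OutputValue (the field names here are assumptions for illustration; only "a value plus its type string" is given above):

```python
from dataclasses import dataclass
from typing import Any

@dataclass
class OutputValueSketch:
    """Stand-in for lfx's OutputValue: a value plus its type string.
    Field names are assumptions, not confirmed against the real model."""
    message: Any
    type: str

# One OutputValue per output port name, as in ResultDataResponse.outputs.
outputs = {
    "message": OutputValueSketch(message="Hello!", type="message"),
    "dataframe": OutputValueSketch(message=[{"text": "row"}], type="dataframe"),
}
```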
Other symbols re-exported from lfx.schema.schema in src/backend/base/langflow/schema/schema.py2-24:
| Symbol | Role |
|---|---|
OutputType | Enum for API-level output filtering: "chat", "text", "debug", "any" — used by SimplifiedAPIRequest.output_type |
InputType | Enum for API-level input routing |
ErrorLog | Structured error log entry |
LogType | Log type enum |
StreamURL | Streaming endpoint URL type |
build_output_logs | Utility to construct output log dicts from component results |
get_type | Utility to resolve a type name from a value |
Note: OutputType here is not the same as component port types (Message, Data, etc.). It is used in SimplifiedAPIRequest to select which output vertices to return from a flow run.
Sources: src/backend/base/langflow/schema/schema.py2-24
ResultDataResponse is defined in src/backend/base/langflow/api/v1/schemas.py266-304 It holds the results from building a single vertex and is nested inside VertexBuildResponse.
| Field | Type | Description |
|---|---|---|
results | Any | Raw component build results |
outputs | dict[str, OutputValue] | Per-output-port results, keyed by output name |
logs | dict[str, list[Log]] | Per-output-port logs |
message | Any | Artifact/message data |
artifacts | Any | Additional artifacts |
timedelta | float | Execution time in seconds |
duration | str | Human-readable duration string |
used_frozen_result | bool | Whether a cached (frozen) result was used |
ResultDataResponse applies serialization limits via serialize() from src/backend/base/langflow/serialization/serialization.py using configurable max_text_length and max_items_length settings from the backend settings service.
Sources: src/backend/base/langflow/api/v1/schemas.py266-304
VertexBuildResponse is defined in src/backend/base/langflow/api/v1/schemas.py307-331 It is the top-level response from the (deprecated) POST /build/{flow_id}/vertices/{vertex_id} endpoint.
| Field | Type | Description |
|---|---|---|
id | str | Vertex ID |
valid | bool | Whether the build succeeded |
params | Any | Build parameters or error message |
data | ResultDataResponse | Output results and logs |
next_vertices_ids | list[str] | Vertices ready to execute next |
inactivated_vertices | list[str] | Vertices skipped due to inactive branches |
top_level_vertices | list[str] | Top-level runnable vertices |
timestamp | datetime | Build timestamp |
Sources: src/backend/base/langflow/api/v1/schemas.py307-331
RunResponse is defined in src/backend/base/langflow/api/v1/schemas.py65-83 It is the response from the simplified POST /run/{flow_id} endpoint after a complete flow execution.
| Field | Type | Description |
|---|---|---|
outputs | list[RunOutputs] | Outputs from all requested output vertices |
session_id | str | Session ID for the run |
Sources: src/backend/base/langflow/api/v1/schemas.py65-83
ChatOutputResponse is a Pydantic model defined in src/backend/base/langflow/utils/schemas.py18-80 that validates the output payload from chat components.
Fields:
| Field | Type | Description |
|---|---|---|
message | str or list of str/dict | The chat message content |
sender | str | Sender identifier (default: MESSAGE_SENDER_AI) |
sender_name | str | Display name (default: MESSAGE_SENDER_NAME_AI) |
session_id | str | Session identifier |
stream_url | str | Optional URL for streaming response tokens |
component_id | str | ID of the originating component |
files | list[File] | List of attached files |
type | str | Message type discriminator |
Validation logic:
- validate_files (@field_validator): Ensures each file dict contains path, name, and type keys. A missing name is derived from the path basename; a missing type is inferred from the file extension, checked against TEXT_FILE_TYPES + IMG_FILE_TYPES from lfx.base.data.utils.
- A @model_validator ensures message is a non-empty string or a non-empty list.

The File TypedDict (defined in the same file) requires three keys:

- path: Filesystem path to the file
- name: Display name
- type: MIME type or extension

Sources: src/backend/base/langflow/utils/schemas.py1-80
Each component output is defined in the component's metadata within the component index. The output definition controls how the output behaves during execution and how it can connect to other components.
| Field | Type | Description |
|---|---|---|
name | string | Internal identifier for the output (e.g., "message", "search_results") |
display_name | string | Human-readable name shown in UI |
method | string | Component method name that produces this output |
types | array | List of output types this port can produce |
selected | string | Default selected type from types array |
cache | boolean | Whether to cache this output's result |
tool_mode | boolean | Whether this output can be used as a tool |
allows_loop | boolean | Whether this output can create feedback loops |
group_outputs | boolean | Whether to group multiple outputs together |
Components can declare multiple output types for a single port, allowing flexibility in how the output is consumed:
The selected field indicates the default type, but connections can use any type in the types array.
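For example, a vector-store search port might declare both Data and DataFrame with Data selected by default. This definition is hypothetical; the shape follows the field table above:

```python
# Hypothetical output definition declaring two producible types.
search_output = {
    "name": "search_results",
    "display_name": "Search Results",
    "method": "search_documents",    # illustrative method name
    "types": ["Data", "DataFrame"],  # a consumer may connect using either type
    "selected": "Data",              # default type shown in the UI
    "cache": True,
}
```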
Sources: src/lfx/src/lfx/_assets/component_index.json47-71
Components can define multiple distinct outputs, each with its own method:
Sources: src/lfx/src/lfx/_assets/component_index.json47-71
Data flows between components through edges that connect output ports to input ports. The edge system enforces type compatibility and manages data transformation.
Each edge in a flow contains metadata about the connection:
Key Fields:
- source: ID of the source component
- sourceHandle.name: Name of the output port
- sourceHandle.output_types: Types this output can produce
- target: ID of the target component
- targetHandle.fieldName: Name of the input field
- targetHandle.inputTypes: Types this input accepts

Sources: src/backend/base/langflow/initial_setup/starter_projects/Vector Store RAG.json4-32
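A sketch of that edge metadata (component IDs and field names are hypothetical), together with the set-overlap check that type compatibility amounts to:

```python
edge = {
    "source": "ChatInput-abc12",
    "sourceHandle": {"name": "message", "output_types": ["Message"]},
    "target": "Prompt-def34",
    "targetHandle": {"fieldName": "user_message", "inputTypes": ["Message", "Text"]},
}

def edge_is_type_compatible(edge: dict) -> bool:
    # Compatible when at least one produced type is accepted by the target input.
    produced = set(edge["sourceHandle"]["output_types"])
    accepted = set(edge["targetHandle"]["inputTypes"])
    return bool(produced & accepted)
```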
The frontend validates connections before allowing edges to be created. The backend performs additional validation during graph execution.
Sources: src/backend/base/langflow/initial_setup/starter_projects/Vector Store RAG.json4-32
How a vertex output propagates to the next component's input:
Sources: src/backend/base/langflow/api/v1/chat.py262-432 src/backend/base/langflow/interface/initialize/loading.py147-202
This is the basic conversational pattern seen in most starter projects:
Data Transformation:
- Message(text="user query", sender="User")
- Message(text="formatted prompt")
- Message(text="LLM response", sender="Machine")

Sources: src/backend/base/langflow/initial_setup/starter_projects/Basic Prompting.json1-149
Retrieval-Augmented Generation involves multiple data type transformations:
Data Transformation:
- Message(text=file_content)
- DataFrame([Data(...), Data(...), ...])
- Data(text=chunk, metadata={...})
- Message(text=formatted_results)

Sources: src/backend/base/langflow/initial_setup/starter_projects/Vector Store RAG.json62-178
Components can produce structured data with schemas:
Use Case: Extracting structured information from unstructured text, such as parsing resumes or financial reports.
Agents use Tool outputs to access external functionality:
Data Transformation:
- StructuredTool with search functionality
- StructuredTool with external API calls

Sources: src/backend/base/langflow/initial_setup/starter_projects/Nvidia Remix.json147-174
Each output is produced by calling a specific method on the component instance. The method name is specified in the output definition's method field.
build_component

The instantiate_class function in src/backend/base/langflow/interface/initialize/loading.py25-51 compiles the component class from vertex code and instantiates it. Then get_instance_results (src/backend/base/langflow/interface/initialize/loading.py54-75) resolves global variables and calls either:

- build_component (src/backend/base/langflow/interface/initialize/loading.py147-155): for new-style Component subclasses, which sets input attributes and calls build_results()
- build_custom_component (src/backend/base/langflow/interface/initialize/loading.py158-202): for legacy CustomComponent subclasses, which calls the build() method directly

Call chain:
```
instantiate_class(vertex, user_id, event_manager)
  → eval_custom_component_code(code)             # compile class from code string
  → class_object(_parameters, _vertex, ...)
  → component.set_event_manager(event_manager)

get_instance_results(component, params, vertex)
  → update_params_with_load_from_db_fields(...)  # resolve secrets/global vars
  → build_component(params, component)
      → component.set_attributes(params)
      → component.build_results()
          → for each Output(method="..."):
                call output_method() → result
      → return (build_results_dict, artifacts_dict)
```
Sources: src/backend/base/langflow/interface/initialize/loading.py25-202
Component build pipeline mapping code entities to execution steps:
Execution steps:
1. Graph.build_vertex() calls instantiate_class() to compile and instantiate the component from the code string stored in the vertex
2. get_instance_results() resolves database-backed global variables for secret fields (load_from_db_fields)
3. build_component() sets all input values as attributes and calls build_results()
4. build_results() iterates declared Output definitions, calls each output's named method, and collects results keyed by output name
5. Results are wrapped in a ResultDataResponse for the API layer

Sources: src/backend/base/langflow/interface/initialize/loading.py25-202 src/backend/base/langflow/api/v1/chat.py262-432
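The build_results step can be sketched as a loop over declared outputs. This is a simplification using a duck-typed stand-in component, not the real Component base class:

```python
class ComponentSketch:
    """Duck-typed stand-in for a Langflow Component (illustrative only)."""

    # Each declared output names the method that produces it.
    outputs = [
        {"name": "message", "method": "message_response"},
    ]

    def set_attributes(self, params: dict) -> None:
        # Inputs are set as plain attributes before building.
        for key, value in params.items():
            setattr(self, key, value)

    def message_response(self) -> str:
        return f"echo: {self.input_value}"

    def build_results(self) -> dict:
        # Call each output's named method; key results by output name.
        return {out["name"]: getattr(self, out["method"])() for out in self.outputs}

component = ComponentSketch()
component.set_attributes({"input_value": "hi"})
results = component.build_results()
```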
Output caching improves performance by avoiding redundant computation when the same component output is used multiple times.
| Field | Value | Behavior |
|---|---|---|
cache | true | Result stored in memory, reused for same vertex/output |
cache | false | Method executed every time output is requested |
```
cache_key = f"{vertex_id}:{output_name}"
```
For example, if a FAISS component with ID FAISS-Uz8O4 has two outputs:
- FAISS-Uz8O4:search_results → cached search results
- FAISS-Uz8O4:dataframe → cached dataframe representation

Caches are invalidated:
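The cache=true/false behavior can be sketched as a keyed in-memory store. This is a simplification under the stated key scheme, not the real per-graph cache implementation:

```python
class OutputCacheSketch:
    """Illustrative output cache keyed by '{vertex_id}:{output_name}'."""

    def __init__(self) -> None:
        self._store: dict[str, object] = {}

    def get_or_build(self, vertex_id, output_name, build_fn, cache=True):
        key = f"{vertex_id}:{output_name}"
        if cache and key in self._store:
            return self._store[key]  # cache hit: skip recomputation
        result = build_fn()          # cache=False: the method runs every time
        if cache:
            self._store[key] = result
        return result

calls = []
cache = OutputCacheSketch()
cache.get_or_build("FAISS-Uz8O4", "search_results", lambda: calls.append(1) or "hits")
cache.get_or_build("FAISS-Uz8O4", "search_results", lambda: calls.append(1) or "hits")
```

With cache=True, the build function runs once; the second request reuses the stored result.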
Vector Store Cache Example:
This input controls whether the vector store itself (not just query results) is cached across multiple output methods.
Sources: src/lfx/src/lfx/_assets/component_index.json239-258
Components can expose their outputs as tools for agent consumption. The tool_mode flag indicates whether an output can function as a LangChain tool.
Components implement a build_tool method that returns a StructuredTool:
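A hedged sketch of the pattern: the component and its method are hypothetical, and a dict with the essential fields (name, description, callable) stands in for the StructuredTool the real method returns:

```python
class WebSearchSketch:
    """Hypothetical component whose output runs in tool mode."""

    def search(self, query: str) -> list[str]:
        # A real component would call an external API here.
        return [f"result for {query!r}"]

    def build_tool(self) -> dict:
        # The real method returns a langchain StructuredTool; this dict
        # carries the same essential fields for illustration.
        return {
            "name": "web_search",
            "description": "Search the web and return result snippets.",
            "func": self.search,
        }

tool = WebSearchSketch().build_tool()
```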
The tool wraps the component's functionality and includes:
Sources: src/lfx/src/lfx/_assets/component_index.json472-496
The agent receives multiple tools and uses the LLM to decide:
Sources: src/backend/base/langflow/initial_setup/starter_projects/Nvidia Remix.json147-202
The system performs automatic conversions between compatible types:
Inputs that accept Message or Text can receive either:
- Message objects are passed directly
- Plain text is wrapped in Message objects

Data and DataFrame interconvert:
- A DataFrame contains multiple Data objects
- A Data can be promoted to a DataFrame with one row
- A DataFrame can be iterated to yield individual Data objects

Components convert between Langflow types and LangChain types:
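The Message/Text and Data/DataFrame coercions can be sketched with stand-in types; the real conversion logic lives in lfx and handles many more cases:

```python
from dataclasses import dataclass, field

@dataclass
class Message:
    """Stand-in for lfx's Message."""
    text: str = ""

@dataclass
class Data:
    """Stand-in for lfx's Data."""
    text: str = ""
    data: dict = field(default_factory=dict)

def coerce_to_message(value) -> Message:
    # A Message passes through unchanged; plain text is wrapped.
    return value if isinstance(value, Message) else Message(text=str(value))

def promote_to_rows(value) -> list[Data]:
    # A single Data becomes a one-row table; a list of Data passes through,
    # mirroring the Data <-> DataFrame promotion described above.
    return [value] if isinstance(value, Data) else list(value)
```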
Sources: src/lfx/src/lfx/_assets/component_index.json96-112
All output types are tracked in the component index, allowing the frontend to:
Components declare their base output classes:
The base_classes field indicates the fundamental types this component can produce, which is used for high-level filtering before checking individual output types.
Sources: src/backend/base/langflow/initial_setup/starter_projects/Vector Store RAG.json300-352
The output type system in Langflow provides:
Data flows through the graph via edges that connect typed outputs to compatible inputs, with automatic validation and transformation. Each output is produced by calling a specific method on the component, with results cached based on vertex identity and output name.
Sources: src/lfx/src/lfx/_assets/component_index.json1-258 src/backend/base/langflow/initial_setup/starter_projects/Vector Store RAG.json1-291 src/backend/base/langflow/schema/message.py1-10