This page covers patterns for flows focused on data transformation: how raw inputs are shaped, routed through LLMs, structured into typed outputs, and serialized for display or downstream use. The examples are drawn from the bundled starter projects located in src/backend/base/langflow/initial_setup/starter_projects/.
For agent-centric flows that emphasize tool use and autonomous reasoning, see Agent Patterns. For flows that retrieve and synthesize documents from vector stores, see RAG Patterns. For flows that chain multiple agents sequentially, see Multi-Agent Workflows.
Three primary types carry data between components in all processing flows. Each maps to a schema class in the lfx package.
| Type | Schema class | Description |
|---|---|---|
| Message | lfx.schema.message.Message | A chat message with text, sender, session ID, and optional files |
| Data | lfx.schema.data.Data | An untyped key-value payload; commonly carries JSON-shaped results |
| DataFrame | lfx.schema.dataframe.DataFrame | A tabular structure used by data-fetching and batch-processing components |
ChatOutput accepts all three via its input_value field, a HandleInput typed to accept Data, DataFrame, and Message. The convert_to_string method in ChatOutput dispatches to serialization helpers depending on which type arrives.
Sources: src/backend/base/langflow/utils/schemas.py 1-117
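The three carrier types can be pictured with minimal stand-in classes. These dataclasses are illustrative only; the real lfx schema classes carry more fields and behavior:

```python
from dataclasses import dataclass, field

# Simplified stand-ins for the lfx schema classes (illustrative only;
# the real classes live in lfx.schema.message, lfx.schema.data, and
# lfx.schema.dataframe).

@dataclass
class Message:
    text: str
    sender: str = "User"
    session_id: str = ""
    files: list = field(default_factory=list)

@dataclass
class Data:
    data: dict = field(default_factory=dict)  # untyped key-value payload

@dataclass
class DataFrame:
    rows: list = field(default_factory=list)  # list of dicts, one per row

msg = Message(text="hello", sender="User", session_id="abc123")
payload = Data(data={"status": "ok", "count": 3})
table = DataFrame(rows=[{"id": 1}, {"id": 2}])
```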
Data type relationships:
Sources: src/backend/base/langflow/initial_setup/starter_projects/Market Research.json 60-146, src/backend/base/langflow/initial_setup/starter_projects/Youtube Analysis.json 175-230
PromptComponent (module: lfx.components.models_and_agents.prompt.PromptComponent) builds a Message by substituting named variables into a template string.
Key details of its template syntax and ports:

- Variables use single-bracket syntax ({variable_name}); Mustache syntax ({{variable_name}}) is enabled by setting use_double_brackets = True.
- Parsed variable names are tracked in the custom_fields.template list.
- The component exposes a prompt output of type Message, produced by build_prompt().
- update_build_config() re-parses the template when use_double_brackets changes, cleaning up ports that belonged to the old syntax mode.

The PromptComponent feeds its output Message either directly to ChatOutput or to the input_value or system_message inputs of a LanguageModelComponent.
Sources: src/backend/base/langflow/initial_setup/starter_projects/Research Agent.json 290-400, src/backend/base/langflow/initial_setup/starter_projects/Instagram Copywriter.json 554-700, src/backend/base/langflow/initial_setup/starter_projects/SaaS Pricing.json 86-155
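The substitution behavior described above can be sketched as follows. build_prompt here is a hypothetical helper mirroring the described behavior, not the component's actual implementation:

```python
import re

def build_prompt(template, variables, use_double_brackets=False):
    """Substitute named variables into a template string.

    Hypothetical helper mirroring (not reproducing) PromptComponent's
    described behavior: {name} syntax by default, {{name}} when
    use_double_brackets is True. Unknown variables are left untouched.
    """
    pattern = r"\{\{(\w+)\}\}" if use_double_brackets else r"\{(\w+)\}"
    return re.sub(pattern, lambda m: str(variables.get(m.group(1), m.group(0))), template)

print(build_prompt("Write a post about {topic}.", {"topic": "espresso"}))
# → Write a post about espresso.
print(build_prompt("Write a post about {{topic}}.", {"topic": "espresso"},
                   use_double_brackets=True))
```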
StructuredOutput accepts a Message (typically the raw text response from an LLM or agent) and parses it into a schema-validated DataFrame. It exposes a dataframe_output output port.
In the Market Research flow, the Agent's response feeds StructuredOutput, which then emits a DataFrame that flows to ParserComponent.
Sources: src/backend/base/langflow/initial_setup/starter_projects/Market Research.json 104-146
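A rough sketch of the StructuredOutput idea, assuming a JSON-array reply and a hypothetical two-field schema (the real component validates against a user-defined schema):

```python
import json

# Hypothetical two-field schema for illustration; the real component
# validates against a schema the user defines on the component.
SCHEMA_FIELDS = {"company", "market_share"}

def parse_structured(message_text):
    """Keep only rows from the LLM's JSON reply that carry every
    schema field (a sketch of the idea, not the real parser)."""
    rows = json.loads(message_text)
    return [row for row in rows if SCHEMA_FIELDS <= row.keys()]

llm_reply = '[{"company": "Acme", "market_share": 0.4}, {"company": "Globex"}]'
rows = parse_structured(llm_reply)  # the second row lacks market_share
```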
ParserComponent converts Data or DataFrame inputs into a Message (output: parsed_text). It is the standard bridge from tabular or structured data back to a text format that ChatOutput can display naturally.
Its input port input_data accepts both DataFrame and Data:
input_data (accepts: DataFrame, Data)
↓
parsed_text (output: Message)
Sources: src/backend/base/langflow/initial_setup/starter_projects/Market Research.json 130-146, src/backend/base/langflow/initial_setup/starter_projects/Youtube Analysis.json 202-230
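The DataFrame-to-text bridge can be sketched like this; the row template is hypothetical:

```python
def parse_to_text(rows, template="{company}: {market_share}"):
    """Render each DataFrame row through a template and join the
    results into one Message-ready string (a sketch of the
    ParserComponent idea; the template shown is hypothetical)."""
    return "\n".join(template.format(**row) for row in rows)

text = parse_to_text([
    {"company": "Acme", "market_share": 0.4},
    {"company": "Globex", "market_share": 0.1},
])
```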
BatchRunComponent iterates over rows of a DataFrame and runs a configured operation on each row, collecting results into a new DataFrame (output: batch_results). The Youtube Analysis flow uses it to process each YouTube comment row individually before aggregating.
Sources: src/backend/base/langflow/initial_setup/starter_projects/Youtube Analysis.json 185-230
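A minimal sketch of the per-row batch pattern, with a trivial keyword check standing in for the per-row LLM call (not the component's real API):

```python
def batch_run(rows, op):
    """Sketch of the BatchRunComponent pattern: run `op` on every
    DataFrame row and collect results into a new row list."""
    return [{**row, "result": op(row)} for row in rows]

# Stand-in sentiment check replacing the per-row LLM call used in
# the Youtube Analysis flow:
comments = [{"comment": "great video"}, {"comment": "waste of time"}]
scored = batch_run(
    comments,
    lambda r: "positive" if "great" in r["comment"] else "negative",
)
```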
SaveToFile accepts Data, DataFrame, or Message on its input port and persists the content to disk. The News Aggregator flow feeds ChatOutput's message output into SaveToFile to archive the result.
Sources: src/backend/base/langflow/initial_setup/starter_projects/News Aggregator.json 100-119
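The persistence step can be sketched as a small helper. This is illustrative only; the real SaveToFile component supports more formats and input types:

```python
import json
import os
import tempfile

def save_to_file(content, path):
    """Sketch of the SaveToFile idea (not the component's real API):
    write text as-is, serialize anything else as JSON."""
    with open(path, "w", encoding="utf-8") as f:
        if isinstance(content, str):
            f.write(content)
        else:
            json.dump(content, f, indent=2)
    return path

path = os.path.join(tempfile.gettempdir(), "flow_result.txt")
save_to_file("Today's top headlines", path)
```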
The simplest data processing pattern feeds one or more text inputs through a PromptComponent into a LanguageModelComponent, then to ChatOutput. Multiple LLM steps can be chained by wiring one LLM's text_output into the next step's PromptComponent.
Basic Prompting flow:
Sources: src/backend/base/langflow/initial_setup/starter_projects/Instagram Copywriter.json 119-175
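The chaining idea can be sketched with a stand-in for the model call. fake_llm is purely illustrative; no real model is invoked:

```python
def fake_llm(prompt):
    """Stand-in for a LanguageModelComponent call; no real model here."""
    return f"LLM({prompt})"

# Step 1: render the first prompt and run it through the first model.
draft = fake_llm("Summarize the topic: coffee")

# Step 2: wire the first model's text_output into the next prompt
# as a named variable, then run the second model.
final = fake_llm(f"Polish this draft: {draft}")
```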
The Instagram Copywriter flow runs two LLMs in sequence and merges their outputs into a final prompt before output. The LanguageModelComponent's text_output is wired into one or more subsequent PromptComponent inputs as named variables.
Instagram Copywriter flow:
Sources: src/backend/base/langflow/initial_setup/starter_projects/Instagram Copywriter.json 33-288
This pattern converts the free-text response from an LLM or agent into a DataFrame with a defined schema, then serializes it back to text for display.
Market Research flow (Agent → StructuredOutput → ParserComponent):
Sources: src/backend/base/langflow/initial_setup/starter_projects/Market Research.json 3-146
| Component | Output port | Output type | Target input |
|---|---|---|---|
| Agent | response | Message | StructuredOutput.input_value |
| StructuredOutput | dataframe_output | DataFrame | ParserComponent.input_data |
| ParserComponent | parsed_text | Message | ChatOutput.input_value |
When a source component produces a DataFrame (e.g., from an API or web scrape), BatchRunComponent can apply a per-row transformation. ParserComponent then flattens the result for downstream consumption.
Youtube Analysis flow:
Sources: src/backend/base/langflow/initial_setup/starter_projects/Youtube Analysis.json 3-230
The Research Agent starter uses a two-LLM pipeline where the first LLM (LanguageModelComponent-TZiUW) determines what to research, and the second (LanguageModelComponent-80mt4) synthesizes a final answer. An Agent with a search tool executes the actual research between the two LLMs.
Research Agent flow:
Sources: src/backend/base/langflow/initial_setup/starter_projects/Research Agent.json 3-289
After a result has been produced, flows can persist it to disk using SaveToFile. The News Aggregator wires ChatOutput.message → SaveToFile.input.
News Aggregator file persistence:
Sources: src/backend/base/langflow/initial_setup/starter_projects/News Aggregator.json 3-119
ChatOutput.convert_to_string() dispatches based on the runtime type of input_value:
| Input type | Handling |
|---|---|
| Message | Returns message.text (preserving the existing Message object) |
| Data | _serialize_data() → JSON pretty-printed in a Markdown code fence |
| DataFrame | safe_convert(item) from lfx.helpers.data |
| list | Joins items with \n using safe_convert per item |
| Generator | Passed through for streaming |
The data_template input (default "{text}") is used to render Data objects when the template approach is preferred over full JSON serialization.
Sources: src/backend/base/langflow/initial_setup/starter_projects/Market Research.json 484-620
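The dispatch logic can be sketched as follows. This is a simplified stand-in, not the actual ChatOutput method; the Message and DataFrame branches are collapsed into a plain-text fallback here:

```python
import json
from types import GeneratorType

def convert_to_string(item, data_template=""):
    """Simplified type-based dispatch in the spirit of
    ChatOutput.convert_to_string (not the real implementation)."""
    if isinstance(item, dict):  # stand-in for a Data payload
        if data_template:
            # Template rendering, e.g. data_template="{text}"
            return data_template.format(**item)
        # Default: pretty-printed JSON (the real method wraps this
        # in a Markdown code fence)
        return json.dumps(item, indent=2)
    if isinstance(item, list):  # join items with newlines
        return "\n".join(str(convert_to_string(i, data_template)) for i in item)
    if isinstance(item, GeneratorType):  # pass through for streaming
        return item
    return str(item)  # Message-like / plain-text fallback
```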
| Starter project | Key data processing components | Primary pattern |
|---|---|---|
| Basic Prompting | PromptComponent, LanguageModelComponent | Prompt template → single LLM |
| Research Agent | PromptComponent × 4, LanguageModelComponent × 2, Agent | Two-LLM prompt chain with search |
| Instagram Copywriter | PromptComponent × 3, LanguageModelComponent × 2 | Parallel prompt chains merged |
| Market Research | StructuredOutput, ParserComponent | Agent → structured DataFrame → text |
| Youtube Analysis | YouTubeCommentsComponent, BatchRunComponent, ParserComponent | DataFrame batch → Parser |
| News Aggregator | Agent, ChatOutput, SaveToFile | Agent result → file persistence |
| SaaS Pricing | PromptComponent (multi-variable), Agent, CalculatorComponent | Parameterized prompt → agent |
All starter project JSON files are stored in src/backend/base/langflow/initial_setup/starter_projects/ and loaded during initial setup as described in Starter Projects.