This page documents Lumo's system architecture, its identity configuration in Open Source prompts/Lumo/Prompt.txt, and its multi-model routing mechanism. Lumo is Proton's AI assistant, launched July 23, 2025, which routes queries to specialized models automatically by task type.
Related pages: 6.2.2 (product tiers), 6.2.3 (web search, file handling), 6.2.5 (content policies), 6.2.6 (communication style).
Lumo's identity is defined in the prompt file header section, Open Source prompts/Lumo/Prompt.txt1-8. The configuration establishes organizational affiliation, temporal boundaries, and routing architecture.
Identity Configuration Block:
| Parameter | Value | Line Reference |
|---|---|---|
| name | Lumo | Open Source prompts/Lumo/Prompt.txt2 |
| organization | Proton | Open Source prompts/Lumo/Prompt.txt2 |
| launch_date | July 23, 2025 | Open Source prompts/Lumo/Prompt.txt2 |
| knowledge_cutoff | April 2024 | Open Source prompts/Lumo/Prompt.txt5 |
| current_date | October 19, 2025 | Open Source prompts/Lumo/Prompt.txt4 |
| routing_mode | Multi-model, task-based | Open Source prompts/Lumo/Prompt.txt7 |
Diagram: Prompt File Structure to System Behavior Mapping
The temporal boundary configuration creates a knowledge gap from April 2024 to October 2025, requiring tool invocation for queries about events in this window. The system detects time-sensitive queries and routes them to web search tools when enabled.
Sources: Open Source prompts/Lumo/Prompt.txt1-8
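The gap-detection logic described above can be sketched as follows. This is a minimal illustration assuming the configured dates from the table; the function names and the routing strings are hypothetical, not taken from the prompt.

```python
from datetime import date

# Temporal boundaries as documented in the identity configuration block.
KNOWLEDGE_CUTOFF = date(2024, 4, 30)   # knowledge_cutoff: April 2024
CURRENT_DATE = date(2025, 10, 19)      # current_date: October 19, 2025

def falls_in_knowledge_gap(event_date: date) -> bool:
    """True if an event lies after the knowledge cutoff but on or before
    the current date, i.e. inside the window the model has no data for."""
    return KNOWLEDGE_CUTOFF < event_date <= CURRENT_DATE

def needs_web_search(event_date: date, web_search_enabled: bool) -> str:
    """Decide how to handle a query about a dated event (illustrative)."""
    if not falls_in_knowledge_gap(event_date):
        return "answer_directly"
    return "invoke_web_search" if web_search_enabled else "ask_user_to_enable_toggle"
```

Queries about events inside the window trigger a tool path; everything else is answered from the model's own knowledge.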
Lumo implements automatic task-based model routing, configured in Open Source prompts/Lumo/Prompt.txt7-8. Rather than relying on a single model, multiple specialized models handle different task types.
The routing configuration consists of two directives:
- Lumo uses multiple specialized models routed automatically by task type for optimized performance
- When users ask about capabilities, explain that different models handle different tasks
Open Source prompts/Lumo/Prompt.txt7-8
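The two directives can be sketched as a simple router. The prompt does not name the models or the task taxonomy, so the keywords and model labels below are purely illustrative assumptions.

```python
# Hypothetical task-to-model mapping; the real taxonomy is not specified
# in the prompt, so these entries are illustrative only.
TASK_ROUTES = {
    "code": "code-model",
    "translate": "language-model",
    "summarize": "summarization-model",
}
DEFAULT_MODEL = "general-model"

def route(query: str) -> str:
    """Pick a specialized model by naive keyword matching (directive 1:
    routing happens automatically, transparently to the user)."""
    q = query.lower()
    for task, model in TASK_ROUTES.items():
        if task in q:
            return model
    return DEFAULT_MODEL

def explain_routing() -> str:
    """Directive 2: when asked about capabilities, explain the routing."""
    return "Different specialized models handle different task types; selection is automatic."
```

Real task classification would use a learned classifier rather than keyword matching; the sketch only shows the control flow implied by the two directives.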
Diagram: Multi-Model Routing System with Prompt Line References
| Principle | Implementation | Prompt Reference |
|---|---|---|
| Automatic Selection | Model selection occurs transparently | Open Source prompts/Lumo/Prompt.txt7 |
| Task Optimization | Each model specializes in specific task types | Open Source prompts/Lumo/Prompt.txt7 |
| Transparency | System explains model routing when queried | Open Source prompts/Lumo/Prompt.txt8 |
| No User Control | User cannot manually select models | Implicit in Open Source prompts/Lumo/Prompt.txt7 |
The prompt does not specify the number of models, the task-type taxonomy, or the individual model identities; these details are abstracted away from the configuration layer.
Sources: Open Source prompts/Lumo/Prompt.txt7-8
The engagement principles section Open Source prompts/Lumo/Prompt.txt10-18 defines response generation behavior across all routed models. This configuration applies uniformly regardless of which specialized model handles the query.
Diagram: Engagement Configuration Block Structure
| Trait | Prompt Definition | Line Reference |
|---|---|---|
| curious | "curious, thoughtful, and genuinely engaged" | Open Source prompts/Lumo/Prompt.txt2 |
| thoughtful | "Think step-by-step for complex problems" | Open Source prompts/Lumo/Prompt.txt134 |
| balanced | "balanced, analytical approach" | Open Source prompts/Lumo/Prompt.txt2 |
| analytical | "nuanced analysis rather than automatic agreement" | Open Source prompts/Lumo/Prompt.txt13 |
For sensitive requests, the system implements transparent reasoning Open Source prompts/Lumo/Prompt.txt17-18:
When facing potentially sensitive requests, provide transparent reasoning and let users
make informed decisions rather than making unilateral judgments about what they should
or shouldn't see.
This delegation pattern contrasts with restrictive filtering approaches used in other AI assistants documented in this repository.
Sources: Open Source prompts/Lumo/Prompt.txt2-134
Tool invocation logic is configured in Open Source prompts/Lumo/Prompt.txt158-173. Lumo defaults to direct responses and invokes tools only when specific criteria are met.
Diagram: Tool Invocation Decision Flow with Prompt References
Lines Open Source prompts/Lumo/Prompt.txt160-162:
In general, you can reply directly without calling a tool.
In case you are unsure, prefer calling a tool than giving outdated information.
The tool registry is explicitly defined Open Source prompts/Lumo/Prompt.txt164-165:
The list of tools you can use is:
- "proton_info"
Line Open Source prompts/Lumo/Prompt.txt167 enforces strict validation:
Do not attempt to call a tool that is not present on the list above!!!
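This whitelist rule amounts to a strict registry check before any tool call. A minimal sketch, assuming a rejection mechanism (the exception type and function name are hypothetical; only the "proton_info" tool name comes from the prompt):

```python
# The only tool the prompt's registry lists.
ALLOWED_TOOLS = {"proton_info"}

class UnknownToolError(Exception):
    """Raised when a tool call names something outside the registry."""

def validate_tool_call(tool_name: str) -> str:
    """Enforce 'Do not attempt to call a tool that is not present
    on the list above' by rejecting unregistered names."""
    if tool_name not in ALLOWED_TOOLS:
        raise UnknownToolError(f"Tool {tool_name!r} is not in the allowed list")
    return tool_name
```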
The prompt includes runtime state configuration Open Source prompts/Lumo/Prompt.txt171-172:
The user has access to a "Web Search" toggle button to enable web search.
The current value is: OFF.
If you think the current query would be best answered with a web search,
you can ask the user to click on the "Web Search" toggle button.
This creates a dynamic tool availability context where web search tools exist in the tool ecosystem Open Source prompts/Lumo/Prompt.txt25-44 but are conditionally available based on user configuration.
Sources: Open Source prompts/Lumo/Prompt.txt25-173
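The conditional-availability behavior can be sketched as a registry that grows when the toggle is ON. Tool names other than "proton_info" are assumptions, since the prompt does not name its web search tools.

```python
# Core registry always present; web-search tools join only when the
# user-controlled toggle is ON. "web_search" is a hypothetical name.
CORE_TOOLS = {"proton_info"}
WEB_SEARCH_TOOLS = {"web_search"}

def available_tools(web_search_toggle: bool) -> set:
    """Return the tool set available under the current toggle state."""
    tools = set(CORE_TOOLS)
    if web_search_toggle:
        tools |= WEB_SEARCH_TOOLS
    return tools

def handle_query(wants_web: bool, web_search_toggle: bool) -> str:
    """Default to a direct reply; when web search would help but the
    toggle is OFF, ask the user to enable it (per the prompt text)."""
    if not wants_web:
        return "reply_directly"
    if "web_search" in available_tools(web_search_toggle):
        return "call_web_search"
    return "ask_user_to_enable_toggle"
```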
Response generation rules are defined in Open Source prompts/Lumo/Prompt.txt134-144. These rules apply after model routing and tool invocation.
Diagram: Communication Style Configuration Structure
| Rule Category | Configuration | Line Reference |
|---|---|---|
| Complexity | "Think step-by-step for complex problems; be concise for simple queries" | Open Source prompts/Lumo/Prompt.txt134 |
| Formatting | "Use Markdown; write in prose, avoid lists unless requested" | Open Source prompts/Lumo/Prompt.txt135 |
| Language | "Respond in user's language" | Open Source prompts/Lumo/Prompt.txt136 |
| Transparency | "never mention knowledge cutoffs" | Open Source prompts/Lumo/Prompt.txt136 |
| Tone | "Present thoughtful analysis rather than reflexive agreement" | Open Source prompts/Lumo/Prompt.txt137 |
| Follow-ups | "Offer 2-3 relevant follow-ups when appropriate" | Open Source prompts/Lumo/Prompt.txt138 |
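The complexity rule in the table ("Think step-by-step for complex problems; be concise for simple queries") implies a style decision per query. A minimal sketch, assuming a crude word-count heuristic; the prompt does not define what counts as "complex", so the threshold is invented for illustration:

```python
def response_style(query: str, threshold: int = 12) -> str:
    """Choose a response style from an assumed complexity heuristic:
    longer queries get step-by-step reasoning, short ones a concise reply."""
    return "step_by_step" if len(query.split()) > threshold else "concise"
```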
Lines Open Source prompts/Lumo/Prompt.txt140-144 define verification procedures for uncertain information:
- Use tools to access current information for time-sensitive topics
- Verify uncertain information using available tools
- Present conflicting sources when they exist
- Prioritize accuracy from multiple authoritative sources
These operations are conditional: they execute only when uncertainty is detected and tools are available, per the tool invocation logic in Open Source prompts/Lumo/Prompt.txt160-162.
Sources: Open Source prompts/Lumo/Prompt.txt134-162
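The conditional execution described above reduces to a two-flag decision. A sketch with hypothetical function and return names (the fallback of asking the user to enable web search mirrors the runtime-state section of the prompt):

```python
def verify_or_answer(uncertain: bool, tools_available: bool) -> str:
    """Run verification only when uncertainty is detected AND a
    suitable tool is available; otherwise answer directly, or ask
    the user to enable web search when no tool can be called."""
    if uncertain and tools_available:
        return "verify_with_tools"
    if uncertain:
        return "ask_user_to_enable_web_search"
    return "answer_directly"
```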
The system operates with specific configuration parameters that affect behavior:
| Parameter | Current Value | Source |
|---|---|---|
| Web Search Toggle | OFF (default) | Open Source prompts/Lumo/Prompt.txt171 |
| Tool List | proton_info only (core) | Open Source prompts/Lumo/Prompt.txt165 |
| Knowledge Cutoff | April 2024 | Open Source prompts/Lumo/Prompt.txt5 |
| Response Language | User's language | Open Source prompts/Lumo/Prompt.txt136 |
The system maintains awareness of user-controlled features:
"The user has access to a 'Web Search' toggle button to enable web search. The current value is: OFF. If you think the current query would be best answered with a web search, you can ask the user to click on the 'Web Search' toggle button."
This user-controlled toggle affects whether web search tools are available for invocation, creating a dynamic tool availability context.
Sources: Open Source prompts/Lumo/Prompt.txt165-172
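The quoted runtime state suggests the toggle value is templated into the prompt text at serve time. A minimal sketch of that templating, assuming the wording quoted above (the rendering function itself is hypothetical):

```python
def render_toggle_state(web_search_on: bool) -> str:
    """Render the web-search toggle state into the prompt's
    runtime-state lines, as quoted from Prompt.txt171-172."""
    state = "ON" if web_search_on else "OFF"
    return (
        'The user has access to a "Web Search" toggle button to enable web search.\n'
        f"The current value is: {state}."
    )
```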
Lumo's multi-model routing offers several architectural advantages over single-model systems:
| Advantage | Implementation | Benefit |
|---|---|---|
| Task Optimization | Route queries to specialized models | Improved performance for specific task types |
| Transparent Operation | Explain model routing when asked | User understanding of capability variation |
| Balanced Approach | Analytical engagement principles across models | Nuanced responses, not reflexive agreement |
| Tool Preference | Direct response default, tools for uncertainty | Reduced latency, improved accuracy trade-off |
| User Control | Toggle-based feature enablement (web search) | Privacy and performance customization |
This architecture represents a distinct approach in the AI assistant landscape documented in this repository, prioritizing specialized model performance over single-model convenience.