The Prompt Optimization Service (PromptService) is the central orchestration layer in the @prompt-optimizer/core package that coordinates prompt optimization and iteration workflows. It integrates template processing, LLM invocation, variable substitution, and history tracking to transform user-provided prompts into optimized versions through AI-powered analysis.
This page documents the orchestration logic, workflow patterns, and integration points of PromptService. For template structure and syntax, see Template System. For LLM provider adapters, see LLM Service and Provider Adapters. For history persistence, see History and Version Control.
The PromptService acts as a facade that coordinates multiple specialized services to execute optimization workflows. It has no direct knowledge of AI providers or storage mechanisms, delegating these concerns to injected dependencies.
Key Responsibilities:
| Service | Primary Responsibility | Input | Output |
|---|---|---|---|
| PromptService | Workflow orchestration | OptimizationRequest | PromptRecord + streaming events |
| LLMService | Message transmission | ConversationMessage[] | AI response (text/stream) |
| TemplateManager | Template storage/retrieval | Template ID | Template object |
| HistoryManager | Version persistence | PromptRecord | Chain ID |
| TemplateProcessor | Template rendering | Template + context | ConversationMessage[] |
| VariableAggregator | Variable resolution | Four-tier sources | Merged key-value map |
Sources: High-level diagram #3 (Core Services and Data Flow Architecture), packages/ui/src/composables/prompt/useOptimization.ts1-500
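The facade's dependency injection can be sketched as follows. This is a simplified illustration, not the package's actual declarations: the interface shapes and the resolveTemplate helper are stand-ins for the real TemplateManager and HistoryManager contracts.

```typescript
// Simplified stand-ins for the injected service contracts (assumed shapes).
interface TemplateManager {
  getTemplate(id: string): { content: string } | null;
}
interface HistoryManager {
  save(record: object): string; // returns a chain ID
}

class PromptService {
  // Dependencies are injected; the facade owns no provider or storage logic.
  constructor(
    private templates: TemplateManager,
    private history: HistoryManager,
  ) {}

  // Hypothetical helper: look up a template, failing loudly if absent.
  resolveTemplate(id: string): { content: string } {
    const tpl = this.templates.getTemplate(id);
    if (tpl === null) throw new Error(`Template not found: ${id}`);
    return tpl;
  }
}
```

Because the service only sees interfaces, tests can substitute in-memory stubs for the template store and history persistence.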
The core optimize() method implements a multi-step pipeline that transforms an original prompt into an optimized version using AI-guided analysis.
Workflow Steps:
1. Resolve the template by ID ('optimize', 'userOptimize', or context variants)
2. Aggregate context: originalPrompt and merged variables
3. Call TemplateProcessor.processTemplate() to generate ConversationMessage[] with variables substituted
4. Send the rendered messages to LLMService
5. Stream the response through onChunk() callbacks for real-time UI updates
6. Persist the result as a PromptRecord in a PromptRecordChain

Key Context Variables:
- {{prompt}} - The original prompt to be optimized
- {{variables}} - User-defined variables from the four-tier system (see Variable Management System)
- {{context}} - Additional context in Pro mode (conversation history, test scenarios)

Sources: packages/ui/src/components/PromptPanel.vue1-723 packages/ui/src/composables/prompt/useOptimization.ts1-300
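The pipeline above can be condensed into a sketch. Everything here is illustrative: renderTemplate and the sendToLLM parameter are hypothetical stand-ins for TemplateProcessor and LLMService, and the persistence step is omitted.

```typescript
type ConversationMessage = { role: 'system' | 'user'; content: string };

// Step 3: substitute {{key}} placeholders with aggregated values.
function renderTemplate(template: string, vars: Record<string, string>): string {
  return template.replace(/\{\{(\w+)\}\}/g, (_m, key: string) => vars[key] ?? '');
}

async function optimize(
  originalPrompt: string,
  template: string,
  variables: Record<string, string>,
  sendToLLM: (msgs: ConversationMessage[], onChunk: (t: string) => void) => Promise<string>,
  onChunk: (t: string) => void,
): Promise<string> {
  // Steps 1-2: aggregate context ({{prompt}} plus merged variables).
  const vars = { ...variables, prompt: originalPrompt };
  // Step 3: render into ConversationMessage[].
  const messages: ConversationMessage[] = [
    { role: 'user', content: renderTemplate(template, vars) },
  ];
  // Steps 4-5: send to the LLM and stream chunks to the UI.
  // Step 6 (persisting the PromptRecord) is omitted in this sketch.
  return sendToLLM(messages, onChunk);
}
```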
Iteration allows users to refine an already-optimized prompt with specific improvement directions. Unlike optimization (which starts fresh), iteration builds upon an existing version and creates a new entry in the chain.
Iteration Context Variables:
| Variable | Description | Example |
|---|---|---|
| {{originalPrompt}} | The initial user prompt (V0) | "Write a story" |
| {{optimizedPrompt}} | The current version being improved | "Craft a compelling narrative..." |
| {{iterateInput}} | User's improvement directive | "Make it more concise" |
| {{variables}} | Merged variables (same as optimize) | User/global/context vars |
Version Management:
- The initial optimization creates version=1
- PromptRecordChain.currentRecord always points to the latest version
- The iterationNote field stores the iterateInput for each iteration

Sources: packages/ui/src/components/PromptPanel.vue624-649 packages/ui/src/components/HistoryDrawer.vue120-197
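The version bookkeeping can be sketched as a small append operation. The field shapes below are assumptions inferred from this page, not the package's actual record types.

```typescript
// Assumed record and chain shapes for illustration.
interface PromptRecord {
  version: number;
  type: 'optimize' | 'userOptimize' | 'iterate';
  optimizedPrompt: string;
  iterationNote?: string;
}

interface PromptRecordChain {
  records: PromptRecord[];
  currentRecord: PromptRecord;
}

function appendIteration(
  chain: PromptRecordChain,
  optimizedPrompt: string,
  iterateInput: string,
): PromptRecordChain {
  const next: PromptRecord = {
    version: chain.currentRecord.version + 1, // version=N+1
    type: 'iterate',
    optimizedPrompt,
    iterationNote: iterateInput, // the user's improvement directive
  };
  // currentRecord always points to the latest version.
  return { records: [...chain.records, next], currentRecord: next };
}
```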
PromptService delegates template rendering to TemplateProcessor, which handles two template formats:
| Format | Content Type | Use Case | Example |
|---|---|---|---|
| Simple | string | Single-message templates | "Optimize: {{prompt}}" |
| Advanced | MessageTemplate[] | Multi-turn conversations | [{role: 'system', content: '...'}, {role: 'user', content: '{{prompt}}'}] |
Substitution Process:
1. Replace {{key}} placeholders with aggregated values
2. Wrap simple templates as [{role: 'user', content: rendered}], or process each advanced template message
3. Validate required variables ({{prompt}} must exist)

Built-in Template Types:
Sources: packages/ui/src/components/TemplateManager.vue776-801 packages/ui/src/components/TemplateSelect.vue40-60
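Handling both formats can be sketched in a single function; the real TemplateProcessor API may differ, and the types here are simplified.

```typescript
type Msg = { role: string; content: string };
// A template body is either a plain string (simple) or a message array (advanced).
type TemplateContent = string | Msg[];

function processTemplate(content: TemplateContent, vars: Record<string, string>): Msg[] {
  const sub = (s: string) => s.replace(/\{\{(\w+)\}\}/g, (_m, k: string) => vars[k] ?? '');
  if (typeof content === 'string') {
    // Simple format: wrap the rendered string as a single user message.
    return [{ role: 'user', content: sub(content) }];
  }
  // Advanced format: substitute placeholders in every conversation message.
  return content.map((m) => ({ role: m.role, content: sub(m.content) }));
}
```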
Every optimization and iteration is persisted through HistoryManager using the PromptRecordChain structure, enabling version tracking, history browsing, and result reuse.
Version Lifecycle:
- Initial optimization: version=1, type='optimize' or 'userOptimize'
- Each iteration: version=N+1, type='iterate', stores iterationNote
- Chains are ordered by currentRecord.timestamp (most recent first)

Record Types:
| Type | Created By | Description |
|---|---|---|
| 'optimize' | System prompt optimization | Basic mode system templates |
| 'userOptimize' | User prompt optimization | Basic mode user templates |
| 'conversationMessageOptimize' | Pro mode multi-turn | Context system optimization (legacy name) |
| 'contextSystemOptimize' | Pro mode multi-turn | Context system optimization (current name) |
| 'contextUserOptimize' | Pro mode variable | Context user optimization |
| 'iterate' | Iteration workflow | Any iteration in basic/pro/image modes |
| 'text2imageOptimize' | Image mode text-to-image | Image prompt optimization |
| 'image2imageOptimize' | Image mode image-to-image | Image prompt optimization |
Sources: packages/ui/src/components/HistoryDrawer.vue210-360 packages/ui/src/components/PromptPanel.vue344-422
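The currentRecord.timestamp ordering used by the history list can be sketched as a simple sort; the chain summary shape below is assumed.

```typescript
// Assumed minimal shape of a chain entry in the history list.
interface ChainSummary {
  chainId: string;
  currentRecord: { timestamp: number };
}

// Most recently updated chain first (descending timestamp).
function sortChains(chains: ChainSummary[]): ChainSummary[] {
  return [...chains].sort(
    (a, b) => b.currentRecord.timestamp - a.currentRecord.timestamp,
  );
}
```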
The PromptService is consumed by workspace components through composables that provide reactive state management and event handling.
Key Composables:
- useOptimization() - Wraps PromptService.optimize() and .iterate() with reactive state, streaming handlers, and error handling
- useProContext() - Provides Pro mode context (conversations, variables) as additional template context
- useEvaluation() - Integrates with EvaluationService to score optimization quality (see Evaluation System)

Streaming Event Handlers:
| Handler | Triggered By | Purpose |
|---|---|---|
| onChunk(text) | Each LLM response token | Update UI with partial result (streaming effect) |
| onComplete(full, reasoning) | LLM finishes response | Save final result, update history |
| onError(error) | LLM/network failure | Display error toast, rollback state |
Sources: packages/ui/src/composables/prompt/useOptimization.ts1-400 packages/ui/src/components/PromptPanel.vue296-723
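A hypothetical wiring of the three handlers might look like this; the consumeStream driver is an illustration, not the composable's real internals.

```typescript
// Handler contract mirroring the table above.
interface StreamHandlers {
  onChunk(text: string): void;
  onComplete(full: string, reasoning?: string): void;
  onError(error: Error): void;
}

function consumeStream(tokens: Iterable<string>, handlers: StreamHandlers): void {
  let accumulated = '';
  try {
    for (const token of tokens) {
      accumulated += token;
      handlers.onChunk(token); // partial result for the streaming UI effect
    }
    handlers.onComplete(accumulated); // save final result, update history
  } catch (e) {
    handlers.onError(e as Error); // display error toast, rollback state
  }
}
```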
The OptimizationRequest interface defines the contract between UI components and PromptService, supporting all optimization types with a unified structure.
Core Request Fields:
Sources: packages/ui/src/composables/prompt/useOptimization.ts50-150 packages/core/src/types/optimization.ts1-100
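As a rough illustration of such a request, the sketch below shows a plausible shape; the field names are assumptions inferred from this page, not the actual declarations in packages/core/src/types/optimization.ts.

```typescript
// Hypothetical request shape (field names are assumptions, not the real interface).
interface OptimizationRequest {
  targetPrompt: string;                // the original prompt ({{prompt}})
  templateId: string;                  // e.g. 'optimize', 'userOptimize'
  modelKey: string;                    // which configured LLM model to invoke
  variables?: Record<string, string>;  // user-defined variables (four-tier system)
}

const request: OptimizationRequest = {
  targetPrompt: 'Write a story',
  templateId: 'userOptimize',
  modelKey: 'default',
  variables: { tone: 'formal' },
};
```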
PromptService supports testing multiple prompt variants in parallel to compare effectiveness before committing to production use. This is exposed through workspace test panels.
Test Configuration:
- Results can be scored through EvaluationService (see Evaluation System)

Sources: packages/ui/src/components/TestAreaPanel.vue1-500 packages/ui/src/components/TestControlBar.vue1-300
PromptService implements graceful degradation and error recovery to handle common failure scenarios.
| Error Type | Detection | Recovery Action |
|---|---|---|
| Template Not Found | getTemplate() returns null | Show error toast, prompt template selection |
| LLM API Failure | HTTP error or timeout | Retry with exponential backoff (3 attempts) |
| Invalid Model Key | Model not in ModelManager | Fall back to default model, warn user |
| Variable Resolution Failure | Missing required variable | Show error with missing variable names |
| Stream Interruption | Connection lost mid-stream | Save partial result, allow manual retry |
| Storage Write Failure | HistoryManager.save() fails | Show error, result still usable in-memory |
Streaming Recovery:
When a stream is interrupted, PromptService accumulates the partial result and emits a final onError() event. The UI can then display the partial result, surface the error, and offer a manual retry.
Sources: packages/ui/src/composables/prompt/useOptimization.ts200-300 packages/ui/src/composables/ui/useToast.ts1-100
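The "3 attempts with exponential backoff" policy from the table can be sketched as a generic wrapper; the exact delay schedule here is an assumption.

```typescript
// Generic retry with exponential backoff (delay schedule is assumed).
async function withRetry<T>(
  fn: () => Promise<T>,
  attempts = 3,
  baseDelayMs = 500,
): Promise<T> {
  let lastError: unknown;
  for (let attempt = 0; attempt < attempts; attempt++) {
    try {
      return await fn();
    } catch (e) {
      lastError = e;
      if (attempt < attempts - 1) {
        // 500 ms, 1 s, 2 s, ... doubling after each failed attempt
        await new Promise((resolve) => setTimeout(resolve, baseDelayMs * 2 ** attempt));
      }
    }
  }
  throw lastError; // all attempts exhausted
}
```

A caller would wrap the LLM invocation, e.g. `withRetry(() => llm.send(messages))`, so transient API failures are absorbed before surfacing an error toast.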
PromptService provides a unified API that adapts to three functional modes, each with distinct optimization patterns.
Template Type Resolution:
The workspace component selects the appropriate template type based on functionMode and subMode.
Sources: packages/ui/src/composables/mode/useBasicSubMode.ts1-100 packages/ui/src/composables/mode/useProSubMode.ts1-99 packages/ui/src/components/OptimizationModeSelector.vue1-124
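One plausible shape for that resolution is sketched below; the mapping is hypothetical (the real selection logic in the workspace component may differ), though the returned template type names match the record-types table above.

```typescript
// Hypothetical functionMode/subMode values and mapping.
type FunctionMode = 'basic' | 'pro' | 'image';
type SubMode = 'system' | 'user' | 'text2image' | 'image2image';

function resolveTemplateType(mode: FunctionMode, sub: SubMode): string {
  if (mode === 'basic') return sub === 'system' ? 'optimize' : 'userOptimize';
  if (mode === 'pro') return sub === 'system' ? 'contextSystemOptimize' : 'contextUserOptimize';
  // Image mode: pick between the two image optimization record types.
  return sub === 'image2image' ? 'image2imageOptimize' : 'text2imageOptimize';
}
```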