This page documents the configuration settings available at the workspace level in AnythingLLM. It covers how workspace settings are stored in the Workspace model, the available configuration fields, validation rules, and how these settings inherit from system defaults and are consumed during chat operations.
For information about managing documents within workspaces, see Document Management in Workspaces. For workspace threads and conversation organization, see Thread System. For system-level configuration management, see Configuration Management.
Workspaces in AnythingLLM are tenant containers that can override system-level settings with workspace-specific configuration. Each workspace maintains its own settings for LLM selection, vector search behavior, chat mode, and agent configuration.
Workspace settings provide per-tenant customization while maintaining sensible defaults inherited from system configuration.
Sources: server/models/workspace.js:1-612, server/prisma/schema.prisma:127-156
The Workspace model is defined in server/models/workspace.js and manages all workspace configuration operations. It provides methods for creating, updating, and querying workspaces, along with validation logic for configuration fields.
Sources: server/models/workspace.js:35-612, server/prisma/schema.prisma:127-156
The Workspace model defines a `writable` array that specifies which fields can be updated through the API. Fields not in this list (such as `slug`, `vectorTag`, and `pfpFilename`) require direct database access or specialized methods to modify.
Writable Configuration Fields:
- `name` - Workspace display name
- `openAiTemp` - Temperature for LLM inference
- `openAiHistory` - Number of chat history messages to include
- `lastUpdatedAt` - Timestamp of last modification
- `openAiPrompt` - System prompt for the workspace
- `similarityThreshold` - Minimum similarity score for vector search
- `chatProvider` - LLM provider override (e.g., "openai", "anthropic")
- `chatModel` - Model name override (e.g., "gpt-4", "claude-3-sonnet")
- `topN` - Number of document chunks to retrieve
- `chatMode` - Chat behavior mode ("chat" or "query")
- `agentProvider` - Agent system provider override
- `agentModel` - Agent model override
- `queryRefusalResponse` - Custom message when no context found in query mode
- `vectorSearchMode` - Search mode ("default" or "rerank")

Sources: server/models/workspace.js:40-58
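A whitelist like this is typically applied by filtering an incoming update payload before validation. The sketch below is illustrative, not the actual model code: the `pickWritable` helper name is hypothetical, though the field list mirrors the documented `writable` array.

```javascript
// Hypothetical sketch: restrict an update payload to the documented
// writable fields before any validation runs. The field list mirrors
// the docs; the helper name is not the real model API.
const writable = [
  "name", "openAiTemp", "openAiHistory", "lastUpdatedAt", "openAiPrompt",
  "similarityThreshold", "chatProvider", "chatModel", "topN", "chatMode",
  "agentProvider", "agentModel", "queryRefusalResponse", "vectorSearchMode",
];

function pickWritable(updates = {}) {
  // Keep only keys present in the whitelist; everything else is dropped.
  return Object.fromEntries(
    Object.entries(updates).filter(([key]) => writable.includes(key))
  );
}

// Non-writable fields such as `slug` are silently discarded.
const safe = pickWritable({ name: "Docs", slug: "hacked-slug", topN: 6 });
// safe -> { name: "Docs", topN: 6 }
```

This pattern keeps protected identity fields (like `slug`) from being overwritten through the generic update path.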
Each configuration field has an associated validation function that normalizes values and applies constraints. The validateFields() method applies these validations before any update.
| Field | Type | Validation Logic | Default |
|---|---|---|---|
| `name` | String | Max 255 characters, fallback to "My Workspace" | `"My Workspace"` |
| `openAiTemp` | Float | Must be >= 0, `null` if invalid | `null` |
| `openAiHistory` | Integer | Must be >= 0 | `20` |
| `similarityThreshold` | Float | Clamped to 0.0-1.0 range | `0.25` |
| `topN` | Integer | Must be >= 1 | `4` |
| `chatMode` | String | Must be "chat" or "query" | `"chat"` |
| `chatProvider` | String | `null` if empty or "none" | `null` |
| `chatModel` | String | `null` if empty | `null` |
| `agentProvider` | String | `null` if empty or "none" | `null` |
| `agentModel` | String | `null` if empty | `null` |
| `queryRefusalResponse` | String | `null` if empty | `null` |
| `openAiPrompt` | String | `null` if empty | `null` |
| `vectorSearchMode` | String | Must be "default" or "rerank" | `"default"` |
Sources: server/models/workspace.js:60-132
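The validation rules in the table above can be sketched as a map of per-field normalizers. This is a hypothetical re-implementation for illustration only; the real validators live in `server/models/workspace.js` and may differ in detail.

```javascript
// Illustrative per-field validators mirroring the table above.
// Not the actual implementation from server/models/workspace.js.
const validators = {
  name: (v) =>
    typeof v === "string" && v.length > 0 ? v.slice(0, 255) : "My Workspace",
  openAiTemp: (v) =>
    v !== null && v !== "" && isFinite(v) && Number(v) >= 0 ? Number(v) : null,
  openAiHistory: (v) =>
    Number.isInteger(Number(v)) && Number(v) >= 0 ? Number(v) : 20,
  similarityThreshold: (v) => {
    const n = Number(v);
    if (!isFinite(n)) return 0.25;
    return Math.max(0.0, Math.min(1.0, n)); // clamp to 0.0-1.0
  },
  topN: (v) =>
    Number.isInteger(Number(v)) && Number(v) >= 1 ? Number(v) : 4,
  chatMode: (v) => (["chat", "query"].includes(v) ? v : "chat"),
  chatProvider: (v) => (!v || v === "none" ? null : String(v)),
  vectorSearchMode: (v) =>
    ["default", "rerank"].includes(v) ? v : "default",
};

// Apply the matching validator to each updated field.
function validateFields(updates = {}) {
  const out = {};
  for (const [key, value] of Object.entries(updates)) {
    out[key] = key in validators ? validators[key](value) : value;
  }
  return out;
}
```

Normalizing instead of rejecting means a malformed value degrades to a safe default rather than failing the whole update.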
Sources: server/models/workspace.js:164-176
When a workspace is created, it inherits the default system prompt from the `system_settings` table. If no system default is configured, it falls back to `SystemSettings.saneDefaultSystemPrompt`.
Workspace Creation Flow:
Sources: server/models/workspace.js:197-202, server/models/workspace.js:205-222
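The inheritance described above amounts to a two-step fallback. The sketch below assumes a simplified synchronous lookup; the function name and placeholder prompt text are illustrative, not the actual constants in the codebase.

```javascript
// Hypothetical sketch of prompt inheritance at workspace creation.
// Placeholder text: the real SystemSettings.saneDefaultSystemPrompt differs.
const SANE_DEFAULT_SYSTEM_PROMPT =
  "You are a helpful assistant. Answer using the provided context.";

// systemDefaultPrompt stands in for the value read from the
// system_settings table (null when no system default is configured).
function resolveNewWorkspacePrompt(systemDefaultPrompt) {
  return systemDefaultPrompt ?? SANE_DEFAULT_SYSTEM_PROMPT;
}
```

So a newly created workspace always starts with a usable prompt, even on a fresh install with no system settings row.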
During chat operations, workspace settings are resolved with fallback to system environment variables:
- `chatProvider` and `chatModel` override system `process.env.LLM_PROVIDER` and base model
- `openAiTemp` overrides `LLMConnector.defaultTemp`
- `openAiPrompt` overrides the system default prompt
- `openAiHistory` determines the message limit (default: 20)
- `similarityThreshold`, `topN`, and `vectorSearchMode` control retrieval

Sources: server/utils/chats/stream.js:53-56, server/utils/chats/stream.js:156-159, server/utils/chats/stream.js:231-240, server/utils/chats/stream.js:249-252
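The fallback chain above can be sketched as a single resolution step. This is a minimal illustration, assuming an ad-hoc `system` object with hypothetical property names; the real code reads `process.env` and the LLM connector directly.

```javascript
// Minimal sketch of workspace-over-system settings resolution.
// The `system` object and its property names are illustrative stand-ins.
function resolveChatSettings(workspace, system) {
  return {
    provider: workspace.chatProvider ?? system.llmProvider,
    model: workspace.chatModel ?? system.baseModel,
    temperature: workspace.openAiTemp ?? system.defaultTemp,
    systemPrompt: workspace.openAiPrompt ?? system.defaultPrompt,
    historyLimit: workspace.openAiHistory ?? 20,
  };
}

const resolved = resolveChatSettings(
  { chatProvider: null, chatModel: null, openAiTemp: 0.2,
    openAiPrompt: null, openAiHistory: 10 },
  { llmProvider: "openai", baseModel: "gpt-4o",
    defaultTemp: 0.7, defaultPrompt: "system prompt" }
);
// Workspace temperature (0.2) wins; null fields fall through to system.
```

Using nullish coalescing (rather than `||`) matters here: a workspace temperature of `0` is a valid override and must not fall through to the system default.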
chatProvider and chatModel
These fields allow workspace-level override of the LLM provider and model. When set, they take precedence over system-level process.env.LLM_PROVIDER and model configuration.
- Validation: set to `null` if empty or `"none"`
- When `chatProvider` is set to `"default"`, both `chatProvider` and `chatModel` are cleared to `null`

Sources: server/models/workspace.js:99-106, server/models/workspace.js:240-243
openAiTemp
Controls the temperature parameter for LLM inference, affecting response randomness.
- Default: `null` (uses the LLM provider's default temperature)

Sources: server/models/workspace.js:68-72
openAiHistory
Number of previous chat messages to include in the context window for each new request.
- Consumed by `recentChatHistory()` during chat operations

Sources: server/models/workspace.js:73-79, server/utils/chats/stream.js:59
openAiPrompt
The system prompt that defines the AI assistant's behavior, personality, and capabilities.
- Default: inherited from `system_settings.default_system_prompt` or `SystemSettings.saneDefaultSystemPrompt`
- Validation: `null` if empty

Sources: server/models/workspace.js:119-122, server/utils/chats/index.js:91-100
similarityThreshold
Minimum cosine similarity score for vector search results to be included in context.
Sources: server/models/workspace.js:80-87, server/utils/chats/stream.js:156
topN
Maximum number of document chunks to retrieve from vector database.
Sources: server/models/workspace.js:88-94, server/utils/chats/stream.js:157
vectorSearchMode
Determines the vector search strategy.

- `"default"` - standard similarity search returning the top `topN` results
- `"rerank"` - reranks retrieved candidates before selecting the final `topN` results

Sources: server/models/workspace.js:123-131, server/utils/chats/stream.js:159
chatMode
Defines how the workspace handles queries when no relevant context is found.
Mode Comparison:
| Mode | Behavior with No Context | Use Case |
|---|---|---|
| `chat` | Uses general LLM knowledge | General conversation, Q&A with fallback |
| `query` | Refuses to answer, returns `queryRefusalResponse` | Strict document-only responses, compliance scenarios |
Sources: server/models/workspace.js:95-98, server/utils/chats/stream.js:16, server/utils/chats/stream.js:65-92, server/utils/chats/stream.js:200-227
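The query-mode behavior in the table above reduces to a simple guard before the LLM call. The sketch below is illustrative, assuming hypothetical helper names and a placeholder default refusal message.

```javascript
// Hypothetical sketch of query-mode refusal: when the workspace is in
// "query" mode and vector search found no context, answer with the
// refusal message instead of calling the LLM.
const DEFAULT_REFUSAL =
  "There is no relevant information in this workspace to answer your query."; // placeholder text

function shouldRefuse(workspace, contextTexts) {
  return workspace.chatMode === "query" && contextTexts.length === 0;
}

function refusalMessage(workspace) {
  // Fall back to a default when no custom queryRefusalResponse is set.
  return workspace.queryRefusalResponse || DEFAULT_REFUSAL;
}
```

In `"chat"` mode the same empty-context situation simply proceeds to the LLM, which answers from general knowledge.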
queryRefusalResponse
Custom message returned when in "query" mode and no relevant context is found.
Sources: server/models/workspace.js:115-118, server/utils/chats/stream.js:66-68
agentProvider and agentModel
Workspace-level overrides for agent system configuration. Agents use the AIbitat framework for multi-step tool calling.
- Default: `null` (inherits system agent configuration)
- Validation: `null` if empty or `"none"`

Sources: server/models/workspace.js:107-114
During a streaming chat request, workspace configuration is resolved and consumed at each stage of the pipeline in server/utils/chats/stream.js: provider and model selection, history retrieval, vector search, and response generation.

Sources: server/utils/chats/stream.js:18-311
The Workspace model includes a method _getContextWindow() that calculates the maximum context window size based on the workspace's LLM provider and model configuration.
1. Use `workspace.chatProvider` or fall back to `process.env.LLM_PROVIDER`
2. Resolve the provider class via `getLLMProviderClass()`
3. Use `workspace.chatModel` or fall back to the base model for the provider
4. Call `LLMProvider.promptWindowLimit(model)` to get the token limit
5. Return `null` if the provider/model is not found

This value is included in the workspace object when retrieved via `Workspace.get()` or `Workspace.getWithUser()`.

Sources: server/models/workspace.js:327-339, server/models/workspace.js:341-362
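The resolution steps above can be sketched as follows. The `providerRegistry` map is an illustrative stand-in for `getLLMProviderClass()` and the per-provider `promptWindowLimit()`; the provider entry and limit shown are examples, not the project's actual registry.

```javascript
// Illustrative stand-in for the provider lookup used by _getContextWindow().
const providerRegistry = {
  openai: {
    baseModel: "gpt-4o", // example base model
    promptWindowLimit: (model) => 128000, // example fixed limit
  },
};

function getContextWindow(workspace, env = {}) {
  // Step 1: workspace provider override, else system env.
  const providerName = workspace.chatProvider ?? env.LLM_PROVIDER;
  const provider = providerRegistry[providerName];
  if (!provider) return null; // provider not found

  // Step 3: workspace model override, else provider's base model.
  const model = workspace.chatModel ?? provider.baseModel;
  try {
    return provider.promptWindowLimit(model); // step 4: token limit
  } catch {
    return null; // model not found
  }
}
```

Returning `null` rather than throwing lets callers (and the UI) treat an unknown provider/model as "no known limit".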
When the chatProvider field is updated to "default", both chatProvider and chatModel are cleared to null. This prevents configuration inconsistencies where a model is set without a provider.
Sources: server/models/workspace.js:240-243
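The reset rule can be expressed as a small normalization step applied to the update payload. A minimal sketch, assuming a hypothetical helper name:

```javascript
// Sketch of the documented reset rule: selecting the "default" provider
// clears both override fields so a model is never left set without a
// provider. Helper name is illustrative.
function normalizeProviderUpdate(updates) {
  if (updates.chatProvider === "default") {
    return { ...updates, chatProvider: null, chatModel: null };
  }
  return updates;
}
```

Any other provider value passes through unchanged, keeping an explicitly chosen model intact.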
The Workspace model tracks changes to the openAiPrompt field for telemetry and audit purposes. When a prompt is modified:
- The previous prompt is recorded in the `prompt_history` table (if non-default and changed)

This feature supports future prompt library or prompt assistant functionality.

Sources: server/models/workspace.js:484-533
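The "non-default and changed" condition can be sketched as a predicate that decides whether a history entry should be written. This is a hypothetical illustration of the rule as documented, not the actual model code.

```javascript
// Hypothetical sketch: build a prompt_history entry only when the old
// prompt actually changed and was not the default prompt.
function promptHistoryEntry(oldPrompt, newPrompt, defaultPrompt) {
  const changed = oldPrompt !== newPrompt;
  const nonDefault = oldPrompt != null && oldPrompt !== defaultPrompt;
  if (!changed || !nonDefault) return null; // nothing to record
  return { prompt: oldPrompt, modifiedAt: new Date().toISOString() };
}
```

Skipping default prompts keeps the history table from filling with entries every time a fresh workspace gets its first custom prompt.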
Embed widgets (embeddable chat interfaces) inherit workspace configuration with optional overrides. The embed_configs table references a workspace and can allow users to override certain settings.
When an embed chat is initiated, the following workspace settings can be overridden if the embed configuration permits:
- `allow_prompt_override` flag allows custom system prompts
- `allow_model_override` flag allows a different `chatModel`
- `allow_temperature_override` flag allows a different temperature

The embed system uses the same workspace configuration fields but applies per-session customization.

Sources: server/utils/chats/embed.js:11-207, server/prisma/schema.prisma:238-257
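The per-flag gating above can be sketched as a resolver that honors a session override only when the corresponding `allow_*` flag is set on the embed config. The function name and override-object shape are illustrative assumptions.

```javascript
// Hypothetical sketch of embed override gating: each session override is
// applied only if the embed config explicitly allows it; otherwise the
// workspace's own setting is used.
function resolveEmbedSettings(workspace, embedConfig, sessionOverrides = {}) {
  return {
    systemPrompt:
      embedConfig.allow_prompt_override && sessionOverrides.prompt
        ? sessionOverrides.prompt
        : workspace.openAiPrompt,
    model:
      embedConfig.allow_model_override && sessionOverrides.model
        ? sessionOverrides.model
        : workspace.chatModel,
    temperature:
      embedConfig.allow_temperature_override && sessionOverrides.temperature != null
        ? sessionOverrides.temperature
        : workspace.openAiTemp,
  };
}
```

With all flags off, an embedded widget is a read-only view of the workspace's configuration, which is the safe default for publicly exposed chat embeds.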
```
workspaces
├── id (Int, Primary Key)
├── name (String)
├── slug (String, Unique)
├── vectorTag (String?, Nullable)
├── createdAt (DateTime)
├── lastUpdatedAt (DateTime)
│
├── LLM Configuration
│   ├── chatProvider (String?, Nullable)
│   ├── chatModel (String?, Nullable)
│   ├── openAiTemp (Float?, Nullable)
│   ├── openAiHistory (Int, Default: 20)
│   └── openAiPrompt (String?, Nullable)
│
├── Vector Configuration
│   ├── similarityThreshold (Float?, Default: 0.25)
│   ├── topN (Int?, Default: 4)
│   └── vectorSearchMode (String?, Default: "default")
│
├── Chat Behavior
│   ├── chatMode (String?, Default: "chat")
│   └── queryRefusalResponse (String?, Nullable)
│
├── Agent Configuration
│   └── agentProvider (String?, Nullable)
│   └── agentModel (String?, Nullable)
│
├── Display
│   └── pfpFilename (String?, Nullable)
│
└── Relations
    ├── workspace_users[]
    ├── documents[]
    ├── workspace_suggested_messages[]
    ├── embed_configs[]
    ├── threads[]
    ├── workspace_agent_invocations[]
    ├── prompt_history[]
    └── workspace_parsed_files[]
```
Sources: server/prisma/schema.prisma:127-156
Workspace configuration in AnythingLLM provides fine-grained control over AI behavior, vector search, and chat modes at the tenant level, with prompt changes tracked in the `prompt_history` table. The configuration system balances flexibility with safety through comprehensive validation while maintaining backward compatibility through nullable fields and sensible defaults.
Sources: server/models/workspace.js:1-612, server/prisma/schema.prisma:127-156, server/utils/chats/stream.js:18-311