This document covers configuration management in DB-GPT, including configuration file formats, structure, and usage patterns. Configuration in DB-GPT controls model deployment, storage backends, database connections, and runtime behavior.
DB-GPT uses TOML (Tom's Obvious, Minimal Language) configuration files for declarative system configuration. Configuration can be provided through files, environment variables, and command-line arguments with a clear precedence hierarchy.
Sources: docs/docs/quickstart.md124-142 docs/docs/installation/sourcecode.md92-116
Configuration files are stored in the configs/ directory at the repository root. DB-GPT provides several pre-configured TOML files for different deployment scenarios:
| Configuration File | Purpose | Model Type |
|---|---|---|
| `dbgpt-proxy-openai.toml` | OpenAI proxy deployment | API proxy |
| `dbgpt-proxy-deepseek.toml` | DeepSeek proxy deployment | API proxy |
| `dbgpt-proxy-ollama.toml` | Ollama proxy deployment | API proxy |
| `dbgpt-local-glm.toml` | Local GLM model deployment | Local HuggingFace |
| `dbgpt-local-vllm.toml` | Local vLLM deployment | Local vLLM |
| `dbgpt-local-llama-cpp.toml` | Local llama.cpp deployment | Local llama.cpp |
| `dbgpt-graphrag.toml` | GraphRAG with TuGraph | Graph-enhanced RAG |
Sources: docs/docs/quickstart.md124-399 docs/docs/installation/sourcecode.md92-212
Configuration files follow a hierarchical TOML structure with distinct sections:
Sources: docs/docs/quickstart.md126-135 docs/docs/installation/sourcecode.md95-104
Each [[models.llms]] section defines a large language model with the following fields:
| Field | Type | Description | Required |
|---|---|---|---|
| `name` | string | Model identifier (e.g., `"gpt-4"`, `"THUDM/glm-4-9b-chat-hf"`) | Yes |
| `provider` | string | Model provider type (e.g., `"hf"`, `"vllm"`, `"proxy/openai"`, `"proxy/deepseek"`, `"llama.cpp"`) | Yes |
| `path` | string | Local filesystem path to model files | No (if downloading from HuggingFace) |
| `api_key` | string | API key for proxy models | Yes (for proxy providers) |
| `api_base` | string | Base URL for API proxy models | No (provider-specific default used) |
Example - OpenAI Proxy Configuration:
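A minimal `[[models.llms]]` entry for an OpenAI proxy deployment might look like the following sketch; the model name and `api_base` value are illustrative, and the `${env:...}` placeholder assumes environment-variable interpolation as used in the shipped config files:

```toml
# Model Configurations
[models]
[[models.llms]]
name = "gpt-4o"
provider = "proxy/openai"
api_base = "https://api.openai.com/v1"
api_key = "${env:OPENAI_API_KEY}"
```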
Example - Local HuggingFace Model:
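For a local HuggingFace model, a representative entry (model name taken from the table above; the `path` value is a hypothetical local checkout):

```toml
[models]
[[models.llms]]
name = "THUDM/glm-4-9b-chat-hf"
provider = "hf"
# Optional: use an existing local copy instead of downloading from HuggingFace Hub
# path = "models/glm-4-9b-chat-hf"
```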
Example - Local vLLM Model:
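A local vLLM deployment differs only in the `provider` field; a sketch, reusing the same illustrative model:

```toml
[models]
[[models.llms]]
name = "THUDM/glm-4-9b-chat-hf"
provider = "vllm"
```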
Sources: docs/docs/quickstart.md186-202 docs/docs/installation/sourcecode.md137-152 docs/docs/quickstart.md228-244 docs/docs/quickstart.md274-290
Each [[models.embeddings]] section defines an embedding model:
| Field | Type | Description | Required |
|---|---|---|---|
| `name` | string | Model identifier (e.g., `"BAAI/bge-large-zh-v1.5"`, `"text-embedding-ada-002"`) | Yes |
| `provider` | string | Model provider type (e.g., `"hf"`, `"proxy/openai"`) | Yes |
| `path` | string | Local filesystem path to model files | No (if downloading from HuggingFace) |
| `api_key` | string | API key for proxy models | Yes (for proxy providers) |
Example - Local HuggingFace Embedding:
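A representative local HuggingFace embedding entry (the commented `path` is a hypothetical local directory):

```toml
[models]
[[models.embeddings]]
name = "BAAI/bge-large-zh-v1.5"
provider = "hf"
# path = "models/bge-large-zh-v1.5"
```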
Example - OpenAI Embedding:
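An OpenAI proxy embedding entry might look like this sketch; the `api_url` field is how recent configs point at the embeddings endpoint, though the exact field name may vary between releases:

```toml
[models]
[[models.embeddings]]
name = "text-embedding-ada-002"
provider = "proxy/openai"
api_url = "https://api.openai.com/v1/embeddings"
api_key = "${env:OPENAI_API_KEY}"
```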
Sources: docs/docs/quickstart.md186-202 docs/docs/installation/sourcecode.md145-152 docs/docs/quickstart.md238-243
Sources: docs/docs/installation/integrations/milvus_rag_install.md25-37 docs/docs/installation/integrations/graph_rag_install.md47-60 docs/docs/installation/integrations/oceanbase_rag_install.md25-37
Vector stores are configured in the [rag.storage.vector] section. DB-GPT supports multiple vector store backends.
Milvus Configuration:
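A Milvus backend sketch, based on the Milvus install guide; host and port are the Milvus standalone defaults:

```toml
[rag.storage]
[rag.storage.vector]
type = "milvus"
uri = "127.0.0.1"
port = "19530"
```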
OceanBase Vector Configuration:
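For OceanBase Vector, a sketch along the same lines; the connection field names below (`user`, `password`, `db_name`) are assumptions based on typical OceanBase parameters, so consult the linked install guide for the authoritative set:

```toml
[rag.storage]
[rag.storage.vector]
type = "oceanbase"
uri = "127.0.0.1"
port = "2881"
user = "root@test"
password = ""
db_name = "test"
```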
Chroma Configuration (Default):
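Chroma is the default, embedded backend and needs only a persistence directory; the `persist_path` value below is illustrative:

```toml
[rag.storage]
[rag.storage.vector]
type = "chroma"
persist_path = "pilot/data"
```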
Sources: docs/docs/installation/integrations/milvus_rag_install.md29-37 docs/docs/installation/integrations/oceanbase_rag_install.md30-36
Graph stores are configured in the [rag.storage.graph] section for GraphRAG capabilities.
TuGraph Configuration:
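A TuGraph sketch matching the fields described below; the password is a placeholder, and the string booleans follow the style used in the GraphRAG install guide:

```toml
[rag.storage]
[rag.storage.graph]
type = "TuGraph"
host = "127.0.0.1"
port = 7687
username = "admin"
password = "your-tugraph-password"
enable_summary = "True"
enable_similarity_search = "True"
```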
| Field | Description |
|---|---|
| `type` | Graph database type (`"TuGraph"`, `"Neo4j"`) |
| `host` | Database host address |
| `port` | Database port (7687 for TuGraph bolt protocol) |
| `username` | Authentication username |
| `password` | Authentication password |
| `enable_summary` | Enable community summary feature |
| `enable_similarity_search` | Enable similarity-based search |
Sources: docs/docs/installation/integrations/graph_rag_install.md52-59
DB-GPT uses a metadata database to store application data, conversations, knowledge spaces, and other persistent state. The database is configured in the [service.web.database] section.
Sources: docs/docs/installation/sourcecode.md248-284
SQLite is the default database backend, requiring minimal configuration:
| Field | Description |
|---|---|
| `type` | Database type, must be `"sqlite"` |
| `path` | Relative or absolute path to SQLite database file |
SQLite databases are created automatically at the specified path if they don't exist.
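A minimal SQLite section; the path below is the conventional location under the repository's `pilot/` data directory:

```toml
[service.web.database]
type = "sqlite"
path = "pilot/meta_data/dbgpt.db"
```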
Sources: docs/docs/installation/sourcecode.md248-254
For production deployments, MySQL can be used as the metadata database:
| Field | Description |
|---|---|
| `type` | Database type, must be `"mysql"` |
| `host` | MySQL server hostname or IP address |
| `port` | MySQL server port (typically 3306) |
| `user` | Database username |
| `password` | Database password |
| `database` | Database name to use |
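A MySQL section sketch; host, user, and database name are illustrative, and the password is a placeholder:

```toml
[service.web.database]
type = "mysql"
host = "127.0.0.1"
port = 3306
user = "root"
database = "dbgpt"
password = "your-mysql-password"
```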
Important: MySQL databases must be initialized manually using the schema script before first use:
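The schema script ships in the repository; the path below assumes the conventional `assets/schema/` layout, so verify it against your checkout:

```bash
# Load the DB-GPT schema into the target MySQL server (prompts for the password)
mysql -h127.0.0.1 -uroot -p < ./assets/schema/dbgpt.sql
```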
Sources: docs/docs/installation/sourcecode.md265-282
Environment variables provide a mechanism to override configuration file settings or provide sensitive credentials without storing them in files.
Environment variables take precedence over configuration file settings:
| Environment Variable | Purpose | Configuration Equivalent |
|---|---|---|
| `OPENAI_API_KEY` | OpenAI API authentication | `[[models.llms]].api_key` or `[[models.embeddings]].api_key` |
| `API_SERVER_BASE_URL` | Base URL for API server | Used by examples and clients |
| `API_SERVER_API_KEY` | API server authentication | Used by examples and clients |
| `API_SERVER_EMBEDDINGS_MODEL` | Embeddings model name | Used by examples |
| `UV_INDEX_URL` | PyPI index URL for uv | Not in TOML, used during installation |
Example - Providing API Keys via Environment:
Instead of storing API keys in configuration files:
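Hard-coding the key directly in the TOML file works but leaves the secret in the repository; a sketch of what to avoid (the key value is a placeholder):

```toml
[[models.llms]]
name = "gpt-4o"
provider = "proxy/openai"
api_key = "your-api-key-here"   # hard-coded secret: avoid committing this
```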
Use environment variables:
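Exporting the key in the shell keeps it out of version control; DB-GPT reads it from the environment at startup:

```shell
# Set the key once in the current shell session; the value here is a placeholder
export OPENAI_API_KEY="your-api-key-here"
```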
Then the configuration file can omit the `api_key` field or reference the variable with a placeholder such as `api_key = "${env:OPENAI_API_KEY}"`.
Sources: docs/docs/quickstart.md124 examples/rag/rag_embedding_api_example.py42-50 docs/docs/quickstart.md88-92
The webserver is started using the dbgpt start webserver command with the --config flag:
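For example, with the OpenAI proxy configuration (the `uv run` prefix assumes a uv-managed source install, as in the quickstart):

```bash
uv run dbgpt start webserver --config configs/dbgpt-proxy-openai.toml
```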
Alternative syntax using Python directly:
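The server module can also be invoked directly; the module path below is an assumption based on the monorepo package layout and varies between releases, so check your checkout:

```bash
python packages/dbgpt-app/src/dbgpt_app/dbgpt_server.py --config configs/dbgpt-proxy-openai.toml
```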
Sources: docs/docs/quickstart.md109-399 docs/docs/installation/sourcecode.md92-212
Use `dbgpt-proxy-openai.toml` when you have an OpenAI API key and want to call hosted models instead of running them locally.
Use `dbgpt-proxy-deepseek.toml` when you want to proxy requests to the DeepSeek API.
Use `dbgpt-proxy-ollama.toml` when you serve models through a local Ollama instance.
Use `dbgpt-local-glm.toml` when you want to run a GLM chat model locally via HuggingFace Transformers.
Use `dbgpt-local-vllm.toml` when you need higher-throughput local inference via vLLM.
Use `dbgpt-local-llama-cpp.toml` when you want lightweight or quantized local inference via llama.cpp.
Use `dbgpt-graphrag.toml` when you want graph-enhanced RAG backed by TuGraph.
Sources: docs/docs/quickstart.md100-402
File: configs/dbgpt-proxy-openai-milvus.toml (custom)
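A sketch of such a custom file, combining the proxy, Milvus, and MySQL fragments shown earlier; all host, credential, and model values are illustrative:

```toml
[service.web.database]
type = "mysql"
host = "127.0.0.1"
port = 3306
user = "root"
database = "dbgpt"
password = "your-mysql-password"

[models]
[[models.llms]]
name = "gpt-4o"
provider = "proxy/openai"
api_key = "${env:OPENAI_API_KEY}"

[[models.embeddings]]
name = "text-embedding-ada-002"
provider = "proxy/openai"
api_key = "${env:OPENAI_API_KEY}"

[rag.storage]
[rag.storage.vector]
type = "milvus"
uri = "127.0.0.1"
port = "19530"
```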
Sources: docs/docs/quickstart.md126-135 docs/docs/installation/integrations/milvus_rag_install.md29-37 docs/docs/installation/sourcecode.md273-281
File: configs/dbgpt-local-vllm-graphrag.toml (custom)
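A sketch combining local vLLM inference with the TuGraph graph store; model names and the TuGraph password are placeholders:

```toml
[service.web.database]
type = "sqlite"
path = "pilot/meta_data/dbgpt.db"

[models]
[[models.llms]]
name = "THUDM/glm-4-9b-chat-hf"
provider = "vllm"

[[models.embeddings]]
name = "BAAI/bge-large-zh-v1.5"
provider = "hf"

[rag.storage]
[rag.storage.graph]
type = "TuGraph"
host = "127.0.0.1"
port = 7687
username = "admin"
password = "your-tugraph-password"
enable_summary = "True"
enable_similarity_search = "True"
```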
Sources: docs/docs/quickstart.md274-290 docs/docs/installation/integrations/graph_rag_install.md52-59 docs/docs/installation/sourcecode.md248-253
File: Based on configs/dbgpt-proxy-deepseek.toml
Install Dependencies:
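A representative uv invocation for a proxy-model setup; the exact list of `--extra` groups is an assumption based on the quickstart and should be checked against the version you install:

```bash
uv sync --all-packages --frozen \
  --extra "base" \
  --extra "proxy_openai" \
  --extra "rag" \
  --extra "storage_chromadb" \
  --extra "dbgpts"
```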
Sources: docs/docs/quickstart.md179-195 docs/docs/installation/sourcecode.md166-176
When the webserver starts, the configuration loading process follows these steps:

1. The TOML file named by the `--config` flag is located and parsed.
2. Environment variables are applied, overriding matching file settings.
3. The merged configuration is validated and used to initialize models, storage backends, and the web service.

Common errors during configuration loading:

| Error | Cause | Solution |
|---|---|---|
| Missing API key | No `api_key` in config and no environment variable | Set `OPENAI_API_KEY` or add `api_key` to TOML |
| Model not found | Invalid model name or path | Verify model name/path, check HuggingFace Hub |
| Connection refused | Vector/graph store not running | Start Milvus/TuGraph/etc. service |
| Database connection failed | Wrong MySQL credentials | Verify host, user, password in config |
| Port already in use | Webserver port 5670 occupied | Change port or stop conflicting process |
To debug configuration loading, use verbose logging:
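One way to increase verbosity is through the system section of the TOML file; the `log_level` field name here is an assumption based on the shipped config files:

```toml
[system]
# Raise log verbosity while diagnosing configuration problems
log_level = "DEBUG"
```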
This will output detailed information about which configuration file was loaded, the resolved model and storage settings, and component initialization.
Sources: docs/docs/quickstart.md139-147 docs/docs/installation/sourcecode.md106-116
| Configuration Aspect | TOML Section | Example File |
|---|---|---|
| LLM Models | [[models.llms]] | configs/dbgpt-proxy-openai.toml |
| Embedding Models | [[models.embeddings]] | configs/dbgpt-local-glm.toml |
| Vector Stores | [rag.storage.vector] | configs/dbgpt-proxy-openai.toml |
| Graph Stores | [rag.storage.graph] | configs/dbgpt-graphrag.toml |
| Application Database | [service.web.database] | Any config file |
| Environment Overrides | Shell environment | export OPENAI_API_KEY=... |
| Runtime Arguments | CLI flags | --config configs/... |
Sources: docs/docs/quickstart.md1-457 docs/docs/installation/sourcecode.md1-304 docs/docs/installation/integrations/graph_rag_install.md47-68