This page covers Immich's external library feature: how it scans import paths on the filesystem, applies exclusion patterns, detects new and changed files, tracks asset offline/online status, watches for filesystem events in real time, and handles library deletion.
This is distinct from the mobile client sync protocol (see Data Synchronization) and from the general asset processing pipeline that runs after a file is imported (see Asset Processing Pipeline).
An external library allows an Immich user to surface photos and videos that already exist on the filesystem — outside of Immich's managed upload area — without copying or moving them. The server tracks each file's path, monitors for changes, and keeps the asset database up to date as files appear, change, or disappear on disk.
All library synchronization logic is implemented in LibraryService (server/src/services/library.service.ts).
A library record contains the following key fields that drive synchronization:
| Field | Type | Description |
|---|---|---|
importPaths | string[] | Absolute directory paths to crawl recursively |
exclusionPatterns | string[] | Glob patterns matched against full file paths to skip |
ownerId | string | The user who owns all assets in this library |
refreshedAt | Date | Timestamp of the last completed file scan |
deletedAt | Date \| null | Set when soft-deleted; triggers cleanup |
All import paths must be absolute, readable directories. They may not overlap with Immich's own managed storage path (enforced by StorageCore.isImmichPath()).
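The checks described above can be sketched as follows. Note this is a simplified stand-in for the real validateImportPath() in LibraryService, which has a different signature and returns structured validation results; the error strings here are illustrative.

```typescript
import * as fs from "node:fs";
import * as path from "node:path";

// Simplified sketch of import path validation: the path must be an
// absolute, existing, readable directory.
function validateImportPath(importPath: string): string | null {
  if (!path.isAbsolute(importPath)) return "Import path must be absolute";
  let stats: fs.Stats;
  try {
    stats = fs.statSync(importPath);
  } catch {
    return "Path does not exist";
  }
  if (!stats.isDirectory()) return "Path is not a directory";
  try {
    fs.accessSync(importPath, fs.constants.R_OK);
  } catch {
    return "Path is not readable";
  }
  return null; // path is usable as an import path
}
```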
Default exclusion patterns applied to new libraries (server/src/services/library.service.ts226-233):

```
**/@eaDir/**
**/._*
**/#recycle/**
**/#snapshot/**
**/.stversions/**
**/.stfolder/**
```
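To illustrate how these patterns apply to full file paths, here is a toy matcher. The server uses the picomatch library; this simplified converter only handles * and ** and ignores the rest of the glob syntax.

```typescript
// Toy glob matcher: converts a pattern to a RegExp, treating "*" as
// "anything within one path segment" and "**" as "anything, including
// path separators".
function globToRegExp(pattern: string): RegExp {
  const source = pattern
    .replace(/[.+^${}()|[\]\\]/g, "\\$&") // escape regex metacharacters
    .replace(/\*\*/g, "\u0000")           // protect "**" from the next step
    .replace(/\*/g, "[^/]*")              // "*" stays within one segment
    .replace(/\u0000/g, ".*");            // "**" may cross segment boundaries
  return new RegExp(`^${source}$`);
}

function isExcluded(filePath: string, exclusionPatterns: string[]): boolean {
  return exclusionPatterns.some((p) => globToRegExp(p).test(filePath));
}
```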
A full library scan is split into two independent sub-scans that run in parallel:
Diagram: Full Library Scan Job Chain
Sources: server/src/services/library.service.ts449-475 server/src/services/library.service.ts612-680 server/src/services/library.service.ts696-772
Handler: handleQueueSyncFiles (JobName.LibrarySyncFilesQueueAll)
This handler:

1. Validates each importPath with validateImportPath() — checks existence, that it is a directory, and that it has read permission.
2. Crawls the valid paths via storageRepository.walk() with the library's exclusionPatterns. This yields batches of file paths (JOBS_LIBRARY_PAGINATION_SIZE at a time).
3. Filters each batch through assetRepository.filterNewExternalAssetPaths(libraryId, batch) to exclude paths already in the database.
4. Queues a LibrarySyncFiles job for the new paths.
5. Updates library.refreshedAt when the crawl completes.

Handler: handleSyncFiles (JobName.LibrarySyncFiles)
For each path in the job, processEntity() is called:
1. stat() is called to get the file's mtime.
2. An AssetTable row is constructed with isExternal: true, deviceId: 'Library Import', and a SHA-1 checksum of path:<normalizedPath>.
3. The rows are inserted in bulk via assetRepository.createAll().

After insertion, queuePostSyncJobs() queues a SidecarCheck job for each new asset ID, which chains into the standard asset processing pipeline (metadata extraction → thumbnails → ML jobs).
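The placeholder checksum can be sketched with Node's crypto module. The path:<normalizedPath> format comes from the description above; the exact normalization the server applies is an assumption here.

```typescript
import { createHash } from "node:crypto";
import * as path from "node:path";

// External assets have no file-content hash at import time, so a
// placeholder checksum is derived from the path instead.
function pathChecksum(originalPath: string): Buffer {
  const normalized = path.normalize(originalPath); // assumed normalization
  return createHash("sha1").update(`path:${normalized}`).digest();
}
```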
Diagram: handleSyncFiles to Asset Processing
Sources: server/src/services/library.service.ts238-277 server/src/services/library.service.ts395-428 server/src/services/job.service.ts70-72
Handler: handleQueueSyncAssets (JobName.LibrarySyncAssetsQueueAll)
This handler performs a two-phase check:
Phase 1 — SQL fast-path: Calls assetRepository.detectOfflineExternalAssets(libraryId, importPaths, exclusionPatterns). This uses a database query to directly mark assets offline if their path falls outside all import paths or matches an exclusion pattern. This is efficient and avoids per-file stat() calls for clearly out-of-scope assets.
Phase 2 — Disk check: If not all assets were caught by Phase 1, streams all remaining asset IDs from libraryRepository.streamAssetIds() and queues them in batches as LibrarySyncAssets jobs.
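The Phase 1 predicate can be rendered in TypeScript as below. The real check is a single SQL statement; the exclusion test is injected as a function here to keep the sketch self-contained.

```typescript
// An asset is "clearly out of scope" when its path lies outside every
// import path, or when it matches an exclusion pattern.
function isOutOfScope(
  assetPath: string,
  importPaths: string[],
  excluded: (path: string) => boolean,
): boolean {
  const insideImportPath = importPaths.some((root) =>
    assetPath.startsWith(root.endsWith("/") ? root : root + "/"),
  );
  return !insideImportPath || excluded(assetPath);
}
```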
Handler: handleSyncAssets (JobName.LibrarySyncAssets)
For each asset in the batch, calls storageRepository.stat(asset.originalPath) and passes the result to checkExistingAsset():
Diagram: Asset Sync Decision Logic (checkExistingAsset)
Sources: server/src/services/library.service.ts477-611 server/src/repositories/library.repository.ts
The AssetSyncResult enum values and transitions are defined in server/src/repositories/library.repository.ts
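A sketch of the per-asset decision, using assumed result names — only the actual enum in server/src/repositories/library.repository.ts is authoritative:

```typescript
// Assumed result names for illustration.
enum AssetSyncResult {
  DO_NOTHING,
  UPDATE,
  OFFLINE,
}

interface StatLike {
  mtime: Date;
}

// stat is null when storageRepository.stat() threw (file is gone).
function checkExistingAsset(
  stat: StatLike | null,
  storedMtime: Date,
  excluded: boolean,
): AssetSyncResult {
  if (stat === null || excluded) return AssetSyncResult.OFFLINE; // missing or out of scope
  if (stat.mtime.valueOf() !== storedMtime.valueOf()) return AssetSyncResult.UPDATE; // changed on disk
  return AssetSyncResult.DO_NOTHING; // unchanged
}
```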
When marking an asset offline, the service distinguishes between normal and trashed assets:
- Normal asset goes offline: isOffline=true, deletedAt=new Date() — appears in trash in the UI.
- Already-trashed asset goes offline: isOffline=true, deletedAt unchanged — stays trashed.
- Offline asset comes back online: isOffline=false, deletedAt=null — restored to timeline.
- Trashed asset comes back online: isOffline=false, deletedAt unchanged — stays in trash.

Sources: server/src/services/library.service.ts541-557
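The transitions above can be sketched as pure functions over an asset's status fields. The wasTrashedByUser flag is a hypothetical stand-in for however the service distinguishes user-trashed assets from assets trashed because they went offline.

```typescript
interface AssetStatus {
  isOffline: boolean;
  deletedAt: Date | null;
}

function markOffline(asset: AssetStatus, now: Date): AssetStatus {
  // a normal asset moves to trash; an already-trashed asset keeps its
  // original deletedAt
  return { isOffline: true, deletedAt: asset.deletedAt ?? now };
}

function markOnline(asset: AssetStatus, wasTrashedByUser: boolean): AssetStatus {
  // only assets trashed because they went offline are restored
  return {
    isOffline: false,
    deletedAt: wasTrashedByUser ? asset.deletedAt : null,
  };
}
```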
On startup, onConfigInit (server/src/services/library.service.ts35-58) acquires DatabaseLock.Library to ensure only one microservice instance manages scheduling. If the lock is acquired:
- CronJob.LibraryScan is registered via cronRepository.create() using the expression from library.scan.cronExpression (default: once daily).
- Each run of the cron job queues JobName.LibraryScanQueueAll, which triggers a full scan of all libraries.

When system configuration changes (onConfigUpdate), the cron expression and enabled state are updated via cronRepository.update().
The default scan schedule is `0 0 * * *` (midnight daily), configurable in Administration → Settings → External Library.
Sources: server/src/services/library.service.ts35-77
When library.watch.enabled is true and the current microservice holds DatabaseLock.Library, watchAll() is called, which calls watch(id) for each library.
The watch() method (server/src/services/library.service.ts79-152) sets up a filesystem watcher via storageRepository.watch() (backed by chokidar) with these settings:
| Option | Value |
|---|---|
usePolling | false (uses native inotify/FSEvents) |
ignoreInitial | true (existing files not emitted on start) |
awaitWriteFinish.stabilityThreshold | 5000 ms |
awaitWriteFinish.pollInterval | 1000 ms |
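Expressed as a chokidar-style options object (illustrative; the server wires equivalent settings through storageRepository.watch()):

```typescript
// Watcher options corresponding to the table above.
const watchOptions = {
  usePolling: false,   // rely on native inotify/FSEvents
  ignoreInitial: true, // skip events for files that already exist
  awaitWriteFinish: {
    stabilityThreshold: 5000, // ms the file size must stay stable
    pollInterval: 1000,       // ms between size checks while waiting
  },
};
```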
A picomatch matcher is built from all supported media extensions and the library's exclusionPatterns.
Events and resulting jobs:
| Filesystem Event | Condition | Job Queued |
|---|---|---|
add | path matches media type and not excluded | LibrarySyncFiles |
change | path matches media type and not excluded | LibrarySyncFiles |
unlink | any path | LibraryRemoveAsset |
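The routing in the table can be sketched as follows; "eligible" stands in for the picomatch test against supported media extensions and the library's exclusion patterns.

```typescript
type WatcherJob = "LibrarySyncFiles" | "LibraryRemoveAsset" | null;

function routeWatcherEvent(
  event: "add" | "change" | "unlink",
  path: string,
  eligible: (path: string) => boolean,
): WatcherJob {
  if (event === "unlink") return "LibraryRemoveAsset"; // any removed path
  return eligible(path) ? "LibrarySyncFiles" : null;   // add/change only when eligible
}
```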
LibraryRemoveAsset (server/src/services/library.service.ts682-694) looks up the asset by library ID and original path, then removes it immediately via assetRepository.remove().
Note: File watching does not work reliably on network filesystems (NFS, CIFS/SMB); periodic scanning must be used instead. An `ENOSPC` error indicates the system's inotify watch limit has been reached; increase `fs.inotify.max_user_watches`.
Sources: server/src/services/library.service.ts79-175
Diagram: Library Deletion Flow
The handleDeleteLibrary job queues AssetDelete jobs with deleteOnDisk: false — the original files on disk are not deleted. Assets are removed from the database only.
If the deletion process is interrupted (e.g., server restart), the LibraryDeleteCheck job (run on each LibraryScanQueueAll) finds libraries with a non-null deletedAt via libraryRepository.getAllDeleted() and re-queues their LibraryDelete jobs to complete cleanup.
Sources: server/src/services/library.service.ts344-393 server/src/services/library.service.ts205-219
All library settings live under the library key in the system configuration:
| Config Path | Default | Description |
|---|---|---|
library.scan.enabled | true | Whether the periodic scan cron job is active |
library.scan.cronExpression | '0 0 * * *' | Cron schedule for automatic scans |
library.watch.enabled | false | Enable real-time filesystem watching |
These are configurable via Administration → Settings → External Library in the web UI.
Sources: server/src/services/library.service.ts35-77
All library jobs run on QueueName.Library.
| Job Name | Handler | Description |
|---|---|---|
LibraryScanQueueAll | handleQueueScanAll | Triggers a full scan of all libraries; also queues LibraryDeleteCheck |
LibrarySyncFilesQueueAll | handleQueueSyncFiles | Crawls import paths, finds new files, queues import jobs |
LibrarySyncFiles | handleSyncFiles | Imports a batch of new file paths as asset records |
LibrarySyncAssetsQueueAll | handleQueueSyncAssets | SQL offline check + queues per-asset disk checks |
LibrarySyncAssets | handleSyncAssets | Checks a batch of existing assets for offline/online/update |
LibraryRemoveAsset | handleAssetRemoval | Removes assets by path (from file watcher unlink event) |
LibraryDelete | handleDeleteLibrary | Deletes all assets in a library and removes the library record |
LibraryDeleteCheck | handleQueueCleanup | Finds libraries stuck in soft-deleted state and re-queues their deletion |
Sources: server/src/services/library.service.ts205-219 server/src/services/library.service.ts238-277 server/src/services/library.service.ts449-475 server/src/services/library.service.ts477-571 server/src/services/library.service.ts612-680 server/src/services/library.service.ts682-694 server/src/services/library.service.ts696-772
DatabaseLock.Library is acquired via databaseRepository.tryLock() at startup. Only the microservice instance that holds this advisory lock will:
- register the CronJob.LibraryScan cron entry
- start filesystem watchers (when library.watch.enabled is true)

This prevents duplicate cron registrations and multiple watcher instances when more than one microservice process is running.
Sources: server/src/services/library.service.ts42-57
After new assets are created by handleSyncFiles, the standard pipeline described in Asset Processing Pipeline is invoked via queuePostSyncJobs(), which queues SidecarCheck → AssetExtractMetadata → AssetGenerateThumbnails → SmartSearch / AssetDetectFaces / AssetEncodeVideo.
Similarly, when handleSyncAssets determines an asset has been modified (mtime changed), it calls queuePostSyncJobs() for that asset, triggering re-extraction of metadata and regeneration of derived files.
Sources: server/src/services/library.service.ts418-428 server/src/services/job.service.ts69-72