
SQLite: Database Is Locked — How to Fix It


Your application throws:

OperationalError: database is locked

Or in other languages:

SQLITE_BUSY: database is locked

A write operation can’t proceed because another connection is holding a lock on the database file.

What causes this

SQLite uses file-level locking. In its default journal mode, only one writer can access the database at a time. If a connection starts a write transaction, every other connection that tries to write will get “database is locked” until the first transaction completes.

Common causes:

  • Multiple threads or processes opening separate connections and writing concurrently
  • A long-running transaction holding the lock (e.g., a big INSERT inside a transaction that takes seconds)
  • A connection that was opened but never properly closed
  • A crashed process left a stale lock file (database.db-journal or database.db-wal)
  • An interactive tool (DB Browser for SQLite, sqlite3 CLI) has the database open while your app tries to write
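The first cause is easy to reproduce. A minimal sketch using a throwaway database, with `timeout=0` so the error surfaces immediately instead of retrying:

```python
import os
import sqlite3
import tempfile

path = os.path.join(tempfile.mkdtemp(), 'demo.db')

# Connection A opens a write transaction and holds the lock
conn_a = sqlite3.connect(path, timeout=0, isolation_level=None)
conn_a.execute('CREATE TABLE t (x)')
conn_a.execute('BEGIN IMMEDIATE')
conn_a.execute('INSERT INTO t VALUES (1)')

# Connection B tries to write and fails immediately
conn_b = sqlite3.connect(path, timeout=0)
try:
    conn_b.execute('INSERT INTO t VALUES (2)')
except sqlite3.OperationalError as e:
    print(e)  # database is locked

conn_a.execute('COMMIT')  # releasing the lock lets B proceed
conn_b.execute('INSERT INTO t VALUES (2)')
conn_b.commit()
```

As soon as connection A commits, connection B's retry succeeds — the lock is held only for the duration of the write transaction.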

Fix 1: Use context managers to close connections properly

Unclosed connections are the #1 cause. Always use a context manager or explicit close:

# ✅ Python — context managers close the connection and commit the write
import sqlite3
from contextlib import closing

with closing(sqlite3.connect('app.db')) as conn:  # closes the connection on exit
    with conn:  # note: `with conn` alone commits/rolls back but does NOT close
        conn.execute('INSERT INTO users (name) VALUES (?)', ('Alice',))
    # lock released on commit; connection closed when the outer block exits

In Node.js with better-sqlite3:

const db = require('better-sqlite3')('app.db');
try {
  // Use a prepared statement — in SQLite, double quotes are for identifiers
  db.prepare('INSERT INTO users (name) VALUES (?)').run('Alice');
} finally {
  db.close();  // always release the lock, even on error
}

Fix 2: Increase the busy timeout

By default, SQLite returns “database is locked” immediately if it can’t acquire the lock. Set a timeout so it retries:

# Python — wait up to 30 seconds for the lock (timeout is in seconds)
conn = sqlite3.connect('app.db', timeout=30)

// Node.js better-sqlite3 — timeout is in milliseconds
const db = require('better-sqlite3')('app.db', { timeout: 30000 });

# SQLite CLI — .timeout takes milliseconds
sqlite3 app.db
.timeout 30000

With a timeout, SQLite will retry the operation for the specified duration before giving up. This handles most transient lock contention.
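The same setting is also exposed as a PRAGMA, which works from any driver — useful if yours doesn't take a timeout option. The value is in milliseconds:

```python
import sqlite3

conn = sqlite3.connect('app.db')
conn.execute('PRAGMA busy_timeout = 30000')  # 30 seconds, in milliseconds
print(conn.execute('PRAGMA busy_timeout').fetchone()[0])  # 30000
```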

Fix 3: Enable WAL mode

WAL (Write-Ahead Logging) mode is the single biggest improvement you can make. It allows concurrent reads while a write is in progress:

# Python
conn = sqlite3.connect('app.db')
conn.execute('PRAGMA journal_mode=WAL')

-- Or in the SQLite CLI
PRAGMA journal_mode=WAL;

WAL mode is persistent — you only need to set it once and it sticks. It dramatically reduces lock contention because readers no longer block writers and writers no longer block readers. The only limitation: two concurrent writers still can’t proceed simultaneously.

Fix 4: Remove stale lock files

If a process crashed mid-transaction, it may have left a lock file:

ls -la app.db*
# app.db
# app.db-journal   ← stale lock (rollback journal mode)
# app.db-wal        ← WAL file (normal in WAL mode)
# app.db-shm        ← shared memory file (normal in WAL mode)

If no process is using the database and you see a -journal file:

# Make sure nothing is using the database
lsof app.db
# If clear, remove the journal
rm app.db-journal

Don’t delete -wal or -shm files while the database is in use — they’re part of normal WAL operation.
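Often you don't need to delete anything: SQLite rolls back a hot journal automatically the next time any client opens the database. A safer first step than `rm` — open the database and verify it (a sketch; `app.db` stands in for your file):

```python
import sqlite3

# Simply opening the database recovers a hot journal, if one exists
conn = sqlite3.connect('app.db')
print(conn.execute('PRAGMA integrity_check').fetchone()[0])  # ok
conn.close()
```

Only reach for manual deletion if no process has the file open and recovery via a normal open doesn't clear it.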

Fix 5: Serialize writes in multi-threaded code

For multi-threaded applications, either share a single connection or serialize writes with a lock so only one thread writes at a time:

# Python — serialize writes with a process-wide lock
import sqlite3
import threading
from contextlib import closing

db_lock = threading.Lock()

def write_to_db(data):
    with db_lock:  # only one thread writes at a time
        with closing(sqlite3.connect('app.db', timeout=30)) as conn:
            with conn:  # commits on success, rolls back on error
                conn.execute('INSERT INTO logs (msg) VALUES (?)', (data,))

How to prevent it

  • Enable WAL mode on every SQLite database — there’s almost no reason not to
  • Set a busy timeout (5-30 seconds) on every connection
  • Close connections as soon as you’re done with them
  • For web applications with concurrent users, consider whether SQLite is the right choice — PostgreSQL handles concurrency natively
  • Keep write transactions short — do your data processing outside the transaction, then write in one quick batch
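The checklist above can be rolled into one helper — a sketch of a sensible default connection factory (the function name is illustrative):

```python
import sqlite3

def connect(path: str) -> sqlite3.Connection:
    """Open a SQLite connection with the defaults recommended above."""
    conn = sqlite3.connect(path, timeout=30)   # retry locks for up to 30 s
    conn.execute('PRAGMA journal_mode=WAL')    # persistent, but harmless to re-run
    return conn
```

If every connection in your app goes through a factory like this, a busy timeout and WAL mode are always in effect — which eliminates the vast majority of "database is locked" errors outright.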