Update (Feb 18, 2026): After a productive discussion on Reddit and additional benchmarking, I found that the solutions I originally proposed (batched writes or using a synchronous connection) don't actually help. The real issue is simpler and more fundamental than I described: SQLite is single-writer, so any amount of contention at the SQLite level will severely hurt write performance. The fix is to use a single writer connection with writes queued at the application level, and a separate connection pool for concurrent reads. The original blog post text is preserved below, with retractions and updates marked accordingly. My apologies to the SQLx maintainers for suggesting that this behavior was unique to SQLx.
Write transactions can lead to lock starvation and serious performance degradation when using SQLite with SQLx, the popular async Rust SQL library. In retrospect, I feel like this should have been obvious, but it took a little more staring at suspiciously consistent "slow statement" logs than I'd like to admit, so I'm writing it up in case it helps others avoid this footgun.
SQLite is single-writer. In WAL mode, it can support concurrent reads and writes (or, technically "write" singular), but no matter the mode there is only ever one writer at a time. Before writing, a process needs to obtain an EXCLUSIVE lock on the database.
If you start a read transaction with a SELECT and then perform a write in the same transaction, the transaction will need to be upgraded to a write transaction with an exclusive lock:
> A read transaction is used for reading only. A write transaction allows both reading and writing. A read transaction is started by a SELECT statement, and a write transaction is started by statements like CREATE, DELETE, DROP, INSERT, or UPDATE (collectively "write statements"). If a write statement occurs while a read transaction is active, then the read transaction is upgraded to a write transaction if possible. (source)
Transactions started with BEGIN IMMEDIATE or BEGIN EXCLUSIVE also take the exclusive write lock as soon as they are started.
Transactions in SQLx look like this:
```rust
let mut tx = db_connection.begin().await?;

let read_value = sqlx::query("SELECT * FROM table WHERE id = $1")
    .bind(1)
    .fetch_one(&mut *tx)
    .await?;

sqlx::query("UPDATE table SET some_field = $1 WHERE id = $2")
    .bind("hello")
    .bind(1)
    .execute(&mut *tx)
    .await?;

tx.commit().await?;
```
This type of transaction where you read and then write is completely fine. The transaction starts as a read transaction and then is upgraded to a write transaction for the UPDATE.
Update: This section incorrectly attributes the performance degradation to the interaction between async Rust and SQLite. The problem is actually that any contention for the EXCLUSIVE lock at the SQLite level, whether from single statements or batches, will hurt write performance.
The problem arises when you call await within a write transaction. For example, this could happen if you call multiple write statements within a transaction:
```rust
let mut tx = db_connection.begin().await?;

for (id, value) in values {
    sqlx::query("INSERT INTO table (id, some_field) VALUES ($1, $2)")
        .bind(id)
        .bind(value)
        .execute(&mut *tx)
        .await?;
}

tx.commit().await?;
```
This code will cause serious performance degradation if you have multiple concurrent tasks that might be trying this operation, or any other write, at the same time.
When the program reaches the first INSERT statement, the transaction is upgraded to a write transaction with an exclusive lock. However, when you call await, the task yields control back to the async runtime. The runtime may schedule another task before returning to this one. The problem is that this task is now holding an exclusive lock on the database. All other writers must wait for this one to finish. If the newly scheduled task tries to write, it will simply wait until it hits the busy_timeout and returns a busy timeout error. The original task might be able to make progress if no other concurrent writers are scheduled before it, but under higher load you might continuously have new tasks that block the original writer from progressing.
Starting a transaction with BEGIN IMMEDIATE will also cause this problem, because you will immediately take the exclusive lock and then yield control with await.
In practice, you can spot this issue in your production logs if you see a lot of SQLx warnings that say `slow statement: execution time exceeded alert threshold` where the elapsed time is very close to your `busy_timeout` (which is 5 seconds by default). This is the result of other tasks being scheduled by the runtime and then trying and failing to obtain the exclusive lock they need to write to the database while being blocked by a parked task.
SQLite's concurrency model (in WAL mode) is many concurrent readers with exactly one writer. Mirroring this architecture at the application level provides the best performance.
Instead of a single connection pool, where connections may be upgraded to write at any time, use two separate pools:
```rust
let write_options = SqliteConnectOptions::new()
    .filename("my.db")
    .journal_mode(SqliteJournalMode::Wal);

let read_options = SqliteConnectOptions::new()
    .filename("my.db")
    .journal_mode(SqliteJournalMode::Wal)
    .read_only(true);

// Single writer connection — all writes queue here
let writer = SqlitePoolOptions::new()
    .max_connections(1)
    .connect_with(write_options)
    .await?;

// Multiple reader connections — reads run concurrently
let reader = SqlitePoolOptions::new()
    .max_connections(num_cpus::get() as u32)
    .connect_with(read_options)
    .await?;
```
With this setup, write transactions serialize within the application. Tasks will queue waiting for the single writer connection, rather than all trying to obtain SQLite's EXCLUSIVE lock.
In my benchmarks, this approach was ~20x faster than using a single pool with multiple connections:
| Scenario | Total Time | Rows/sec | P50 | P99 |
|---|---|---|---|---|
| Single pool (50 connections) | 1.93s | 2,586 | 474ms | 182s |
| Single writer connection | 83ms | 60,061 | 43ms | 82ms |
An alternative to separate pools is wrapping writes in a `Mutex`, which achieves similar performance (95ms in the benchmarks). However, separate pools make the intent clearer and, if the reader pool is configured as read-only, prevent accidentally issuing a write on a reader connection.
Having separate pools works when reads and writes are independent, but sometimes you need to read a value and then write based on it in a single atomic transaction:
```rust
let mut tx = pool.begin().await?;

let balance = sqlx::query_scalar::<_, i64>(
    "SELECT balance FROM accounts WHERE id = ?"
)
.bind(account_id)
.fetch_one(&mut *tx)
.await?;

sqlx::query("UPDATE accounts SET balance = ? WHERE id = ?")
    .bind(balance - amount)
    .bind(account_id)
    .execute(&mut *tx)
    .await?;

tx.commit().await?;
```
Sending this transaction to the single write connection is fine if the read is extremely fast, such as a single lookup by primary key. However, if your application requires expensive reads that must precede writes in a single atomic transaction, the shared connection pool with moderate concurrency might outperform a single writer.
Retraction: Benchmarking showed that batched writes perform no better than the naive loop under concurrency, because 50 connections still contend for the write lock regardless of whether each connection issues 100 small INSERTs or one large INSERT. QueryBuilder is still useful for reducing per-statement overhead, but it does not fix the contention problem.
We could safely replace the example code above with this snippet that uses a bulk insert to avoid the lock starvation problem:
```rust
let mut builder = sqlx::QueryBuilder::new(
    "INSERT INTO table (id, some_field)"
);
builder.push_values(values, |mut b, (id, value)| {
    b.push_bind(*id).push_bind(*value);
});

builder.build()
    .persistent(false) // see note below
    .execute(&db_connection)
    .await?;
```
Note that if you do this with different numbers of values, you should call `.persistent(false)`. By default, SQLx caches prepared statements. However, each version of the query with a different number of arguments will be cached separately, which may thrash the cache.
Retraction: Benchmarking showed that this did not actually improve performance.
Unfortunately, the fix for atomic writes to multiple tables is uglier and potentially very dangerous. To avoid holding an exclusive lock across an await, you need to use the `raw_sql` interface to execute a transaction in one shot:

```rust
sqlx::raw_sql( // this is implicitly wrapped in a transaction
    "UPDATE table1 SET foo = 'bar'; UPDATE table2 SET baz = 'qux';"
)
.execute(&db_connection)
.await?;
```

However, this can lead to catastrophic SQL injection attacks if you use this for user input, because `raw_sql` does not support binding and sanitizing query parameters.
Note that you can technically run a transaction with multiple statements in a `query` call, but the docs say:

> The query string may only contain a single DML statement: SELECT, INSERT, UPDATE, DELETE and variants. The SQLite driver does not currently follow this restriction, but that behavior is deprecated.
If you find yourself needing atomic writes to multiple tables with SQLite and Rust, you might be better off rethinking your schema to combine those tables or switching to a synchronous library like rusqlite with a single writer started with spawn_blocking.
Update: the most useful change would actually be making a distinction between a ReadPool and a WritePool. Libraries like SQLx could enforce the distinction at compile time or runtime by inspecting the queries for the presence of write statements, or the ReadPool could be configured as read-only.
Could SQLx itself prevent this? Maybe, but it probably won't. If SQLx offered both a sync and an async API (definitely out of scope) and differentiated between read and write statements, a write Transaction could be !Send like std::sync::MutexGuard, which would prevent it from being held across an await point.
However, SQLx is not an ORM and it probably isn't worth it for the library to have different methods for read versus write statements. Without that, there isn't a way to prevent write transaction locks from being held across awaits while allowing safe read transactions to be used across awaits.
So, in lieu of type safety to prevent this footgun, I wrote up this blog post and this pull request to include a warning about this in the docs.
Discuss on r/rust and Hacker News.