PostgreSQL Locking and Concurrency Control
Learn about shared vs exclusive locks, lock escalation, deadlocks, optimistic vs pessimistic concurrency control, and FOR UPDATE clauses.
Databases let many users access the same data simultaneously. Locks are the mechanism that makes this safe. Understanding locks helps you write code that is both correct and fast under concurrency.
Shared vs Exclusive Locks
PostgreSQL has two main lock modes: shared and exclusive.
Shared Locks
Shared locks allow concurrent reads. Multiple transactions can hold shared locks on the same row at the same time.
-- Transaction 1
SELECT balance FROM accounts WHERE id = 1 FOR SHARE;
-- Transaction 2 can also read
SELECT balance FROM accounts WHERE id = 1 FOR SHARE;
-- Success: both can read simultaneously
Exclusive Locks
Exclusive locks block both shared and exclusive locks. Only one transaction can hold an exclusive lock at a time.
-- Transaction 1
UPDATE accounts SET balance = 900 WHERE id = 1;
-- Acquires exclusive lock
-- Transaction 2 tries to update the same row
UPDATE accounts SET balance = 800 WHERE id = 1;
-- Waits... (a plain SELECT would NOT wait: MVCC lets readers proceed)
-- Transaction 1 commits, then Transaction 2 proceeds
FOR SHARE vs FOR UPDATE
SELECT ... FOR SHARE acquires a shared lock on the selected rows. SELECT ... FOR UPDATE acquires an exclusive lock on them. Neither blocks plain SELECTs, which read from an MVCC snapshot without taking row locks.
-- Shared lock: others can read but not update
SELECT * FROM accounts WHERE id = 1 FOR SHARE;
-- Exclusive lock: blocks everyone else
SELECT * FROM accounts WHERE id = 1 FOR UPDATE;
Use FOR SHARE when you need to ensure rows don't change while you work with them. Use FOR UPDATE when you read rows that you intend to modify later in the same transaction.
Lock Escalation
Some databases have a concept called lock escalation, where the system converts many row-level locks into a table-level lock when too many rows are locked.
PostgreSQL does NOT have lock escalation. You can hold locks on millions of rows if needed, though this consumes memory and impacts performance.
Other databases like SQL Server do have lock escalation, which can cause blocking problems when many rows are locked simultaneously.
-- PostgreSQL: no lock escalation
-- This is fine even with many rows
UPDATE orders SET processed = true WHERE status = 'pending';
-- Holds row-level locks, not escalated to table lock
Deadlocks
flowchart LR
subgraph T1["Transaction A"]
A1["Locks Row 1<br/>(UPDATE orders)"]
A2["Waits for Row 2<br/>(UPDATE accounts)"]
end
subgraph T2["Transaction B"]
B1["Locks Row 2<br/>(UPDATE accounts)"]
B2["Waits for Row 1<br/>(UPDATE orders)"]
end
A1 --> A2
B1 --> B2
A2 -.->|"Row 2 held by B"| B1
B2 -.->|"Row 1 held by A"| A1
A deadlock happens when two or more transactions are each waiting for a lock another holds. Neither can proceed on its own.
-- Transaction 1
BEGIN;
UPDATE accounts SET balance = balance - 100 WHERE id = 1; -- Locks row 1
-- Transaction 2 (concurrent)
BEGIN;
UPDATE accounts SET balance = balance + 100 WHERE id = 2; -- Locks row 2
-- Transaction 1
UPDATE accounts SET balance = balance + 100 WHERE id = 2; -- Waits for row 2
-- Transaction 2
UPDATE accounts SET balance = balance - 100 WHERE id = 1; -- Deadlock! Transaction 1 holds row 1
PostgreSQL detects deadlocks automatically and resolves them by rolling back one transaction.
ERROR: deadlock detected
DETAIL: Process 12345 waits for ShareLock on transaction 1002; blocked by process 67890.
Process 67890 waits for ShareLock on transaction 1001; blocked by process 12345.
HINT: See the server log for query details.
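Conceptually, the detector walks the waits-for graph looking for a cycle: if following "who waits for whom" ever revisits a transaction, the group is deadlocked. A minimal Python sketch of that idea (the dict representation is our own illustration, not PostgreSQL's internal structure):

```python
def has_deadlock(waits_for):
    """waits_for maps a transaction to the transaction it is blocked on."""
    for start in waits_for:
        seen = set()
        txn = start
        while txn in waits_for:   # follow the chain of waiters
            if txn in seen:
                return True       # revisited a transaction: cycle -> deadlock
            seen.add(txn)
            txn = waits_for[txn]
    return False

# A waits for B and B waits for A: the cycle from the diagram above
assert has_deadlock({"A": "B", "B": "A"}) is True
# A waits for B, but B is running freely: no deadlock, just waiting
assert has_deadlock({"A": "B"}) is False
```

PostgreSQL runs this check only after a lock wait exceeds `deadlock_timeout` (1 second by default), since cycle detection is too expensive to run on every lock acquisition.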
Preventing Deadlocks
The best deadlock is the one you prevent. Three rules help:
- Always lock rows in the same order. If Transaction A always locks row 1 before row 2, and Transaction B does the same, deadlocks cannot occur.
-- Always lock accounts in id order
BEGIN;
SELECT * FROM accounts WHERE id = 1 FOR UPDATE;
SELECT * FROM accounts WHERE id = 2 FOR UPDATE;
-- ...
- Keep transactions short. Long transactions hold locks longer and increase deadlock probability.
- Access shared objects last. In transaction-intensive workloads, access shared lookup tables after modifying user data.
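The first rule can be enforced mechanically by sorting ids before locking. A small Python sketch, assuming a hypothetical helper that returns the locking SQL it would run, so the ordering logic can be checked without a database:

```python
def lock_statements(account_ids):
    """Emit FOR UPDATE statements in ascending-id order.

    Sorting guarantees every transaction locks the lowest id first, so no
    two transactions can each hold a row the other is waiting for.
    """
    return [
        # real code would bind the id as a parameter, not interpolate it
        f"SELECT * FROM accounts WHERE id = {acct_id} FOR UPDATE"
        for acct_id in sorted(account_ids)
    ]

# Two transfers touching the same rows in opposite directions still
# lock in the same order, so they cannot deadlock on each other.
assert lock_statements([7, 3]) == lock_statements([3, 7])
```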
Handling Deadlocks in Application Code
Your application should be prepared to retry on deadlock errors:
from psycopg2 import errors

def transfer_funds(connection, from_account, to_account, amount):
    max_retries = 3
    for attempt in range(max_retries):
        try:
            with connection.cursor() as cursor:
                # lock the source row, then apply both sides of the transfer
                cursor.execute(
                    "SELECT balance FROM accounts WHERE id = %s FOR UPDATE",
                    [from_account],
                )
                cursor.execute(
                    "UPDATE accounts SET balance = balance - %s WHERE id = %s",
                    [amount, from_account],
                )
                cursor.execute(
                    "UPDATE accounts SET balance = balance + %s WHERE id = %s",
                    [amount, to_account],
                )
            connection.commit()
            return True
        except errors.DeadlockDetected:
            connection.rollback()  # the victim's work is undone; try again
    return False
Optimistic vs Pessimistic Concurrency Control
There are two philosophical approaches to concurrency control.
Pessimistic Concurrency Control
Assume conflicts will happen and prevent them with locks.
SELECT * FROM accounts WHERE id = 1 FOR UPDATE;
-- Other transactions must wait
Pessimistic locking guarantees no conflicts. It’s simple to reason about and works well under high contention. The downside is that locks block other transactions, which can cause waiting and deadlocks, and reduces throughput under low contention.
Optimistic Concurrency Control
Assume conflicts are rare, detect them when they occur, and retry.
-- Read current version
SELECT balance, version FROM accounts WHERE id = 1;
-- balance=1000, version=5
-- Update with version check
UPDATE accounts
SET balance = balance - 100, version = version + 1
WHERE id = 1 AND version = 5;
-- If version changed, update affects 0 rows -> retry
Optimistic concurrency acquires no locks, so throughput is higher under low contention and deadlocks are impossible. The downside is that conflicts trigger retries and wasted work when they are frequent, and the application logic is more complex.
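The retry loop this implies can be sketched in Python. Here an in-memory dict stands in for the accounts table so the sketch runs standalone; in real code the compare-and-swap step is the single atomic UPDATE ... WHERE version = ... shown above:

```python
accounts = {1: {"balance": 1000, "version": 5}}

def withdraw(account_id, amount, max_retries=3):
    for _ in range(max_retries):
        snapshot = dict(accounts[account_id])        # SELECT balance, version
        new_balance = snapshot["balance"] - amount
        # stand-in for: UPDATE ... WHERE id = %s AND version = %s
        if accounts[account_id]["version"] == snapshot["version"]:
            accounts[account_id] = {
                "balance": new_balance,
                "version": snapshot["version"] + 1,  # bump version on success
            }
            return True
        # 0 rows affected: someone committed first; re-read and retry
    return False

assert withdraw(1, 100)
assert accounts[1] == {"balance": 900, "version": 6}
```

The key property is that the version check and the write happen atomically in the database, so a concurrent writer can never sneak in between them.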
When to Use Each
Pessimistic is better when contention is high, conflicts are expensive (long transactions), or you need guaranteed ordering. Optimistic is better when contention is low, conflicts are cheap (short transactions), or you want maximum throughput.
Optimistic vs Pessimistic: Trade-off Summary
| Aspect | Pessimistic (FOR UPDATE) | Optimistic (version/SNAPSHOT) |
|---|---|---|
| Contention model | Locks block other txns | Conflicts detected at commit |
| Throughput under low contention | Lower (locks have overhead) | Higher (no locks) |
| Throughput under high contention | Lower (lock contention) | Much lower (many retries) |
| Transaction rollback cost | Low (blocked, not rolled back) | High (entire txn rolls back) |
| Complexity in app code | Lower (simple BEGIN/COMMIT) | Higher (retry loop required) |
| Risk of deadlocks | Higher | None (no locks) |
| Best for | High contention, long transactions | Low contention, short transactions |
Row-Level vs Page-Level vs Table-Level Locks
PostgreSQL takes row-level locks for DML and table-level locks (in the modes below) for DDL and maintenance operations; short-lived page-level locks exist internally, but row locks are never escalated into table locks.
Lock Hierarchy
From least to most restrictive:
- Access share — acquired by plain SELECT
- Row share — acquired by SELECT FOR SHARE and SELECT FOR UPDATE
- Row exclusive — acquired by UPDATE, DELETE, INSERT
- Share update exclusive — acquired by VACUUM, ANALYZE, CREATE INDEX CONCURRENTLY
- Share — acquired by CREATE INDEX (non-concurrent)
- Share row exclusive — acquired by CREATE TRIGGER
- Exclusive — acquired by REFRESH MATERIALIZED VIEW CONCURRENTLY
- Access exclusive — acquired by ALTER TABLE, DROP TABLE, TRUNCATE, VACUUM FULL
-- This acquires row-exclusive lock (fine)
UPDATE accounts SET balance = 900 WHERE id = 1;
-- This acquires access exclusive lock (blocks everything)
ALTER TABLE accounts ADD COLUMN new_col TEXT;
Lock Mode Compatibility
| Requested \ Held | Row Share | Row Excl | Share | Share Row Excl | Excl | Access Excl |
|---|---|---|---|---|---|---|
| Row Share | Yes | Yes | Yes | Yes | No | No |
| Row Excl | Yes | Yes | No | No | No | No |
| Share | Yes | No | Yes | No | No | No |
| Share Row Excl | Yes | No | No | No | No | No |
| Exclusive | No | No | No | No | No | No |
| Access Excl | No | No | No | No | No | No |
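For tooling or tests, the conflict rules can be encoded as data. A sketch following PostgreSQL's documented conflict table (the ACCESS SHARE and SHARE UPDATE EXCLUSIVE modes are omitted here, as in the matrix above; the lowercase mode names are our own convention):

```python
# For each requested mode, the set of held modes it conflicts with.
CONFLICTS = {
    "row_share": {"exclusive", "access_exclusive"},
    "row_exclusive": {"share", "share_row_exclusive",
                      "exclusive", "access_exclusive"},
    "share": {"row_exclusive", "share_row_exclusive",
              "exclusive", "access_exclusive"},
    "share_row_exclusive": {"row_exclusive", "share", "share_row_exclusive",
                            "exclusive", "access_exclusive"},
    "exclusive": {"row_share", "row_exclusive", "share",
                  "share_row_exclusive", "exclusive", "access_exclusive"},
    "access_exclusive": {"row_share", "row_exclusive", "share",
                         "share_row_exclusive", "exclusive",
                         "access_exclusive"},
}

def compatible(requested, held):
    """True if `requested` can be granted while `held` is already held."""
    return held not in CONFLICTS[requested]

# Two UPDATE transactions coexist at the table level (their row locks
# conflict only on the same rows), but DDL blocks everything.
assert compatible("row_exclusive", "row_exclusive")
assert not compatible("row_share", "access_exclusive")
```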
Advisory Locks
PostgreSQL provides advisory locks for application-level locking needs.
When to Use Advisory Locks
Use advisory locks when you need to coordinate work that doesn’t map naturally to table locks. Common uses are preventing duplicate batch processing, rate limiting, and resource reservation.
-- Acquire advisory lock
SELECT pg_advisory_lock(12345);
-- Do work
UPDATE accounts SET balance = balance - 100 WHERE id = 1;
-- Release
SELECT pg_advisory_unlock(12345);
Advisory Lock Functions
- pg_advisory_lock(key) — exclusive advisory lock (session-scoped)
- pg_advisory_lock_shared(key) — shared advisory lock
- pg_try_advisory_lock(key) — non-blocking exclusive lock
- pg_try_advisory_lock_shared(key) — non-blocking shared lock
- pg_advisory_xact_lock(key) — exclusive lock released automatically at transaction end
- pg_advisory_unlock(key) — release exclusive lock
- pg_advisory_unlock_all() — release all session-level locks held by the session
-- Non-blocking: returns true if acquired, false if not
SELECT pg_try_advisory_lock(12345);
-- Returns: true (or false if someone else holds it)
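Advisory lock keys are 64-bit integers, so applications commonly hash string resource names down to a bigint before calling these functions. A sketch of one such convention (the advisory_key helper and its hashing scheme are our own, not a PostgreSQL API):

```python
import hashlib
import struct

def advisory_key(name: str) -> int:
    """Map a string resource name to a stable signed 64-bit lock key."""
    digest = hashlib.sha256(name.encode("utf-8")).digest()
    # interpret the first 8 bytes of the hash as a signed big-endian int64,
    # which fits PostgreSQL's bigint advisory-lock key
    return struct.unpack(">q", digest[:8])[0]

key = advisory_key("batch:daily-report")
assert -(2 ** 63) <= key < 2 ** 63                 # valid bigint
assert key == advisory_key("batch:daily-report")   # deterministic
# cursor.execute("SELECT pg_try_advisory_lock(%s)", [key]) would then use it
```

Any stable mapping to a bigint works; the point is that both sides of a coordination protocol must derive the same key from the same resource name.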
Monitoring Locks
Viewing Active Locks
SELECT
l.locktype,
l.relation::regclass,
l.mode,
l.granted,
l.virtualxid,
l.transactionid,
p.pid,
p.usename,
p.query
FROM pg_locks l
JOIN pg_stat_activity p ON l.pid = p.pid
WHERE NOT l.pid = pg_backend_pid();
Viewing Lock Waits
SELECT
p.pid,
p.usename,
p.query,
l.locktype,
l.mode
FROM pg_locks l
JOIN pg_stat_activity p ON l.pid = p.pid
WHERE NOT l.granted
AND NOT l.pid = pg_backend_pid();
Killing a Blocking Process
-- Get the pid from the queries above
SELECT pg_terminate_backend(12345);
-- This rolls back the transaction and releases locks
Real-World Case Study: Amazon DynamoDB’s Lock-Free Design
Amazon DynamoDB chose lock-free concurrency for its distributed key-value store. The reasoning: at massive scale, managing locks across distributed nodes is expensive and creates single points of failure. Traditional database locks require a coordinator to manage shared state — that coordinator becomes a bottleneck and a fragility point.
DynamoDB uses optimistic concurrency control instead. You maintain a version attribute on each item (the AWS SDK object mappers can manage this for you) or use conditional writes with a ConditionExpression. When you update an item, you specify the expected current state. If the condition fails (another write committed first), DynamoDB returns a ConditionalCheckFailedException. Your application decides whether to retry.
This design choice shows up in their API directly:
from decimal import Decimal  # boto3 requires Decimal for DynamoDB numbers

response = table.update_item(
    Key={'pk': 'order123'},
    UpdateExpression='SET total = :new_total',
    ConditionExpression='total = :expected_total',  # optimistic lock
    ExpressionAttributeValues={
        ':new_total': Decimal('150.00'),
        ':expected_total': Decimal('100.00'),  # if this changed, fail
    },
)
The consequence: DynamoDB can scale horizontally without lock coordination overhead, and it never blocks a writer waiting for another writer. The tradeoff is that high-contention items (multiple writes to the same key simultaneously) generate many failed conditional writes and require application-level retries. For most real-world workloads this works well — contention is rare. For write-heavy hot keys, you need a separate strategy (write sharding, or accepting last-writer-wins semantics).
The lesson applies to PostgreSQL: optimistic locking via version columns gives you similar benefits in a single-node or primary-replica setup — no lock manager overhead, no blocking, and horizontal read scalability via MVCC. The cost is retry logic in your application and wasted work when conflicts are frequent.
Common Production Failures
Deadlock victim retry not implemented: Your code does a fund transfer with FOR UPDATE but does not retry on deadlock errors. When PostgreSQL rolls back a deadlock victim, the application throws an exception and the transfer is lost. Every write transaction using FOR UPDATE in a concurrent workload needs a retry loop; deadlocks are not a sign of bad code but an inevitable consequence of concurrent locking.
Long-running transactions holding locks: A report transaction reads rows with FOR SHARE, holds those locks for 10 minutes while generating a PDF, then commits. During that time, every concurrent update to the locked rows is blocked. The fix is to read with plain SELECT in READ COMMITTED, which takes no row locks thanks to MVCC, and to keep transactions as short as possible.
Optimistic locking without retry logic: You implement version-based optimistic concurrency but only check if the update succeeded once — you don’t retry. Concurrent updates fail silently (0 rows affected) and the user sees no feedback. You need a retry loop that re-reads the row, increments the version, and tries again.
FOR UPDATE on a scan that returns many rows: You write SELECT * FROM orders WHERE status = 'pending' FOR UPDATE expecting to lock just a few rows. If the table has millions of pending orders, you take and hold a row lock on every one of them until commit. PostgreSQL does not escalate this to a table lock, but the effect on writers is similar: every concurrent update to a pending order blocks. Lock only the specific rows you need, in small batches.
SELECT FOR SHARE blocking writes: On PostgreSQL, SELECT ... FOR SHARE holds a row-level share lock. Other transactions can read but cannot update. If your “read-only” report runs with FOR SHARE and the table has frequent updates, your report blocks writers. Use plain SELECT in READ COMMITTED mode unless you genuinely need to prevent concurrent updates.
Advisory lock not released on error: You acquire pg_advisory_lock but an exception occurs before pg_advisory_unlock. Session-level advisory locks survive ROLLBACK, so the lock is held until the session ends. If your connection pool recycles connections without resetting state, you can leak advisory locks across requests. Release the lock in a finally block, have the pool run DISCARD ALL on checkin, or use pg_advisory_xact_lock, which is released automatically when the transaction commits or rolls back.
Interview Questions
Q: You have two transactions both trying to update the same row. Transaction A reads the balance as 1000 and Transaction B reads it as 1000 at the same time. Both compute new balances and write. What happens under READ COMMITTED? How does FOR UPDATE change this?
Under READ COMMITTED, both transactions read 1000, both compute 1100 (for example, adding 100), and both write; the second write overwrites the first, so one update is lost. This is the "lost update" problem. To fix it, use SELECT ... FOR UPDATE before reading: the first transaction to lock the row forces the other's locking read to wait until it commits or rolls back. The waiting transaction then reads the value its peer committed and computes correctly. Alternatively, use REPEATABLE READ or SERIALIZABLE, where the second concurrent update fails with a serialization error and the application retries.
Q: A developer adds SELECT ... FOR UPDATE to lock rows during a batch job that processes 50,000 orders. The deployment hits production and suddenly the entire orders table is blocked for 30 seconds. What happened?
The FOR UPDATE on a scan that matches many rows — SELECT * FROM orders WHERE status = 'pending' FOR UPDATE — acquires row-level locks on every matching row. With 50,000 pending orders, that means 50,000 row locks held for the life of the transaction. Any other transaction trying to update or lock those rows waits; plain reads still proceed via MVCC, but to writers the table appears frozen. The fix is to process orders in small batches in separate transactions, for example with LIMIT ... FOR UPDATE SKIP LOCKED, so only a handful of rows are locked at any moment.
Q: What is the difference between pessimistic and optimistic concurrency control, and when would you choose one over the other?
Pessimistic locking assumes conflicts will happen and prevents them by acquiring locks before working. Optimistic locking assumes conflicts are rare and detects them at commit time. Choose pessimistic when contention is high (locks prevent wasted work from retries), when transactions are long (a retry after a long-running transaction wastes significant time), or when you need guaranteed ordering. Choose optimistic when contention is low (retries are rare), when transactions are short (retry cost is negligible), or when you want maximum throughput and can tolerate occasional failures. Most web applications with short transactions and moderate contention benefit from optimistic locking — but if you are processing financial transfers with high concurrency, pessimistic with explicit row locks is safer.
Q: Your application uses advisory locks for rate limiting, but after a few hours of running you notice that some requests are taking much longer than expected. Eventually you find that advisory locks are not being released. Walk through what happened.
Advisory locks acquired with pg_advisory_lock are session-scoped: they are released when the session ends or when you call pg_advisory_unlock, not when a transaction commits or rolls back. If an error occurs before the unlock call, the lock stays held for the lifetime of the database session. If your connection pool recycles sessions without resetting session state — or if the application server process is killed and restarted without closing connections cleanly — you can accumulate leaked advisory locks over time. New requests trying to acquire the same advisory lock will block waiting for a lock that nobody is using. The fix is to release the lock in a finally block, configure the pool to reset session state (for example with DISCARD ALL), or use pg_advisory_xact_lock, which is scoped to the transaction and released automatically at commit or rollback.
Conclusion
Locks are how databases make concurrent access safe. Shared locks allow readers to coexist; exclusive locks serialize writers. Deadlocks happen when transactions hold locks in inconsistent orders, but you can prevent them with consistent locking order and short transactions. Choose pessimistic locking when contention is high, optimistic when it’s low. PostgreSQL’s row-level locking avoids the escalation problems of other databases, and advisory locks give you flexibility for application-level coordination.
For more on keeping transactions safe, see Transaction Isolation and Relational Databases.