SQL Isolation Levels and Concurrency
When a single user runs queries against a database, everything is straightforward. But production databases serve hundreds or thousands of concurrent users, all reading and writing data simultaneously. What happens when two users try to modify the same row at the same time? What if one user reads data that another user is in the middle of changing? These are the questions that SQL isolation levels answer.
Isolation levels define the rules for how concurrent transactions interact with each other. They control what one transaction can see of another transaction's uncommitted or recently committed changes. Choosing the right isolation level is a balancing act: stronger isolation means fewer concurrency bugs but more performance overhead; weaker isolation means better performance but the risk of reading incorrect or inconsistent data.
This guide explains all four SQL standard isolation levels, demonstrates the three categories of concurrency problems they prevent (or allow), and provides practical guidance on choosing the right level for real-world scenarios. Every concept is illustrated with step-by-step timelines showing exactly what each transaction sees at each moment.
Why Isolation Matters
Consider this scenario at a ticket booking system:
User A checks: 2 seats remaining on Flight 101
User B checks: 2 seats remaining on Flight 101
User A books 2 seats → success (0 remaining)
User B books 2 seats → success?? (now -2 remaining!)
Without proper isolation, both users saw the same snapshot, both proceeded with their bookings, and the system oversold the flight. This is not a theoretical concern. It is the kind of bug that costs companies real money and real trust.
Isolation levels give you control over how aggressively the database prevents these conflicts, and understanding the trade-offs lets you make an informed choice for each part of your application.
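The overselling interleaving above can be reproduced outside any database. The following is a minimal Python sketch (a plain dictionary standing in for the flights table, with a hypothetical flight key) showing why check-then-act on a stale read oversells:

```python
# Shared state: 2 seats left on the hypothetical flight "flight_101"
seats = {"flight_101": 2}

def book(flight, n, observed):
    # The decision uses the value read EARLIER, not the current state
    if observed >= n:
        seats[flight] -= n
        return True
    return False

# Interleaving: both reads happen before either write
a_sees = seats["flight_101"]   # User A checks: 2 remaining
b_sees = seats["flight_101"]   # User B checks: 2 remaining
book("flight_101", 2, a_sees)  # A books -> 0 remaining
book("flight_101", 2, b_sees)  # B's stale check still passes -> -2 remaining!

print(seats["flight_101"])     # -2: the flight is oversold
```

Isolation levels (and, as shown later, guarded updates or explicit locks) are the database's tools for ruling out exactly this interleaving.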
The Sample Data
We will use these tables throughout the guide:
accounts table:
| id | owner | balance |
|---|---|---|
| 1 | Alice | 5000.00 |
| 2 | Bob | 3000.00 |
| 3 | Charlie | 1500.00 |
CREATE TABLE accounts (
id INT PRIMARY KEY,
owner VARCHAR(50) NOT NULL,
balance DECIMAL(12,2) NOT NULL
);
INSERT INTO accounts (id, owner, balance) VALUES
(1, 'Alice', 5000.00),
(2, 'Bob', 3000.00),
(3, 'Charlie', 1500.00);
products table:
| id | name | stock | price |
|---|---|---|---|
| 101 | Widget Pro | 10 | 79.99 |
| 102 | Gadget Max | 5 | 149.99 |
CREATE TABLE products (
id INT PRIMARY KEY,
name VARCHAR(100) NOT NULL,
stock INT NOT NULL,
price DECIMAL(10,2) NOT NULL
);
INSERT INTO products (id, name, stock, price) VALUES
(101, 'Widget Pro', 10, 79.99),
(102, 'Gadget Max', 5, 149.99);
orders table:
| id | customer_id | amount | status |
|---|---|---|---|
| 1 | 1 | 250.00 | completed |
| 2 | 1 | 180.00 | completed |
| 3 | 2 | 320.00 | completed |
CREATE TABLE orders (
id INT PRIMARY KEY,
customer_id INT NOT NULL,
amount DECIMAL(12,2) NOT NULL,
status VARCHAR(20) NOT NULL,
CONSTRAINT fk_orders_customer
FOREIGN KEY (customer_id)
REFERENCES accounts(id)
);
INSERT INTO orders (id, customer_id, amount, status) VALUES
(1, 1, 250.00, 'completed'),
(2, 1, 180.00, 'completed'),
(3, 2, 320.00, 'completed');
The Three Concurrency Problems
Before diving into isolation levels, you need to understand the three specific problems that can occur when transactions run concurrently. Each isolation level prevents a different subset of these problems.
1. Dirty Read
A dirty read occurs when a transaction reads data that has been modified by another transaction but not yet committed. If that other transaction rolls back, the first transaction has read data that never actually existed in the database.
Timeline:
═══════════════════════════════════════════════════════════════
Transaction A Transaction B
───────────── ─────────────
BEGIN;
UPDATE accounts
SET balance = 1000
WHERE id = 1;
-- Alice's balance changed from 5000
-- to 1000 (NOT YET COMMITTED)
BEGIN;
SELECT balance FROM accounts
WHERE id = 1;
-- Returns: 1000 ← DIRTY READ!
-- Transaction B sees uncommitted data
-- and uses it for decisions...
COMMIT;
ROLLBACK;
-- Alice's balance goes back to 5000
-- But Transaction B already acted on 1000!
Why it is dangerous: Transaction B made decisions based on a balance of 1000, but that value never actually existed in committed data. Alice's balance was always 5000 (the update was rolled back). Any logic Transaction B performed with the value 1000 is based on fiction.
Real-world example: A reporting query runs during a batch update, reads partially modified data, and generates an inaccurate financial report that gets sent to executives.
2. Non-Repeatable Read
A non-repeatable read occurs when a transaction reads the same row twice and gets different values because another transaction modified and committed the row between the two reads.
Timeline:
═══════════════════════════════════════════════════════════════
Transaction A Transaction B
───────────── ─────────────
BEGIN;
SELECT balance FROM accounts
WHERE id = 1;
-- Returns: 5000
BEGIN;
UPDATE accounts
SET balance = 4000
WHERE id = 1;
COMMIT;
-- Change is committed and permanent
SELECT balance FROM accounts
WHERE id = 1;
-- Returns: 4000 ← DIFFERENT VALUE!
-- Same query, same transaction,
-- but the result changed!
COMMIT;
Why it is dangerous: Transaction A is performing a multi-step calculation that depends on Alice's balance being consistent. First it read 5000 and used that in a computation. Then it read 4000 from the same row. The two values contradict each other within the same transaction, potentially leading to incorrect results.
Real-world example: A transfer transaction reads an account balance to verify sufficient funds (sees $5000), then another transaction deducts money, and when the transfer actually executes, the balance is no longer what it expected.
3. Phantom Read
A phantom read occurs when a transaction executes the same query twice and gets a different set of rows because another transaction inserted or deleted rows that match the query's condition and committed between the two reads.
Timeline:
═══════════════════════════════════════════════════════════════
Transaction A Transaction B
───────────── ─────────────
BEGIN;
SELECT COUNT(*) FROM orders
WHERE customer_id = 1;
-- Returns: 2 (orders #1 and #2)
BEGIN;
INSERT INTO orders
(id, customer_id, amount, status)
VALUES (4, 1, 400.00, 'completed');
COMMIT;
SELECT COUNT(*) FROM orders
WHERE customer_id = 1;
-- Returns: 3 ← PHANTOM ROW!
-- A new row appeared that wasn't
-- there before, like a ghost
COMMIT;
Why it is dangerous: Transaction A is computing a customer's total order value for a credit check. The first query says 2 orders totaling $430. A new order is inserted. The second query says 3 orders totaling $830. The credit check is now inconsistent because it used different data at different points.
Real-world example: A report calculates department headcount as 50, then calculates total salaries for 51 people (someone was hired between the two queries). The average salary calculation is wrong because the numerator and denominator came from different snapshots.
Summary of Concurrency Problems
| Problem | What Happens | Data Source |
|---|---|---|
| Dirty Read | Read uncommitted data from another transaction | Other transaction's uncommitted changes |
| Non-Repeatable Read | Same row returns different values in same transaction | Other transaction's committed changes to existing rows |
| Phantom Read | Same query returns different rows in same transaction | Other transaction's committed inserts or deletes |
The Four SQL Isolation Levels
The SQL standard defines four isolation levels, each preventing an increasing number of concurrency problems. Think of them as a dial going from "maximum performance, minimum safety" to "maximum safety, minimum performance."
Quick Reference Matrix
| Isolation Level | Dirty Read | Non-Repeatable Read | Phantom Read |
|---|---|---|---|
| READ UNCOMMITTED | ⚠️ Possible | ⚠️ Possible | ⚠️ Possible |
| READ COMMITTED | ✅ Prevented | ⚠️ Possible | ⚠️ Possible |
| REPEATABLE READ | ✅ Prevented | ✅ Prevented | ⚠️ Possible* |
| SERIALIZABLE | ✅ Prevented | ✅ Prevented | ✅ Prevented |
*PostgreSQL's REPEATABLE READ implementation also prevents phantom reads, going beyond the SQL standard.
Setting Isolation Levels
-- PostgreSQL
BEGIN;
SET TRANSACTION ISOLATION LEVEL READ COMMITTED;
-- ... queries ...
COMMIT;
-- Or set for the entire session:
SET default_transaction_isolation = 'repeatable read';
-- MySQL
SET TRANSACTION ISOLATION LEVEL REPEATABLE READ;
START TRANSACTION;
-- ... queries ...
COMMIT;
-- Or set globally/session-level:
SET SESSION TRANSACTION ISOLATION LEVEL READ COMMITTED;
-- SQL Server
SET TRANSACTION ISOLATION LEVEL SERIALIZABLE;
BEGIN TRANSACTION;
-- ... queries ...
COMMIT TRANSACTION;
-- Oracle
SET TRANSACTION ISOLATION LEVEL SERIALIZABLE;
-- or
SET TRANSACTION ISOLATION LEVEL READ COMMITTED;
-- (Oracle supports only these two isolation levels,
-- plus a separate READ ONLY transaction mode)
Default Isolation Levels
| Database | Default Level |
|---|---|
| PostgreSQL | READ COMMITTED |
| MySQL (InnoDB) | REPEATABLE READ |
| SQL Server | READ COMMITTED |
| Oracle | READ COMMITTED |
| SQLite | SERIALIZABLE |
READ UNCOMMITTED
READ UNCOMMITTED is the weakest isolation level. A transaction can see uncommitted changes made by other transactions. This means dirty reads, non-repeatable reads, and phantom reads are all possible.
How It Works
-- Transaction A
SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED;
BEGIN;
SELECT balance FROM accounts WHERE id = 1;
-- Can see uncommitted changes from other transactions
Demonstration
Transaction A (READ UNCOMMITTED) Transaction B
──────────────────────────── ─────────────
BEGIN;
BEGIN;
UPDATE accounts
SET balance = 999
WHERE id = 1;
-- NOT committed yet
SELECT balance FROM accounts
WHERE id = 1;
-- Returns: 999 ← DIRTY READ
-- Sees B's uncommitted change
ROLLBACK;
-- B's change is undone
-- Balance is back to 5000
SELECT balance FROM accounts
WHERE id = 1;
-- Returns: 5000
-- The 999 value that A read earlier
-- never actually existed!
COMMIT;
When to Use READ UNCOMMITTED
Almost never. The only legitimate use case is approximate analytics on very large datasets where:
- Absolute accuracy is not required
- The performance cost of stronger isolation is prohibitive
- You are running read-only queries and understand the data may be slightly wrong
-- Example: Rough estimate of total inventory value (approximate is acceptable)
SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED;
SELECT SUM(stock * price) AS approx_total_value FROM products;
PostgreSQL does not actually implement READ UNCOMMITTED. If you set it, PostgreSQL silently upgrades to READ COMMITTED. PostgreSQL's architecture (MVCC) makes dirty reads unnecessary and impractical. SQL Server does support true READ UNCOMMITTED, most often applied through the NOLOCK table hint on reporting queries.
SQL Server NOLOCK Hint
In SQL Server, READ UNCOMMITTED is commonly used via the NOLOCK table hint:
-- SQL Server: equivalent to READ UNCOMMITTED for this table
SELECT * FROM accounts WITH (NOLOCK) WHERE id = 1;
This is widely used in SQL Server environments for reporting queries, but it is also widely criticized because it can return incorrect data. Use it only when you fully understand the trade-off.
READ COMMITTED
READ COMMITTED is the default isolation level in PostgreSQL, SQL Server, and Oracle. A transaction can only see data that has been committed by other transactions. Dirty reads are prevented, but non-repeatable reads and phantom reads are still possible.
How It Works
Each individual statement within the transaction sees the latest committed data at the moment that statement begins execution. Different statements within the same transaction may see different snapshots.
Demonstration: Dirty Reads Prevented
Transaction A (READ COMMITTED) Transaction B
─────────────────────────── ─────────────
BEGIN;
BEGIN;
UPDATE accounts
SET balance = 999
WHERE id = 1;
-- NOT committed yet
SELECT balance FROM accounts
WHERE id = 1;
-- Returns: 5000 ← Correct!
-- Does NOT see B's uncommitted change
-- Dirty read prevented!
COMMIT;
-- NOW the change is committed
SELECT balance FROM accounts
WHERE id = 1;
-- Returns: 999
-- Sees B's committed change
-- This is a NON-REPEATABLE READ
-- (different value for the same query)
COMMIT;
The dirty read is prevented. Transaction A did not see the uncommitted value of 999. But notice that the two SELECT statements returned different values (5000 then 999). This is a non-repeatable read, which READ COMMITTED allows.
Demonstration: Non-Repeatable Read Still Possible
Transaction A (READ COMMITTED) Transaction B
─────────────────────────── ─────────────
BEGIN;
SELECT balance FROM accounts
WHERE id = 1;
-- Returns: 5000
BEGIN;
UPDATE accounts
SET balance = 4500
WHERE id = 1;
COMMIT;
-- A needs Alice's balance again for a calculation
SELECT balance FROM accounts
WHERE id = 1;
-- Returns: 4500 ← DIFFERENT!
-- Non-repeatable read: same row, different value
-- A's calculation may now be inconsistent
-- because it used 5000 earlier and 4500 now
COMMIT;
When READ COMMITTED Is Appropriate
READ COMMITTED is suitable for most OLTP (transactional) applications:
- Web application CRUD operations
- Simple data modifications
- Operations where each statement is independent
- Situations where "seeing the latest committed data" is the desired behavior
It is the right default for most applications. You only need stronger isolation for specific operations that require multi-statement consistency.
REPEATABLE READ
REPEATABLE READ guarantees that if a transaction reads a row, it will see the same value for that row for the entire duration of the transaction, regardless of other transactions committing changes. The transaction sees a consistent snapshot taken at the time of its first read.
How It Works
At the start of the transaction (or at the first query), the database takes a snapshot of all committed data. For the rest of the transaction, all queries see this snapshot, even if other transactions commit changes in the meantime.
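A toy model makes the snapshot mechanics concrete. This sketch is not any engine's actual implementation: it keeps a list of versions per row, each tagged with a commit timestamp, and a REPEATABLE READ transaction pins the timestamp of its first read and always reads as of that timestamp:

```python
# Multi-version store: row key -> list of (commit_ts, value)
versions = {("accounts", 1): [(0, 5000)]}
clock = [0]  # global commit counter

def commit_write(key, value):
    clock[0] += 1
    versions[key].append((clock[0], value))

def read_as_of(key, snapshot_ts):
    # Newest version committed at or before the snapshot timestamp
    return max(v for v in versions[key] if v[0] <= snapshot_ts)[1]

# Transaction A pins its snapshot at the current clock
snapshot_ts = clock[0]
print(read_as_of(("accounts", 1), snapshot_ts))  # 5000

# Transaction B commits balance = 4500, then another commit sets 3000
commit_write(("accounts", 1), 4500)
commit_write(("accounts", 1), 3000)

# A still reads its frozen snapshot; a NEW transaction sees the latest commit
print(read_as_of(("accounts", 1), snapshot_ts))  # 5000
print(read_as_of(("accounts", 1), clock[0]))     # 3000
```

Keeping old versions around (rather than overwriting in place) is what lets readers see a stable past without blocking writers.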
Demonstration: Non-Repeatable Reads Prevented
Transaction A (REPEATABLE READ) Transaction B
──────────────────────────── ─────────────
BEGIN;
SET TRANSACTION ISOLATION LEVEL
REPEATABLE READ;
SELECT balance FROM accounts
WHERE id = 1;
-- Returns: 5000
-- Snapshot taken at this point
BEGIN;
UPDATE accounts
SET balance = 4500
WHERE id = 1;
COMMIT;
-- Committed! Balance is now 4500
-- in the actual table
SELECT balance FROM accounts
WHERE id = 1;
-- Returns: 5000 ← SAME VALUE!
-- Transaction A still sees its snapshot
-- Non-repeatable read PREVENTED
UPDATE accounts
SET balance = 3000
WHERE id = 1;
-- Another change committed
SELECT balance FROM accounts
WHERE id = 1;
-- Returns: 5000 ← STILL the same!
-- Snapshot is consistent throughout
COMMIT;
-- After commit, A would see 3000 in a new transaction
Transaction A sees a frozen snapshot. No matter how many times other transactions modify Alice's balance, Transaction A always sees 5000. The non-repeatable read is prevented.
Phantom Reads Under REPEATABLE READ
According to the SQL standard, phantom reads are still possible at this level. However, the actual behavior depends on your database:
MySQL InnoDB (Standard Behavior: Phantoms Possible)
In MySQL InnoDB, the MVCC snapshot prevents phantoms for ordinary SELECTs, while gap locks protect locking reads and writes. In practice most phantom reads never appear, but mixing plain reads with locking reads (FOR UPDATE, FOR SHARE) can still surface them.
Transaction A (REPEATABLE READ) Transaction B
──────────────────────────── ─────────────
BEGIN;
SELECT COUNT(*) FROM orders
WHERE customer_id = 1;
-- Returns: 2
BEGIN;
INSERT INTO orders
(id, customer_id, amount, status)
VALUES (4, 1, 400.00, 'completed');
COMMIT;
SELECT COUNT(*) FROM orders
WHERE customer_id = 1;
-- MySQL InnoDB: Returns 2 (snapshot)
-- Phantom read prevented in practice
-- due to MVCC snapshot
-- BUT: If Transaction A does a locking read:
SELECT COUNT(*) FROM orders
WHERE customer_id = 1
FOR UPDATE;
-- Returns: 3 ← PHANTOM!
-- Locking reads see the latest committed data
COMMIT;
PostgreSQL (Stronger Than Standard: Phantoms Prevented)
PostgreSQL's REPEATABLE READ is implemented as snapshot isolation: every query in the transaction reads from a single snapshot taken at its first query, which prevents phantom reads entirely (full Serializable Snapshot Isolation is reserved for the SERIALIZABLE level). The snapshot is fully consistent:
Transaction A (REPEATABLE READ) Transaction B
──────────────────────────── ─────────────
BEGIN;
SET TRANSACTION ISOLATION LEVEL
REPEATABLE READ;
SELECT COUNT(*) FROM orders
WHERE customer_id = 1;
-- Returns: 2
BEGIN;
INSERT INTO orders
(id, customer_id, amount, status)
VALUES (4, 1, 400.00, 'completed');
COMMIT;
SELECT COUNT(*) FROM orders
WHERE customer_id = 1;
-- Returns: 2 ← Still 2!
-- PostgreSQL prevents phantom reads
-- even at REPEATABLE READ
COMMIT;
Write Conflicts Under REPEATABLE READ
An important behavior: if Transaction A tries to update a row that was modified by another committed transaction after A's snapshot was taken, the database detects a conflict.
PostgreSQL:
Transaction A (REPEATABLE READ) Transaction B
──────────────────────────── ─────────────
BEGIN;
SET TRANSACTION ISOLATION LEVEL
REPEATABLE READ;
SELECT balance FROM accounts
WHERE id = 1;
-- Returns: 5000 (snapshot)
BEGIN;
UPDATE accounts
SET balance = 4500
WHERE id = 1;
COMMIT;
UPDATE accounts
SET balance = balance - 200
WHERE id = 1;
-- ERROR: could not serialize access
-- due to concurrent update
-- Transaction A must ROLLBACK and retry
ROLLBACK;
PostgreSQL aborts the transaction because it detected a write conflict with the snapshot. The application must retry the entire transaction.
MySQL InnoDB:
MySQL handles this differently. It allows the update to proceed but applies it to the latest committed version of the row, not the snapshot version:
-- MySQL: The UPDATE sees the latest committed value
UPDATE accounts SET balance = balance - 200 WHERE id = 1;
-- Uses balance = 4500 (latest committed), not 5000 (snapshot)
-- Result: balance = 4300
This difference in write conflict handling is one of the most important practical differences between PostgreSQL and MySQL at the REPEATABLE READ level. PostgreSQL fails fast and forces a retry. MySQL silently uses the latest data, which may or may not be what your application expects.
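The two strategies can be sketched side by side. This is an illustrative Python model, not either engine's real code: the PostgreSQL-style update rejects the write when the row was committed after the transaction's snapshot ("first committer wins"), while the MySQL-style update simply applies the change on top of the latest committed value:

```python
class SerializationFailure(Exception):
    pass

clock = [0]  # global commit counter
row = {"balance": 5000, "last_commit_ts": 0}

def update_postgres_style(row, snapshot_ts, delta):
    # Abort if another transaction committed this row after our snapshot
    if row["last_commit_ts"] > snapshot_ts:
        raise SerializationFailure("could not serialize access")
    clock[0] += 1
    row["balance"] += delta
    row["last_commit_ts"] = clock[0]

def update_mysql_style(row, delta):
    # Apply the change to the latest committed value, whatever it is
    clock[0] += 1
    row["balance"] += delta
    row["last_commit_ts"] = clock[0]

snapshot_ts = clock[0]         # Transaction A snapshots: balance = 5000
update_mysql_style(row, -500)  # Transaction B commits first: 4500

try:
    update_postgres_style(row, snapshot_ts, -200)
except SerializationFailure as exc:
    print("PostgreSQL:", exc)  # aborts; the application must retry

update_mysql_style(row, -200)  # MySQL-style: proceeds from 4500
print("MySQL:", row["balance"])  # 4300
```

The retry obligation in the PostgreSQL-style path is why the retry-loop pattern shown later in this guide is not optional at these isolation levels.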
When REPEATABLE READ Is Appropriate
- Reports and analytics that run multiple queries and need consistent data throughout
- Balance calculations that read values, compute results, and write back
- Any multi-step read where seeing different data midway would produce incorrect results
-- Good use case: monthly report that makes multiple queries
-- All queries must see the same consistent snapshot
BEGIN;
SET TRANSACTION ISOLATION LEVEL REPEATABLE READ;
SELECT SUM(amount) AS total_revenue FROM orders WHERE status = 'completed';
SELECT COUNT(DISTINCT customer_id) AS unique_customers FROM orders;
SELECT AVG(amount) AS avg_order_value FROM orders WHERE status = 'completed';
-- All three queries see the same data, even if new orders are being
-- inserted concurrently
COMMIT;
SERIALIZABLE
SERIALIZABLE is the strongest isolation level. It guarantees that concurrent transactions produce the same result as if they had been executed one at a time, in some serial order. All three concurrency problems (dirty reads, non-repeatable reads, and phantom reads) are prevented.
How It Works
The database ensures that the outcome of concurrent transactions is equivalent to some serial execution of those transactions. If the database cannot guarantee this, it will abort one of the conflicting transactions and force it to retry.
Demonstration: All Anomalies Prevented
Transaction A (SERIALIZABLE) Transaction B (SERIALIZABLE)
───────────────────────── ─────────────────────────
BEGIN;
SET TRANSACTION ISOLATION LEVEL
SERIALIZABLE;
BEGIN;
SET TRANSACTION ISOLATION LEVEL
SERIALIZABLE;
SELECT SUM(balance) FROM accounts;
-- Returns: 9500 (5000 + 3000 + 1500)
SELECT SUM(balance) FROM accounts;
-- Returns: 9500
INSERT INTO accounts (id, owner, balance)
VALUES (4, 'Diana', 2000);
INSERT INTO accounts (id, owner, balance)
VALUES (5, 'Eve', 3000);
COMMIT;
-- Succeeds
COMMIT;
-- ERROR: could not serialize access
-- due to read/write dependencies
-- among transactions
-- Transaction B must retry!
The database detected that both transactions read the full accounts table (via SUM) and then both inserted new rows. If both commits succeeded, each transaction's SUM would be "wrong" because it did not include the other transaction's insert. This is a serialization anomaly, and the SERIALIZABLE level prevents it by aborting one of the transactions.
The Classic Write Skew Example
Write skew is an anomaly that only SERIALIZABLE prevents. It occurs when two transactions each read overlapping data, make decisions based on what they read, and write to different rows, creating a state that neither transaction would have allowed individually.
Scenario: A hospital rule says at least one doctor must be on call at all times. Currently, Dr. Alice and Dr. Bob are both on call.
-- doctors table
-- | id | name | on_call |
-- |----|-------|---------|
-- | 1 | Alice | true |
-- | 2 | Bob | true |
Transaction A (Dr. Alice) Transaction B (Dr. Bob)
───────────────────────── ─────────────────────────
BEGIN; BEGIN;
SELECT COUNT(*) FROM doctors
WHERE on_call = true;
-- Returns: 2
-- "Someone else is on call,
-- I can go off call"
SELECT COUNT(*) FROM doctors
WHERE on_call = true;
-- Returns: 2
-- "Someone else is on call,
-- I can go off call"
UPDATE doctors
SET on_call = false
WHERE id = 1; -- Alice goes off call
UPDATE doctors
SET on_call = false
WHERE id = 2; -- Bob goes off call
COMMIT;
COMMIT;
Result without SERIALIZABLE: Both transactions succeed. Both doctors are now off call. Nobody is on call. The business rule is violated, even though each transaction independently verified the rule.
Result with SERIALIZABLE: One of the transactions would be aborted by the database, preventing the violation. The application retries the aborted transaction, which would now see only 1 doctor on call and refuse to proceed.
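The anomaly is easy to reproduce with two snapshots of a plain dictionary. This sketch models each transaction's reads as an independent snapshot taken before either write, so each invariant check passes in isolation while the combined result breaks the rule:

```python
doctors = {1: True, 2: True}  # id -> on_call (Alice and Bob)

def snapshot():
    # Each transaction reads from its own consistent snapshot
    return dict(doctors)

def go_off_call(doctor_id, snap):
    # Invariant check: someone ELSE must still be on call
    others = sum(1 for i, on in snap.items() if on and i != doctor_id)
    if others >= 1:
        doctors[doctor_id] = False  # write to a DIFFERENT row than we checked
        return True
    return False

snap_a = snapshot()      # both reads happen before either write
snap_b = snapshot()
go_off_call(1, snap_a)   # Alice: "Bob is still on call" -> goes off
go_off_call(2, snap_b)   # Bob: "Alice is still on call" -> goes off

print(sum(doctors.values()))  # 0 -> nobody on call; invariant violated
```

Because each transaction wrote a row the other one only read, no single-row conflict exists for weaker levels to detect; only SERIALIZABLE's dependency tracking catches the cycle.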
Performance Impact of SERIALIZABLE
SERIALIZABLE has the highest overhead because the database must track dependencies between transactions and detect potential anomalies:
| Mechanism | Database | Description |
|---|---|---|
| SSI (Serializable Snapshot Isolation) | PostgreSQL | Tracks read/write dependencies; aborts conflicting transactions |
| Locking | SQL Server | Uses range locks, key locks; transactions may block each other |
| Gap Locking | MySQL | Extends range locks to prevent phantoms; can cause lock waits |
| Snapshot isolation | Oracle | Transaction-wide snapshot; raises a serialization error (ORA-08177) on conflicting writes |
-- PostgreSQL: SERIALIZABLE uses SSI
-- Transactions are NOT blocked; they proceed optimistically
-- If a conflict is detected at COMMIT, one transaction is aborted
-- The application MUST be prepared to retry
-- SQL Server: SERIALIZABLE uses locking
-- Transactions may BLOCK each other, waiting for locks to be released
-- Less need for retries, but more potential for deadlocks
When SERIALIZABLE Is Appropriate
- Financial transactions where correctness is paramount
- Inventory reservation systems where overselling is unacceptable
- Scheduling systems with complex constraints (the doctor on-call example)
- Any operation where the business logic depends on multiple reads producing a consistent picture
-- Seat reservation: must not oversell
BEGIN;
SET TRANSACTION ISOLATION LEVEL SERIALIZABLE;
SELECT available_seats FROM flights WHERE flight_id = 101;
-- Returns: 2
-- Application logic: if available_seats >= requested_seats, proceed
UPDATE flights
SET available_seats = available_seats - 2
WHERE flight_id = 101 AND available_seats >= 2;
COMMIT;
-- If another transaction also reserved seats concurrently,
-- one of them will be aborted, preventing overselling
Implementation Differences by Database
How each database actually implements isolation levels varies significantly. Understanding these differences is crucial if you work across multiple databases.
PostgreSQL: MVCC (Multi-Version Concurrency Control)
PostgreSQL uses MVCC at all levels. Instead of locking rows, it maintains multiple versions of each row. Readers never block writers, and writers never block readers.
| Level | PostgreSQL Behavior |
|---|---|
| READ UNCOMMITTED | Treated as READ COMMITTED (dirty reads never happen) |
| READ COMMITTED | Each statement sees its own snapshot of committed data |
| REPEATABLE READ | Entire transaction sees a single snapshot; prevents phantoms too |
| SERIALIZABLE | SSI tracks dependencies; aborts on conflict; requires retry logic |
-- PostgreSQL: Transactions don't block each other (MVCC)
-- But conflicting writes at REPEATABLE READ/SERIALIZABLE cause aborts
-- Application MUST retry aborted transactions:
-- Retry loop (pseudocode)
LOOP:
BEGIN;
SET TRANSACTION ISOLATION LEVEL SERIALIZABLE;
-- ... do work ...
COMMIT;
IF success THEN EXIT LOOP;
-- If serialization failure, retry
END LOOP;
MySQL InnoDB: MVCC + Locking
MySQL uses MVCC for reads and locking for writes. The behavior differs notably from PostgreSQL:
| Level | MySQL InnoDB Behavior |
|---|---|
| READ UNCOMMITTED | Truly allows dirty reads |
| READ COMMITTED | Each statement gets a fresh snapshot |
| REPEATABLE READ | Snapshot at first read; gap locks prevent most phantoms; current reads (FOR UPDATE) see latest data |
| SERIALIZABLE | All reads become SELECT ... FOR SHARE; heavy locking |
-- MySQL: At SERIALIZABLE, every SELECT implicitly becomes
-- SELECT ... FOR SHARE (lock in share mode)
-- This means reads can BLOCK writes on the same rows
SET TRANSACTION ISOLATION LEVEL SERIALIZABLE;
START TRANSACTION;
SELECT * FROM accounts WHERE id = 1;
-- MySQL acquires a shared lock on this row
-- Other transactions trying to UPDATE this row will BLOCK
-- until this transaction commits or rolls back
COMMIT;
SQL Server: Locking (Default) or MVCC (RCSI/SI)
SQL Server traditionally uses locking for all isolation levels. Readers block writers, and writers block readers. However, SQL Server also offers two snapshot-based modes:
| Mode | Description |
|---|---|
| Default (locking) | Readers acquire shared locks, blocking writers |
| READ COMMITTED SNAPSHOT (RCSI) | READ COMMITTED uses MVCC snapshots instead of locks |
| SNAPSHOT ISOLATION | Similar to REPEATABLE READ with MVCC; requires database-level setting |
-- SQL Server: Enable snapshot-based READ COMMITTED (database level)
ALTER DATABASE MyDatabase SET READ_COMMITTED_SNAPSHOT ON;
-- Now READ COMMITTED behaves like PostgreSQL's: readers don't block writers
-- SQL Server: Enable SNAPSHOT isolation level (database level)
ALTER DATABASE MyDatabase SET ALLOW_SNAPSHOT_ISOLATION ON;
-- Now you can use:
SET TRANSACTION ISOLATION LEVEL SNAPSHOT;
BEGIN TRANSACTION;
-- ... snapshot-based reads ...
COMMIT;
Cross-Database Comparison
| Behavior | PostgreSQL | MySQL InnoDB | SQL Server (Default) |
|---|---|---|---|
| Readers block writers? | Never | Only at SERIALIZABLE (locking reads) | Yes (shared locks) |
| Writers block readers? | Never | Never (MVCC reads) | Yes (exclusive locks) |
| Dirty reads possible? | Never (even at READ UNCOMMITTED) | Yes (at READ UNCOMMITTED) | Yes (at READ UNCOMMITTED) |
| Conflict resolution | Abort + retry | Locking + wait | Locking + wait |
| Phantom prevention at REPEATABLE READ | Yes (beyond standard) | Mostly (gap locks) | No (standard behavior) |
Practical Guidelines
Choosing the Right Isolation Level
Isolation Level Decision Guide:
Q: Does your transaction involve only a single statement?
→ READ COMMITTED is fine (auto-commit handles it)
Q: Do you read the same data multiple times in one transaction?
- Yes: use REPEATABLE READ for consistent snapshots
- No: READ COMMITTED is fine
Q: Do you read a SET of rows (COUNT, SUM, range queries) and then make decisions based on that set?
- Yes: consider SERIALIZABLE to prevent phantoms and write skew
- No: REPEATABLE READ is sufficient
Q: Is absolute correctness more important than performance?
- Yes: use SERIALIZABLE with retry logic
- No: READ COMMITTED or REPEATABLE READ
Q: Approximate analytics on large data sets?
→ READ UNCOMMITTED (SQL Server) or READ COMMITTED
Mix Isolation Levels Within an Application
You do not need to use the same isolation level everywhere. Most applications use READ COMMITTED as the default and upgrade to a stronger level only for specific critical operations:
-- Normal CRUD operations: READ COMMITTED (default)
BEGIN;
UPDATE users SET last_login = NOW() WHERE id = 42;
COMMIT;
-- Monthly financial report: REPEATABLE READ
BEGIN;
SET TRANSACTION ISOLATION LEVEL REPEATABLE READ;
SELECT SUM(amount) FROM transactions WHERE month = '2024-03';
SELECT COUNT(*) FROM transactions WHERE month = '2024-03';
-- Both queries see the same snapshot
COMMIT;
-- Inventory reservation: SERIALIZABLE
BEGIN;
SET TRANSACTION ISOLATION LEVEL SERIALIZABLE;
SELECT stock FROM products WHERE id = 101;
UPDATE products SET stock = stock - 1 WHERE id = 101 AND stock >= 1;
COMMIT;
-- Retry if serialization failure
Implementing Retry Logic
When using REPEATABLE READ or SERIALIZABLE, your application must handle serialization failures by retrying the transaction:
# Python retry pattern for SERIALIZABLE transactions
import psycopg2
import time
MAX_RETRIES = 3
def transfer_funds(conn, from_id, to_id, amount):
for attempt in range(MAX_RETRIES):
try:
with conn:
with conn.cursor() as cur:
cur.execute(
"SET TRANSACTION ISOLATION LEVEL SERIALIZABLE"
)
cur.execute(
"UPDATE accounts SET balance = balance - %s "
"WHERE id = %s AND balance >= %s",
(amount, from_id, amount)
)
if cur.rowcount == 0:
raise ValueError("Insufficient funds")
cur.execute(
"UPDATE accounts SET balance = balance + %s "
"WHERE id = %s",
(amount, to_id)
)
return # Success!
except psycopg2.errors.SerializationFailure:
conn.rollback()
if attempt < MAX_RETRIES - 1:
time.sleep(0.1 * (attempt + 1)) # Backoff
continue
raise # Max retries exceeded
Locking and Isolation: SELECT FOR UPDATE
In addition to isolation levels, you can use explicit locking to provide stronger guarantees for specific rows:
-- SELECT FOR UPDATE: Locks the selected rows until COMMIT/ROLLBACK
-- Prevents other transactions from modifying these rows
BEGIN;
SELECT balance FROM accounts
WHERE id = 1
FOR UPDATE;
-- Row is now LOCKED. Other transactions trying to
-- UPDATE or SELECT FOR UPDATE on this row will WAIT.
-- Safely modify the row knowing nobody else can change it
UPDATE accounts SET balance = balance - 500 WHERE id = 1;
COMMIT;
-- Lock released
| Locking Clause | Who Is Blocked |
|---|---|
| FOR UPDATE | Other UPDATE, DELETE, and FOR UPDATE on the same rows |
| FOR SHARE / FOR KEY SHARE | Other UPDATE and DELETE, but not other FOR SHARE reads |
| FOR UPDATE NOWAIT | Same as FOR UPDATE, but returns an error immediately instead of waiting |
| FOR UPDATE SKIP LOCKED | Same as FOR UPDATE, but skips already-locked rows (great for job queues) |
Job Queue Pattern with SKIP LOCKED
-- Worker picks up the next available job, skipping jobs claimed by others
BEGIN;
SELECT id, payload FROM jobs
WHERE status = 'pending'
ORDER BY created_at
LIMIT 1
FOR UPDATE SKIP LOCKED;
-- If another worker already locked the first job, skip to the next one
UPDATE jobs SET status = 'processing' WHERE id = <selected_id>;
COMMIT;
This pattern enables multiple workers to process jobs concurrently without conflicts.
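The claim-without-waiting behavior can be mimicked in plain Python with non-blocking locks. This is only an in-process analogue (a threading.Lock stands in for a row lock, and releasing it stands in for COMMIT), not how the database implements it:

```python
import threading

# Each pending job carries a lock; a held lock means "claimed by a worker"
jobs = [{"id": i, "lock": threading.Lock()} for i in (1, 2, 3)]

def claim_next_job():
    for job in jobs:
        # acquire(blocking=False) is the analogue of SKIP LOCKED:
        # never wait for a job another worker already holds
        if job["lock"].acquire(blocking=False):
            return job["id"]
    return None  # every job is claimed

first = claim_next_job()   # claims job 1 (lock held until "COMMIT")
second = claim_next_job()  # job 1 is locked, so job 2 is claimed instead
print(first, second)       # 1 2
```

The essential property is the same as in SQL: a claim attempt either succeeds immediately or moves on, so no worker ever queues up behind another worker's in-flight job.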
Common Mistakes to Avoid
Mistake 1: Using SERIALIZABLE Everywhere
-- BAD: Every query uses SERIALIZABLE "just to be safe"
SET default_transaction_isolation = 'serializable';
SERIALIZABLE adds overhead and increases the chance of transaction aborts. Most operations do not need it. Use it selectively for operations that truly require it.
Mistake 2: Not Implementing Retry Logic with Strong Isolation
-- BAD: Using SERIALIZABLE without handling serialization failures
BEGIN;
SET TRANSACTION ISOLATION LEVEL SERIALIZABLE;
-- ... complex operation ...
COMMIT;
-- If this fails with a serialization error, the application crashes
-- instead of gracefully retrying
Fix: Always wrap SERIALIZABLE transactions in a retry loop (see the Python example above).
Mistake 3: Assuming READ COMMITTED Prevents All Problems
-- BAD: Assuming these two reads will be consistent
BEGIN; -- READ COMMITTED (default)
SELECT balance FROM accounts WHERE id = 1; -- Returns 5000
-- Another transaction deducts 2000 and commits here
SELECT balance FROM accounts WHERE id = 1; -- Returns 3000!
-- Your application computed something with 5000 and then
-- used 3000 for another calculation. Inconsistent!
COMMIT;
Fix: Use REPEATABLE READ when you need multi-read consistency:
BEGIN;
SET TRANSACTION ISOLATION LEVEL REPEATABLE READ;
SELECT balance FROM accounts WHERE id = 1; -- Returns 5000
-- Another transaction deducts 2000 and commits
SELECT balance FROM accounts WHERE id = 1; -- Still returns 5000
COMMIT;