How Databases Handle Conflicts During Concurrency

In my last post, we saw how databases achieve concurrency by creating wiggle room for interleaving non-conflicting operations through conflict-free schedules. This ability to reorder independent operations without changing the final outcome is what gives serializable systems their performance edge.

But we also saw that conflicting operations fix the relative order of transactions in any serializable schedule. So the question now becomes: what holds these schedules in place?

This post explores exactly that: how databases handle conflicts in their ongoing effort to keep these fragile houses of concurrency standing tall.

Let's dive in!

Two Kinds of Conflicts

Every concurrency-control mechanism must deal with two fundamental kinds of interference that arise when transactions access the same data:

1. Read-Write Conflict

A Read–Write (RW) conflict occurs when one transaction reads a data item that another later modifies. If we reverse their order, the reader sees a different value, changing the transaction’s behaviour and the final result.

Example: Initial value of X = 100

Transaction T1                    Transaction T2
Reads X (100)                     -
Computes Y = X + 10               -
-                                 Writes X = 200

In the original order R1(X) -> W2(X), T1 reads the old value of X = 100 before T2 updates it. T1 then computes Y = 110 and finishes, unaware that X was later changed.

If we swap their order W2(X) -> R1(X), T1 now reads the new value X = 200 instead. This time, it writes Y = 210.

Same logic. Different results.

That’s why RW conflicts fix the order between transactions. They can’t be swapped without breaking serializability.
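
To make this concrete, here is a minimal Python sketch, with a plain dictionary standing in for the database and purely illustrative function names, that runs both orderings and prints the diverging results:

def t1_read_and_compute(db):
    db["Y"] = db["X"] + 10    # T1: read X, derive Y from it

def t2_write(db):
    db["X"] = 200             # T2: overwrite X

# Order 1: R1(X) -> W2(X)
db = {"X": 100}
t1_read_and_compute(db)
t2_write(db)
print(db["Y"])                # 110, T1 saw the old value of X

# Order 2: W2(X) -> R1(X)
db = {"X": 100}
t2_write(db)
t1_read_and_compute(db)
print(db["Y"])                # 210, T1 saw T2's new value of X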

2. Write-Write Conflict

A Write–Write (WW) conflict happens when two transactions update the same data item. If we reverse them, whichever write happens last determines the final value, again changing the result!

Example: Initial value of X = 0

Transaction T1                    Transaction T2
Writes X = 10                     -
-                                 Writes X = 20

In the original order W1(X) -> W2(X), T1 writes X = 10, and then T2 overwrites it with X = 20. The final value in the database is therefore X = 20.

If we reverse their order W2(X) -> W1(X), T1’s update now lands last, leaving X = 10 instead.

Two valid schedules but two different outcomes.

Whichever write happens last decides the final state, which means their order isn’t interchangeable.
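
The same exercise, sketched quickly in Python, with a dictionary again standing in for the database:

# Whichever write lands last decides the final state, so the two orders
# of W1(X) = 10 and W2(X) = 20 are not interchangeable.
for order in (("T1", "T2"), ("T2", "T1")):
    db = {"X": 0}
    for txn in order:
        db["X"] = 10 if txn == "T1" else 20
    print(order, "->", db["X"])   # ('T1', 'T2') -> 20, ('T2', 'T1') -> 10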

Keeping Order During Conflicts

Every database has one simple goal under concurrency: let transactions overlap safely without breaking serializability. But the ways they achieve that goal differ in when they step in to manage conflicts.

Let's walk through the four concurrency control philosophies:

1. Prevention: PCC

In traditional systems, safety came first. Pessimistic Concurrency Control (PCC) prevents conflicts by locking data items before access.

Conflict Type    Resolution Strategy
Read-Write       Readers must wait for writers holding locks to finish.
Write-Write      Writers queue behind one another, ensuring only one writer at a time.

Example: Suppose X = 100

Transaction T1                    Transaction T2
Locks X                           -
Reads X = 100                     -
Writes X = 200                    -
Commits, releases lock            -
-                                 Locks X, then updates X = 300

This approach leaves no room for anomalies but sacrifices concurrency. PCC keeps correctness simple, yet often at the cost of throughput and latency.
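
As a rough illustration of the locking idea, here is a small Python sketch that uses one exclusive threading.Lock per data item. A real lock manager does far more (shared vs. exclusive modes, deadlock handling, and so on), so treat this only as the shape of the approach:

import threading
import time

db = {"X": 100}
lock_x = threading.Lock()          # exclusive lock guarding item X

def t1():
    with lock_x:                   # T1 locks X before touching it
        value = db["X"]            # reads 100
        time.sleep(0.1)            # simulate work while holding the lock
        db["X"] = value + 100      # writes X = 200; lock released on exit

def t2():
    time.sleep(0.01)               # start while T1 still holds the lock
    with lock_x:                   # blocks until T1 commits and releases
        db["X"] = 300              # only then may T2 update X

threads = [threading.Thread(target=t1), threading.Thread(target=t2)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(db["X"])                     # 300 (T1 then T2): the lock kept the two updates from interleaving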

2. Validation: OCC

Optimistic Concurrency Control (OCC) flips the idea: Why block if most transactions don't conflict at all? OCC lets transactions run freely without locks and only checks for conflicts at commit.

Conflict Type    Resolution Strategy
Read-Write       At commit, validates that no other transaction modified what was read.
Write-Write      Aborts one if both modified the same record.

Example: Initial X = 100

Transaction T1                               Transaction T2
Reads X = 100                                -
-                                            Writes X = 200
Writes X = 110                               -
Validates -> detects overlap -> aborts T1    -

Here, T1 is aborted because T2 changed a value T1 had read. OCC assumes conflicts are rare, but when they do occur, you simply roll back and retry.

It’s a great fit for systems where retries are cheap and contention is low.
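
Here is a toy Python sketch of the OCC read-validate-write flow from the example above. The OCCStore class and its per-item version numbers are illustrative assumptions, not any particular database's implementation:

class OCCStore:
    def __init__(self):
        self.data = {"X": (100, 0)}        # item -> (value, version)

    def read(self, txn, key):
        value, version = self.data[key]
        txn["reads"][key] = version        # remember which version we saw
        return value

    def commit(self, txn):
        # Validation phase: abort if anything we read has since changed.
        for key, seen_version in txn["reads"].items():
            if self.data[key][1] != seen_version:
                return False               # conflict detected -> abort
        # Write phase: install our writes with bumped versions.
        for key, value in txn["writes"].items():
            _, version = self.data[key]
            self.data[key] = (value, version + 1)
        return True

store = OCCStore()
t1 = {"reads": {}, "writes": {}}
t2 = {"reads": {}, "writes": {}}

x = store.read(t1, "X")                    # T1 reads X = 100
t2["writes"]["X"] = 200
store.commit(t2)                           # T2 commits first, X becomes 200
t1["writes"]["X"] = x + 10                 # T1 wants to write X = 110
print(store.commit(t1))                    # False: the X that T1 read is now stale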

3. Versioning: MVCC

Optimistic control (OCC) lets transactions run freely, only checking for conflicts at commit. Multi-Version Concurrency Control (MVCC) takes a different path. Instead of detecting read-write conflicts later, it avoids them entirely.

The core idea is simple. Readers see a past version of the world while writers create new ones for the future.

Multi-Version Concurrency Control (MVCC) never overwrites in place. Instead:
1. Each update creates a new version of the data.
2. Each transaction reads from a snapshot of committed versions that existed when the transaction began.
3. Writers proceed independently, creating newer versions for future transactions.

Conflict Type    Resolution Strategy
Read-Write       Readers see consistent snapshots; writers never block them. No waiting, no dirty reads.
Write-Write      Detected at commit. Only one writer’s version can stand.

Unlike OCC, which checks after the fact, MVCC sidesteps read-write interference altogether. Readers never block writers, and writers don’t block readers.
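
Here is a toy Python sketch of that versioning idea, with commit timestamps standing in for whatever version identifiers a real engine uses (the MVCCStore class is purely illustrative):

class MVCCStore:
    def __init__(self):
        self.versions = {"X": [(0, 100)]}  # item -> list of (commit_ts, value)
        self.clock = 0

    def begin(self):
        return self.clock                  # snapshot = everything committed so far

    def read(self, snapshot_ts, key):
        # Return the newest version committed at or before our snapshot.
        visible = [v for ts, v in self.versions[key] if ts <= snapshot_ts]
        return visible[-1]

    def write(self, key, value):
        self.clock += 1                    # each commit appends a new version
        self.versions[key].append((self.clock, value))

store = MVCCStore()
t1_snapshot = store.begin()                # T1 starts, its snapshot has X = 100
store.write("X", 200)                      # T2 commits a newer version of X
print(store.read(t1_snapshot, "X"))        # 100: T1 still sees its snapshot
print(store.read(store.begin(), "X"))      # 200: a new transaction sees T2's write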

But this comes with a trade-off. Because read-write conflicts are avoided, not validated, MVCC cannot detect certain anomalies like write skew on its own. That’s where later techniques like dependency tracking step in, watching how transactions interact across snapshots to restore true serial order.

The Problem: A Reader That Later Writes

This is a great place to pause and ask:

What if a transaction reads a value and later tries to update it, only to find that another transaction has already changed it?

Let's see this with an example: Initial value of X = 100

Transaction T1                    Transaction T2
Reads X = 100                     -
-                                 Updates X = 200
Writes X = 110                    -

At first, T1 reads X = 100 and plans to update it to 110. While T1 is still working, T2 commits its own update, setting X = 200.

Now when T1 tries to commit, it’s updating a value that has already changed. In other words, its write is based on an obsolete version of X. If the database allowed this blindly, T1 would overwrite T2’s more recent change, effectively losing T2’s work.

What MVCC Does Here

Under MVCC, the database doesn’t block T1 when it first reads X = 100. But when T1 later tries to write, the system checks whether the version it read is still the latest. If the data has changed since T1’s snapshot, T1 is aborted (or retried). This check usually happens at commit time, as part of MVCC’s validation or write-conflict detection.

So MVCC lets transactions read freely, but ensures that outdated writers never commit.
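
Sketching that commit-time check in Python, again with illustrative names and commit timestamps as stand-ins for real version metadata, the "first committer wins" rule looks roughly like this:

versions = {"X": [(0, 100)]}               # item -> list of (commit_ts, value)
clock = 0

def latest_ts(key):
    return versions[key][-1][0]

def try_commit_write(snapshot_ts, key, value):
    global clock
    if latest_ts(key) > snapshot_ts:       # someone committed after our snapshot
        return False                       # abort the outdated writer
    clock += 1
    versions[key].append((clock, value))   # otherwise install a new version
    return True

t1_snapshot = 0                            # T1 begins and reads X = 100
clock += 1
versions["X"].append((clock, 200))         # T2 commits X = 200 in the meantime
print(try_commit_write(t1_snapshot, "X", 110))  # False: T1's write was based on a stale X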

If you look closely, MVCC extends the same idea as conflict-free scheduling by creating more wiggle room: it maintains correctness even in the presence of RW conflicts while broadening the window for concurrency and performance. This is why most modern databases use MVCC under the hood.

4. Observation: Dependency Tracking

The most modern approach is observation-based concurrency control, or dependency tracking. Instead of blocking or validating explicitly, the system watches transactions as they run, recording read-write dependencies between them.

Conflict Type    Resolution Strategy
Read-Write       Tracks when a transaction reads data that another later modifies; if dependencies form a cycle, aborts one transaction.
Write-Write      Detects concurrent updates to the same record; one transaction loses to preserve serial order.

You see, even with techniques like MVCC and OCC, some anomalies still slip through. These are the subtle, indirect read-write interactions across different data items that look harmless to snapshot-based checks but can still break true serial order.

The Write Skew Problem

Let’s consider a classic example of write skew: two doctors share on-call duty, and each updates a different row in the same table.

Transaction T1                          Transaction T2
Reads doctor_A.on_call = TRUE           -
-                                       Reads doctor_B.on_call = TRUE
Writes doctor_B.on_call = FALSE         -
-                                       Writes doctor_A.on_call = FALSE

Both doctors check that someone is on call and then take themselves off duty. Individually, each transaction looks fine; neither writes the same row as the other.

Because the two writes touch different rows, there is no write-write conflict to detect, and each transaction reads from its own snapshot. But together, they leave no one on call, a state that no serial order can explain.

If T1 had gone first, T2 should’ve seen doctor_B = FALSE.
If T2 had gone first, T1 should’ve seen doctor_A = FALSE.

Both read old data, and both wrote based on outdated assumptions.

How Dependency Tracking Fixes It

Dependency tracking catches this through read–write dependencies:
• T1 -> T2, because T2 writes doctor_A, which T1 read.
• T2 -> T1, because T1 writes doctor_B, which T2 read.
• These edges form a cycle (T1 -> T2 -> T1), meaning no serial order can exist.

Once this cycle is detected, the system aborts one transaction to restore serializability.
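
A toy Python sketch of the idea: record rw-edges as transactions run and abort someone if the edges ever form a cycle. Real systems, such as PostgreSQL's Serializable Snapshot Isolation, use cheaper conservative approximations rather than an explicit graph search; this is only meant to illustrate the principle:

from collections import defaultdict

edges = defaultdict(set)                   # txn -> set of txns it must precede

def add_rw_edge(reader, writer):
    edges[reader].add(writer)              # the reader must serialize before the writer

def has_cycle():
    # Simple depth-first search for a cycle in the dependency graph.
    visiting, done = set(), set()
    def dfs(node):
        if node in visiting:
            return True
        if node in done:
            return False
        visiting.add(node)
        if any(dfs(n) for n in edges[node]):
            return True
        visiting.discard(node)
        done.add(node)
        return False
    return any(dfs(node) for node in list(edges))

# The write-skew scenario from above:
add_rw_edge("T1", "T2")   # T2 writes doctor_A, which T1 read
add_rw_edge("T2", "T1")   # T1 writes doctor_B, which T2 read
print(has_cycle())        # True -> abort one of T1 / T2 to restore serializability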

It’s important to note that dependency tracking isn’t meant to handle direct read-write conflicts on the same data item. Those are already handled by MVCC or OCC, which validate or abort stale writers at commit time.

Conclusion

Databases keep order not by avoiding conflicts entirely, but by deciding when and how to step in. Some act early with locks, some wait until the end and validate, others version the world so readers and writers can coexist, and the most advanced observe dependencies as they unfold.

Each of these approaches (prevention, validation, versioning, and observation) represents a different point on the timeline of control. Together, they make it possible for transactions to overlap safely while still appearing as if they ran one after another.

But these philosophies are not standalone. Modern databases often combine them: locks for certain workloads, versioning for others, and dependency tracking on top of snapshots to achieve both correctness and high concurrency. It’s this layering, not any single method, that allows databases to scale while preserving the illusion of serial execution.
