The distributed commit problem is how to get all members of a group to perform an operation (e.g., a transaction commit) or none of them at all.
It arises when a distributed transaction commits.
It is necessary to find out whether each site succeeded in performing its part of the transaction before allowing any site to make that transaction's changes permanent.
Two-Phase Commit (Gray, 1987)
Assumptions: reliable communication and a designated coordinator. Simple case: no failures.
The first phase is the voting phase:
The coordinator sends all participants a VOTE REQUEST.
Each participant responds COMMIT or ABORT.
The second phase is the decision phase. The coordinator decides commit or abort: if any participant voted ABORT, the decision must be abort; otherwise, commit.
The coordinator sends every participant the decision.
The participants (who have been waiting for the decision) commit or abort as instructed and acknowledge.
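The two phases above can be sketched as a minimal simulation, assuming reliable delivery and no failures; the class and message names are illustrative, not part of any real library.

```python
# Minimal sketch of 2PC's two message phases (no failures assumed).
class Participant:
    def __init__(self, will_commit):
        self.will_commit = will_commit   # local outcome of its sub-transaction
        self.decision = None

    def vote(self):
        # Phase 1: respond to the coordinator's VOTE REQUEST
        return "COMMIT" if self.will_commit else "ABORT"

    def decide(self, decision):
        # Phase 2: carry out the coordinator's decision and acknowledge
        self.decision = decision
        return "ACK"

def two_phase_commit(participants):
    # Phase 1: collect votes from all participants
    votes = [p.vote() for p in participants]
    # Phase 2: abort if any participant voted ABORT, otherwise commit
    decision = "COMMIT" if all(v == "COMMIT" for v in votes) else "ABORT"
    acks = [p.decide(decision) for p in participants]
    assert all(a == "ACK" for a in acks)
    return decision
```

A single ABORT vote forces the whole group to abort; only a unanimous COMMIT vote commits.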
2PC: Participant Failures
If a participant P fails before voting, the coordinator C can time out and decide abort. (It cannot decide commit, because P might have voted abort.)
If a participant fails after voting: if it voted commit, it must be prepared to commit the transaction when it recovers, and it must ask the coordinator or the other participants what the decision was before doing so. If it voted abort, it can simply abort the transaction when it recovers.
So BEFORE a participant votes, it must log its position on stable storage.
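The logging rule above can be sketched as follows; the log format and the use of `fsync` as a stand-in for "stable storage" are illustrative assumptions, not part of any real 2PC library.

```python
import os

def log_vote(log_path, txn_id, vote):
    # Force the vote to stable storage BEFORE it is sent to the coordinator.
    with open(log_path, "a") as f:
        f.write(f"{txn_id} VOTE {vote}\n")
        f.flush()
        os.fsync(f.fileno())   # fsync stands in for stable storage here

def recovered_position(log_path, txn_id):
    # After a crash, scan the log for the last position recorded for txn_id.
    vote = None
    with open(log_path) as f:
        for line in f:
            t, kind, v = line.split()
            if t == txn_id and kind == "VOTE":
                vote = v
    return vote
```

Because the write is forced before the vote is sent, a recovering participant can always reconstruct what it promised.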
2PC: Coordinator Failures
If the coordinator fails before requesting votes, the participants time out and abort.
If the coordinator fails after requesting votes (or after requesting some of them),
participants who voted commit cannot unilaterally abort: all participants may have voted commit, the coordinator may have decided commit, and some participant may already have received the decision. Such a participant must either wait until the coordinator recovers or contact ALL the other participants; if even one is unreachable, it cannot decide.
Two-Phase Commit (1)
The finite state machine for the coordinator in 2PC using the notation (msg rcvd/msg sent).
The finite state machine for a participant.
Two-Phase Commit: Recovery
Actions taken by a participant P when residing in state READY and having contacted another participant Q.
Two-Phase Commit
Outline of the steps taken by the coordinator in a two-phase commit protocol.
Two-Phase Commit
Steps taken by participant process in 2PC.
Two-Phase Commit
Steps taken for handling incoming decision requests.
If the coordinator fails after deciding but before sending all the decision messages, this is a problem for the participants:
If a participant got the decision message, it carries out the decision (and tells the others if asked).
If a participant voted abort, it just aborts.
If a participant voted commit, it must wait until the coordinator recovers or it gets a response from ALL participants (so each participant must know who the other participants are). The crash or unavailability of the coordinator plus one participant results in a BLOCK.
How to Solve Blocking?
All participants could be in the same state, or in states adjacent to that state, but no further away, since their moves are "coordinated": they move out of the current state only when directed to by the coordinator.
Problem: when the coordinator plus one or more participants fail, the surviving participants must be able to determine the outcome based on the states of the living participants.
Participant State Diagrams
Participants can move out of the current state only when directed to by the coordinator, and the coordinator gives the command only when it receives confirmation that all participants have made the previous transition. Assume this is the state diagram of a participant.
If one P is in state 1, what are the possible states of the others?
Participant State Diagrams
If one P is in state 3, what are the possible states of the others?
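One way to answer these questions concretely, assuming a linear chain of numbered states 1-2-3-4 (the actual diagram in the slides may differ):

```python
# Assumed linear chain of participant states: 1 - 2 - 3 - 4.
ADJ = {1: {2}, 2: {1, 3}, 3: {2, 4}, 4: {3}}

def possible_peer_states(state):
    # Coordinated moves: a peer is in the same state or at most one
    # transition away, never further.
    return {state} | ADJ[state]
```

Under this assumed chain, a peer of a participant in state 1 can only be in state 1 or 2, while a peer of a participant in state 3 can be in state 2, 3, or 4.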
How to Solve Blocking
So the conditions for a non-blocking atomic commit protocol are:
There is no state from which it is possible to make a transition directly to both commit and abort.
That is, no state has the successor set {commit, abort}.
Any state from which a transition to commit is possible has the property that participants can decide without waiting for the recovery of the coordinator or of dead participants.
A participant can decide based on the sets of possible states of the others.
Coordinator in wait state: some participant died before or while voting: decide abort.
Coordinator in pre-commit state: some participant has already voted commit but failed to ack the prepare command: commit the transaction; when that participant recovers and inquires, it will commit the transaction.
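The first condition can be checked mechanically on a state diagram. The diagrams below are illustrative sketches, not taken from the slides: a 2PC participant, whose READY state violates the condition, and a 3PC-like participant whose extra pre-commit state does not.

```python
def violates_condition_1(transitions):
    # transitions: {state: set of successor states}
    # True if some state can move directly to both COMMIT and ABORT.
    return any({"COMMIT", "ABORT"} <= succ for succ in transitions.values())

# Illustrative participant diagrams (state names assumed):
TWO_PC = {"INIT": {"READY", "ABORT"},
          "READY": {"COMMIT", "ABORT"}}       # READY -> both: blocking

THREE_PC = {"INIT": {"READY", "ABORT"},
            "READY": {"PRECOMMIT", "ABORT"},
            "PRECOMMIT": {"COMMIT"}}          # no state -> both
```

Inserting the pre-commit state is exactly what removes the {commit, abort} successor set.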
Checkpointing, Logging, Recovery
Recovery is what happens after the crash. Logs and checkpoints make it possible.
A checkpoint is a durable record of a consistent state that existed on this node at some time in the past.
A log is a durable record or history of the significant events (such as writes but not reads) that have occurred at this site (either since the start or since the last checkpoint).
Recovery Strategies
Backward recovery: bring the system back to a previous correct and consistent state. A checkpoint is made periodically during normal operation by recording the current state on stable storage (this is problematic for on-going transactions). After a failure, the state can be restored from the checkpoint. Issue: taking a checkpoint involves a lot of overhead.
Forward recovery: Bring the system to a new correct state after a crash. May involve asking another site what the current state is.
Recovery Strategies
Combination: Use both checkpoint and recovery log.
Take a checkpoint, delete the old log, and start a new log.
When recovering from a failure, restore the checkpointed state, then replay the log to re-do its operations.
Often used in databases
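The checkpoint-plus-log strategy can be sketched in a few lines, modeling the state as a dictionary and the log as a sequence of writes (both representations are illustrative):

```python
def recover(checkpoint, log):
    # Restore the last checkpointed state...
    state = dict(checkpoint)
    # ...then replay the log in order, re-doing each write.
    for key, value in log:
        state[key] = value
    return state
```

Because the log is replayed in its original order, writes that happened after the checkpoint overwrite the checkpointed values, yielding the pre-crash state.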
Checkpoints in a Distributed System
Taking a checkpoint is a single-site operation, whereas in a distributed system there is a global state that must remain consistent.
If one process, P2, fails and rolls back to a previous checkpoint, that checkpoint may be inconsistent with the rest of the sites in the DS.
This means that the other sites may have to roll back as well. This is a problem when the taking of checkpoints is not coordinated -- a global snapshot is needed.
Checkpointing
P2 fails and rolls back to its previous checkpoint. It is now inconsistent with P1, so P1 must roll back, which in turn causes P2 to roll back one more checkpoint, to the final recovery line.
Independent Checkpointing
The problem with independent checkpoints is the domino effect: cascading rollbacks to find a consistent state. A distributed snapshot is needed!
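The cascading rollback can be sketched as a fixed-point computation over checkpoints and messages; the data layout (per-process event counters, message tuples) is an illustrative assumption.

```python
def recovery_line(checkpoints, messages):
    # checkpoints[p]: ascending local-event counts at which p checkpointed
    # (index 0 is the initial state, count 0).
    # messages: (sender, send_event, receiver, recv_event) tuples.
    line = {p: len(cs) - 1 for p, cs in checkpoints.items()}
    changed = True
    while changed:
        changed = False
        for s, s_ev, r, r_ev in messages:
            # Orphan message: the receive is inside r's restored state but
            # the send is after s's restored state, so r must roll back more.
            if r_ev <= checkpoints[r][line[r]] and s_ev > checkpoints[s][line[s]]:
                line[r] -= 1
                changed = True
    return line
```

Each orphan message knocks one process back a checkpoint, which can create new orphans: that chain of forced rollbacks is the domino effect, and the loop's fixed point is the recovery line.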
Logging
All events that affect the system state are logged, starting with the last checkpoint.
Messages received
Updates from clients
After failure, the system state is restored from the checkpoint, then the log is replayed to bring the system to a consistent state.
Message Logging
Q did not correctly log m2, even though m2 may have caused Q to send m3. On replay, Q may never send m3, yet R has already received it.