Recovery


Review: The ACID properties
- Atomicity: All actions in the Xact happen, or none happen.
- Consistency: If each Xact is consistent, and the DB starts consistent, it ends up consistent.
- Isolation: Execution of one Xact is isolated from that of other Xacts.
- Durability: If a Xact commits, its effects persist.
Concurrency control (CC) guarantees Isolation and Consistency. The Recovery Manager guarantees Atomicity and Durability.

Why is a recovery system necessary?
- Transaction failure:
  - Logical errors: application errors (e.g. division by 0, segmentation fault)
  - System errors: deadlocks
- System crash: hardware/software failure causes the system to crash.
- Disk failure: head crash or similar disk failure destroys all or part of disk storage.
Lost data can be in main memory or on disk.

Storage Media
- Volatile storage: does not survive system crashes. Examples: main memory, cache memory.
- Nonvolatile storage: survives system crashes. Examples: disk, tape, flash memory, non-volatile (battery-backed) RAM.
- Stable storage: a "mythical" form of storage that survives all failures; approximated by maintaining multiple copies on distinct nonvolatile media.

Recovery and Durability
To achieve Durability: put data on stable storage. To approximate stable storage, keep two copies of the data on distinct nonvolatile media.
Problem: a failure during the data transfer can leave the two copies inconsistent.

Recovery and Atomicity
Durability is achieved by making two copies of the data. What about atomicity? A crash in the middle of a transaction may leave the database inconsistent.

Recovery and Atomicity
Example: transfer 50 from account A to account B. The goal is to perform either all database modifications made by Ti or none at all. The transfer requires several inputs (reads) and outputs (writes). A failure after the output to account A but before the output to account B leaves the DB corrupted!

Recovery Algorithms
Recovery algorithms are techniques to ensure database consistency and transaction atomicity and durability despite failures. Recovery algorithms have two parts:
1. Actions taken during normal transaction processing to ensure enough information exists to recover from failures.
2. Actions taken after a failure to recover the database contents to a state that ensures atomicity and durability.

Background: Data Access
- Physical blocks: blocks on disk.
- Buffer blocks: blocks in main memory.
- Data transfer: input(B) transfers the physical block B to main memory; output(B) transfers the buffer block B to the disk, replacing the appropriate physical block there.
- Each transaction Ti has its private work area in which local copies of all data items accessed and updated by it are kept. Ti's local copy of a data item X is called xi.
- Assumption: each data item fits in, and is stored inside, a single block.

Data Access (Cont.)
A transaction transfers data items between system buffer blocks and its private work area using the following operations:
- read(X) assigns the value of data item X to the local variable xi.
- write(X) assigns the value of local variable xi to data item X in the buffer block.
- Both commands may require issuing an input(BX) instruction before the assignment, if the block BX in which X resides is not already in memory.
Transactions perform read(X) when accessing X for the first time; all subsequent accesses are to the local copy. After the last access, the transaction executes write(X).
output(BX) need not immediately follow write(X); the system can perform the output operation when it deems fit.
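
The slides give no code, but a small Python sketch may help make this model concrete. It is illustrative only: dictionaries stand in for the disk, the buffer, and the transaction's work area, and the function names (input_block, output_block, read, write) mirror the operations above but are otherwise made up.

```python
# Minimal sketch of the data-access model above (hypothetical names).
# "disk" holds physical blocks, "buffer" holds buffer blocks in memory,
# and "local" is one transaction's private work area.

disk = {"BX": {"X": 100}}    # physical block BX containing data item X
buffer = {}                  # buffer blocks currently in main memory
local = {}                   # transaction Ti's local copies (xi, ...)

def input_block(b):
    """input(B): transfer physical block B from disk into the buffer."""
    buffer[b] = dict(disk[b])

def output_block(b):
    """output(B): write buffer block B to disk, replacing the physical block."""
    disk[b] = dict(buffer[b])

def read(x, b):
    """read(X): copy X from its buffer block into the local variable xi."""
    if b not in buffer:          # may require input(BX) first
        input_block(b)
    local[x] = buffer[b][x]

def write(x, b):
    """write(X): copy the local variable xi back into the buffer block."""
    if b not in buffer:
        input_block(b)
    buffer[b][x] = local[x]      # note: output(BX) need not follow immediately

read("X", "BX")
local["X"] -= 50                 # compute on the private copy
write("X", "BX")                 # buffer updated; disk unchanged until output(BX)
```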

[Figure: transactions T1 and T2 each keep a private work area in memory holding local copies (x1, y1, x2); buffer blocks A and B sit in the system buffer and are transferred to and from disk via input(A) and output(B); read(X) and write(Y) move values between the buffer blocks and the work areas.]

Recovery and Atomicity (Cont.) To ensure atomicity, first output information about modifications to stable storage, without modifying the database itself. We study two approaches: log-based recovery and shadow paging.

Log-Based Recovery
Simplifying assumptions: transactions run serially, and the log is written directly to stable storage.
Log: a sequence of log records; it maintains a record of update activities on the database (write-ahead log, WAL).
Log records for transaction Ti:
- <Ti start>
- <Ti, X, V1, V2>
- <Ti commit>
Two approaches using logs: deferred database modification and immediate database modification.
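
As a rough illustration (not from the slides), the log can be modeled as an append-only list of tuples, one tuple shape per record type; in a real system each appended record would be forced to stable storage before the call returns.

```python
# Hypothetical append-only log: each record is a plain tuple.
#   ("start", Ti)              -- <Ti start>
#   ("update", Ti, X, V1, V2)  -- <Ti, X, V1, V2> (old and new value)
#   ("commit", Ti)             -- <Ti commit>

stable_log = []   # stands in for log records already on stable storage

def log_append(record):
    """Append a record and 'force' it to stable storage before returning."""
    stable_log.append(record)   # a real system would fsync the log file here

log_append(("start", "T1"))
log_append(("update", "T1", "A", 1000, 950))
log_append(("update", "T1", "B", 2000, 2050))
log_append(("commit", "T1"))
```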

Log example

Transaction T1:        Log:
  read(A)              <T1 start>
  A := A - 50          <T1, A, 1000, 950>
  write(A)             <T1, B, 2000, 2050>
  read(B)              <T1 commit>
  B := B + 50
  write(B)

Deferred Database Modification
- Ti starts: write a <Ti start> record to the log.
- Ti executes write(X): write <Ti, X, V> to the log, where V is the new value for X. The write itself is deferred. Note: the old value is not needed for this scheme.
- Ti partially commits: write <Ti commit> to the log.
- The DB is then updated by reading and executing the log records between <Ti start> and <Ti commit>.
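
A minimal sketch of the deferred scheme, using a tuple-based log as in the previous sketch except that update records carry only the new value, as this scheme requires. Names are illustrative; the database dictionary is touched only once <Ti commit> has been logged.

```python
# Sketch of deferred database modification (illustrative only).
# Writes are recorded in the log with the NEW value and are not applied
# to the database until the commit record has been written.

db = {"A": 1000, "B": 2000}
log = []

def t_start(t):
    log.append(("start", t))

def t_write(t, x, new_value):
    log.append(("update", t, x, new_value))   # defer: do not touch db yet

def t_commit(t):
    log.append(("commit", t))
    # now execute the deferred writes by replaying this transaction's records
    for rec in log:
        if rec[0] == "update" and rec[1] == t:
            db[rec[2]] = rec[3]

t_start("T0")
t_write("T0", "A", 950)
t_write("T0", "B", 2050)
t_commit("T0")        # the database is only modified here
```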

Deferred Database Modification
How to use the log for recovery after a crash? Redo Ti if both <Ti start> and <Ti commit> are in the log. Crashes can occur while the transaction is executing the original updates, or while recovery action is being taken.
Example transactions T0 and T1 (T0 executes before T1):

T0: read(A)          T1: read(C)
    A := A - 50          C := C - 100
    write(A)             write(C)
    read(B)
    B := B + 50
    write(B)

Deferred Database Modification (Cont.)
Below we show the log as it appears at three instants in time.

(a)              (b)              (c)
<T0 start>       <T0 start>       <T0 start>
<T0, A, 950>     <T0, A, 950>     <T0, A, 950>
<T0, B, 2050>    <T0, B, 2050>    <T0, B, 2050>
                 <T0 commit>      <T0 commit>
                 <T1 start>       <T1 start>
                 <T1, C, 600>     <T1, C, 600>
                                  <T1 commit>

What is the correct recovery action in each case?
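
The redo rule above can be expressed as a short recovery function; the sketch below uses the same illustrative record format as the earlier sketches, not the textbook's code. For the three logs shown, it redoes nothing in case (a), only T0 in case (b), and T0 then T1 in case (c).

```python
# Sketch of crash recovery under deferred modification: redo exactly those
# transactions whose <start> and <commit> records both appear in the log.

def recover_deferred(log, db):
    started = {r[1] for r in log if r[0] == "start"}
    committed = {r[1] for r in log if r[0] == "commit"}
    redo_set = started & committed
    for rec in log:                      # replay new values, in log order
        if rec[0] == "update" and rec[1] in redo_set:
            db[rec[2]] = rec[3]
    return redo_set

# Case (b) from above: only T0 has both <start> and <commit>.
log_b = [("start", "T0"), ("update", "T0", "A", 950), ("update", "T0", "B", 2050),
         ("commit", "T0"), ("start", "T1"), ("update", "T1", "C", 600)]
print(recover_deferred(log_b, {"A": 1000, "B": 2000, "C": 700}))  # {'T0'}
```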

Immediate Database Modification
- Database updates by an uncommitted transaction are allowed.
- Tighter logging rules are needed to ensure transactions are undoable:
  - Log records must be of the form <Ti, X, Vold, Vnew>.
  - The log record must be written before the database item is written.
- Output of DB blocks can occur before or after commit, and in any order.

Immediate Database Modification (Cont.)
Recovery procedure:
- Undo: <Ti start> is in the log but <Ti commit> is not. Undo restores the value of all data items updated by Ti to their old values, going backwards from the last log record for Ti.
- Redo: <Ti start> and <Ti commit> are both in the log. Redo sets the value of all data items updated by Ti to the new values, going forward from the first log record for Ti.
Both operations must be idempotent: even if an operation is executed multiple times, the effect is the same as if it were executed once.
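
A sketch of this undo/redo procedure, assuming serial transactions and the illustrative tuple format ("update", Ti, X, Vold, Vnew). Both passes are idempotent, so rerunning recovery after a crash during recovery is safe.

```python
# Sketch of recovery under immediate modification (serial transactions assumed).
# Update records carry both values: ("update", Ti, X, Vold, Vnew).

def recover_immediate(log, db):
    committed = {r[1] for r in log if r[0] == "commit"}

    # Undo: transactions without <commit>, scanning backwards, restoring old values.
    for rec in reversed(log):
        if rec[0] == "update" and rec[1] not in committed:
            db[rec[2]] = rec[3]          # Vold

    # Redo: committed transactions, scanning forwards, installing new values.
    for rec in log:
        if rec[0] == "update" and rec[1] in committed:
            db[rec[2]] = rec[4]          # Vnew
    return db

log_b = [("start", "T0"), ("update", "T0", "A", 1000, 950),
         ("update", "T0", "B", 2000, 2050), ("commit", "T0"),
         ("start", "T1"), ("update", "T1", "C", 700, 600)]
print(recover_immediate(log_b, {"A": 950, "B": 2050, "C": 600}))
# -> {'A': 950, 'B': 2050, 'C': 700}  (undo T1, redo T0), matching case (b) below
```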

Immediate Database Modification Example

Log                     Write       Output
<T0 start>
<T0, A, 1000, 950>      A = 950
<T0, B, 2000, 2050>     B = 2050
<T0 commit>
<T1 start>
<T1, C, 700, 600>       C = 600
                                    BB, BC
<T1 commit>
                                    BA

Note: BX denotes the block containing X.

Immediate Modification Recovery Example

(a)                   (b)                   (c)
<T0 start>            <T0 start>            <T0 start>
<T0, A, 1000, 950>    <T0, A, 1000, 950>    <T0, A, 1000, 950>
<T0, B, 2000, 2050>   <T0, B, 2000, 2050>   <T0, B, 2000, 2050>
                      <T0 commit>           <T0 commit>
                      <T1 start>            <T1 start>
                      <T1, C, 700, 600>     <T1, C, 700, 600>
                                            <T1 commit>

Recovery actions in each case:
(a) undo(T0): B is restored to 2000 and A to 1000.
(b) undo(T1) and redo(T0): C is restored to 700, and then A and B are set to 950 and 2050 respectively.
(c) redo(T0) and redo(T1): A and B are set to 950 and 2050 respectively; then C is set to 600.

Checkpoints
Problems in the recovery procedure as discussed so far:
1. Searching the entire log is time-consuming.
2. We might unnecessarily redo transactions which have already output their updates to the database.
How to avoid redundant redos? Put marks in the log indicating that at that point the DB and the log are consistent: a checkpoint!

Checkpoints
At a checkpoint:
1. Quiesce system operation.
2. Output all log records currently residing in main memory onto stable storage.
3. Output all modified buffer blocks to the disk.
4. Write a <checkpoint> log record onto stable storage.
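
A sketch of these steps, with made-up names for the in-memory log tail and the dirty buffer blocks; quiescing the system is assumed to have happened before the call.

```python
# Sketch of taking a checkpoint (illustrative names only).

def take_checkpoint(log_tail, stable_log, dirty_blocks, disk):
    stable_log.extend(log_tail)          # 1. output buffered log records
    log_tail.clear()
    for name, block in dirty_blocks.items():
        disk[name] = dict(block)         # 2. output modified buffer blocks
    dirty_blocks.clear()
    stable_log.append(("checkpoint",))   # 3. write the <checkpoint> record
```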

Checkpoints (Cont.)
Recovering from a log with checkpoints:
1. Scan backwards from the end of the log to find the most recent <checkpoint> record.
2. Continue scanning backwards until a <Ti start> record is found.
3. Only the part of the log following that start record needs to be considered. Why?
4. After that, recover from the log with the rules given earlier.
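
This trimming rule can be sketched as follows, assuming the log contains at least one checkpoint record and uses the tuple format from the earlier sketches.

```python
# Sketch of trimming the log at recovery time using the most recent checkpoint
# (serial transactions assumed; at least one ("checkpoint",) record present).

def log_suffix_after_checkpoint(log):
    # 1. find the most recent <checkpoint> record
    cp = max(i for i, r in enumerate(log) if r[0] == "checkpoint")
    # 2. keep scanning backwards for the first <Ti start> before it
    start = 0
    for i in range(cp - 1, -1, -1):
        if log[i][0] == "start":
            start = i
            break
    # 3. only this part of the log is needed by the undo/redo rules above
    return log[start:]
```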

Example of Checkpoints
[Figure: timeline with a checkpoint at time Tc and a system failure at time Tf; transactions T1, T2, T3, T4 run across this interval.]
- T1 can be ignored (its updates were already output to disk at the checkpoint).
- T2 and T3 are redone.
- T4 is undone.

Shadow Paging
- Shadow paging is an alternative to log-based recovery; it works mainly for serial execution of transactions.
- Keeps the "clean" data (the shadow pages) untouched during the transaction (in stable storage).
- Writes go to a copy of the data.
- Replace the shadow page only when the transaction is committed and the copy has been output to disk.

Shadow Paging
- Maintain two page tables during the lifetime of a transaction: the current page table and the shadow page table.
- Store the shadow page table in nonvolatile storage; the shadow page table is never modified during execution.
- To start with, both page tables are identical. Only the current page table is used for data item accesses during execution of the transaction.
- Whenever any page is about to be written for the first time:
  1. A copy of this page is made onto an unused page.
  2. The current page table is then made to point to the copy.
  3. The update is performed on the copy.

Sample Page Table

Example of Shadow Paging Shadow and current page tables after write to page 4

Shadow Paging
To commit a transaction:
1. Flush all modified pages in main memory to disk.
2. Output the current page table to disk.
3. Make the current page table the new shadow page table, as follows:
   - Keep a pointer to the shadow page table at a fixed (known) location on disk.
   - To make the current page table the new shadow page table, simply update the pointer to point to the current page table on disk.
Once the pointer to the shadow page table has been written, the transaction is committed. No recovery is needed after a crash! New transactions can start right away, using the shadow page table.
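
A compact sketch of shadow paging, using in-memory dictionaries to stand in for disk pages, the two page tables, and the fixed root location. It shows both the copy-on-write step from the previous slide and the single pointer update that constitutes commit; all names are illustrative.

```python
# Sketch of shadow paging (in-memory stand-in for disk structures).
# disk_pages: physical page number -> contents
# page tables: logical page number -> physical page number
# root: the fixed, known disk location holding the shadow-table pointer

disk_pages = {0: "p0", 1: "p1", 2: "p2"}
shadow_table = {0: 0, 1: 1, 2: 2}
root = {"shadow": shadow_table}                 # fixed location on "disk"
current_table = dict(shadow_table)              # identical copy at xact start

def write_page(logical, new_contents):
    """Copy-on-write: the first write to a page goes to a fresh physical page."""
    if current_table[logical] == root["shadow"][logical]:
        new_phys = max(disk_pages) + 1          # grab an unused page
        disk_pages[new_phys] = disk_pages[current_table[logical]]
        current_table[logical] = new_phys       # current table points to the copy
    disk_pages[current_table[logical]] = new_contents

def commit():
    """Atomically install the current table by updating the root pointer."""
    # (modified pages and the current table are assumed already flushed to disk)
    root["shadow"] = dict(current_table)        # single pointer update = commit

write_page(1, "p1'")
commit()       # a crash before this line leaves the old shadow data intact
```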

Shadow Paging
Advantages:
- No overhead of writing log records.
- Recovery is trivial.
Disadvantages:
- Copying the entire page table is very expensive.
- Data gets fragmented.
- Hard to extend to concurrent transactions.

Recovery With Concurrent Transactions
To permit concurrency:
- All transactions share a single disk buffer and a single log.
- Concurrency control: strict 2PL, i.e. exclusive locks are released only after commit.
- Logging is done as described earlier.
- The checkpointing technique and the actions taken on recovery have to be changed (based on ARIES), since several transactions may be active when a checkpoint is performed.

Recovery With Concurrent Transactions (Cont.)
Checkpoints for concurrent transactions: <checkpoint L>, where L is the list of transactions active at the time of the checkpoint. We assume no updates are in progress while the checkpoint is carried out.
Recovery for concurrent transactions has 3 phases. ANALYSIS:
- Initialize undo-list and redo-list to empty.
- Scan the log backwards from the end, stopping when the first <checkpoint L> record is found. For each record found during the backward scan:
  1. If the record is <Ti commit>, add Ti to redo-list.
  2. If the record is <Ti start> and Ti is not in redo-list, add Ti to undo-list.
- For every Ti in L, if Ti is not in redo-list, add Ti to undo-list.
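
A sketch of this analysis pass under the illustrative tuple format, with ("checkpoint", L) carrying the active-transaction list L; the backward scan stops at the checkpoint record after folding in the transactions from L.

```python
# Sketch of the analysis pass: scan backwards to the most recent
# <checkpoint L> record, building redo-list and undo-list as described above.
# Records: ("start", Ti), ("update", Ti, X, Vold, Vnew), ("commit", Ti),
#          ("checkpoint", L) where L lists the transactions active at that time.

def analyze(log):
    redo_list, undo_list = set(), set()
    for rec in reversed(log):
        if rec[0] == "commit":
            redo_list.add(rec[1])
        elif rec[0] == "start" and rec[1] not in redo_list:
            undo_list.add(rec[1])
        elif rec[0] == "checkpoint":
            for t in rec[1]:                    # every Ti in L not yet committed
                if t not in redo_list:
                    undo_list.add(t)
            break                               # stop at the checkpoint record
    return redo_list, undo_list
```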

Recovery With Concurrent Transactions
UNDO: Scan the log backwards, performing undo(Ti) for every transaction Ti in undo-list. Stop when a <Ti start> record has been seen for every Ti in undo-list.
REDO: Locate the most recent <checkpoint L> record, then scan the log forwards from that record to the end of the log, performing redo for each log record that belongs to a transaction on redo-list.
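
And a sketch of the corresponding undo and redo passes, taking the lists produced by the analysis pass. The undo pass scans backwards, possibly past the checkpoint, restoring old values; the redo pass replays new values forwards from the most recent checkpoint, which is assumed to exist.

```python
# Sketch of the undo and redo passes, given the lists from the analysis pass.
# Update records carry old and new values: ("update", Ti, X, Vold, Vnew).

def undo_pass(log, db, undo_list):
    pending = set(undo_list)
    for rec in reversed(log):                   # scan backwards
        if not pending:
            break                               # seen <Ti start> for all of them
        if rec[0] == "update" and rec[1] in pending:
            db[rec[2]] = rec[3]                 # restore old value
        elif rec[0] == "start" and rec[1] in pending:
            pending.discard(rec[1])

def redo_pass(log, db, redo_list):
    cp = max(i for i, r in enumerate(log) if r[0] == "checkpoint")
    for rec in log[cp:]:                        # scan forwards from the checkpoint
        if rec[0] == "update" and rec[1] in redo_list:
            db[rec[2]] = rec[4]                 # install new value
```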

Example of Recovery

Log:
<T0 start>
<T0, A, 0, 10>
<T0 commit>
<T1 start>
<T1, B, 0, 10>
<T2 start>
<T2, C, 0, 10>
<T2, C, 10, 20>
<checkpoint {T1, T2}>
<T3 start>
<T3, A, 10, 20>
<T3, D, 0, 10>
<T3 commit>

Redo-list: {T3}    Undo-list: {T1, T2}
Undo (scanning backwards): set C to 10, set C to 0, set B to 0.
Redo (scanning forwards): set A to 20, set D to 10.

DB values:   Initial   At crash   After recovery
A            0         20         20
B            0         10         0
C            0         20         0
D            0         10         10

Remote Backup Systems Remote backup systems provide high availability by allowing transaction processing to continue even if the primary site is destroyed.
