Oracle Berkeley DB - Transactional Data Storage (TDS) Tutorial

Speaker Notes

  • Designed to follow the Use-Case Based Tutorial for Data Store
  • Ideally want three use cases: a simple one, one that uses ALLDB, and one that uses cds_groupbegin. A WW-like site: per-user database? A food database (secondary index points to food) and a recipe database (secondary indices for ingredients to recipes and categories to recipes). Add food: look up the food (read), write into the per-user database. Add recipe: look up the recipe, iterate over its ingredients, write into the per-user database. New recipe:
  • Review the ACID properties
  • Modifications can be grouped, of course, but reads can be transactional as well, to make them repeatable. Filesystem operations are things like database remove, rename, and create (most relational engines don’t support transactional file operations; it’s a very useful feature).
  • Talk about deadlocks versus timeouts
  • We’ll introduce a few more things when we get to performance tuning
  • Recovery must be single threaded. (Still true in BDB 4.4?)
  • For BDB 4.4, probably add DB_REGISTER to the list of flags.
  • Use DB_ENV->txn_begin, txn_commit and txn_abort or specify DB_AUTO_COMMIT in the DB->put call.
  • This gives you fully serializable read: a complete consistent snapshot of the database. DB_READ_UNCOMMITTED or DB_READ_COMMITTED should be used for the cursor, if possible, in a real application. (Pre-BDB 4.4, “DB_READ_UNCOMMITTED” was “DB_DIRTY_READ” and “DB_READ_COMMITTED” was “DB_DEGREE_2”. Older names still work ok.)
  • If a parent transaction aborts, all child transactions of that parent also abort, regardless of whether the child previously committed, aborted, or is still unresolved. If a child transaction aborts, the parent’s locks are unchanged. This slide show doesn’t mention preparing transactions as part of a federated database.
  • The CDS product should never deadlock. All DBMS systems can deadlock; they usually just handle it in the server and retry automatically. Deadlocks cannot occur in read-only applications (next slide).
  • Condition 1: can’t happen in a read-only situation. Condition 2: avoid this with DB_TXN_NOWAIT. Condition 3: get all the locks up front (there is no general way to do this). Condition 4: build a “waits-for” graph, find cycles, and break them; this is called deadlock detection and is the normal mode of operation in DB.
  • Most applications choose asynchronous detection; a few use synchronous detection. Lock/transaction timeouts are not common.
  • DB_RMW may increase contention (because write locks are held longer than they otherwise would be) but it may decrease the number of deadlocks that occur. See the Transaction Tuning, Transaction Throughput, and Access Method Tuning sections of the BDB Reference Guide. Faster transactions hold locks for a shorter time.
  • DB_ENV->txn_checkpoint() allows the application to specify a minimum number of kbytes in the log or a minimum number of minutes since the last checkpoint. Checkpointing is I/O intensive, as is recovery. Recovery might take longer than real time, because it may have to rewrite the same pages over and over; this is uncommon but possible. “Checkpoints don’t block operations” is mostly true: during the write of a page, the page is locked against modifications, so if page contention is a problem, checkpointing could make it worse.
  • Force: checkpoint regardless. Kbytes: log written since the last checkpoint. Time: minutes since the last checkpoint. Trickle can be run periodically to make sure that at least N% of the cache is clean.
  • “Physical” logging means the physical bytes that changed are written, rather than logging of API calls, SQL statements, etc. DB_ENV->set_lg_dir must be called before DB_ENV->open. The log filename consists of 10 digits, allowing a maximum of 2,000,000,000 log files. Consider an application performing 6000 transactions per second for 24 hours a day, logged into 10MB log files, in which each transaction logs approximately 500 bytes of data. The following calculation: (10 * 2^20 * 2000000000) / (6000 * 500 * 365 * 60 * 60 * 24) indicates that the system will run out of log filenames in roughly 221 years. Because transaction logs are the basis for recovery, a transaction-protected database will not allow non-transaction-protected operations. If it did, the transaction log would not have a complete record of all of the database operations and the database would not be recoverable.
  • Log information is stored in the in-memory log buffer until the storage space fills up or transaction commit forces the information to be flushed to stable storage (in the case of on-disk logs). In the presence of long-running transactions or transactions producing large amounts of data, larger buffer sizes can increase throughput. The log region is used to store filenames and so may need to be increased in size if a large number of files will be opened and registered with the specified Berkeley DB environment's log manager. DB_ENV->set_lg_regionmax must be called before DB_ENV->open
  • See the Hot Failover and Recovery sections of the Reference Guide; the db_hotbackup utility is available as of BDB 4.4.
  • In step #2 you have to stop all transactional operations (including file-level operations such as database renames). In step #4 it may be easier to simply copy all of the files in DB_ENV->set_data_dir, or all of the files stored in the data directories. In step #5 you only have to copy a single log file (the last one) because you did a checkpoint in step #3, which means that only the last log file is interesting; all changes recorded in the previous log files have been flushed to the backing database files.
  • See the “Berkeley DB recoverability” section of the Reference Guide for information on copying databases atomically. To reduce the number of log files that need to be backed up, use db_archive (no flags) or DB_ENV->log_archive to identify the log files that are no longer in use; either do not archive them or physically move them to an alternative location before copying the log files. BDB 4.4 has a utility program for this.
  • Single-threading of recovery is automatic in BDB 4.4.
  • A common mistake: application writers don’t run recovery all the time. Instead, they assume that if they don’t get a DB_RUNRECOVERY error on startup, things are OK. Big mistake. You should always run recovery on startup; you may or may not get a DB_RUNRECOVERY error on opening when recovery is needed. DB would have to validate the entire set of databases on startup to check whether recovery is needed, and that’s too slow; the DB_RUNRECOVERY check might also fail if, for example, memory was corrupted. But beware of having a new program start up while others are running: the new program should NOT run recovery.
  • Berkeley DB 4.x has significant recovery performance improvements
  • To recover in an alternative directory databases must be referenced using relative file names. Log files copied: the idea here is that you can run recovery repeatedly if you have too many log files and not enough disk space -- copy in log files 1-M, run recovery, copy in log files M+1-N, run recovery, and so on.
  • It may be simpler to copy all of the database files in step 1. Should be integrated with log archival process. See Hot Failover section of the Reference Guide. Three directories: active, backup, and failover. An archive directory is optional.
  • No promises that the OS writes file system pages to disk atomically, but it’s kind of dumb if they don’t.
  • Putting log files on a different disk than database files helps performance and reliability. The former by reducing disk seeks and the latter by making a single disk failure less likely to lose an unrecoverable amount of data. Using separate partitions on the same disk is mostly pointless and probably hurts performance.
  • See “Architecting Transactional Data Store applications” in the Reference Guide. DB_REGISTER flag in Berkeley DB 4.4 eases the handling of crashes in threads of control. failchk() in Berkeley DB 4.4 provides an alternate, higher-performance means of handling thread failure
  • If failchk() returns 0, the application may proceed normally w.r.t. the database. If failchk() returns DB_RUNRECOVERY, a crash occurred while a thread was executing inside the BDB API. We can’t recover in place, so standard application recovery is required: shut down all threads, run recovery, restart. Transactions that are under way will be aborted.
  • DB_REGISTER flag in Berkeley DB 4.4 makes tracking threads of control easier. DB_REGISTER was implemented specifically to support loosely managed multi-process applications, where individual processes start, do some work and then shut down. In this type of application, having each process start up and open the environment with DB_REGISTER helps to identify and resolve issues left by crashed processes. Create a monitor process that opens and closes the environment every few seconds—the period determines the maximum latency for detecting stale transactions and locks. Without such a process, latency to detect thread/process death and recover from it is longer; might be fine for a given application. DB_REGISTER avoids permission problems on some operating systems w.r.t. processes accessing thread information of other, unrelated processes.
  • If Monitor doesn’t get alerted when threads or processes die, use a timer. If it does get alerted (e.g. a signal is sent on death of a child process), the system will respond faster—but not all operating systems can do that. failchk() is more efficient than DB_REGISTER at getting the system going again after a single thread or process crashes because most times, control is outside the BDB API when the crash occurs so recovery isn’t needed; DB_REGISTER can’t tell that. DB_REGISTER works on processes, not threads. If a thread crashes but the owning process doesn’t, DB_REGISTER doesn’t know.
  • Read operations are given shared locks Write operations are given exclusive locks
  • Instructor: there will be a lot of discussion on the second bullet point. Configure deadlock detection with DB_LOCK_EXPIRE if you want timeouts to be the only way a deadlock is broken.

Transcript

  • 1. Oracle Berkeley DB Transactional Data Store A Use-Case Based Tutorial
  • 2. Part III: Transactional Data Store
    • Overview
      • What is TDS?
      • When is TDS appropriate?
    • Case Studies
      • Web services
  • 3. What is TDS?
    • Recoverable data management: recovery from application or system failure.
    • Ability to group operations into transactions: all operations in transaction either succeed or fail.
    • Concurrency control: multiple readers and writers operate on the database simultaneously.
  • 4. Transactions Provide ACID
    • Atomic: multiple operations appear as a single operation
      • all or none of the operations applied
    • Consistent: no access sees a partially completed transaction
    • Isolated: system behaves as if there is no concurrency
    • Durable: modifications persist after failure
    • Applications can relax ACID for performance
    • (more on this later)
  • 5. Transactions
    • All operations can be transactionally protected
      • Database create, remove, rename
      • Key/Data pair get, put, update
    • Why should you care?
      • Never lose data after system or application failure (durability)
      • Group multiple updates into a single operation (atomicity)
      • Roll back changes (abort transaction if something odd happens)
      • Hide changes from other users until completed (consistency)
      • Changes made in other transactions while your transaction is underway aren’t visible unless you want to see them (isolation)
  • 6. Transaction Terminology
    • Thread of control
      • Process or true thread
    • Free-threaded
      • Object protected for simultaneous access by multiple threads
    • Deadlock
      • Two or more threads of control request mutually exclusive locks
      • Blocked threads cannot proceed until a thread releases its locks
      • They are stuck forever.
    • Transaction
      • One or more operations grouped into a single unit of work
  • 7. Transaction Terminology
    • Transaction abort
      • The unit of work is backed out/rolled back/undone
    • Transaction commit
      • The unit of work is permanently done
    • System or application failure
      • Unexpected application exit, for whatever reason
    • Recovery
      • Making databases consistent after failure so they can be used again
      • Must fix data and metadata
  • 8. Transactional APIs (1)
    • New flags to DB_ENV->open
      • DB_INIT_TXN, DB_INIT_LOCK, DB_INIT_LOG, DB_RECOVER, DB_RECOVER_FATAL
    • New configuration options
      • DB_ENV->set_tx_max: max number of concurrent transactions
      • DB_ENV->set_lk_detect: set deadlock resolution policy
      • DB_ENV->set_tx_timestamp: recover to a timestamp
      • DB_ENV->set_timeout: transaction timeout
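
A minimal sketch (the values and variable names are illustrative, not from the slides) of how these configuration calls might be combined before DB_ENV->open; set_tx_max in particular must precede the open:

    #include <db.h>

    /* Sketch: configure transactions, deadlock policy, and a timeout,
     * then open a TDS environment. */
    int
    open_tds_env(DB_ENV **dbenvp, const char *home)
    {
        DB_ENV *dbenv;
        int ret;

        if ((ret = db_env_create(&dbenv, 0)) != 0)
            return (ret);
        (void)dbenv->set_tx_max(dbenv, 100);                /* concurrent txns */
        (void)dbenv->set_lk_detect(dbenv, DB_LOCK_DEFAULT); /* check on block  */
        (void)dbenv->set_timeout(dbenv, 1000000,            /* 1 second, in    */
            DB_SET_TXN_TIMEOUT);                            /* microseconds    */

        if ((ret = dbenv->open(dbenv, home, DB_CREATE | DB_INIT_LOG |
            DB_INIT_LOCK | DB_INIT_MPOOL | DB_INIT_TXN | DB_RECOVER, 0)) != 0) {
            (void)dbenv->close(dbenv, 0);
            return (ret);
        }
        *dbenvp = dbenv;
        return (0);
    }
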
  • 9. Transactional APIs (2)
    • Transaction calls
      • DB_ENV->txn_begin
      • DB_TXN->commit
      • DB_TXN->abort
      • DB_TXN->prepare
    • New error returns
      • DB_RUNRECOVERY
      • DB_DEADLOCK
    • Utility functions
      • DB_ENV->txn_checkpoint
      • DB_ENV->lock_detect
      • DB_ENV->log_archive
  • 10. What goes in a transaction?
    • All updates must be transaction-protected.
      • DB->put, DBC->c_put
      • DB->del, DBC->c_del
    • All databases that will be accessed inside transactions must be opened/created in a transaction.
    • File system operations
      • DB_ENV->dbremove
      • DB_ENV->dbrename
  • 11. What About Reads?
    • Read operations can go in transactions
      • May not always have to
    • A read that is a part of read-modify-write should be transaction protected.
    • Reads that must be consistent should be transaction-protected.
    • Other reads might use weaker semantics
      • DB_READ_COMMITTED: never see uncommitted data
      • DB_READ_UNCOMMITTED: may see uncommitted data
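
As a sketch of the weaker semantics (assuming dbenv and dbp were opened transactionally), a degree-2 scan might open its cursor with DB_READ_COMMITTED, so the read lock covers only the cursor's current position:

    #include <string.h>
    #include <db.h>

    /* Sketch: a read-committed scan that never holds a read lock
     * behind the cursor, so it blocks writers only briefly. */
    int
    scan_read_committed(DB_ENV *dbenv, DB *dbp)
    {
        DBC *dbc;
        DBT key, data;
        DB_TXN *txn;
        int ret, t_ret;

        if ((ret = dbenv->txn_begin(dbenv, NULL, &txn, 0)) != 0)
            return (ret);
        if ((ret = dbp->cursor(dbp, txn, &dbc, DB_READ_COMMITTED)) != 0)
            goto err;

        memset(&key, 0, sizeof(key));
        memset(&data, 0, sizeof(data));
        while ((ret = dbc->c_get(dbc, &key, &data, DB_NEXT)) == 0)
            ;                          /* ... process the record ... */
        if (ret == DB_NOTFOUND)
            ret = 0;
        if ((t_ret = dbc->c_close(dbc)) != 0 && ret == 0)
            ret = t_ret;
    err:
        if (ret == 0)
            ret = txn->commit(txn, 0);
        else
            (void)txn->abort(txn);
        return (ret);
    }
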
  • 12. Anatomy of an Application
    • Create/Open/Recover environment
    • Open database handles
    • Spawn utility threads
    • Spawn worker threads
      • Begin transaction
      • Do database operations (insert/delete/update/etc.)
      • Commit or abort transaction
      • Do it again if appropriate (main loop)
    • Close database handles
    • Close environment
  • 13. Create/Open/Recover Environment
    • DB_ENV *dbenv;
    • if ((ret = db_env_create(&dbenv, 0)) != 0)
    • … error_handling …
    • /* Configure the environment. */
    • flags = DB_CREATE | DB_INIT_LOG | DB_INIT_TXN | DB_INIT_LOCK | DB_INIT_MPOOL | DB_RECOVER;
    • if ((ret = dbenv->open(dbenv, HOME, flags, 0)) != 0)
    • … error_handling …
  • 14. Create/Open Database
    • Must specify DB_ENV to database open
    • That DB_ENV must have been opened with DB_INIT_TXN
    • Database open must be transactional
      • Specify a DB_TXN in the open
      • Specify DB_AUTO_COMMIT in the open
      • Open environment with DB_AUTO_COMMIT
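
For example (a sketch; the database name and mode are placeholders), a Btree database might be created under DB_AUTO_COMMIT so the open itself is wrapped in its own transaction:

    #include <db.h>

    /* Sketch: transactionally create/open a database.  The environment
     * must have been opened with DB_INIT_TXN. */
    int
    open_txn_db(DB_ENV *dbenv, DB **dbpp, const char *name)
    {
        DB *dbp;
        int ret;

        if ((ret = db_create(&dbp, dbenv, 0)) != 0)
            return (ret);
        if ((ret = dbp->open(dbp, NULL,    /* NULL txn + DB_AUTO_COMMIT */
            name, NULL, DB_BTREE, DB_CREATE | DB_AUTO_COMMIT, 0644)) != 0) {
            (void)dbp->close(dbp, 0);
            return (ret);
        }
        *dbpp = dbp;
        return (0);
    }
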
  • 15. Transaction Operations
    • Begin transaction with DB_ENV->txn_begin()
    • DB_TXN->commit() commits the transaction
      • Releases all locks
      • All log records are written to disk (by default)
    • DB_TXN->abort() aborts the transaction
      • Releases all locks
      • Modifications are rolled back
    • Must close all cursors before commit or abort
  • 16. Transaction Differences
    • Without transactions:
      • ret = dbp->put(dbp, NULL, &key, &data, 0);
      • … error handling …
    • With transactions
      • ret = dbenv->txn_begin(dbenv, NULL, &txn, 0);
      • … error handling …
      • ret = dbp->put(dbp, txn, &key, &data, 0);
      • … error handling …
      • ret = txn->commit(txn, 0);
      • … error handling …
    • Note: if the DB handle was opened transactionally, then we will automatically wrap every modification operation in a transaction (DB_AUTO_COMMIT behavior).
  • 17. Transactional Cursors
    • Specify transaction handle on cursor creation
    • Cursor operations are performed inside that transaction
    • Cursors must be closed before commit or abort
      • DBC *dbc = NULL;
      • DB_TXN *txn = NULL;
      • ret = dbenv->txn_begin(dbenv, NULL, &txn, 0);
      • if (ret != 0)
      • … error handling …
      • ret = dbp->cursor(dbp, txn, &dbc, 0);
      • if (ret != 0)
      • … error handling …
  • 18. Transactional Iteration
    • ret = dbp->cursor(dbp, txn, &dbc, 0);
    • … error handling …
    • while ((ret = dbc->c_get(dbc, &key, &data, DB_NEXT)) == 0)
    • … process record: write new data, delete old data …
    • if (ret == DB_NOTFOUND)
    • ret = 0;
    • if ((temp_ret = dbc->close(dbc)) != 0 && ret == 0)
    • ret = temp_ret;
    • if (ret == 0)
    • ret = txn->commit(txn, 0);
    • else
    • (void) txn->abort(txn);
    • if (ret != 0)
    • … error handling …
  • 19. Nested Transactions
    • Split large transactions into smaller ones
      • Children can be individually aborted.
    • Create nested transaction
      • Pass parent DB_TXN handle to DB_ENV->txn_begin()
    • Parent transaction with active children can only
      • Create more children
      • Commit or abort
    • If parent commits (aborts), all children commit (abort)
    • Child transaction’s locks
      • Won’t conflict with the parent’s locks
      • Will conflict with other children’s (siblings) locks
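
A sketch of the pattern (do_one_update is a hypothetical helper standing in for a group of database operations): the child can abort without dooming the parent, and a committed child becomes durable only if the parent commits.

    #include <db.h>

    int do_one_update(DB_ENV *, DB_TXN *);    /* hypothetical helper */

    /* Sketch: run one unit of work in a child transaction. */
    int
    run_with_child(DB_ENV *dbenv)
    {
        DB_TXN *parent, *child;
        int ret;

        if ((ret = dbenv->txn_begin(dbenv, NULL, &parent, 0)) != 0)
            return (ret);
        /* Passing the parent handle makes the new transaction a child. */
        if (dbenv->txn_begin(dbenv, parent, &child, 0) == 0) {
            if (do_one_update(dbenv, child) == 0)
                (void)child->commit(child, 0);  /* durable only if parent commits */
            else
                (void)child->abort(child);      /* parent can still proceed */
        }
        /* The parent decides independently. */
        return (parent->commit(parent, 0));
    }
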
  • 20. Deadlocks
    • What are deadlocks?
    • Deadlock resolution
    • Deadlock detection
    • Dealing with deadlocks
    • Deadlock avoidance
  • 21. Defining Deadlocks
    • Consider two threads
      • Thread 1: Write A, Write B
      • Thread 2: Write B, Write A
    • Let’s say that the sequence of operations is
      • T1: Write lock A
      • T2: Write lock B
      • T1: Request lock on B (blocks)
      • T2: Request lock on A (blocks)
    • Neither thread can make forward progress
  • 22. Conditions for Deadlock
    • Exclusive access
    • Block when access is unavailable
    • May request additional resources while holding resources
    • The graph of “who is waiting for whom” has a cycle in it.
  • 23. Deadlock Resolution
    • Two techniques:
      • Assume a sufficiently long wait is a deadlock and self-abort
      • Detect and selectively abort
    • Berkeley DB supports both
    • Timeouts
      • Use DB_ENV->set_timeout to specify a maximum length of time that a lock or transaction should block.
      • Return DB_LOCK_NOTGRANTED if the timeout expires before a lock is granted
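
A sketch of a timeout-only configuration, combining set_timeout with the DB_LOCK_EXPIRE policy mentioned in the notes (the values are illustrative and are given in microseconds):

    #include <db.h>

    /* Sketch: break stalls by expiration alone, with no waits-for
     * graph traversal. */
    void
    configure_timeouts(DB_ENV *dbenv)
    {
        /* Give up on any single lock request after 500 ms... */
        (void)dbenv->set_timeout(dbenv, 500000, DB_SET_LOCK_TIMEOUT);
        /* ...and on any transaction after 5 seconds. */
        (void)dbenv->set_timeout(dbenv, 5000000, DB_SET_TXN_TIMEOUT);
        /* Expired timeouts become the only way a deadlock is broken. */
        (void)dbenv->set_lk_detect(dbenv, DB_LOCK_EXPIRE);
    }
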
  • 24. Deadlock Detection
    • Two step process:
      • Traverse waits-for graph looking for cycles
      • If cycle, select victim to abort
    • Looking for cycles
      • Synchronously: checked on each blocking lock
        • DB_ENV->set_lk_detect()
        • + Immediate detection and notification
        • - Higher CPU cost
      • Asynchronously: run detector thread
        • DB_ENV->lock_detect() or db_deadlock utility program
        • - Detected only when the detector thread runs
        • + Lower CPU cost
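
A sketch of the asynchronous variant: a dedicated detector thread (the one-second interval is an arbitrary choice, trading detection latency for CPU) that periodically runs the detector on the environment:

    #include <unistd.h>
    #include <db.h>

    /* Sketch: asynchronous deadlock detection in a utility thread,
     * suitable as a pthread_create() start routine. */
    void *
    deadlock_thread(void *arg)
    {
        DB_ENV *dbenv = arg;
        int rejected;

        for (;;) {
            /* Walk the waits-for graph; abort a random lock request
             * from each cycle found. */
            (void)dbenv->lock_detect(dbenv, 0, DB_LOCK_RANDOM, &rejected);
            sleep(1);
        }
        /* NOTREACHED */
        return (NULL);
    }
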
  • 25. Picking who to abort
    • Ideally want to limit wasted effort.
    • Deadlock resolution policy is configurable
      • Default is DB_LOCK_RANDOM
    • Other options
      • DB_LOCK_MINLOCKS
      • DB_LOCK_MINWRITE (# of write locks, not write ops)
      • DB_LOCK_YOUNGEST
      • DB_LOCK_OLDEST
    • The victim
      • Gets a DB_LOCK_DEADLOCK error return
      • Must immediately (close all cursors and) abort
  • 26. Dealing with Deadlocks
    • Transactional applications must code for deadlocks.
    • Typical response to deadlock is to retry
    • Repeated retries may indicate a serious problem
  • 27. Deadlock Example
    • for (fail = 0; fail < MAXIMUM_RETRY; fail++) {
    • ret = dbenv->txn_begin(dbenv, NULL, &txn, 0);
    • … error_handling …
    • ret = db->put(db, txn, &key, &data, 0);
    • if (ret == 0) {
    • … commit the transaction …
    • return 0; /* Success! */
    • } else { /* DB_LOCK_DEADLOCK or something else */
    • … abort the transaction …
    • }
    • }
    • dbenv->err(dbenv, ret, "Maximum retry limit exceeded");
    • return (ret); /* Retry limit reached; give up. */
  • 28. Deadlock Avoidance
    • Good practices
      • Keep transactions short
      • Read/write databases in the same order in all transactions
      • Limit the number of concurrent writers
      • Use DB_RMW for read-modify-write operations (see the sketch after this slide)
      • DB_READ_UNCOMMITTED for readers
      • DB_READ_COMMITTED for cursors
      • DB_REVSPLITOFF for cyclical Btrees (grow/shrink/grow/…)
    • Debugging
      • db_stat -Co
      • db_stat -Cl
      • “Deadlock Debugging” section of the Reference Guide
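
A sketch of the DB_RMW pattern referenced above, assuming for illustration that the record holds a single int counter: taking the write lock at read time prevents two updaters from deadlocking on the lock upgrade.

    #include <string.h>
    #include <db.h>

    /* Sketch: read-modify-write under one transaction with DB_RMW. */
    int
    increment_counter(DB_ENV *dbenv, DB *dbp, DBT *key)
    {
        DBT data;
        DB_TXN *txn;
        int ret, value;

        if ((ret = dbenv->txn_begin(dbenv, NULL, &txn, 0)) != 0)
            return (ret);
        memset(&data, 0, sizeof(data));
        /* DB_RMW acquires a write lock on the read... */
        if ((ret = dbp->get(dbp, txn, key, &data, DB_RMW)) == 0) {
            memcpy(&value, data.data, sizeof(value));
            value++;
            data.data = &value;
            data.size = sizeof(value);
            /* ...so the put cannot deadlock against another updater. */
            ret = dbp->put(dbp, txn, key, &data, 0);
        }
        if (ret == 0)
            return (txn->commit(txn, 0));
        (void)txn->abort(txn);
        return (ret);
    }
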
  • 29. Subsystems and Utilities
    • Checkpoint
    • Logging
    • Backups
    • Recovery
  • 30. Checkpoints
    • Recall that Berkeley DB maintains a cache of database pages.
    • Checkpoints write dirty pages from the cache into the databases
      • Transaction commit only writes the log
    • Checkpoints:
      • Permit log file reclamation
      • Reduce recovery time
      • Block other operations minimally
    • Checkpoint is I/O intensive
    • Typically, checkpoint in a separate thread
      • Command line db_checkpoint
      • Use DB_ENV->txn_checkpoint()
  • 31. Controlling Checkpoints
    • Log file size dictates unit of log reclamation
      • Use DB_ENV->set_lg_max
    • Checkpoint method and utility are controlled by:
      • Force: checkpoint unconditionally
      • Kbytes: checkpoint if more than kbytes of log written since the last checkpoint
      • Minutes: checkpoint if more than min minutes have passed since the last checkpoint
    • Reduce checkpoint impact by keeping mpool clean
      • Use DB_ENV->memp_trickle
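
A sketch of such a separate thread (the interval and thresholds are illustrative): it checkpoints once 1MB of log or 5 minutes have accumulated, and trickles the cache toward 20% clean to soften checkpoint I/O spikes:

    #include <unistd.h>
    #include <db.h>

    /* Sketch: a checkpoint/trickle utility thread. */
    void *
    checkpoint_thread(void *arg)
    {
        DB_ENV *dbenv = arg;
        int wrote;

        for (;;) {
            /* No-op unless >= 1024KB of log or >= 5 min have passed. */
            (void)dbenv->txn_checkpoint(dbenv, 1024, 5, 0);
            /* Keep at least 20% of the cache clean. */
            (void)dbenv->memp_trickle(dbenv, 20, &wrote);
            sleep(60);
        }
        /* NOTREACHED */
        return (NULL);
    }
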
  • 32. Logging
    • Database modifications are described in log records.
    • Log records describe physical changes to pages.
    • Log records are indexed by log sequence numbers (LSNs).
    • The log is comprised of one or more log files.
    • Logs are named log.NNNNNNNNNN
    • Logs can be maintained on-disk or in-memory
      • On-disk logs stored in DB_HOME directory
      • Or in location configured by DB_ENV->set_lg_dir()
    • Log records flushed by DB_TXN->commit()
    • For best performance and recoverability, place logs and databases on separate disks
  • 33. Logging Configuration
    • DB_ENV->set_lg_max()
      • Set the size of the individual log files
      • On disk logs: default is 10MB
      • In-memory logs: default is 256KB
    • DB_ENV->set_lg_bsize()
      • Set the size of the in-memory log buffer
      • On-disk logs: default is 32KB
      • In-memory logs: default is 1MB
    • DB_ENV->set_lg_regionmax()
      • Set the size of the logging subsystem’s shared region (default is 60KB)
      • Region holds filenames
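
A sketch combining these calls (the directory and sizes are placeholders); per the notes above, set_lg_dir and set_lg_regionmax in particular must be called before DB_ENV->open:

    #include <db.h>

    /* Sketch: tune the logging subsystem before opening the environment. */
    void
    configure_logging(DB_ENV *dbenv)
    {
        (void)dbenv->set_lg_dir(dbenv, "/logs/myapp");     /* separate disk  */
        (void)dbenv->set_lg_max(dbenv, 20 * 1024 * 1024);  /* 20MB log files */
        (void)dbenv->set_lg_bsize(dbenv, 256 * 1024);      /* 256KB buffer   */
        (void)dbenv->set_lg_regionmax(dbenv, 128 * 1024);  /* room for names */
    }
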
  • 34. Backups
    • Full backup: Copy database and log files
      • Standard
        • Pause database operations and perform the backup
        • Creates a snapshot at a known point in time
      • Hot
        • Backup while database(s) are still active
        • Creates a snapshot at a fuzzy point in time
    • Incremental backup
      • Copy log files to be replayed/recovered against a full backup
      • Hot failover
  • 35. Standard Backup
    • Commit or abort all on-going transactions
    • Pause all database writes
    • Force a checkpoint
    • Copy all database files to backup location
      • Find active database files with DB_ENV->log_archive() and the DB_ARCH_DATA flag
      • or the db_archive -s program
    • Copy the last log file to backup location
      • DB_ENV->log_archive() with DB_ARCH_LOG or db_archive -l identifies all of the log files
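
A sketch of the last two steps using DB_ENV->log_archive() (the output handling is illustrative): DB_ARCH_DATA names the database files to copy and DB_ARCH_LOG the log files. The returned array is allocated by malloc, NULL-terminated, and freed by the caller.

    #include <stdio.h>
    #include <stdlib.h>
    #include <db.h>

    /* Sketch: list the files a backup must copy. */
    int
    print_backup_list(DB_ENV *dbenv)
    {
        char **list, **p;
        int ret;

        if ((ret = dbenv->log_archive(dbenv, &list, DB_ARCH_DATA)) != 0)
            return (ret);
        if (list != NULL) {
            for (p = list; *p != NULL; p++)
                printf("database file: %s\n", *p);
            free(list);
        }
        if ((ret = dbenv->log_archive(dbenv, &list, DB_ARCH_LOG)) != 0)
            return (ret);
        if (list != NULL) {
            for (p = list; *p != NULL; p++)
                printf("log file: %s\n", *p);
            free(list);
        }
        return (0);
    }
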
  • 36. Hot Backup
    • Do not stop database operations
    • Copy all database files to backup location
      • Use DB_ENV->log_archive() with DB_ARCH_DATA or db_archive -s
      • Database files may be modified during the backup, so the copy utility must read each database page atomically.
    • Copy all log files to backup location
    • The order of operations must be preserved.
    • Copy databases first and then log files.
  • 37. Log File Removal
    • Remove unused log files to regain disk space
      • db_archive (no options) or
      • DB_ENV->log_archive() to identify unused log files
    • To allow catastrophic recovery
      • Move to backup media; do not simply delete them
    • Never remove all of the transaction logs
    • Do not remove active transaction logs
  • 38. Recovery
    • Write-ahead logging
      • Log records always written to disk before database updates
    • Recovery
      • Database changes validated against the log
        • Redo committed operations not in the database
        • Undo aborted operations that are in the database
      • Removes and re-initializes the environment files
      • Must be single-threaded
      • Other threads of control must wait for recovery
  • 39. When to Run Recovery
    • DB_RUNRECOVERY error
      • Subsequent API calls return DB_RUNRECOVERY
      • Restart application and run recovery
    • Always perform recovery at application startup
      • Prevent spreading corruption
  • 40. Kinds of Recovery
    • Normal recovery assumes no loss of media
      • Reviews log records since the last checkpoint
      • More frequent checkpoints mean faster recovery
      • DB_RECOVER flag when opening the environment or
      • db_recover program
    • Catastrophic recovery does the same, but:
      • Reviews all available log files
      • Catastrophic recovery can take a while
      • DB_RECOVER_FATAL flag or db_recover -c
    • Recovery to a timestamp is available
  • 41. Recovery Procedures
    • Simplest case
      • Re-create the databases, no need for recovery
    • Normal recovery
      • Up-to-date database and log files are available
      • Not used when creating hot backups
    • Catastrophic recovery
      • Database or log files destroyed or corrupted
      • Normal recovery fails for any reason
      • Used when creating hot backups
  • 42. Catastrophic Recovery
    • Often performed in a directory other than where the database files previously lived
    • Copy the most recent snapshot of the database and log files to the recovery directory
    • Copy any newer log files into the recovery directory
    • Log files must be recovered in sequential order
    • Run db_recover -c
      • Or call DB_ENV->open() with DB_RECOVER_FATAL
  • 43. Maintaining a Hot Standby
    • db_archive -s to identify all the active database files
    • Copy identified database files to the backup/failover directory
    • Archive all existing log files from the backup directory
    • Use db_archive (no option) in the active environment to identify the inactive logs and move them to the backup directory
    • Use db_archive -l in the active environment to identify all active log files and copy these to the hot failover directory
    • Run db_recover -c against the hot failover directory to catastrophically recover the environment
    • Steps 2-5 can be repeated as often as desired
    • If step 1 is repeated, steps 2-5 must follow to ensure consistency
  • 44. Programming Practices
    • Disk guarantees
    • Application structures
    • Locking
    • Special errors
  • 45. Disk I/O Integrity Guarantees
    • Disk writes are performed a database page at a time
    • Berkeley DB assumes pages are atomically written
    • If the OS requires multiple writes per page
      • A partial page can be written, possibly corrupting the database
    • To guard against this:
      • Set database page size equal to file system page size OR
      • Configure the environment to perform checksums*
      • Use DB->set_flags() with the DB_CHKSUM flag
      • If corruption is detected run catastrophic recovery
      • * Checksums can be computationally expensive
  • 46. Disk I/O Integrity Guarantees
    • Berkeley DB relies on the integrity of the underlying software and hardware interfaces
      • Berkeley DB assumes POSIX compliance, especially with regard to disk I/O
      • Avoid hardware that performs partial writes or “lies” about writes (acknowledges them when the data hits the disk cache and does not guarantee that the cache can be written under all circumstances).
      • Best practice: store database files and log files on physically separate disks
  • 47. Transactional Applications
    • Must handle recovery gracefully
    • Always run recovery, normally on startup, before other threads/processes join the environment
    • Use DB_REGISTER to indicate if recovery is necessary.
      • All DB_ENV handles must be opened with DB_REGISTER for this to work.
      • If DB_REGISTER and DB_RECOVER are both set, recovery will only be run if necessary.
    • Use DB_ENV->failchk() after any thread/process failure to determine if it is safe to continue.
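
A sketch of an open that combines the two flags, with a failchk() call as it might appear once a failure is noticed (failchk also requires the DB_ENV->set_thread_id() and set_isalive() callbacks, not shown here):

    #include <db.h>

    /* Sketch: DB_REGISTER + DB_RECOVER runs recovery only if a prior
     * process died inside the environment; failchk() (BDB 4.4+) checks
     * whether it is safe to continue after a thread/process failure. */
    int
    join_env(DB_ENV *dbenv, const char *home)
    {
        int ret;

        if ((ret = dbenv->open(dbenv, home, DB_CREATE | DB_INIT_LOG |
            DB_INIT_LOCK | DB_INIT_MPOOL | DB_INIT_TXN |
            DB_REGISTER | DB_RECOVER, 0)) != 0)
            return (ret);

        /* Later, after some thread of control fails: */
        switch (ret = dbenv->failchk(dbenv, 0)) {
        case 0:                /* safe to continue normally */
            break;
        case DB_RUNRECOVERY:   /* crash inside the BDB API: shut down
                                * all threads, run recovery, restart */
            break;
        }
        return (ret);
    }
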
  • 48. Single-process Applications
    • A single process, multi-threaded application
      • First thread opens the database environment
        • Run recovery at this time
      • First thread opens databases
      • First thread spawns subsequent threads
      • Subsequent threads share DB_ENV and DB handles
      • Last thread to exit closes the DB_ENV and DB handles
    • If any thread exits abnormally
      • Some thread calls DB_ENV->failchk()
      • Return value dictates action: continue or run recovery
  • 49. Multi-process Applications #1
    • Order of process start up must be controlled
      • Recovery must happen before anything else
      • Recovery must be single-threaded
    • Processes themselves may be threaded
    • Processes maintain their own DB_ENV and DB handles
    • If thread of control exits abnormally
      • All threads should exit database environment
      • Recovery must be run
    • Automate with DB_REGISTER flag in DB_ENV->open()
  • 50. Multi-process Applications #2
    • Often easiest to have a single monitoring process
    • Monitor is responsible for running recovery
    • Monitor starts other processes after recovery
    • Monitor is on a timer or waits for other processes to die
      • At which time it uses failchk() mechanism
      • If failchk() returns DB_RUNRECOVERY:
        • Monitor runs recovery
        • Other processes die when the environment is recreated
        • Monitor kills stubborn or slow processes
        • Monitor re-starts the system
  • 51. Locking in Transactions
    • Initialize locking subsystem
      • Use the flag DB_INIT_LOCK to DB_ENV->open
    • Pages or records are read- or write-locked
    • Conflicting lock requests wait until object available
    • Locking within transactions
      • Locks are released at the end of the transaction
      • Earlier if DB_READ_COMMITTED or DB_READ_UNCOMMITTED is specified
    • Locking without transactions
      • Locks are released at the end of the operation
  • 52. Additional Locking Options
    • Use lock timeouts instead of or in addition to regular deadlock detection
      • DB_LOCK_NOTGRANTED
        • (optional) is returned when a lock times out
      • DB_LOCK_DEADLOCK
        • returned when a lock times out
        • or the transaction is selected for deadlock resolution
    • Accuracy of timeout depends on how often deadlock detection is performed
  • 53. Special Errors
    • Catastrophic error: should never happen
    • Subsequent calls to the API will return the same error
    • DB->set_paniccall() or DB_ENV->set_paniccall()
      • Callback function when catastrophic error occurs
    • Exit and restart application
      • Run recovery if you are using an environment
  • 54. End of Part III