1. Data Integration, Concluded / Physical Data Storage
   Zachary G. Ives, University of Pennsylvania
   CIS 550 – Database & Information Systems, June 21, 2010
2. An Alternate Approach: The Information Manifold (Levy et al.)
   - When you integrate something, you have some conceptual model of the
     integrated domain
     - Define that as a basic frame of reference, everything else as a view
       over it
     - "Local as View"
   - May have overlapping/incomplete sources
     - Define each source as the subset of a query over the mediated schema
     - We can use selection or join predicates to specify that a source
       contains a range of values:
       ComputerBooks(…) ⊆ Books(Title, …, Subj), Subj = "Computers"
3. The Local-as-View Model
   - The basic model is the following:
     - "Local" sources are views over the mediated schema
     - Sources have the data – the mediated schema is virtual
     - Sources may not have all the data from the domain – the "open-world
       assumption"
   - The system must use the sources (views) to answer queries over the
     mediated schema
4. Query Answering
   - Assumption: conjunctive queries, set semantics
   - Suppose we have a mediated schema: author(aID, isbn, year),
     book(isbn, title, publisher)
   - Suppose we have the query:
       q(a, t) :- author(a, i, _), book(i, t, p), t = "DB2 UDB"
     and sources:
       s1(a, t) ⊆ author(a, i, _), book(i, t, p), t = "DB2 UDB"
       …
       s5(a, t, p) ⊆ author(a, i, _), book(i, t, p), p = "SAMS"
   - We want to compose the query with the source mappings – but they're in
     the wrong direction!
   - Yet: everything in s1, s5 is an answer to the query!
5. Answering Queries Using Views
   - Numerous recently developed algorithms for this:
     - Inverse rules [Duschka et al.]
     - Bucket algorithm [Levy et al.]
     - MiniCon [Pottinger & Halevy]
     - Also related: "chase and backchase" [Popa, Tannen, Deutsch]
   - Requires conjunctive queries
6. Inverse Rules
   - Reverse the implication in each rule (i1, p1, i5 are Skolem functions
     standing in for unknown values):
       author(a, i1(a,t), _) :- s1(a,t)
       book(i1(a,t), t, p1(a,t)) :- s1(a,t), t = "DB2 UDB"
       …
       author(a, i5(a,t,p), _) :- s5(a, t, p)
       book(i5(a,t,p), t, p) :- s5(a, t, p), p = "SAMS"
   - Then unfold:
       q(a, t) :- author(a, i, _), book(i, t, p), t = "DB2 UDB"
       q(a, t) :- s1(a, x), i = i1(a,x), s1(a,t), i = i1(a,t), p = p1(a,t),
                  t = "DB2 UDB"
       q(a, t) :- author(a, i, _), book(i, t, p), t = "DB2 UDB"
       q(a, t) :- s5(a, x, y), i = i5(a,x,y), s5(a,t,p), i = i5(a,t,p),
                  p = "SAMS", t = "DB2 UDB"
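The inverse-rule idea above can be sketched in a few lines of Python: apply each inverse rule to the source tuples, materializing mediated-schema facts with Skolem terms for the unknown isbn/publisher values, then evaluate the query over those facts. This is an illustrative sketch (the helper names are invented, and the `year` attribute of `author` is dropped since the rules bind it to `_`), not the algorithm of any particular system.

```python
def skolem(fn, *args):
    """Represent a Skolem term like i1(a, t) as a hashable tuple."""
    return (fn,) + args

def apply_inverse_rules(s1, s5):
    """Turn source tuples into mediated-schema facts via the inverse rules."""
    author, book = set(), set()
    for (a, t) in s1:                      # s1(a, t): authors of "DB2 UDB"
        i = skolem("i1", a, t)
        author.add((a, i))                 # year attribute elided ("_")
        book.add((i, t, skolem("p1", a, t)))
    for (a, t, p) in s5:                   # s5(a, t, p): SAMS books
        i = skolem("i5", a, t, p)
        author.add((a, i))
        book.add((i, t, p))
    return author, book

def q(author, book):
    """q(a, t) :- author(a, i), book(i, t, p), t = "DB2 UDB"."""
    return {(a, t) for (a, i) in author
                   for (i2, t, p) in book
                   if i == i2 and t == "DB2 UDB"}

# Hypothetical source contents, just to exercise the rules:
author, book = apply_inverse_rules(
    s1={("Chamberlin", "DB2 UDB")},
    s5={("Sanders", "DB2 UDB", "SAMS")})
print(sorted(q(author, book)))
```

Both source tuples survive into the answer, matching the slide's point that everything in s1 and s5 answers the query.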
7. Summary of Data Integration
   - Local-as-view integration has replaced global-as-view as the standard
     - More robust way of defining mediated schemas and sources
     - Mediated schema is clearly defined, less likely to change
     - Sources can be described more accurately
   - Methods exist for query reformulation, including inverse rules
   - Integration requires standardization on a single schema
     - Can be hard to get consensus
     - Today we have peer-to-peer data integration, e.g., Piazza [Halevy et
       al.], Orchestra [Ives et al.], Hyperion [Miller et al.]
   - Some other aspects of integration were addressed in related papers
     - Overlap between sources; coverage of data at sources
     - Semi-automated creation of mappings and wrappers
   - Data integration capabilities in commercial products: Oracle Fusion,
     IBM's WebSphere Information Integrator, numerous packages from
     middleware companies; MS BizTalk Mapper, IBM Rational Data Architect
8. Performance: What Governs It?
   - Speed of the machine – of course!
   - But also many software-controlled factors that we must understand:
     - Caching and buffer management
     - How the data is stored – physical layout, partitioning
     - Auxiliary structures – indices
     - Locking and concurrency control (we'll talk about this later)
     - Different algorithms for operations – query execution
     - Different orderings for execution – query optimization
     - Reuse of materialized views, merging of query subexpressions –
       answering queries using views; multi-query optimization
9. Our General Emphasis
   - Goal: cover basic principles that are applied throughout database
     system design
   - Use the appropriate strategy in the appropriate place
     - Every (reasonable) algorithm is good somewhere
   - … And a corollary: database people reinvent a lot of things and add
     minor tweaks
10. Storing Tuples in Pages
    - Tuples
      - Many possible layouts
        - Dynamic vs. fixed lengths
        - Pointers and lengths vs. slots
      - Tuples grow down, directories grow up
      - Identity and relocation
    - Objects and XML are harder
      - Horizontal, path, vertical partitioning
      - Generally no algorithmic way of deciding
    - Generally want to leave some space for insertions
    (figure: tuples t1, t2, t3 laid out in a page)
11. Alternatives for Organizing Files
    - Many alternatives, each ideal for some situations and poor for others:
      - Heap files: for full file scans or frequent updates
        - Data unordered
        - Write new data at end
      - Sorted files: if retrieved in sort order, or for range queries
        - Need an external sort or an index to keep sorted
      - Hashed files: if selecting on equality
        - Collection of buckets with primary & overflow pages
        - Hashing function over search key attributes
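The hashed-file layout in the last bullet can be sketched concretely: each bucket is a chain of fixed-capacity pages (one primary page plus overflow pages as needed), and an equality search reads only the one chain its key hashes to. The class/constant names and the page capacity are illustrative assumptions, not a real page format.

```python
PAGE_CAPACITY = 4          # records per page (assumed, tiny for illustration)

class Page:
    def __init__(self):
        self.records = []
        self.overflow = None   # next overflow page in this bucket's chain

class HashedFile:
    def __init__(self, n_buckets=8):
        self.buckets = [Page() for _ in range(n_buckets)]

    def _bucket(self, key):
        """Hashing function over the search key picks the primary page."""
        return self.buckets[hash(key) % len(self.buckets)]

    def insert(self, key, record):
        page = self._bucket(key)
        while len(page.records) >= PAGE_CAPACITY:   # walk to end of chain
            if page.overflow is None:
                page.overflow = Page()              # allocate overflow page
            page = page.overflow
        page.records.append((key, record))

    def search(self, key):
        """Equality search: read only one bucket chain, not the whole file."""
        page, out = self._bucket(key), []
        while page is not None:
            out += [r for (k, r) in page.records if k == key]
            page = page.overflow
        return out

f = HashedFile()
for i in range(20):
    f.insert(i % 3, f"rec{i}")
print(len(f.search(0)))   # 7 records have key 0
```

Note how the chain makes inserts cheap but lets lookup cost degrade as overflow pages accumulate, which is why the cost model on a later slide assumes no overflow buckets.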
12. Model for Analyzing Access Costs
    - We ignore CPU costs, for simplicity:
      - p(T): the number of data pages in table T
      - r(T): the number of records in table T
      - D: (average) time to read or write a disk page
    - Measuring the number of page I/Os ignores the gains of pre-fetching
      blocks of pages; thus, I/O cost is only approximated.
    - Average-case analysis, based on several simplistic assumptions.
    - Good enough to show the overall trends!
13. Approximate Cost of Operations
    Several assumptions underlie these (rough) estimates!

    Operation         Heap File      Sorted File                   Hashed File*
    Scan all recs     p(T) D         p(T) D                        1.25 p(T) D
    Equality search   p(T) D / 2     D log2 p(T)                   D
    Range search      p(T) D         D log2 p(T) + (# match pages) 1.25 p(T) D
    Insert            2D             Search + p(T) D               2D
    Delete            Search + D     Search + p(T) D               2D

    * No overflow buckets, 80% page occupancy
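The table can also be written as formulas, one helper per file organization. This is a sketch of the slide's estimates under its stated assumptions (the function names are invented; "Search" for the heap file means a half-file scan, and the sorted-file range cost multiplies the matching pages by D):

```python
from math import log2

def heap_costs(pT, D):
    return {"scan": pT * D,
            "equality": pT * D / 2,       # expect to scan half the file
            "range": pT * D,              # unordered: must scan everything
            "insert": 2 * D,              # read + write the last page
            "delete": pT * D / 2 + D}     # search + write one page

def sorted_costs(pT, D, match_pages=1):
    search = D * log2(pT)                 # binary search over the pages
    return {"scan": pT * D,
            "equality": search,
            "range": search + match_pages * D,
            "insert": search + pT * D,    # shift roughly half the file
            "delete": search + pT * D}

def hashed_costs(pT, D):
    return {"scan": 1.25 * pT * D,        # 1.25 factor: 80% occupancy
            "equality": D,                # read the one bucket page
            "range": 1.25 * pT * D,       # hashing doesn't help ranges
            "insert": 2 * D,
            "delete": 2 * D}

# e.g., a 100-page table with D = 1 time unit per page I/O:
print(heap_costs(100, 1)["equality"], hashed_costs(100, 1)["equality"])
```

Plugging in a few sizes makes the trend on the slide visible: hashing wins equality search outright, sorting wins ranges, and the heap file only wins when you scan or update constantly.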
14. Speeding Operations over Data
    - Recall that we're interested in how to get good performance in
      answering queries
    - The first consideration is how the data is made accessible to the DBMS
      - We saw different arrangements of the tables: heap (unsorted) files,
        sorted files, and hashed files
    - Today we look further at 3 core concepts that are used to efficiently
      support sort- and hash-based access to data:
      - Indexing
      - Sorting
      - Hashing
15. Technique I: Indexing
    - An index on a file speeds up selections on the search key attributes
      for the index (trading space for speed).
      - Any subset of the fields of a relation can be the search key for an
        index on the relation.
      - The search key is not the same as a key (a minimal set of fields
        that uniquely identifies a record in a relation).
    - An index contains a collection of data entries, and supports efficient
      retrieval of all data entries k* with a given key value k.
    - Generally the entries of an index are some form of node in a tree –
      but should the index contain the data, or pointers to the data?
16. Alternatives for Data Entry k* in Index
    - Three alternatives for where to put the data:
      1. The data record itself, stored wherever key value k appears
         - Clustered ⇒ fast lookup
         - Index is large; only one can exist
      2. <k, rid of data record with search key value k>, OR
      3. <k, list of rids of data records with search key k>
         - Can have secondary indices
         - Smaller index may mean faster lookup
         - Often not clustered ⇒ more expensive to use
    - The choice of alternative for data entries is orthogonal to the
      indexing technique used to locate data entries with a given key
      value k
    (rid = row id, conceptually a pointer)
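Alternatives (2) and (3) can be made concrete with a toy secondary index: data entries map a search key to rids that point into a separate heap file. The names and record format here are illustrative, and a Python dict stands in for what would really be a tree of index pages.

```python
# Heap file of records; the rid is simply the record's position (a stand-in
# for a real <page, slot> pair).
heap_file = ["(Bob, 25)", "(Ann, 30)", "(Bob, 41)"]

# Secondary index on the name field: one entry per key with a *list* of rids
# (Alternative 3); Alternative 2 would instead store one <k, rid> pair per
# matching record.
index = {}
for rid, rec in enumerate(heap_file):
    name = rec.strip("()").split(",")[0]
    index.setdefault(name, []).append(rid)

def lookup(k):
    """Fetch all records with search key k by following rids into the heap."""
    return [heap_file[rid] for rid in index.get(k, [])]

print(lookup("Bob"))   # ['(Bob, 25)', '(Bob, 41)']
```

Because the index holds only keys and rids, it is much smaller than Alternative (1) (storing whole records in the index), but each match costs an extra fetch into the heap file when the index is unclustered.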
17. Classes of Indices
    - Primary vs. secondary: primary has the primary key
      - Most DBMSs automatically generate a primary index when you define a
        primary key
    - Clustered vs. unclustered: order of records and index are
      approximately the same
      - Alternative 1 implies clustered, but not vice versa
      - A file can be clustered on at most one search key
    - Dense vs. sparse: dense has an index entry per data value; sparse may
      "skip" some
      - Alternative 1 always leads to a dense index [Why?]
      - Every sparse index is clustered!
      - Sparse indexes are smaller; however, some useful optimizations are
        based on dense indexes
18. Clustered vs. Unclustered Index
    - Suppose index Alternative (2) is used, with pointers to records stored
      in a heap file
      - Perhaps initially sort the data file, leaving some gaps
      - Inserts may require overflow pages
    - Consider how these strategies affect disk caching and access
    (figure: clustered index – data entries direct search to data records
    stored in the same order; unclustered index – data entries point to
    records scattered through the data file)
19. B+ Tree: The DB World's Favorite Index
    - Insert/delete at log_F N cost
      - (F = fanout, N = # leaf pages)
      - Keeps the tree height-balanced
    - Minimum 50% occupancy (except for the root).
    - Each node contains d ≤ m ≤ 2d entries; d is called the order of the
      tree.
    - Supports equality and range searches efficiently.
    (figure: index entries above support direct search; data entries at the
    leaves form the "sequence set")
20. Example B+ Tree
    - Search begins at the root, and key comparisons direct it to a leaf.
    - Search for 5*, 15*, all data entries >= 24*, ...
    - Based on the search for 15*, we know it is not in the tree!
    (figure: root keys 13, 17, 24, 30; leaves 2* 3* 5* 7* | 14* 16* |
    19* 20* 22* | 24* 27* 29* | 33* 34* 38* 39*)
21. B+ Trees in Practice
    - Typical order: 100. Typical fill factor: 67%.
      - Average fanout = 133
    - Typical capacities:
      - Height 4: 133^4 = 312,900,721 records
      - Height 3: 133^3 = 2,352,637 records
    - Can often hold the top levels of the tree in the buffer pool:
      - Level 1 = 1 page = 8 KB
      - Level 2 = 133 pages = 1 MB
      - Level 3 = 17,689 pages = 133 MB
      - Level 4 = 2,352,637 pages = 18 GB
    - "Nearly O(1)" access time to data – for equality or range queries!
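The slide's capacity arithmetic is easy to check: with average fanout F = 133, a tree of height h can index about F^h records, and level i of the tree holds 133^(i-1) pages of 8 KB each (the slide's byte sizes are rounded).

```python
F = 133  # average fanout at 67% fill of an order-100 node

print(f"height 3: {F**3:,} records")   # 2,352,637
print(f"height 4: {F**4:,} records")   # 312,900,721

for level in range(1, 5):
    pages = F ** (level - 1)
    print(f"level {level}: {pages:,} pages ≈ {pages * 8 / 1024:,.1f} MB")
```

Levels 1-3 together fit in well under 200 MB, which is why a warm buffer pool usually answers an equality search with a single disk I/O for the leaf.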
22. Inserting Data into a B+ Tree
    - Find the correct leaf L.
    - Put the data entry onto L.
      - If L has enough space, done!
      - Else, must split L (into L and a new node L2)
        - Redistribute entries evenly, copy up the middle key.
        - Insert an index entry pointing to L2 into the parent of L.
    - This can happen recursively
      - To split an index node, redistribute entries evenly, but push up the
        middle key. (Contrast with leaf splits.)
    - Splits "grow" the tree; a root split increases its height.
      - Tree growth: gets wider, or one level taller at the top.
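The leaf-split step can be shown in isolation: a full leaf splits into two, entries are redistributed evenly, and the middle key is copied up (it must remain in the leaf so its record stays reachable through the sequence set). This is a sketch of just that one step; parent insertion and the recursive index-node split are elided.

```python
def split_leaf(entries, new_entry):
    """Insert new_entry into a full leaf; return (left, right, copy_up_key)."""
    all_entries = sorted(entries + [new_entry])
    mid = len(all_entries) // 2
    left, right = all_entries[:mid], all_entries[mid:]
    # The first key of the right leaf is *copied* up into the parent —
    # unlike an index-node split, where the middle key is pushed up and
    # removed from the node.
    return left, right, right[0]

# The slide's example: inserting 8* into the full leaf 2* 3* 5* 7*
left, right, key = split_leaf([2, 3, 5, 7], 8)
print(left, right, key)   # [2, 3] [5, 7, 8] 5
```

The result matches the next slide's picture: leaves 2* 3* and 5* 7* 8*, with 5 as the entry handed to the parent.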
23. Inserting 8* Example: Copy Up
    We want to insert 8* into the leaf 2* 3* 5* 7*; there is no room, so we
    split and copy up: the leaves become 2* 3* and 5* 7* 8*, and 5 is the
    entry to be inserted in the parent node. (Note that 5 is copied up and
    continues to appear in the leaf.)
    (figure: the example tree with root keys 13, 17, 24, 30 before the
    parent insertion)
24. Inserting 8* Example: Push Up
    The parent index node (keys 13, 17, 24, 30) has no room for 5, so we
    need to split the node and push up: 17 becomes the entry to be inserted
    in the parent (here, a new root), with keys 5, 13 in the left node and
    24, 30 in the right node. (Note that 17 is pushed up and appears only
    once in the index. Contrast this with a leaf split.)
25. Deleting Data from a B+ Tree
    - Start at the root, find the leaf L where the entry belongs.
    - Remove the entry.
      - If L is at least half-full, done!
      - If L has only d-1 entries,
        - Try to redistribute, borrowing from a sibling (an adjacent node
          with the same parent as L).
        - If redistribution fails, merge L and the sibling.
    - If a merge occurred, must delete the entry (pointing to L or the
      sibling) from the parent of L.
    - Merges can propagate to the root, decreasing the height.
26. B+ Tree Summary
    - B+ trees and other indices are ideal for range searches, good for
      equality searches.
      - Inserts/deletes leave the tree height-balanced; log_F N cost.
      - High fanout (F) means depth is rarely more than 3 or 4.
      - Almost always better than maintaining a sorted file.
      - Typically, 67% occupancy on average.
      - Note: the order (d) concept is replaced by a physical space
        criterion in practice ("at least half-full").
        - Records may be variable-sized
        - Index pages typically hold more entries than leaves
