Balancing Replication and Partitioning in a Distributed Java Database

This talk, presented at JavaOne 2011, describes the ODC, a distributed, in-memory database built in Java that holds objects in a normalized form in a way that alleviates the performance degradation traditionally associated with joins in shared-nothing architectures. The presentation describes the two patterns that lie at the core of this model. The first is an adaptation of the Star Schema model, used to hold data either replicated or partitioned depending on whether the data is a fact or a dimension. In the second pattern, the data store tracks arcs on the object graph to ensure that only the minimum amount of data is replicated. Through these mechanisms, almost any join can be performed across the various entities stored in the grid, without the need for key shipping or iterative wire calls.

  • Big data sets are held distributed and only joined on the grid to collocated objects. Small data sets are held in replicated caches so they can be joined in process (only ‘active’ data is held)

    1. 1. Traditional disk-oriented database architecture is showing its age. In-memory databases offer significant gains in performance as all data is freely available; there is no need to page to and from disk. But three issues have stunted their uptake: address spaces only being large enough for a subset of a typical user's data, the 'one more bit' problem, and durability. Distributed in-memory databases solve these three problems, but at the price of losing the single address space. This makes joins a problem: when data must be joined across multiple machines, performance degradation is inevitable. Snowflake Schemas allow us to mix Partitioning and Replication so joins never hit the wire. But this model only goes so far. "Connected Replication" takes us a step further, allowing us to make the best possible use of replication.
    2. 2. The lay of the land: the main architectural constructs in the database industry
    3. 3. Shared Disk
    4. 4. (Latency chart spanning ms, μs, ns and ps: cross-continental round trip; 1MB over disk/network; cross-network round trip; 1MB from main memory; main memory ref; L2 cache ref; L1 cache ref.) * An L1 ref is about 2 clock cycles or 0.7ns. This is the time it takes light to travel 20cm.
    5. 5. Distributed Cache
    6. 6. Taken from "OLTP Through the Looking Glass, and What We Found There", Harizopoulos et al.
    7. 7. (Taxonomy: Shared Nothing (Teradata, Vertica, Greenplum…). A Regular Database (Oracle, Sybase, MySql) with the disk dropped becomes an In-Memory Database (Exasol, VoltDB, Times Ten, HSQL, KDB, Hana); distributed, that becomes a Shared-Nothing In-Memory Database (ODC). Distributed Caching: Coherence, Gemfire, Gigaspaces.)
    8. 8. Distributed Architecture: Simplify the Contract. Stick to RAM.
    9. 9. 450 processes, 2TB of RAM, Oracle Coherence, Messaging (Topic Based) as a system of record (persistence)
    10. 10. (Architecture diagram: Access Layer (Java client API), Query Layer, Data Layer (Transactions, Mtms, Cashflows), Persistence Layer.)
    11. 11. Indexing, Partitioning, Replication
    12. 12. But your storage is limited by the memory on a node
    13. 13. Scalable storage, bandwidth and processing: keys are spread across the nodes in ranges (e.g. Fs-Fz on one node, Xa-Yd on another).
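The key-spreading idea on slide 13 can be sketched with a simple hash partitioner. This is a minimal illustration, not ODC's actual routing scheme (real grids such as Coherence use a fixed partition set with rebalancing); the class name and node count are assumptions:

```java
import java.util.Map;
import java.util.TreeMap;

// Minimal sketch: route each key to one of N storage nodes by hashing,
// so storage, bandwidth and processing all scale with the node count.
public class Partitioner {
    private final int nodeCount;

    public Partitioner(int nodeCount) { this.nodeCount = nodeCount; }

    // Deterministic: the same key always lands on the same node.
    public int nodeFor(String key) {
        return Math.floorMod(key.hashCode(), nodeCount);
    }

    public static void main(String[] args) {
        Partitioner p = new Partitioner(4);
        Map<Integer, Integer> spread = new TreeMap<>();
        for (int i = 0; i < 1000; i++) {
            spread.merge(p.nodeFor("trade-" + i), 1, Integer::sum);
        }
        System.out.println(spread); // keys spread across the 4 nodes
    }
}
```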
    14. 14. (Diagram: Trade, Trader and Party held at Versions 1 through 4.) …and you need versioning to do MVCC
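Slide 14's point is that MVCC requires keeping multiple versions of each object. A minimal sketch of the idea, assuming a per-key version history (the class and method names are hypothetical, not ODC's API): a reader pinned to a snapshot version sees the newest version at or below that snapshot.

```java
import java.util.Map;
import java.util.NavigableMap;
import java.util.concurrent.ConcurrentSkipListMap;

// Sketch: keep every version of a value so a reader pinned to a
// transaction snapshot sees a consistent view (the core of MVCC).
public class VersionedCache {
    private final NavigableMap<Long, String> versions = new ConcurrentSkipListMap<>();

    public void put(long version, String value) { versions.put(version, value); }

    // Read the newest version visible at the given snapshot.
    public String readAt(long snapshotVersion) {
        Map.Entry<Long, String> entry = versions.floorEntry(snapshotVersion);
        return entry == null ? null : entry.getValue();
    }

    public static void main(String[] args) {
        VersionedCache trade = new VersionedCache();
        trade.put(1, "Trade v1");
        trade.put(3, "Trade v3");
        // A reader at snapshot 2 still sees version 1, unaffected by the later write.
        System.out.println(trade.readAt(2));
    }
}
```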
    15. 15. (Diagram: full copies of Trade, Trader and Party replicated on every node.)
    16. 16. So better to use partitioning, spreading data around the cluster.
    17. 17. (Diagram: Trade, Trader and Party objects partitioned across the cluster.)
    18. 18. (Diagram: Trade, Trader and Party objects partitioned across the cluster.)
    19. 19. This is what using Snowflake Schemas and the Connected Replication pattern is all about!
    20. 20. Crosscutting Keys Common Keys
    21. 21. Trader and Party: Replicated. Trade: Partitioned.
    22. 22. (Bar chart of entity row counts, 0 to 150,000,000. Facts => Big, common keys: Transaction, Cashflows, Legs, Valuation Legs, Valuations, Transaction Mapping, Cashflow Mapping. Dimensions => Small, crosscutting keys: Party Alias, Parties, Ledger Book, Source Book, Cost Centre, Product, Risk Organisation Unit, Business Unit, HCS Entity, Set of Books.)
    23. 23. Coherence's KeyAssociation gives us this: Trades and MTMs share a common key.
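Coherence's real mechanism here is the com.tangosol.net.cache.KeyAssociation interface: partition routing uses the key's *associated* key rather than the key itself. The self-contained sketch below mimics that contract with a stand-in interface (the record types and partition count are illustrative), so a Trade and its MTMs always land in the same partition and the join never hits the wire:

```java
// Sketch of Coherence-style key association. Coherence's actual contract is
// com.tangosol.net.cache.KeyAssociation (one method, getAssociatedKey());
// the stand-in interface below just mirrors it for a runnable example.
public class KeyAssociationDemo {
    interface Associated { Object getAssociatedKey(); }

    record TradeKey(String tradeId) implements Associated {
        public Object getAssociatedKey() { return tradeId; }
    }

    record MtmKey(String mtmId, String tradeId) implements Associated {
        public Object getAssociatedKey() { return tradeId; } // route by parent trade
    }

    // Routing hashes the associated key, not the key itself.
    static int partitionFor(Associated key, int partitions) {
        return Math.floorMod(key.getAssociatedKey().hashCode(), partitions);
    }

    public static void main(String[] args) {
        TradeKey trade = new TradeKey("T42");
        MtmKey mtm = new MtmKey("M1", "T42");
        // Both keys collocate, so a Trade/MTM join is purely local. Prints: true
        System.out.println(partitionFor(trade, 13) == partitionFor(mtm, 13));
    }
}
```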
    24. 24. Trader and Party: Replicated. Trade: Partitioned.
    25. 25. (Diagram: Query Layer holding Trader, Party and Trade; Data Layer holding Transactions, Mtms and Cashflows in Fact Storage (Partitioned).)
    26. 26. Dimensions (replicate). Facts (distribute/partition): Transactions, Mtms, Cashflows in Fact Storage (Partitioned).
    27. 27. (Bar chart, 0 to 150,000,000. Facts => Big => Distribute: Transaction, Cashflows, Legs, Valuation Legs, Valuations, Transaction Mapping, Cashflow Mapping. Dimensions => Small => Replicate: Party Alias, Parties, Ledger Book, Source Book, Cost Centre, Product, Risk Organisation Unit, Business Unit, HCS Entity, Set of Books.)
    28. 28. We use a variant on a Snowflake Schema to partition the big stuff, which shares the same key, and replicate the small stuff, which has crosscutting keys.
    29. 29. Replicate / Distribute
    30. 30. Select Transaction, MTM, ReferenceData From MTM, Transaction, Ref Where Cost Centre = ‘CC1’
    31. 31. Select Transaction, MTM, ReferenceData From MTM, Transaction, Ref Where Cost Centre = ‘CC1’. LBs[]=getLedgerBooksFor(CC1); SBs[]=getSourceBooksFor(LBs[]). So we have all the bottom-level dimensions needed to query facts. (Transactions, Mtms, Cashflows: Partitioned.)
    32. 32. Select Transaction, MTM, ReferenceData From MTM, Transaction, Ref Where Cost Centre = ‘CC1’. LBs[]=getLedgerBooksFor(CC1); SBs[]=getSourceBooksFor(LBs[]). Then get all Transactions and MTMs (cluster-side join) for the passed Source Books. (Transactions, Mtms, Cashflows: Partitioned.)
    33. 33. Select Transaction, MTM, ReferenceData From MTM, Transaction, Ref Where Cost Centre = ‘CC1’. LBs[]=getLedgerBooksFor(CC1); SBs[]=getSourceBooksFor(LBs[]), giving all the bottom-level dimensions needed to query facts. Get all Transactions and MTMs (cluster-side join) for the passed Source Books, then populate the raw facts (Transactions) with dimension data before returning to the client.
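Slides 30-33 walk through the two-step query: first the replicated dimensions are navigated in-process (Cost Centre to Ledger Books to Source Books), then a single call against the partitioned facts retrieves the Transactions (with MTMs joined cluster-side against collocated entries). A minimal in-memory sketch of that flow; the maps stand in for the replicated dimension caches and the partitioned fact store, and all data is illustrative:

```java
import java.util.List;
import java.util.Map;
import java.util.stream.Collectors;

// Sketch of the two-step query: walk the replicated dimensions locally to
// find the bottom-level keys, then make one call against the partitioned
// facts, where Transactions and their MTMs are collocated.
public class SnowflakeQuery {
    // Replicated dimension caches: resolved in-process, no wire calls.
    static final Map<String, List<String>> ledgerBooksByCostCentre =
        Map.of("CC1", List.of("LB1", "LB2"));
    static final Map<String, List<String>> sourceBooksByLedgerBook =
        Map.of("LB1", List.of("SB1"), "LB2", List.of("SB2"));

    // Partitioned fact store: transactions keyed by source book.
    static final Map<String, List<String>> transactionsBySourceBook =
        Map.of("SB1", List.of("T1", "T2"), "SB2", List.of("T3"));

    static List<String> transactionsForCostCentre(String costCentre) {
        return ledgerBooksByCostCentre.getOrDefault(costCentre, List.of()).stream()
            .flatMap(lb -> sourceBooksByLedgerBook.getOrDefault(lb, List.of()).stream())
            // One cluster-side call for the resolved source books; the MTM
            // join would happen server-side against collocated entries.
            .flatMap(sb -> transactionsBySourceBook.getOrDefault(sb, List.of()).stream())
            .collect(Collectors.toList());
    }

    public static void main(String[] args) {
        System.out.println(transactionsForCostCentre("CC1")); // [T1, T2, T3]
    }
}
```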
    34. 34. Replicated Dimensions, Partitioned Facts, Java client API. We never have to do a distributed join!
    35. 35. So all the big stuff is held partitioned, and we can join without shipping keys around and having intermediate results.
    36. 36. (Diagram: Trade, Trader and Party objects partitioned across the cluster.)
    37. 37. (Diagram: Trade, Trader and Party held at Versions 1 through 4.)
    38. 38. (Diagram: full copies of Trade, Trader and Party replicated on every node.)
    39. 39. (Chart, 0 to 125,000,000, of Facts (Transaction, Cashflows, Legs, Valuation Legs, Valuations, Transaction Mapping, Cashflow Mapping) and Dimensions (Party Alias, Parties, Ledger Book, Source Book, Cost Centre, Product, Risk Organisation Unit, Business Unit, HCS Entity, Set of Books).) Party Alias is a dimension: it has a different key to the Facts, and it's BIG.
    40. 40. (Chart, 0 to 5,000,000: row counts for the dimensions Party Alias, Parties, Ledger Book, Source Book, Cost Centre, Product, Risk Organisation Unit, Business Unit, HCS Entity, Set of Books.)
    41. 41. (The same dimension chart, showing that only a small fraction of each dimension is actually referenced.)
    42. 42. So we only replicate ‘Connected’ or ‘Used’ dimensions
    43. 43. Processing Layer: Dimension Caches (Replicated). Data Layer: Transactions, Mtms, Cashflows in Fact Storage (Partitioned). As new Facts are added, the relevant Dimensions that they reference are moved to the processing layer caches.
    44. 44. Save Trade: the Trade is stored, all normalised and partitioned, in the Data Layer, and a trigger caches the referenced Party Alias, Source Book and Ccy in the Query Layer's connected dimension caches.
    45. 45. Query Layer (with connected dimension caches). Data Layer: Trade (all normalised) with Party Alias, Source Book and Ccy.
    46. 46. Query Layer (with connected dimension caches). Data Layer: Trade (all normalised) with Party Alias, Source Book and Ccy, and, one level further, Party and Ledger Book.
    47. 47. ‘Connected Replication’: a simple pattern which recurses through the foreign keys in the domain model, ensuring only ‘Connected’ dimensions are replicated.
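The recursion slide 47 describes can be sketched as a transitive walk over the foreign-key graph. This is a minimal illustration of the pattern, assuming a toy domain model (the entity names echo the slides, but the specific foreign-key edges below are illustrative):

```java
import java.util.HashSet;
import java.util.List;
import java.util.Map;
import java.util.Set;

// Sketch of Connected Replication: when a fact arrives, walk its foreign
// keys transitively, marking every reached dimension as 'connected'; only
// connected dimensions get pushed to the replicated caches.
public class ConnectedReplication {
    // entity -> entities it references via foreign keys (illustrative graph)
    static final Map<String, List<String>> foreignKeys = Map.of(
        "Trade", List.of("Party Alias", "Source Book"),
        "Party Alias", List.of("Party"),
        "Source Book", List.of("Ledger Book"),
        "Party", List.of(),
        "Ledger Book", List.of());

    static Set<String> connectedDimensionsOf(String fact) {
        Set<String> connected = new HashSet<>();
        collect(fact, connected);
        connected.remove(fact); // the fact itself stays partitioned
        return connected;
    }

    static void collect(String entity, Set<String> seen) {
        if (!seen.add(entity)) return; // already visited, stop recursing
        for (String referenced : foreignKeys.getOrDefault(entity, List.of())) {
            collect(referenced, seen);
        }
    }

    public static void main(String[] args) {
        // Saving a Trade replicates only the dimensions it actually touches.
        System.out.println(connectedDimensionsOf("Trade"));
    }
}
```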
    48. 48. Java client API, Java schema, Java ‘Stored Procedures’ and ‘Triggers’
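In Coherence these Java 'stored procedures' are EntryProcessors (com.tangosol.util.InvocableMap): code shipped to the node that owns an entry and executed against it in place. The stand-in below sketches only the idea, applying a function to the locally stored value instead of moving the value to the client; the cache contents and method names are illustrative:

```java
import java.util.HashMap;
import java.util.Map;
import java.util.function.BiFunction;

// Sketch of a Java 'stored procedure' in the Coherence style: the processor
// runs where the data lives, so the value never crosses the wire.
// (Coherence's real mechanism is an EntryProcessor invoked via
// InvocableMap.invoke; this stand-in just applies a function in place.)
public class StoredProcedureDemo {
    static final Map<String, Double> mtmCache = new HashMap<>();

    // Execute 'processor' against the entry for 'key' and store the result.
    static Double invoke(String key, BiFunction<String, Double, Double> processor) {
        Double updated = processor.apply(key, mtmCache.get(key));
        mtmCache.put(key, updated);
        return updated;
    }

    public static void main(String[] args) {
        mtmCache.put("T42", 100.0);
        invoke("T42", (key, mtm) -> mtm + 10.0); // adjust the MTM in place
        System.out.println(mtmCache.get("T42")); // 110.0
    }
}
```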
    49. 49. Partitioned Storage
