Balancing Replication and Partitioning in a Distributed Java Database

This talk, presented at JavaOne 2011, describes the ODC, a distributed, in-memory database built in Java that holds objects in a normalized form in a way that alleviates the traditional degradation in performance associated with joins in shared-nothing architectures. The presentation describes the two patterns that lie at the core of this model. The first is an adaptation of the Star Schema model, in which data is either replicated or partitioned depending on whether it is a fact or a dimension. In the second pattern, the data store tracks arcs on the object graph to ensure that only the minimum amount of data is replicated. Through these mechanisms, almost any join can be performed across the various entities stored in the grid, without the need for key shipping or iterative wire calls.

  • Big data sets are held distributed and only joined on the grid to collocated objects. Small data sets are held in replicated caches so they can be joined in process (only ‘active’ data is held)

Presentation Transcript

  • In-memory databases offer significant gains in performance, as all data is freely available; there is no need to page to and from disk. But three issues have stunted their uptake: address spaces only being large enough for a subset of a typical user’s data, the ‘one more bit’ problem, and durability. Traditional disk-oriented database architecture is showing its age. Distributed in-memory databases solve these three problems, but at the price of losing the single address space. This makes joins a problem: when data must be joined across multiple machines, performance degradation is inevitable. Snowflake Schemas allow us to mix Partitioning and Replication so joins never hit the wire. But this model only goes so far; “Connected Replication” takes us a step further, allowing us to make the best possible use of replication.
  • The lay of the land: the main architectural constructs in the database industry
  • Shared Disk
  • [Latency chart, ms → μs → ns → ps: cross-continental round trip, cross-network round trip, 1MB from disk/network, 1MB from main memory, main memory ref, L2 cache ref, L1 cache ref] * An L1 ref is about 2 clock cycles or 0.7ns: the time it takes light to travel 20cm.
  • Distributed Cache
  • Taken from “OLTP Through the Looking Glass, and What We Found There”, Harizopoulos et al.
  • The taxonomy: a Regular Database (Oracle, Sybase, MySql); drop the disk to get an In-Memory Database (Times Ten, HSQL, KDB, Hana); distribute to get Shared Nothing (Teradata, Vertica, Greenplum…); combine both for an SN In-Memory Database (Exasol, VoltDB, ODC); plus Distributed Caching (Coherence, Gemfire, Gigaspaces).
  • Distributed architecture: simplify the contract, stick to RAM.
  • 450 processes, 2TB of RAM, Oracle Coherence, topic-based messaging as a system of record (persistence).
  • [Architecture diagram: Access Layer (Java client API); Query Layer; Data Layer (Transactions, MTMs, Cashflows); Persistence Layer]
  • Indexing, Partitioning, Replication
  • But your storage is limited by the memory on a node
  • [Diagram: keys Fs-Fz and Xa-Yd partitioned across nodes] Scalable storage, bandwidth and processing
  • [Diagram: versions 1-4 of a Trade with its Trader and Party] …and you need versioning to do MVCC
  • [Diagram: every node holds a replicated copy of Trades, Traders and Parties]
  • So better to use partitioning, spreading data around the cluster.
  • [Diagram: Trades, Traders and Parties spread across partitions]
  • This is what using Snowflake Schemas and the Connected Replication pattern is all about!
  • [Diagram: crosscutting keys vs. common keys]
  • [Diagram: Trader and Party replicated; Trade partitioned]
  • [Chart, scale 0-150,000,000] Facts (big, common keys): Valuations, Valuation Legs, Transaction Mapping, Cashflow Mapping, Party Alias, Transactions, Cashflows, Legs. Dimensions (small, crosscutting keys): Parties, Ledger Book, Source Book, Cost Centre, Product, Risk Organisation Unit, Business Unit, HCS Entity, Set of Books.
  • Coherence’s KeyAssociation gives us this: Trades and MTMs share a common key.
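A key can declare an “associated key”, and the partitioning service routes on that instead of the raw key, so a Trade and its MTMs always land on the same node. A minimal self-contained sketch of the idea; the `Associated` interface and `partitionOf()` here are stand-ins for Coherence’s real `KeyAssociation` interface and partitioning service, not the actual API:

```java
import java.util.Objects;

// Sketch of key association: MtmKey declares the trade id it is
// associated with, so a partitioning scheme that hashes the
// *associated* key collocates Trades with their MTMs.
public class KeyAssociationSketch {

    interface Associated { Object associatedKey(); }

    record TradeKey(String tradeId) implements Associated {
        public Object associatedKey() { return tradeId; }
    }

    // An MTM is keyed by its own id but collocated by trade id.
    record MtmKey(String mtmId, String tradeId) implements Associated {
        public Object associatedKey() { return tradeId; }
    }

    static final int PARTITIONS = 257;

    // Route by the associated key, not the raw key.
    static int partitionOf(Associated key) {
        return Math.floorMod(Objects.hashCode(key.associatedKey()), PARTITIONS);
    }

    public static void main(String[] args) {
        TradeKey trade = new TradeKey("T42");
        MtmKey mtm = new MtmKey("M1", "T42");
        System.out.println(partitionOf(trade) == partitionOf(mtm)); // prints "true"
    }
}
```

Because both keys hash on the trade id, a cluster-side join between a Trade and its MTMs never crosses a node boundary.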
  • [Diagram: Query Layer (Trader, Party, Trade); Data Layer (Transactions, MTMs, Cashflows): Fact Storage (Partitioned)]
  • Dimensions (replicate); Facts (Transactions, MTMs, Cashflows: distribute/partition); Fact Storage (Partitioned)
  • [Chart, scale 0-150,000,000] Facts (big => distribute): Valuations, Valuation Legs, Transaction Mapping, Cashflow Mapping, Party Alias, Transactions, Cashflows, Legs. Dimensions (small => replicate): Parties, Ledger Book, Source Book, Cost Centre, Product, Risk Organisation Unit, Business Unit, HCS Entity, Set of Books.
  • We use a variant on a Snowflake Schema: partition the big stuff, which shares the same key, and replicate the small stuff, which has crosscutting keys.
  • [Diagram: replicate vs. distribute]
  • Select Transaction, MTM, ReferenceData From MTM, Transaction, Ref Where Cost Centre = ‘CC1’
  • Select Transaction, MTM, ReferenceData From MTM, Transaction, Ref Where Cost Centre = ‘CC1’. Step 1: LBs[] = getLedgerBooksFor(CC1); SBs[] = getSourceBooksFor(LBs[]). So we have all the bottom-level dimensions needed to query the facts (Transactions, MTMs, Cashflows: partitioned).
  • Select Transaction, MTM, ReferenceData From MTM, Transaction, Ref Where Cost Centre = ‘CC1’. Step 1: LBs[] = getLedgerBooksFor(CC1); SBs[] = getSourceBooksFor(LBs[]), giving all the bottom-level dimensions needed to query the facts. Step 2: get all Transactions and MTMs for the passed Source Books (cluster-side join over the partitioned facts).
  • Select Transaction, MTM, ReferenceData From MTM, Transaction, Ref Where Cost Centre = ‘CC1’. Step 1: LBs[] = getLedgerBooksFor(CC1); SBs[] = getSourceBooksFor(LBs[]), giving all the bottom-level dimensions needed to query the facts. Step 2: get all Transactions and MTMs for the passed Source Books (cluster-side join over the partitioned facts). Step 3: populate the raw facts (Transactions) with dimension data before returning to the client.
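The steps above can be sketched in plain Java, with Maps standing in for the caches; all names here are illustrative, not the ODC API. The dimension walk runs in-process because the dimension caches are replicated; only the final fact filter touches partitioned data:

```java
import java.util.*;

// Sketch of the query walk: resolve the dimension hierarchy in the
// (replicated) dimension caches, then filter the partitioned facts by
// the resulting bottom-level keys. In the real system the fact scan
// runs cluster-side against collocated MTMs.
public class SnowflakeQuerySketch {

    static Map<String, List<String>> ledgerBooksByCostCentre =
        Map.of("CC1", List.of("LB1", "LB2"));
    static Map<String, List<String>> sourceBooksByLedgerBook =
        Map.of("LB1", List.of("SB1"), "LB2", List.of("SB2"));

    record Transaction(String id, String sourceBook) {}

    static List<Transaction> transactions = List.of(
        new Transaction("T1", "SB1"),
        new Transaction("T2", "SB2"),
        new Transaction("T3", "SB9"));   // belongs to another cost centre

    static List<Transaction> forCostCentre(String costCentre) {
        // Step 1: walk the replicated dimensions, in-process.
        Set<String> sourceBooks = new HashSet<>();
        for (String lb : ledgerBooksByCostCentre.getOrDefault(costCentre, List.of()))
            sourceBooks.addAll(sourceBooksByLedgerBook.getOrDefault(lb, List.of()));
        // Step 2: filter the partitioned facts by bottom-level keys.
        return transactions.stream()
            .filter(t -> sourceBooks.contains(t.sourceBook()))
            .toList();
    }

    public static void main(String[] args) {
        System.out.println(forCostCentre("CC1")); // T1 and T2, but not T3
    }
}
```

Step 3 (enriching the returned facts with dimension data) is the same kind of in-process lookup, performed server-side before the results are shipped back.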
  • Dimensions replicated, facts partitioned, fronted by a Java client API: we never have to do a distributed join!
  • So all the big stuff is held partitioned, and we can join without shipping keys around and building intermediate results.
  • [Chart, scale 0-125,000,000: Valuations, Valuation Legs, Transaction Mapping, Cashflow Mapping, Party Alias, Transactions, Cashflows, Legs, Parties, Ledger Book, Source Book, Cost Centre, Product, Risk Organisation Unit, Business Unit, HCS Entity, Set of Books] This is a dimension: it has a different key to the Facts, and it’s BIG.
  • [Charts, scale 0-5,000,000: Party Alias, Parties, Ledger Book, Source Book, Cost Centre, Product, Risk Organisation Unit, Business Unit, HCS Entity, Set of Books]
  • So we only replicate ‘Connected’ or ‘Used’ dimensions
  • As new Facts are added, the relevant Dimensions that they reference are moved to processing-layer caches. [Diagram: Processing Layer with replicated Dimension Caches; Data Layer with Transactions, MTMs, Cashflows in partitioned Fact Storage]
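A rough sketch of this flow, with Maps standing in for the caches and all class and method names purely illustrative: on each fact insert, a trigger copies the dimensions that fact references into the replicated dimension caches, so dimensions no fact uses are never replicated.

```java
import java.util.*;

// Sketch: saving a fact promotes the dimensions it references into the
// replicated dimension caches. Only 'connected' (actually referenced)
// dimensions ever get replicated.
public class DimensionPromotionSketch {

    record Trade(String id, List<String> dimensionKeys) {}

    // Master dimension data (partitioned in the real system).
    static Map<String, String> dimensionStore = new HashMap<>(Map.of(
        "PARTY:P1", "Party P1",
        "BOOK:B7",  "Source Book B7",
        "CCY:GBP",  "Sterling"));

    static Map<String, String> replicatedDimensions = new HashMap<>();
    static Map<String, Trade>  factStore = new HashMap<>();

    // The 'trigger': runs on every fact insert.
    static void saveTrade(Trade trade) {
        factStore.put(trade.id(), trade);
        for (String key : trade.dimensionKeys()) {
            String dim = dimensionStore.get(key);
            if (dim != null) replicatedDimensions.putIfAbsent(key, dim);
        }
    }

    public static void main(String[] args) {
        saveTrade(new Trade("T1", List.of("PARTY:P1", "CCY:GBP")));
        // BOOK:B7 is never referenced by a fact, so it is never replicated.
        System.out.println(replicatedDimensions.keySet());
    }
}
```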
  • Save Trade: [Diagram: Query Layer (with connected dimension caches); Data Layer: the Trade (all normalised) lands in the partitioned Cache Store, and a Trigger pulls the referenced Party Alias, Source Book and Ccy dimensions]
  • [Diagrams: Query Layer (with connected dimension caches); Data Layer: Trade (all normalised) pulls in Party Alias, Source Book and Ccy, which in turn pull in the Party and Ledger Book they reference]
  • ‘Connected Replication’: a simple pattern which recurses through the foreign keys in the domain model, ensuring only ‘Connected’ dimensions are replicated.
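The recursion can be sketched as a graph walk, assuming the foreign-key metadata of the domain model is available as a simple map (the `foreignKeys` map and all names here are illustrative):

```java
import java.util.*;

// Sketch of Connected Replication's recursion: starting from the
// dimensions a fact references, follow foreign keys so that dimensions
// referenced by already-connected dimensions are replicated too.
public class ConnectedReplicationSketch {

    // dimension -> dimensions it references via foreign keys
    static Map<String, List<String>> foreignKeys = Map.of(
        "SourceBook", List.of("LedgerBook"),
        "LedgerBook", List.of("CostCentre"),
        "Party",      List.of());

    static Set<String> connectedDimensions(Collection<String> factRefs) {
        Set<String> connected = new LinkedHashSet<>();
        Deque<String> toVisit = new ArrayDeque<>(factRefs);
        while (!toVisit.isEmpty()) {
            String dim = toVisit.pop();
            if (connected.add(dim))                      // not seen before
                toVisit.addAll(foreignKeys.getOrDefault(dim, List.of()));
        }
        return connected;   // only these dimensions are replicated
    }

    public static void main(String[] args) {
        // A trade referencing SourceBook and Party pulls in the whole chain:
        // SourceBook -> LedgerBook -> CostCentre, plus Party.
        System.out.println(connectedDimensions(List.of("SourceBook", "Party")));
    }
}
```

The `connected.add` check both deduplicates and terminates the walk, so cycles in the foreign-key graph are safe.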
  • Java client, Java schema API, Java ‘stored procedures’ and ‘triggers’
  • Partitioned Storage