Multi-Master Replication with Slony

One of the most sought-after features in PostgreSQL is a scalable multi-master replication solution. While there exist some tools to create multi-master clusters, such as Bucardo and pgpool-II, they may not be the right fit for an application. In this session, you will learn some of the strengths and weaknesses of the more popular multi-master solutions for PostgreSQL and how they compare to using Slony for your multi-master needs. We will explore the types of deployments best suited for a Slony deployment and the steps necessary to configure a multi-master solution for PostgreSQL.


Transcript

  • 1.
      Multi-Master Replication Using Slony
  • 2.
      Who Am I?
    • Jim Mlodgenski
      • Co-organizer of NYCPUG
      • Founder of Cirrus Technologies
      • Former Chief Architect of EnterpriseDB
  • 5.
      What is Multi-Master?
    • Multiple physical servers in a cluster allow updates to the data
      • Increases availability
      • Increases performance
        • Most noticeable over a WAN
  • 7.
      CAP Theorem
    • Brewer's theorem that only two of the following three are possible in a distributed system
      • Consistency
      • Availability
      • Partition Tolerance
  • 10.
      Sync vs. Async
    • Synchronous
      • All transactions are applied to all servers before success is returned to the client
        • Simpler to design and architect
        • At the expense of performance
  • 12.
      Sync vs. Async
    • Asynchronous
      • A transaction is applied to a single server before success is returned to the client
        • About the same performance as a single server
        • Need to deal with conflict resolution
  • 14.
      Existing Solutions
  • 17.
      Bucardo
    • Asynchronous multi-master, master-slave, event based replication solution
      • Used in several production deployments
      • Limited to 2 masters
  • 19.
      PgPool
    • Synchronous statement based, multi-master replication solution
      • Much more than a replication solution
      • Need to deal with nondeterministic functions (e.g., random(), now())
  • 21.
      RubyRep
    • Asynchronous multi-master, event based replication solution
      • Not a PostgreSQL only solution
      • Limited to 2 masters
  • 23.
      What is Slony?
    • Asynchronous master-slave, event based replication solution
      • Proven production use cases
      • Cascading replication
  • 25.
      Retail Store Problem
    • Corporate HQ controls the pricing
    • The stores control the daily sales information
  • 27.
      Retail Store Problem
    • The information pushed down from HQ is exactly what Slony is good at
      • The tables controlled at HQ replicate to the many stores
      • The tables at the stores are read-only
  • 29.
      Retail Store Problem
    • A single replication set can control this
      • May need to cascade if there are many stores
    slonik <<_EOF_
    cluster name = HQ;
    node 1 admin conninfo = 'dbname=hq';
    node 2 admin conninfo = 'dbname=store1';
    node 3 admin conninfo = 'dbname=store2';
    node 4 admin conninfo = 'dbname=store3';
    node 5 admin conninfo = 'dbname=store4';
    node 6 admin conninfo = 'dbname=store5';
    init cluster ( id=1, comment = 'HQ Node');
    create set (id=1, origin=1, comment='All HQ tables');
    set add table (set id=1, origin=1, id=1, fully qualified name = 'public.items', comment='items table');
    set add table (set id=1, origin=1, id=2, fully qualified name = 'public.prices', comment='prices table');
    store node (id=2, comment = 'Store1 node', event node=1);
    store path (server = 1, client = 2, conninfo='dbname=hq');
    store path (server = 2, client = 1, conninfo='dbname=store1');
    ...
  • 30.
      Retail Store Problem
    • Replicating all of the stores' data to HQ is the challenge
      • Use table inheritance
  • 31.
      Retail Store Problem
    • Each store has its own partition as well as the master partition, as sketched below
      • The appropriate triggers or rules should be applied to keep the structure transparent to the application
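    A minimal sketch of the inheritance layout, assuming illustrative column definitions (only the sales_store1 partition name comes from the slides; the columns, constraints, and second partition are assumptions):

    -- Master partition queried at HQ (columns are illustrative assumptions)
    CREATE TABLE sales (
        sale_id    bigint NOT NULL,
        store_id   integer NOT NULL,
        sale_date  date NOT NULL,
        amount     numeric(12,2) NOT NULL
    );

    -- One child partition per store; each store writes only to its own child,
    -- which is the replication origin for that store's rows
    CREATE TABLE sales_store1 (CHECK (store_id = 1)) INHERITS (sales);
    CREATE TABLE sales_store2 (CHECK (store_id = 2)) INHERITS (sales);

    With this layout, a SELECT against sales at HQ transparently reads every store's partition, while each store replicates only its own child table back to HQ.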
  • 32.
      Retail Store Problem
    • A replication set is needed for each store
    • Cascading will be necessary with many stores
    slonik <<_EOF_
    cluster name = Store1;
    node 1 admin conninfo = 'dbname=store1';
    node 2 admin conninfo = 'dbname=hq';
    init cluster ( id=1, comment = 'Store1 Node');
    create set (id=1, origin=1, comment='All Store1 sales tables');
    set add table (set id=1, origin=1, id=1, fully qualified name = 'public.sales_store1', comment='sales partition');
    set add table (set id=1, origin=1, id=2, fully qualified name = 'public.sales_line_store1', comment='sales detail partition');
    store node (id=2, comment = 'HQ node', event node=1);
    store path (server = 1, client = 2, conninfo='dbname=store1');
    store path (server = 2, client = 1, conninfo='dbname=hq');
    _EOF_
  • 34.
      Retail Store Problem
    • Potential pitfalls
      • Many replication sets and many slon daemons running
      • Need to deal with the complexity of table inheritance
  • 36.
      Regional Office Problem
    • NY, London, Tokyo each control their own accounts
    • All accounts need to be visible
    • Changes to international accounts do occur
  • 39.
      Regional Office Problem
    • Challenges
      • Need to deal with conflict resolution
      • Unique account identifier across all regions
      • Ping-pong effect
  • 42.
      Regional Office Problem
    • The application needs to work with the accounts table as it is designed, but we need additional fields
      SELECT *
      FROM accounts
      WHERE account_no = X
  • 45.
      Regional Office Problem
    • Create a table with the necessary origin field, as sketched below
    • Use a view to mask the field
      CREATE VIEW accounts AS
      SELECT account_id, account_no, account_name,
             amount, created_date, last_update
      FROM accounts_tbl
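    A minimal sketch of the underlying table that the view masks, assuming the column types (the origin_code column name comes from the trigger shown later; the rest of the definition is an assumption):

    -- Base table holding the extra origin field hidden by the accounts view
    CREATE TABLE accounts_tbl (
        account_id    integer PRIMARY KEY,
        account_no    varchar(20) NOT NULL,
        account_name  text NOT NULL,
        amount        numeric(15,2),
        created_date  timestamp,
        last_update   timestamp,
        origin_code   text   -- region that last originated the row
    );

    The application keeps issuing SELECT ... FROM accounts, while origin_code stays available for replication and conflict handling.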
  • 50.
      Regional Office Problem
    • Need a central broker to handle distribution of the changes
      • One place for conflict resolution
    slonik <<_EOF_
    cluster name = CENTRAL_BROKER;
    node 1 admin conninfo = 'dbname=london';
    node 2 admin conninfo = 'dbname=newyork';
    node 3 admin conninfo = 'dbname=tokyo';
    init cluster ( id=1, comment = 'Central Broker');
    create set (id=1, origin=1, comment='Account Table');
    set add table (set id=1, origin=1, id=1, fully qualified name = 'public.accounts_tbl', comment='accounts table');
    store node (id=2, comment = 'New York', event node=1);
    store path (server = 1, client = 2, conninfo='dbname=london');
    store path (server = 2, client = 1, conninfo='dbname=newyork');
    store node (id=3, comment = 'Tokyo', event node=1);
    store path (server = 1, client = 3, conninfo='dbname=london');
    store path (server = 3, client = 1, conninfo='dbname=tokyo');
  • 51.
      Regional Office Problem
    • Use table inheritance to handle the changes in the offices
      • No account_id on the shadow tables, as sketched below
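    A minimal sketch of the shadow tables (only accounts_shadow_newyork is named on the slides; the parent table, the column types, and the other offices' children are illustrative assumptions):

    -- Parent shadow table: same columns as accounts_tbl minus account_id,
    -- which is assigned centrally at the broker
    CREATE TABLE accounts_shadow (
        account_no    varchar(20) NOT NULL,
        account_name  text NOT NULL,
        amount        numeric(15,2),
        created_date  timestamp,
        last_update   timestamp,
        origin_code   text
    );

    -- One child per office; each office is the replication origin for its own child
    CREATE TABLE accounts_shadow_newyork () INHERITS (accounts_shadow);
    CREATE TABLE accounts_shadow_london  () INHERITS (accounts_shadow);
    CREATE TABLE accounts_shadow_tokyo   () INHERITS (accounts_shadow);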
  • 52.
      Regional Office Problem
    • Adding rules to the accounts view allows the transactions to occur on the shadow tables
    CREATE OR REPLACE RULE accounts_i AS
    ON INSERT TO accounts DO INSTEAD
        INSERT INTO accounts_shadow_newyork
        VALUES (NEW.account_no, NEW.account_name, NEW.amount,
                NEW.created_date, NEW.last_update, 'newyork')
  • 53.
      Regional Office Problem
    • Replicate the local transactions to the central broker
    slonik <<_EOF_
    cluster name = NEWYORK;
    node 1 admin conninfo = 'dbname=newyork';
    node 2 admin conninfo = 'dbname=london';
    init cluster ( id=1, comment = 'New York');
    create set (id=1, origin=1, comment='Account Shadow Table');
    set add table (set id=1, origin=1, id=1, fully qualified name = 'public.accounts_shadow_newyork');
    store node (id=2, comment = 'London', event node=1);
    store path (server = 1, client = 2, conninfo='dbname=newyork');
    store path (server = 2, client = 1, conninfo='dbname=london');
    _EOF_
  • 54.
      Regional Office Problem
    • Add the conflict resolution logic to the broker as triggers on the shadow tables
    CREATE OR REPLACE FUNCTION accounts_shadow_trig() RETURNS trigger AS
    $BODY$
    DECLARE
        existing_account_id integer;
    BEGIN
        -- Business Logic for First-In Wins
        SELECT account_id INTO existing_account_id
        FROM accounts_tbl
        WHERE account_no = NEW.account_no;
        IF FOUND THEN
            RAISE INFO 'Account % already exists. Ignoring new INSERT', NEW.account_no;
        ELSE
            INSERT INTO accounts_tbl
            VALUES (nextval('account_id_seq'), NEW.account_no, NEW.account_name,
                    NEW.amount, NEW.created_date, NEW.last_update, NEW.origin_code);
        END IF;
        RETURN NEW;
    END;
    $BODY$ LANGUAGE plpgsql;
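    The slides show only the trigger function; a sketch of attaching it on the broker follows (the trigger name is an assumption, and with Slony-I 2.x the trigger generally must be enabled for replica sessions so it fires when the slon daemon applies the replicated rows):

    -- Fire the first-in-wins logic for every row replicated into the shadow table
    CREATE TRIGGER accounts_shadow_newyork_trig
        BEFORE INSERT ON accounts_shadow_newyork
        FOR EACH ROW EXECUTE PROCEDURE accounts_shadow_trig();

    -- slon applies rows with session_replication_role = 'replica',
    -- so make sure the trigger also fires in that mode
    ALTER TABLE accounts_shadow_newyork
        ENABLE ALWAYS TRIGGER accounts_shadow_newyork_trig;

    The same trigger would be created on each office's shadow table at the broker.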
  • 55.
      Regional Office Problem
    • Potential pitfalls
      • Many moving parts make maintenance a heavy burden
      • Reliance on a central broker
  • 57.
      Moral of the Story
    • Slony is extremely flexible
    • PostgreSQL is extremely flexible
    • Together you can do some strange and powerful things
    • Don't use multi-master unless absolutely necessary
  • 61.
      Questions?
      Jim Mlodgenski
        Email: [email_address] Twitter: @jim_mlodgenski