Building a High-Availability PostgreSQL Cluster at ARIN
Through a long and intense period of research, implementation, and testing, ARIN completed the migration from Oracle to PostgreSQL late last year. Learn more at: http://teamarin.net/2014/04/01/building-high-availability-postgresql-cluster-arin/

Transcript

  • 1. Building a High-Availability PostgreSQL Cluster. Presenter: Devon Mizelle, System Administrator. Co-author: Steven Bambling, System Administrator
  • 2. What is ARIN? • Regional Internet registry for Canada, the US, and parts of the Caribbean • Distributes IPv4 & IPv6 addresses and Autonomous System Numbers (Internet number resources) in the region • Provides authoritative WHOIS services for number resources in the region
  • 3. ARIN's Internal Data
  • 4. Requirements
  • 5. Why Not Slony or pgpool-II? • Slony replaces pgSQL's replication – Why do this? – Why not let pgSQL handle it? • pgpool is not ACID-compliant – Doesn't confirm writes to multiple nodes
  • 6. Our solution • CMAN / Corosync – Red Hat-backed open-source solution for cross-node communication • Pacemaker – Red Hat and Novell's solution for service management and fencing • Both under active development by ClusterLabs
  • 7. CMAN / Corosync • Provides a messaging framework between nodes • Handles a heartbeat between nodes – "Are you up and available?" – Does not provide 'status' of a service; Pacemaker does • Pacemaker uses Corosync to send messages between nodes
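The heartbeat and messaging behavior described on this slide is driven by Corosync's totem protocol settings. A minimal, illustrative corosync.conf fragment might look like the following; all values here are examples, not ARIN's actual configuration:

```
# Illustrative corosync.conf fragment (example values only)
totem {
    version: 2
    token: 3000              # ms without a reply before a node is suspected dead
    interface {
        ringnumber: 0
        bindnetaddr: 192.168.1.0   # network used for cluster heartbeat traffic
        mcastport: 5405
    }
}
```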
  • 8. CMAN / Corosync (diagram)
  • 9. About Pacemaker • Developed and maintained by Red Hat and Novell • Scalable – anywhere from a two-node to a 16-node setup • Scriptable – resource scripts can be written in any language • Monitoring – watches for service state changes • Fencing – disables a box and switches roles when failures occur • Shared database between nodes about the status of services and nodes
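The "scriptable" resource scripts mentioned above follow the OCF convention: Pacemaker invokes the agent with an action argument (start, stop, monitor, ...) and interprets its exit code. A minimal sketch of that dispatch, with a made-up agent name and state file (real agents live in /usr/lib/ocf/resource.d/ and must also implement meta-data):

```shell
# Sketch of the action dispatch inside an OCF-style resource agent.
# "demo_agent" and its state file are hypothetical, for illustration only.
STATE=/tmp/demo_agent.state

demo_agent() {
    case "$1" in
        start)   touch "$STATE" ;;               # exit 0 = OCF_SUCCESS
        stop)    rm -f "$STATE" ;;
        monitor) [ -f "$STATE" ] || return 7 ;;  # 7 = OCF_NOT_RUNNING
        *)       return 3 ;;                     # 3 = OCF_ERR_UNIMPLEMENTED
    esac
}
```

Pacemaker runs the monitor action on an interval and reacts to state changes signalled by the exit code, which is what makes fencing and role switching possible.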
  • 10. Pacemaker (diagram: Master with Sync and Async standbys)
  • 11. Other Pacemaker Resources: Fencing, IP Addresses
  • 12. How does it all tie together? From the bottom up…
  • 13. Pacemaker (diagram: client "vip" and replication "vip" in front of Master, Sync, and Async nodes, with the App connecting through the client vip)
  • 14. Event Scenario (diagram: Master, Sync, and Async roles before and after a failover)
  • 15. PostgreSQL • Still in charge of replicating data • The state of the service, and how it starts, is controlled by Pacemaker
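Since PostgreSQL keeps responsibility for replication, the streaming-replication settings live in PostgreSQL's own configuration rather than in Pacemaker. An illustrative 9.x-era sketch with one synchronous and one asynchronous standby follows; hostnames and the standby name are hypothetical, and in a Pacemaker setup the pgsql resource agent typically generates the standby's recovery.conf itself:

```
# Primary, postgresql.conf (illustrative 9.x-era values):
wal_level = hot_standby
max_wal_senders = 5
synchronous_standby_names = 'sync_node'   # this standby is sync; others replicate async

# Standby, recovery.conf (illustrative):
standby_mode = 'on'
primary_conninfo = 'host=master-vip port=5432 application_name=sync_node'
```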
  • 16. Layout (diagram: Master and two Slaves, each running cman, with heartbeats between nodes and a Client connecting in)
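A Pacemaker configuration matching this layout could be sketched in crm-shell syntax roughly as follows; resource names, paths, and addresses are illustrative, not taken from the talk:

```
# Illustrative crm-shell configuration sketch (not ARIN's actual config)
primitive pgsql ocf:heartbeat:pgsql \
    params pgctl="/usr/pgsql-9.3/bin/pg_ctl" pgdata="/var/lib/pgsql/9.3/data" \
           rep_mode="sync" node_list="node1 node2 node3" master_ip="10.0.0.10" \
    op monitor interval="10s" role="Master" \
    op monitor interval="15s" role="Slave"
ms ms_pgsql pgsql meta master-max="1" clone-max="3" notify="true"
primitive vip-client ocf:heartbeat:IPaddr2 params ip="10.0.0.11"   # client "vip"
primitive vip-rep    ocf:heartbeat:IPaddr2 params ip="10.0.0.10"   # replication "vip"
colocation vip-with-master inf: vip-client ms_pgsql:Master
order promote-then-vip inf: ms_pgsql:promote vip-client:start
```

The master/slave (ms) resource lets Pacemaker promote exactly one node, and the colocation and order constraints keep the client VIP pinned to whichever node currently holds the Master role.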
  • 17. Using Tools to Look Deeper Introspection…
  • 18. # crm_mon -i 1 -Arf
  • 19. # crm_mon -i 1 -Arf (cont)
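For reference, the crm_mon flags shown on these two slides, per the crm_mon(8) man page:

```
# -i 1 : refresh the display every 1 second
# -A   : show node attributes (the pgsql agent records replication state here)
# -r   : also show inactive resources
# -f   : show resource fail counts
crm_mon -i 1 -Arf
```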
  • 20. Questions? Devon Mizelle