HA Clustering of PostgreSQL(replication)@2012.9.29 PG Study.

See https://github.com/t-matsuo/resource-agents/wiki
Transcript

  • 1. Clustering of PostgreSQL (replication) using Pacemaker. 2012/9/29 PG Study, Takatoshi MATSUO
  • 2. 3 major functions
    1. failover of the Master
    2. switching the setting between sync and async
    3. managing old data
    [Diagram: when the Master crashes, the sync-replicating Slave takes over; when the Slave crashes, replication is switched to async so the Master keeps running; nodes left with old DB data are tracked. The Japanese PostgreSQL mascot appears beside the Slave.]
  • 3. Base configuration for load distribution (read-only queries)
    [Diagram: server#1 runs PostgreSQL as Master, server#2 as Slave; each is managed by the pgsql RA under Pacemaker. The LAN for service carries vip-master (Read/Write) and vip-slave (Read Only). The LAN for replication carries vip-rep, the virtual IP the Slave connects to. Separate LANs are used for Pacemaker and for STONITH; each server uses its local disk.]
  • 4. Basic action 1: failover of the Master
    [Diagram: PostgreSQL on server #1 (Master) crashes; server #2 is promoted and takes over the virtual IPs.]
    1. detect the PostgreSQL crash
    2. stop PostgreSQL on #1
    3. stop the virtual IPs (vip-master, vip-rep, vip-slave)
    4. record that #1's data is old
    5. promote #2's PostgreSQL
    6. start the virtual IPs (vip-master, vip-rep, vip-slave) on #2
  • 5. Basic action 2: switching the setting between sync and async
    [Diagram: the Slave on server #2 crashes; the Master on #1 switches to async replication and vip-slave moves to #1.]
    1. the Slave crashes (e.g. walreceiver is killed)
    2. the Master's transactions are suspended (waiting for the replication timeout)
    3. detect the cutoff of replication via SELECT * FROM pg_stat_replication
    4. stop PostgreSQL on #2
    5. move vip-slave from #2 to #1
    6. record that #2's data is old
    7. switch the setting from sync to async on #1 -> transactions resume
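To see step 3 for yourself, you can query the view directly on the Master. A minimal sketch, assuming the PostgreSQL 9.1 paths used in this talk and a local "postgres" superuser:

    # On the Master: list replication connections and their sync state.
    # A healthy sync Slave shows sync_state = 'sync'; after the RA
    # switches to async, the standby reconnects with sync_state = 'async'.
    /usr/pgsql-9.1/bin/psql -U postgres -c \
        "SELECT application_name, state, sync_state FROM pg_stat_replication;"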
  • 6. TimelineID
    - The TimelineID is incremented when promote is called.
    - A Slave can't connect to the Master if their TimelineIDs differ.
    [Diagram: after failover the promoted Slave moves from TimelineID 5 to 6 while the crashed Master stays at 5, so the IDs differ.]
    You need to copy the DB data from the new Master to the old Master to make the TimelineIDs match again.
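To check whether the TimelineIDs of two servers have diverged, pg_controldata can be run against each data directory. A hedged sketch using this talk's paths:

    # Run on each server and compare the reported TimeLineID values:
    /usr/pgsql-9.1/bin/pg_controldata /var/lib/pgsql/9.1/data | grep TimeLineID
    # e.g. "Latest checkpoint's TimeLineID: 5" on the old Master versus 6
    # on the new Master means the old Master must be re-copied before it
    # can rejoin as a Slave.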
  • 7. Operation 1: recovery after the Master's failover
    [Diagram: the failed server #1 is repaired, re-synced from #2, rejoins as Slave with sync replication, and vip-slave moves back to #1.]
    1. repair the failure on #1
    2. copy the data from #2 to #1 (manually) -> this aligns the TimelineID
    3. remove the lock file and clear Pacemaker's fail-count on #1
    4. PostgreSQL starts as a Slave on #1
    5. replication starts: it connects as async at first, then switches from async to sync
    6. vip-slave moves from #2 to #1
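The manual part of this procedure might look like the following on the failed server #1. This is a sketch built from the talk's sample paths and IPs, not the author's exact commands:

    # step 2: re-copy the data from the new Master (#2) through vip-rep;
    # move the stale data directory aside first, since pg_basebackup
    # needs an empty target directory
    su - postgres -c 'mv /var/lib/pgsql/9.1/data /var/lib/pgsql/9.1/data.old'
    su - postgres -c '/usr/pgsql-9.1/bin/pg_basebackup -h 192.168.104.110 -U postgres -D /var/lib/pgsql/9.1/data -x -P'
    # step 3: remove the lock file and clear Pacemaker's fail-count
    rm -f /var/lib/pgsql/tmp/PGSQL.lock
    crm resource cleanup msPostgresql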
  • 8. Transitions of PostgreSQL and Pacemaker
    [Diagram: PostgreSQL's own transitions: starting without recovery.conf yields a Master, starting with recovery.conf yields a Slave, "pg_ctl promote" turns a Slave into a Master, and stop returns either to STOP. Pacemaker's transitions: STOP -(start)-> Slave -(promote)-> Master, and back via demote and stop.]
    Pacemaker invariably starts a Master by going through Slave first, so the TimelineID is incremented whenever a Master is started.
  • 9. Operation 2: start
    [Diagram: both servers are stopped; #1 holds the newer data and becomes the Master, then #2 is re-synced and joins as Slave.]
    1. select the server that has the newer data (here #1); see the sketch after this list
    2. start Pacemaker on the selected server (manually): PostgreSQL starts as a Slave and is then promoted, so a TimelineID gap occurs
    3. the virtual IPs start
    4. copy the data from #1 to #2 (manually) -> this aligns the TimelineID
    5. start Pacemaker on #2 -> replication starts
    6. vip-slave moves from #1 to #2
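For step 1, one rough way to decide which server holds the newer data (with Pacemaker stopped on both) is to compare the last checkpoint locations; this is only a sketch, and the service name for starting the cluster stack depends on your environment:

    # Run on both servers and compare; the higher location suggests newer data:
    /usr/pgsql-9.1/bin/pg_controldata /var/lib/pgsql/9.1/data | grep 'Latest checkpoint location'
    # Then start Pacemaker on the chosen server first, e.g.
    # "service heartbeat start" or "service corosync start" (stack-dependent).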
  • 10. Details
  • 11. Terms
    To distinguish precisely between PostgreSQL's status and Pacemaker's status, I will use these terms.
    PostgreSQL's status:
    - PRI: running as Primary (Master)
    - HS: running as Hot Standby (Slave)
    - STOP: stopped
    Pacemaker's status:
    - Master: manages PostgreSQL as a Master
    - Slave: manages PostgreSQL as a Slave
    - Stopped: manages PostgreSQL as stopped
  • 12. After fixing the terms
    [Diagram: the same transitions as slide 8, relabeled. PostgreSQL: starting without recovery.conf yields PRI, with recovery.conf yields HS, and "pg_ctl promote" moves HS to PRI; stop returns to STOP. Pacemaker: Stopped -(start)-> Slave -(promote)-> Master, and back via demote and stop. The RA uses the Pacemaker transitions.]
  • 13. Status mapping between Pacemaker and PostgreSQL
    [Diagram: Pacemaker's Stopped/Slave/Master states mapped onto PostgreSQL's STOP/HS/PRI. A Pacemaker Slave whose PostgreSQL is actually in STOP is a mismatch: the RA does nothing in that case, and the resource can't stay in that status.]
  • 14. Multiple statuses within HS
    There are multiple statuses within HS:
    1. no connection to the PRI ... HS:alone
    2. connected to the PRI:
       a. anything except sync replication ... HS:(others), e.g. HS:connected, HS:async
       b. sync replication ... HS:sync
    Pacemaker manages these statuses in the "pgsql-status" node attribute (assuming the resource ID is pgsql).
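Besides crm_mon -A, the attribute can be read directly with crm_attribute. A sketch, assuming node pm02 and the transient (reboot-lifetime) attribute the RA maintains:

    # Query pm02's pgsql-status attribute (print the value only):
    crm_attribute -l reboot -N pm02 -n pgsql-status -G -q
    # -> one of STOP, HS:alone, HS:async, HS:sync, PRI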
  • 15. Status mapping (before)
    [Diagram: Stopped maps to STOP, Slave to HS, and Master to PRI, as in slide 13.]
  • 16. Status mapping (after)
    [Diagram: Slave now splits into HS:(others), HS:sync, and HS:alone over HS; Stopped maps to STOP and Master to PRI. A node can move up to Master only if there is no PRI on the other nodes and its own data is new.]
  • 17. Judging whether an HS's data is new
    - While a PRI exists: the RA records the HS's data status based on the connection status. (While a PRI exists, the HS can't become PRI, so the RA only records the status.)
    - When the PRI crashes: the status recorded just before the PRI broke is used.
    - At initial start (no PRI has ever existed):
      - if there is an HS on another node, the data is compared
      - if there is no HS on another node, the RA judges its own data to be new
  • 18. Status of an HS's data
    Status recorded while a PRI exists on another node:
    1. no connection ... DISCONNECT (old data)
    2. connected to the PRI:
       a. anything except sync replication ... e.g. STREAMING|ASYNC (old data)
       b. sync replication ... STREAMING|SYNC (new data)
    (obtained from the pg_stat_replication view)
    On the PRI itself ... LATEST (new data)
    Pacemaker manages these statuses in the "pgsql-data-status" node attribute (assuming the resource ID is pgsql).
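Unlike pgsql-status, pgsql-data-status must survive reboots, so it should be readable with a permanent lifetime. A sketch for node pm01:

    # Query pm01's pgsql-data-status attribute (print the value only):
    crm_attribute -l forever -N pm01 -n pgsql-data-status -G -q
    # -> one of DISCONNECT, STREAMING|ASYNC, STREAMING|SYNC, LATEST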
  • 19. Difference between "pgsql-status" and "pgsql-data-status"
    pgsql-status
    - the running status of PostgreSQL: STOP; HS:alone, HS:async, HS:sync; PRI; UNKNOWN
    - used to know the running status of PostgreSQL and to make other resources work together with it
    pgsql-data-status
    - the data status based on the relationship to the PRI: DISCONNECT; STREAMING|ASYNC, STREAMING|SYNC and so on; LATEST
    - used to record whether the data is old or new, and to judge whether Pacemaker may promote the node to Master
    - the status recorded just before the PRI crashed remains, so it does not always correspond to the running status: it is not impossible to see pgsql-status = HS:alone together with pgsql-data-status = LATEST
  • 20. Status mapping (final)
    [Diagram: the same mapping as slide 16.] A node may become Master when there is no PRI on the other nodes and its pgsql-data-status is new (= LATEST or STREAMING|SYNC), or when it has newer data than the other nodes (at initial start-up).
  • 21. Sample configuration
  • 22. Example (simple version)
    [Diagram: two nodes, pm01 and pm02. LAN for service: 192.168.103.0/24 on eth3, with ping target 192.168.103.1. LAN for replication: 192.168.104.0/24 on eth4. The group master-group holds vip-master (192.168.103.110) and vip-rep (192.168.104.110); vip-slave is 192.168.103.111. PostgreSQL runs as the Master/Slave resource msPostgresql; the clone clnPingd monitors the network; a separate LAN is used for Pacemaker. STONITH is skipped because this is the simple version.]
  • 23. Sample Pacemaker configuration 1/2

    property \
        no-quorum-policy="ignore" \
        stonith-enabled="false" \
        crmd-transition-delay="0s"

    rsc_defaults \
        resource-stickiness="INFINITY" \
        migration-threshold="1"

    # use Master/Slave
    ms msPostgresql pgsql \
        meta \
            master-max="1" \
            master-node-max="1" \
            clone-max="2" \
            clone-node-max="1" \
            notify="true"

    group master-group \
        vip-master \
        vip-rep \
        meta \
            ordered="false"

    clone clnPingd pingCheck

    primitive vip-master ocf:heartbeat:IPaddr2 \
        params ip="192.168.103.110" nic="eth3" cidr_netmask="24" \
        op start timeout="60s" interval="0s" on-fail="restart" \
        op monitor timeout="60s" interval="10s" on-fail="restart" \
        op stop timeout="60s" interval="0s" on-fail="block"

    primitive vip-rep ocf:heartbeat:IPaddr2 \
        params ip="192.168.104.110" nic="eth4" cidr_netmask="24" \
        meta migration-threshold="0" \
        op start timeout="60s" interval="0s" on-fail="stop" \
        op monitor timeout="60s" interval="10s" on-fail="restart" \
        op stop timeout="60s" interval="0s" on-fail="block"

    primitive vip-slave ocf:heartbeat:IPaddr2 \
        params ip="192.168.103.111" nic="eth3" cidr_netmask="24" \
        meta resource-stickiness="1" \
        op start timeout="60s" interval="0s" on-fail="restart" \
        op monitor timeout="60s" interval="10s" on-fail="restart" \
        op stop timeout="60s" interval="0s" on-fail="block"

    # main setting (detailed on slide 26)
    primitive pgsql ocf:heartbeat:pgsql \
        params \
            pgctl="/usr/pgsql-9.1/bin/pg_ctl" \
            psql="/usr/pgsql-9.1/bin/psql" \
            pgdata="/var/lib/pgsql/9.1/data/" \
            rep_mode="sync" \
            node_list="pm01 pm02" \
            restore_command="cp /var/lib/pgsql/9.1/data/pg_archive/%f %p" \
            primary_conninfo_opt="keepalives_idle=60 keepalives_interval=5 keepalives_count=5" \
            master_ip="192.168.104.110" \
            stop_escalate="0" \
        op start timeout="30s" interval="0s" on-fail="restart" \
        op monitor timeout="30s" interval="11s" on-fail="restart" \
        op monitor timeout="30s" interval="10s" on-fail="restart" role="Master" \
        op promote timeout="30s" interval="0s" on-fail="restart" \
        op demote timeout="30s" interval="0s" on-fail="block" \
        op stop timeout="30s" interval="0s" on-fail="block" \
        op notify timeout="60s" interval="0s"
  • 24. Sample Pacemaker configuration 2/2

    primitive pingCheck ocf:pacemaker:pingd \
        params name="default_ping_set" host_list="192.168.103.1" multiplier="100" \
        op start timeout="60s" interval="0s" on-fail="restart" \
        op monitor timeout="60s" interval="10s" on-fail="restart" \
        op stop timeout="60s" interval="0s" on-fail="ignore"

    ### Resource Location ###
    location rsc_location-1 msPostgresql \
        rule -inf: not_defined default_ping_set or default_ping_set lt 100
    location rsc_location-2 vip-slave \
        rule 200: pgsql-status eq HS:sync \
        rule 100: pgsql-status eq PRI \
        rule -inf: not_defined pgsql-status \
        rule -inf: pgsql-status ne HS:sync and pgsql-status ne PRI

    ### Resource Colocation ###
    colocation rsc_colocation-1 inf: msPostgresql clnPingd
    colocation rsc_colocation-2 inf: master-group msPostgresql:Master

    ### Resource Order ###
    order rsc_order-1 0: clnPingd msPostgresql symmetrical=false
    order rsc_order-2 inf: msPostgresql:promote master-group:start symmetrical=false
    order rsc_order-3 0: msPostgresql:demote master-group:stop symmetrical=false

    I recommend adding a STONITH resource in production.
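If the two slides above are saved into one file, the whole configuration can be loaded in one step. A sketch assuming crmsh and a hypothetical file name pgsql.crm:

    # Load (or update) the configuration, then check the result:
    crm configure load update pgsql.crm
    crm configure show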
  • 25. Pick-up
  • 26. Main settings of the pgsql RA

    primitive pgsql ocf:heartbeat:pgsql \
        params \
            pgctl="/usr/pgsql-9.1/bin/pg_ctl" \
            psql="/usr/pgsql-9.1/bin/psql" \
            pgdata="/var/lib/pgsql/9.1/data/" \
            rep_mode="sync" \             <- use sync replication
            node_list="pm01 pm02" \       <- all hostnames taking part in replication
            restore_command="cp /var/lib/pgsql/9.1/data/pg_archive/%f %p" \
                                          <- restore_command of recovery.conf
            primary_conninfo_opt="keepalives_idle=60 keepalives_interval=5 keepalives_count=5" \
                                          <- primary_conninfo of recovery.conf (application_name can't be set)
            master_ip="192.168.104.110" \ <- IP of vip-rep
            stop_escalate="0" \           <- use immediate shutdown to speed up failover
        op start timeout="30s" interval="0s" on-fail="restart" \
        op monitor timeout="30s" interval="11s" on-fail="restart" \
        op monitor timeout="30s" interval="10s" on-fail="restart" role="Master" \
        op promote timeout="30s" interval="0s" on-fail="restart" \
        op demote timeout="30s" interval="0s" on-fail="block" \
        op stop timeout="30s" interval="0s" on-fail="block" \
        op notify timeout="60s" interval="0s"
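For reference, the recovery.conf that the RA generates on a Slave from these parameters should look roughly like the following. This is an illustration inferred from the parameter names above; the RA writes and removes the file itself, so never create it by hand:

    standby_mode = 'on'
    # host comes from master_ip and the keepalives_* options from
    # primary_conninfo_opt; application_name is set by the RA to the
    # node name (here pm02) and cannot be overridden:
    primary_conninfo = 'host=192.168.104.110 port=5432 user=postgres application_name=pm02 keepalives_idle=60 keepalives_interval=5 keepalives_count=5'
    restore_command = 'cp /var/lib/pgsql/9.1/data/pg_archive/%f %p'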
  • 27. Start-up setting of vip-slave

    location rsc_location-2 vip-slave \
        rule 200: pgsql-status eq HS:sync \   <- if pgsql-status = HS:sync, start vip-slave here
        rule 100: pgsql-status eq PRI \       <- if pgsql-status = PRI, start vip-slave here
        rule -inf: not_defined pgsql-status \
        rule -inf: pgsql-status ne HS:sync and pgsql-status ne PRI

    Pacemaker preferentially starts vip-slave on the Slave (HS:sync).

    primitive vip-slave ocf:heartbeat:IPaddr2 \
        params ip="192.168.103.111" nic="eth3" cidr_netmask="24" \
        meta resource-stickiness="1"

    You need to set a low resource-stickiness so that vip-slave is not pinned to one node.
  • 28. Restart setting of vip-rep

    primitive vip-rep ocf:heartbeat:IPaddr2 \
        params ip="192.168.104.110" nic="eth4" cidr_netmask="24" \
        meta migration-threshold="0" \
        op start timeout="60s" interval="0s" on-fail="stop" \
        op monitor timeout="60s" interval="10s" on-fail="restart" \
        op stop timeout="60s" interval="0s" on-fail="block"

    If vip-rep is broken, Pacemaker attempts to restart it.
  • 29. Start and stop order between the Master and master-group (vip-master, vip-rep)

    order rsc_order-2 inf: msPostgresql:promote master-group:start symmetrical=false
    order rsc_order-3 0: msPostgresql:demote master-group:stop symmetrical=false

    Start the virtual IPs (master-group includes them) after promote, and stop them after demote, so as not to cut the replication connection.
  • 30. Display example of the crm_mon command

    Online: [ pm01 pm02 ]

    vip-slave       (ocf::heartbeat:IPaddr2): Started pm02
    Resource Group: master-group                 <- virtual IPs
        vip-master  (ocf::heartbeat:IPaddr2): Started pm01
        vip-rep     (ocf::heartbeat:IPaddr2): Started pm01
    Master/Slave Set: msPostgresql               <- state of PostgreSQL
        Masters: [ pm01 ]
        Slaves: [ pm02 ]
    Clone Set: clnPingd
        Started: [ pm01 pm02 ]

    Node Attributes:
    * Node pm01:
        + default_ping_set      : 100
        + master-pgsql:0        : 1000
        + pgsql-data-status     : LATEST             <- pgsql-status and pgsql-data-status on pm01
        + pgsql-status          : PRI
        + pgsql-master-baseline : 00000000150000B0   <- the RA shows the xlog location when promote is called
        + pm02-eth1             : up
        + pm02-eth2             : up
    * Node pm02:
        + default_ping_set      : 100
        + master-pgsql:1        : 100
        + pgsql-data-status     : STREAMING|SYNC     <- pgsql-status and pgsql-data-status on pm02
        + pgsql-status          : HS:sync
        + pm01-eth1             : up
        + pm01-eth2             : up

    Migration summary:
    * Node pm01:
    * Node pm02:
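The display above corresponds to a one-shot run that includes the node attributes, i.e. something like:

    # one-shot cluster status including node attributes (pgsql-status etc.):
    crm_mon -A -1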
  • 31. Settings of PostgreSQL
  • 32. postgresql.conf (only the important points)
    - listen_addresses = '*'
      PostgreSQL can't listen on one particular IP, because the Slave doesn't have vip-master.
    - delete synchronous_standby_names
      The RA uses this parameter to switch between sync and async: it inserts "include /var/lib/pgsql/tmp/rep_mode.conf" into postgresql.conf.
    - restart_after_crash = off
      Pacemaker manages the state of PostgreSQL.
    - replication_timeout = 20s (rough value)
      If the replication LAN or the Slave breaks, the Master's transactions are suspended until PostgreSQL cuts the connection at this timeout.
    - hot_standby = on
      needed to monitor the Slave
    - max_standby_streaming_delay = -1 and max_standby_archive_delay = -1
      prevent the monitor on the Slave from being cancelled
  • 33. postgresql.conf (others)
    - wal_level = hot_standby
    - wal_keep_segments = 64 (rough value)
    - wal_receiver_status_interval = 5s (rough value; this parameter must be smaller than replication_timeout)
    - hot_standby_feedback = on
    - archive_mode = on
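Putting slides 32 and 33 together, the relevant postgresql.conf excerpt looks like this. The archive_command line is my assumption, chosen to match the restore_command in the RA settings; it does not appear on the slides:

    listen_addresses = '*'
    restart_after_crash = off
    replication_timeout = 20s          # rough value suggested in the talk
    hot_standby = on
    max_standby_streaming_delay = -1
    max_standby_archive_delay = -1
    wal_level = hot_standby
    wal_keep_segments = 64             # rough value suggested in the talk
    wal_receiver_status_interval = 5s  # must be < replication_timeout
    hot_standby_feedback = on
    archive_mode = on
    archive_command = 'cp %p /var/lib/pgsql/9.1/data/pg_archive/%f'   # assumption
    # do not set synchronous_standby_names; the RA manages it through
    # the included /var/lib/pgsql/tmp/rep_mode.conf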
  • 34. Lock file
  • 35. Lock file
    - This file is created when promote is called, to avoid data inconsistency.
      - Pacemaker can't start PostgreSQL while this file exists.
      - default path: /var/lib/pgsql/tmp/PGSQL.lock
    - The file is removed when demote is called and no HS exists.
  • 36. Flaw example (without the lock file)
    [Diagram: WAL locations 1 to 9 on each node.]
    - An application writes data at location 7 on the PRI.
    - The PRI breaks during replication, before location 7 reaches the HS, and failover occurs.
    - The application writes data at locations 7, 8, ... on the new PRI.
    - The old PRI is recovered as an HS: data is replicated only from location 8 onward.
    - Data inconsistency occurs, because the old PRI keeps its own stale data at location 7.
  • 37. With the lock file
    [Diagram: the same scenario, but the recovered old PRI fails to start.]
    - The lock file is created during promote.
    - An application writes data at location 7; the PRI breaks and failover occurs.
    - When the old PRI is recovered as an HS, its start fails with an error because the lock file exists.
    - You need to copy the data from the new PRI to the old PRI and remove the file.
  • 38. Conclusion
    - 3 major functions: failover of the Master; switching the setting between sync and async; managing old data.
    - Pacemaker settings: a virtual IP (vip-rep) is needed for replication; if necessary, you can use vip-slave to handle read-only queries.
    - Caution: a Slave can't connect to the PRI if the TimelineID is different, and data inconsistency may occur after failover; you need to copy the data from the new PRI to the old PRI and remove the lock file.
  • 39. Reference: my development environment
    - Pacemaker 1.0.12 (1.0.11's Master/Slave function has a bug)
    - PostgreSQL 9.1 (not compatible with PostgreSQL 9.0)
