So, Philip gave me the title for the talk and I had to run with it! ;)
My name is Michael Renner.
Twitter handle - which has already gotten me into trouble.
Web operations, strong interest in databases and scaling,
PG enthusiast since 2004.
If you've got questions - please just ask!
Who's using Postgres?
A free RDBMS done right
Relational database management system
It does SELECT, INSERT, UPDATE, DELETE
In a sane & maintainable way.
Tries hard to not surprise users, hype resistant.
No single commercial entity behind the project.
Multiple consulting companies, distros, and large companies employ the core
developers who have commit access.
One major release per year
Five years maintenance
Multiple maintenance releases per year
Friendly & Competent
• Freenode: #postgresql(-de)
more often than not, consultants from the various companies are hanging out
in the channels
9.4 ante portas
That being said, the next major release will come after the summer;
extrapolating from past releases, it should arrive around September.
It'll bring quite a bit of new features, I selected a few interesting ones.
... are aggregate functions over ordered sets!
Aggregate functions are things like sum or count which can operate on arbitrary
sets of data.
If the set is ordered you can do additional things like...
Calculate 95th percentile
postgres=# SELECT percentile_disc(0.95) WITHIN GROUP(ORDER BY i) FROM
generate_series(1,100) AS s(i);
Most importantly - native datatype with jsonb
In the past, JSON was stored as plain text which was validated as correct JSON.
Now there is a separate on-disk representation format:
a bit more expensive while writing (serialization),
but much faster while querying, since the JSON doesn't need to be reparsed
each time it is accessed.
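To illustrate (a minimal sketch, assuming a 9.4 server; the sample document is made up): because jsonb is stored pre-parsed, operators such as containment (`@>`) work directly on the binary form, without reparsing the text.

```sql
-- Containment check on a jsonb value; the document is parsed once on input.
SELECT '{"product": "PostgreSQL", "version": 9.4}'::jsonb
           @> '{"version": 9.4}'::jsonb AS matches;
-- matches: t
```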
New JSON functions
$ SELECT * FROM json_to_recordset(
    '[{"name":"e","value":2.718},{"name":"pi","value":3.141},{"name":"tau","value":6.283}]'
  ) AS x (name text, value numeric);
 name | value
------+-------
 e    | 2.718
 pi   | 3.141
 tau  | 6.283
...and to complement the new data type, there are also new accessor functions
and quite a bit of new replication features.
which we'll cover later
Which brings us right up to replication
A tale of sorrows
or: "Brewer hates us"
If you've got a strong stomach, read through:
which is a tale of sorrows, and this is not limited to Postgres or SQL databases.
Getting distributed database systems right is _HARD_.
And even the distributed-database poster children get it wrong
Brewer's CAP Theorem
• it is impossible for a distributed system to
simultaneously provide all three of these guarantees:
• Consistency
• Availability
• Partition tolerance
In a nutshell
Consistency - all nodes see the same data at the same time
Availability - a guarantee that every request receives a response about
whether it succeeded or failed
Partition tolerance - the system continues to operate despite arbitrary message
loss or failure of part of the system
Brewer says: It's impossible to get all three
Managers like things available & partition tolerant
Scale up, not out
Postgres, in the past, solved this problem by not dealing with it in the first
place.
So that you don't have to bother with this, most people will usually tell you to
just scale up:
throw more/bigger hardware at the problem and be done with it.
Real world says:
But that's not always possible.
You might need geo-redundant database servers, or you might run in an
environment where "scaling up" is not a feasible option (hello EC2!)
So we need replication.
What are our options?
So we need replication... Postgres has a bit of a Perl problem - TMTOWTDI
(there's more than one way to do it)
...one of the oldest options
Usually achieved by using a SAN or DRBD
HA solution tacked on top of it; if one server goes down, the other starts up
Add a trigger to all replicated tables
Changes get written to a separate table
Daemon reads changes from source DB and writes to destination DB
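The trigger-based approach can be sketched roughly like this (a hypothetical minimal example; the `replication_queue` and `accounts` tables and the `log_change` function are made-up names, not from any particular tool):

```sql
-- Queue table that the external replication daemon drains and replays
-- on the destination database.
CREATE TABLE replication_queue (
    id        bigserial PRIMARY KEY,
    tablename text      NOT NULL,
    operation text      NOT NULL,
    payload   text      NOT NULL
);

-- Trigger function: record every change alongside the original write.
CREATE OR REPLACE FUNCTION log_change() RETURNS trigger AS $$
BEGIN
    INSERT INTO replication_queue (tablename, operation, payload)
    VALUES (TG_TABLE_NAME, TG_OP, row_to_json(NEW)::text);
    RETURN NEW;
END;
$$ LANGUAGE plpgsql;

-- Attach it to each replicated table.
CREATE TRIGGER replicate_accounts
    AFTER INSERT OR UPDATE ON accounts
    FOR EACH ROW EXECUTE PROCEDURE log_change();
```

This is the model tools like Slony or Londiste implement far more robustly; doing it by hand like this glosses over deletes, schema changes, and cleanup of applied queue entries.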
or "The proxy approach"
Connect to middleware instead of real database
All queries executed on middleware will be sent to many databases
That's fine until one of the servers isn't reachable!
(Write Ahead) Log-based
And the most common ones
• Postgres writes all changes it makes to the table & index files into a log,
which is used during crash recovery
• Send log contents to a secondary server
• Secondary server does "continuous crash recovery"
What should you use?
With all those options the question that comes up is...
and since "it depends" is probably not a sufficient answer for most of you
I'd recommend looking at log-based replication first, and only reconsider
when you're sure it won't fit you.
It has its own bag of things to look out for, but it's where most of the
development and operations resources are spent nowadays.
Log-based replication in Postgres comes in two flavors:
• WAL shipping: completed WAL segments are copied to the
slave and applied there
• Streaming replication:
transactions are streamed to the slave
• Can also be configured for synchronous replication
On WAL handling
• Server generates WAL with every
modifying operation, in 16MB segments
• A segment normally gets recycled after a successful checkpoint
• Lots of conditions and config settings
that can change the behaviour
• Slave needs a base copy from the master + all
WAL files to reach a consistent state
$ $EDITOR pg_hba.conf
host replication replication 192.0.2.0/24 trust
$ $EDITOR postgresql.conf
wal_level = hot_standby
max_wal_senders = 5
wal_keep_segments = 32
This is a strict streaming replication example, with no log archiving.
If the slave server is offline for too long, it needs to be freshly initialized
from the master.
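The slave side can be set up along these lines (a minimal sketch; hostname, user, and data directory are assumptions):

```
$ pg_basebackup -h master.example.com -U replication \
      -D /var/lib/postgresql/9.3/main -X stream

$ $EDITOR recovery.conf
standby_mode = 'on'
primary_conninfo = 'host=master.example.com user=replication'

$ $EDITOR postgresql.conf
hot_standby = on
```

`pg_basebackup` clones the master, and `recovery.conf` tells the server to start as a standby and stream from the master.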
• Slaves are 100% identical to master
• No selective replication (DBs,Tables, etc.)
• No slave-only indexes
• WAL segment handling can be tricky
• Slave query conflicts due to master TXs
• Excessive disk space usage on master
• Broken replication due to already-recycled
segments on master
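To keep an eye on these problems, the master exposes the state of all attached standbys (available since 9.1; a minimal query, column names per 9.3):

```sql
-- How far along is each standby? Large gaps between sent_location and
-- replay_location hint at replication lag or a stuck standby.
SELECT client_addr, state, sent_location, replay_location
FROM pg_stat_replication;
```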
But when running with log based replication there are things to look out for
Coming in 9.4
All of the stuff works out of the box with 9.3
There are a few new things coming in postgres 9.4
One of the most interesting additions is logical decoding
The master server generates a list of tuple modifications.
Similar to trigger-based replication, but much more efficient and easier to use.
Almost identical to the "row based replication" format in MySQL.
$ INSERT INTO z (whatever) VALUES ('row2');
INSERT 0 1
$ SELECT * FROM pg_logical_slot_get_changes('depesz', null, null, 'include-xids', '0');
  location  | xid | data
------------+-----+------------------------------------------------------------
 0/5204A858 | 932 | BEGIN
 0/5204A858 | 932 | table public.z: INSERT: id[integer]:1 whatever[text]:'row2'
 0/5204A928 | 932 | COMMIT
Here's an example of what logical decoding will produce
You can find more extensive examples on Hubert "depesz" Lubaczewski's blog.
Replication slots are an additional feedback mechanism between slave and
master to communicate which WAL files are still needed.
They are also the backbone for logical replication.
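For example (a minimal sketch, assuming a 9.4 server; the slot name is made up), the master will retain WAL for a slot until its consumer has received it:

```sql
-- Create a physical replication slot on the master.
SELECT * FROM pg_create_physical_replication_slot('standby1');

-- The standby then references it in recovery.conf:
--   primary_slot_name = 'standby1'
```

This replaces guessing at `wal_keep_segments`, at the cost that an abandoned slot will make the master retain WAL indefinitely.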
Time-delayed replication adds an additional safeguard against operational
mistakes:
commit/checkpoint records are only applied after a configured time value has
passed since the TX was completed on the master.
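Configured on the standby roughly like this (the delay value is an arbitrary example):

```
$ $EDITOR recovery.conf
recovery_min_apply_delay = '1h'
```

A standby lagging one hour behind gives you a window to stop replay before an accidental `DROP TABLE` on the master reaches it.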
What's coming in 9.5+?
These were the things that are already included in 9.4,
for the coming development cycles there are already a few things in the pipeline
What's currently missing is a reliable consumer for the data generated by 9.4
People, mostly Andres Freund from 2ndQuadrant, are working on this topic
and I expect that there's more to talk about next year with 9.5
It will be possible to build Galera-like systems with this infrastructure.
...or INSERT ON DUPLICATE KEY ...
Was planned for 9.4, but turned out to be more complicated than anticipated
Developer meeting later this year where the course of action will be decided
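Until a native upsert lands, a commonly seen workaround uses a writable CTE (a sketch with made-up table and column names; note that without retry logic it is not safe under heavy concurrency):

```sql
-- Try the UPDATE first; if it touched no row, INSERT instead.
WITH upsert AS (
    UPDATE counters SET hits = hits + 1
    WHERE name = 'page1'
    RETURNING *
)
INSERT INTO counters (name, hits)
SELECT 'page1', 1
WHERE NOT EXISTS (SELECT 1 FROM upsert);
```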
That's all for now
Any questions, ideas?
You can hit me up on Twitter or via mail.
And there's also a link collection of tools and projects to look at when you're
building your own replication setup