
The ninja elephant: scaling the analytics database in Transferwise

Business intelligence and analytics are the core of any great company, and Transferwise is no exception.

The talk will start with a brief history of the legacy analytics implemented with MySQL and how we scaled up performance using PostgreSQL. In order to get fresh data from the core MySQL databases in real time we used a modified version of pg_chameleon, which also obfuscated the PII data.

The talk will also cover the challenges and the lessons learned by the developers and analysts when bridging MySQL with PostgreSQL.

  1. The ninja elephant — Scaling the analytics database in Transferwise
     Federico Campoli (Transferwise), 25th January 2017
  2. First rule about talks: don't talk about the speaker
     • Born in 1972
     • Passionate about IT since 1982, mostly because of the TRON movie
     • Joined the Oracle DBA secret society in 2004
     • Fell in love with PostgreSQL in 2006
     • Currently runs the Brighton PostgreSQL User Group
     • Works at Transferwise as a Data Engineer
  3. Table of contents
  5. We have an appointment, and we are late!
  7. The Gordian Knot of the analytics db
     • The data engineer started in July 2016
     • He was involved in a task that was not customer facing
     • However, the task was very critical to the business: solving the performance issues on the MySQL analytics database
     • The performance was bad, despite the considerable resources assigned to the VM
     • And despite the data set being only medium sized
  8. Tactical assessment
     The existing database had the following configuration:
     • MySQL 5.6 on InnoDB
     • InnoDB buffer size 60 GB
     • RAM available 70 GB
     • 20 CPUs
     • 600 GB used on disk
     • Analytic queries performed via Looker and Tableau
     • The main live MySQL schema replicated into the analytics database
     • Several schemas from the service databases imported on a regular basis
     • One schema used for obfuscating PII and denormalising the heavy queries
  10. The frog effect
     If you drop a frog in a pot of boiling water, it will of course frantically try to clamber out. But if you place it gently in a pot of tepid water and turn up the heat, it will be slowly boiled to death.
     • The performance issues worsened over a two-year span
     • The obfuscation was made via custom views
     • The data size on the MySQL master increased over time, causing the optimiser to switch to materialise when accessing the views
     • The analytics tools struggled even under normal load
     • In busy periods the database became almost unusable
     • Analysts were busy tuning existing queries rather than writing new ones
     • A new solution was needed
  11. Table of contents
  12. The eye of the storm
  13. One size doesn't fit all
     It was clear that MySQL was no longer a good fit. However, the new solution had to meet some specific needs:
     • Data updated in almost real time from the live database
     • PII obfuscated for the analysts
     • PII available in clear for the power users
     • The system should be able to scale out for several years
     • Modern SQL for better analytics queries
  14. May the best database win
     The analysts' team shortlisted a few solutions. Each solution partially covered the requirements:
     • Google BigQuery
     • Amazon RedShift
     • Snowflake
     • PostgreSQL
  15. Shortlisting the shortlist
     Google BigQuery and Amazon RedShift did not satisfy the analytics requirements and were removed from the list. Both PostgreSQL and Snowflake offered very good performance and modern SQL. Neither of them offered a replication system from the MySQL system.
  16. Straight into the cloud
     Snowflake is a cloud-based data warehouse service. It's based on Amazon S3 and comes in different sizings. Their pricing system is very appealing, and the preliminary tests showed Snowflake outperforming PostgreSQL¹.
     ¹ PostgreSQL single machine vs cloud-based parallel processing
  18. Streaming copy
     Using FiveTran, an impressive multi-technology data pipeline, the data would flow in real time from our production server to Snowflake. Unfortunately, there was just one little catch: there was no support for obfuscation.
  19. Customer comes first
     At Transferwise we really care about our customers' data security. Our policy for PII data is that any personal information moving outside our perimeter shall be obfuscated. The third-party extraction and replica for Snowflake required full read access to our live systems, or at least a database configured as a cascading replica. The data should have been obfuscated before allowing the third-party replicator access.
  20. Proactive development
     The data engineer, foreseeing the issue, developed in his spare time a proof of concept based on the replica tool pg_chameleon, which uses a Python library to read the MySQL replica. The tests on a small copy of the live database were successful. The tool's simple structure made it possible to add the obfuscation in real time with minimal changes.
  21. And the winner is...
     In this scenario PostgreSQL would be the replicated and obfuscated data source for FiveTran. However, because the performance on PostgreSQL was quite good and the system had a good margin for scaling up, the decision was to keep the analytics data behind our perimeter.
  22. Table of contents
  23. MySQL Replica in a nutshell
  24. A quick look at the replication system
     Let's have a quick overview of how the MySQL replica works and how the replicator interacts with it. The following slides refer to pg_chameleon, because the custom obfuscator tool shares most of its concepts and code with pg_chameleon.
  25. MySQL Replica
     • The MySQL replication protocol is logical
     • When MySQL is configured properly, the RDBMS saves the changed data into binary log files
     • The slave connects to the master and gets the replication data
     • The replication data is saved into the slave's local relay logs
     • The local relay logs are replayed on the slave
  26. MySQL Replica (diagram)
  29. A chameleon in the middle
     • pg_chameleon mimics a MySQL slave's behaviour
     • It connects to the master and reads the data changes
     • It stores the row images into a PostgreSQL table using the jsonb format
     • A PL/pgSQL function decodes the rows and replays the changes
     • PostgreSQL acts as relay log and replication slave
     • With an extra cool feature: it initialises the PostgreSQL replica schema in just one command
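The relay flow above can be sketched in Python. This is a simplified illustration, not pg_chameleon's actual schema: the field names, `row_image` and `replay_sql` are hypothetical, and quoting/escaping is omitted.

```python
import json

def row_image(event_type, schema, table, values):
    """Build a row-image document in the shape that would be stored
    as jsonb in the PostgreSQL relay table (field names illustrative)."""
    return {"type": event_type, "schema": schema, "table": table, "values": values}

def replay_sql(image):
    """Decode one row image into the statement a PL/pgSQL function
    would replay on the destination schema."""
    target = f'{image["schema"]}.{image["table"]}'
    if image["type"] == "insert":
        cols = ", ".join(image["values"])
        vals = ", ".join(repr(v) for v in image["values"].values())
        return f"INSERT INTO {target} ({cols}) VALUES ({vals})"
    if image["type"] == "delete":
        conds = " AND ".join(f"{c} = {v!r}" for c, v in image["values"].items())
        return f"DELETE FROM {target} WHERE {conds}"
    raise ValueError(f"unsupported event: {image['type']}")

# The jsonb column would hold the serialised image:
event = row_image("insert", "analytics", "transfers", {"id": 42, "state": "paid"})
relay_row = json.dumps(event)
```

Storing the image as jsonb is what lets a single PostgreSQL table act as the relay log: the decode step happens later, inside the database.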
  30. MySQL replica + pg_chameleon (diagram)
  31. Log formats
     MySQL supports different formats for the binary logs:
     • STATEMENT logs the statements, which are replayed on the slave. It seems the best solution for performance. However, replaying queries with non-deterministic elements generates inconsistent slaves (e.g. an insert with uuid()).
     • ROW is deterministic. It logs the row images and the DDL statements. This is the format required for pg_chameleon to work.
     • MIXED takes the best of both worlds. The master logs statements, unless a non-deterministic element is used; in that case it logs the row image.
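For reference, the binlog format is selected in the MySQL server configuration. A minimal my.cnf sketch (values illustrative; the option names are as MySQL 5.6 defines them):

```
[mysqld]
server-id     = 1          # unique id, required for replication
log-bin       = mysql-bin  # enable the binary log
binlog_format = ROW        # ROW is what pg_chameleon needs
```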
  32. Table of contents
  33. Maximum effort
  34. Replica and obfuscation
     The data engineer worked on pg_chameleon and built a minimum viable product. The project was forked into a Transferwise-owned repository in order to add the obfuscation capabilities and other specific functionality, like the daily procedures for the pre-aggregated schema.
  35. Mighty morphing power elephant
     The replica initialisation locks the MySQL tables in read-only mode. To avoid locking the main database for several hours, a secondary MySQL replica is set up with the local query logging enabled. The cascading replica also made it possible to use the ROW binlog format, as the master uses MIXED for performance reasons.
  36. This is what awesome looks like!
     • A MySQL master is replicated into a MySQL slave
     • The slave's data is copied and obfuscated using a PostgreSQL database!
  38. Replica initialisation
     The replica initialisation follows the same rules as any MySQL replica setup:
     • Flush the tables with read lock
     • Get the master's coordinates
     • Copy the data
     • Release the locks
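On the MySQL side, those four steps map onto standard commands. A sketch, omitting the session handling that must keep the lock alive while the copy runs:

```
FLUSH TABLES WITH READ LOCK;  -- freeze the tables in read-only mode
SHOW MASTER STATUS;           -- record binlog file and position (the coordinates)
-- ... copy the data from the locked tables ...
UNLOCK TABLES;                -- release the locks
```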
  39. Tricky SQL
     The data copy pulls the data out of MySQL in CSV format using a very tricky SQL statement (here as embedded in the Python source):

     SELECT
         CASE
             WHEN data_type="enum"
                 THEN SUBSTRING(COLUMN_TYPE,5)
         END AS enum_list,
         CASE
             WHEN data_type IN ('"""+"','".join(self.hexify)+"""')
                 THEN concat('hex(',column_name,')')
             WHEN data_type IN ('bit')
                 THEN concat('cast(`',column_name,'` AS unsigned)')
             ELSE concat('`',column_name,'`')
         END AS column_csv
     FROM information_schema.COLUMNS
     WHERE table_schema=%s
       AND table_name=%s
     ORDER BY ordinal_position;
  40. Fallback on failure
     The CSV data is pulled out in slices in order to avoid memory overload. The file is then pushed into PostgreSQL using the COPY command. However...
     • COPY is fast, but it is a single transaction
     • One failure and the entire batch is rolled back
     • If this happens, the procedure loads the same data using INSERT statements
     • Which can be very slow
     • But at least discards only the problematic rows
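The fallback logic can be sketched database-free. In this sketch `copy_batch` and `insert_row` are hypothetical callables standing in for the real COPY and INSERT calls against PostgreSQL:

```python
def load_batch(rows, copy_batch, insert_row):
    """Try the fast path first: COPY the whole batch in one transaction.
    If anything in the batch fails, replay it row by row with INSERT,
    discarding only the rows that error out. Returns the discarded rows."""
    try:
        copy_batch(rows)              # fast path: single transaction
        return []
    except Exception:
        discarded = []
        for row in rows:              # slow path: row-by-row INSERT
            try:
                insert_row(row)
            except Exception:
                discarded.append(row) # keep the good rows, set aside the bad
        return discarded
```

The design trade-off is exactly the one on the slide: the slow path costs a round trip per row, but it turns one poisoned row into one lost row instead of one lost batch.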
  41. Obfuscation when initialising
     The obfuscation process is quite simple and uses the pgcrypto extension for hashing in SHA-256.
     • When the replica is initialised, the data is copied into the schema in clear
     • The table locks are released
     • The tables with PII are copied and obfuscated in a separate schema
     • The process builds the indices on both the clear and the obfuscated schemas
     • The tables without PII data are exposed to the normal users using simple views
     • All the varchar fields in the obfuscated schema are converted into text fields
  42. Obfuscation on the fly
     The obfuscation is also applied when the data is replicated. The approach is very simple:
     • When a row image is captured, the process checks whether the table contains PII data
     • In that case the process generates a second jsonb element with the PII data obfuscated
     • The jsonb element carries the complete information about the destination schema
     • The PL/pgSQL function executes the change on the schema in clear and on the schema with obfuscated data
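The per-row obfuscation step can be sketched like this. A simplified illustration: the real tool hashes via pgcrypto inside PostgreSQL, whereas here `hashlib` stands in, and the `PII_COLUMNS` mapping is hypothetical:

```python
import hashlib

# Hypothetical mapping of table -> PII columns to obfuscate
PII_COLUMNS = {"users": {"email", "full_name"}}

def obfuscate_row(table, row):
    """Return a second copy of the row image with the PII columns
    replaced by their SHA-256 hex digests, like the obfuscated jsonb
    element written alongside the clear one."""
    pii = PII_COLUMNS.get(table, set())
    return {
        col: hashlib.sha256(str(val).encode()).hexdigest() if col in pii else val
        for col, val in row.items()
    }
```

Tables with no PII entry pass through unchanged, which mirrors the slide: only tables flagged as containing PII get the second, obfuscated element.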
  43. The DDL: a real pain in the back
     The DDL replica is possible with a little trick:
     • MySQL, even in ROW format, emits the DDL as statements
     • A regular expression traps the DDL, like CREATE/DROP TABLE or ALTER TABLE
     • The mysql library gets the table's metadata from the information schema
     • The metadata is used to build the DDL in the PostgreSQL dialect
     This approach may not be elegant, but it is quite robust.
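A regex trap in the spirit of the one described above might look like this (the pattern and function are illustrative, not pg_chameleon's actual code; the metadata lookup and PostgreSQL translation would happen downstream):

```python
import re

# Catch the DDL statements worth replaying; ignore everything else.
DDL_TRAP = re.compile(
    r"^\s*(CREATE\s+TABLE|DROP\s+TABLE|ALTER\s+TABLE)\s+`?(\w+)`?",
    re.IGNORECASE,
)

def trap_ddl(statement):
    """Return (ddl_type, table_name) if the statement is a trapped DDL,
    otherwise None."""
    match = DDL_TRAP.match(statement)
    if match is None:
        return None
    ddl_type = match.group(1).upper().split()[0] + " TABLE"
    return ddl_type, match.group(2)
```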
  44. Timing
     Query                      MySQL            PostgreSQL   PostgreSQL cached
     Master procedure           20 hours         4 hours      N/A
     Extracting sharing ibans   didn't complete  3 minutes    1 minute
     Adyen notification         6 minutes        2 minutes    6 seconds
  45. Resource comparison
     Resource          MySQL    PostgreSQL
     Storage Size      940 GB   664 GB
     Server CPUs       18       8
     Server Memory     68 GB    48 GB
     Shared Memory     50 GB    5 GB
     Max connections   500      100
  47. Advantages of using PostgreSQL
     • Stronger security model
     • Better resource optimisation (see previous slide)
     • No invalid views
     • No performance issues with views
     • Complex analytics functions
     • Partitioning (thanks pg_pathman!)
     • BRIN indices
     "Some code was optimised inside, but actually very little - maybe 10-20% was improved. We'll do more of that in the future, but not yet. The good thing is that the performance gains we have can mostly be attributed just to PG vs MySQL. So there's a lot of scope to improve further."
     Jeff McClelland - Growth Analyst, data guru
  48. Table of contents
  49. Lessons learned
  51. init replica tuning
     The replica initialisation required several improvements.
     • The first init replica implementation didn't complete: the OOM killer killed the process when the memory usage was too high
     • In order to speed up the replica, some large tables not required in the analytics db were excluded from the init replica
     • Some tables required a custom slice size, because their row length triggered the OOM killer again
     • Estimating the total rows for the user's feedback is faster, but the output can be odd
     • Using unbuffered cursors improves both the speed and the memory usage
     However... even after fixing the memory issues, the initial copy took 6 days. Tuning the copy speed with the unbuffered cursors and the row number estimates improved the initial copy, which now completes in 30 hours, including the time required for the index build.
  52. Strictness is an illusion. MySQL doubly so
     MySQL's lack of strictness is not a mystery.
     • The replica broke down several times because of the funny way NOT NULL is managed by MySQL. To prevent any further replica breakdowns, fields with NOT NULL added via ALTER TABLE are always created as NULLable in PostgreSQL.
     • MySQL automatically truncates character strings to the varchar size. This is a problem if the field is obfuscated on PostgreSQL, because the hashed string might not fit into the corresponding varchar field. Therefore all the character varying fields on the obfuscated schema are converted to text.
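The varchar problem is easy to see with a quick check: a SHA-256 hex digest is always 64 characters, so an obfuscated value overflows any shorter varchar column (the 30-character limit below is chosen purely for illustration):

```python
import hashlib

VARCHAR_LIMIT = 30                  # illustrative source column size

value = "Jo"                        # even a 2-character value...
digest = hashlib.sha256(value.encode()).hexdigest()

assert len(value) <= VARCHAR_LIMIT  # fits in the source column
assert len(digest) == 64            # ...hashes to 64 hex characters
assert len(digest) > VARCHAR_LIMIT  # and would overflow varchar(30)
```

Converting the obfuscated columns to text sidesteps the problem for every column width at once.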
  53. I feel your lack of constraint disturbing
     Rubbish data can be stored in MySQL without the DBMS raising errors. When this happens, the replicator traps the error as the change is replayed on PostgreSQL and discards the problematic row. The value is logged in the replica's log, available for further actions.
  54. Table of contents
  55. Wrap up
  56. Did you say hire?
     WE ARE HIRING! https://transferwise.com/jobs/
  57. That's all folks!
     QUESTIONS?
  58. Contacts and license
     Twitter: 4thdoctor_scarf
     Transferwise: https://transferwise.com/
     Blog: http://www.pgdba.co.uk
     Meetup: http://www.meetup.com/Brighton-PostgreSQL-Meetup/
     This document is distributed under the terms of the Creative Commons
  59. Boring legal stuff
     • The 4th doctor meme - source memecrunch.com
     • The eye, phantom playground, light end tunnel - Copyright Federico Campoli
     • The dolphin picture - Copyright artnoose
     • Deadpool Maximum Effort - source Deadpool Zoeiro
     • Deadpool Clap - source memegenerator
  60. The ninja elephant — Scaling the analytics database in Transferwise
     Federico Campoli (Transferwise), 25th January 2017
