Pluk2013 bodybuilding ratheesh

Slides from my talk @ Percona Live London 2013. This talk is about database administration and how we manage Percona XtraDB at Bodybuilding.com. There are a few benchmarks on Percona, Fusion-io, and the XFS/EXT4 file systems.

Published in: Technology

  1. Bodybuilding.com
     •  Information
     •  Motivation
     •  Supplementation
  2. About Me
     •  Ratheesh Kaniyala
     •  MySQL Database Admin @ Bodybuilding.com
     •  Responsible for keeping the site up and running
     •  Work with system engineers, developers, and product owners to get the best out of the database system
  3. Agenda
     •  About Bodybuilding.com
     •  MySQL architecture @ Bodybuilding.com
     •  Backups
     •  Monitoring
     •  Some benchmarks on Fusion-io
     •  NoSQL @ BBcom
  4. What is Bodybuilding.com? We:
     •  Provide information + motivation + supplementation for leading a healthy lifestyle
     •  Are not just for bodybuilders
     •  Have hundreds of vendors
     •  Have thousands of content items (articles, exercises, videos)
     •  Have millions of users – and are still growing
  5. Statistics
     •  Ranked #319 US website and #709 worldwide
     •  15th largest forum, with over 100 million posts
     •  About 65 TB served from Akamai and 57 TB from CloudFront
     •  About 25 million unique visitors per month
     •  About 5,000 BodySpace registrations per day
  6. Applications – our main applications:
     •  Busy forum – powered by vBulletin, MySQL
     •  BodySpace – PHP, Java, MySQL, MongoDB
     •  Store – ATG (Art Technology Group), Oracle 11g
     •  Content – PHP, MySQL
  7. Why Percona? This is what Percona says:
     •  Your queries will run faster and more consistently.
     •  You will consolidate servers on powerful hardware.
     •  You will delay sharding, or avoid it entirely.
     •  You will spend less time tuning and administering.
     •  You will achieve higher uptime.
     •  You will troubleshoot without guesswork.
     And we found this to be true on every point.
  8. High Level Tech Diagram: security layer → web servers → caching layer → data layer (MySQL RDBMS, NoSQL, Sphinx/Solr)
  9. Data layer: Cluster A, Cluster B, Cluster C, Cluster D
  10. A typical MySQL cluster: reads; writes & some reads; search indexes; query replay for lru_dump; backups; offsite backup; analytics
  11. HA
     •  Multi-master asynchronous replication
     •  Active-passive mode
     •  What happens when any slave dies?
     •  What happens when the passive master dies?
     •  What happens when the active master dies?
  12. Active Master Died – Case 1
     •  M2 and all slaves will end up at the same position
     •  Point all read-only slaves to M2 via CHANGE MASTER
     •  Resume traffic
  13. Active Master Died – Case 2
     •  M2 and all slaves are in different states
     •  Promote the most up-to-date slave as master
     •  Fix the slave pool by looking into the binary logs
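The case-2 promotion can be sketched as position arithmetic on SHOW SLAVE STATUS output. A minimal illustration (not the speaker's actual tooling), assuming each slave reports its Master_Log_File and Read_Master_Log_Pos:

```python
# Illustrative sketch: when the active master dies and the slaves stopped at
# different points, promote the slave that read furthest into the dead
# master's binary log. Each dict mimics two columns of SHOW SLAVE STATUS.

def binlog_position(slave):
    """Order key: binlog file sequence number, then byte offset."""
    seq = int(slave["master_log_file"].rsplit(".", 1)[1])  # 'mysql-bin.000042' -> 42
    return (seq, slave["read_master_log_pos"])

def pick_new_master(slaves):
    """The most up-to-date slave wins; the rest are re-pointed at it."""
    return max(slaves, key=binlog_position)
```

The remaining slaves are then fixed up against the new master's binary logs, which is the manual step the slide mentions.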
  14. Maintenance on the active master
     •  Move all slaves to M2
     •  Change DNS to make M2 active
  15. Maintenance on active contd…
     •  Move all slaves to M2
     •  Change DNS to make M2 active
  16. Maintenance on active contd…
     •  It is not that easy to move N slaves to M2 manually, because the binlog file and position on M2 do not match those on M1
     •  Can we automate this slave migration? Yes
     •  How? There is an algorithm that I can share
  17. Slave migration algorithm
     •  Step 1: For each slave node, STOP SLAVE
     •  Step 2: Sleep for 10 seconds so that the secondary master is slightly ahead of the stopped slaves
     •  Step 3: STOP SLAVE on the secondary master and record its SHOW SLAVE STATUS result
     •  Step 4: For each slave node, START SLAVE UNTIL the position recorded in step 3
     •  Step 5: For each slave node, STOP SLAVE (required for changing master later)
        a) Check the slave status and stop the slave only when its current position equals the position recorded in step 3
        b) If the slave is too far behind and does not catch up to the UNTIL position within 20 seconds, skip the node and note it in the console and the log file. That node has to be taken care of manually later; look in the log file for directions on how to change master on it.
  18. Slave migration algorithm contd…
     •  Step 6: SHOW MASTER STATUS on the secondary master and record the position
     •  Step 7: For each slave node, CHANGE MASTER TO the secondary master with the position recorded in step 6
     •  Step 8: START SLAVE on the secondary master
     •  Step 9: For each slave node, START SLAVE
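The nine steps can be sketched as a dry-run plan generator. This is a hypothetical illustration of the ordering only, assuming the two recorded positions (steps 3 and 6) are passed in; the real script executes against live connections and includes the 20-second catch-up/skip logic of step 5b, which is omitted here:

```python
# Emit the ordered statements each server would run during the migration.
def migration_plan(slave_names, m2, until_file, until_pos, new_file, new_pos):
    """until_file/until_pos: M2's SHOW SLAVE STATUS position (step 3).
    new_file/new_pos: M2's SHOW MASTER STATUS position (step 6)."""
    plan = [(s, "STOP SLAVE") for s in slave_names]             # step 1
    plan.append(("operator", "sleep 10s"))                      # step 2
    plan.append((m2, "STOP SLAVE"))                             # step 3
    for s in slave_names:                                       # step 4
        plan.append((s, f"START SLAVE UNTIL MASTER_LOG_FILE='{until_file}', "
                        f"MASTER_LOG_POS={until_pos}"))
    for s in slave_names:                                       # step 5
        plan.append((s, "STOP SLAVE"))  # only once it reaches the UNTIL position
    for s in slave_names:                                       # step 7
        plan.append((s, f"CHANGE MASTER TO MASTER_HOST='{m2}', "
                        f"MASTER_LOG_FILE='{new_file}', MASTER_LOG_POS={new_pos}"))
    plan.append((m2, "START SLAVE"))                            # step 8
    plan += [(s, "START SLAVE") for s in slave_names]           # step 9
    return plan
```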
  19. Backups
     •  Nightly logical backups (mysqldump)
     •  Every 2 hours – LVM snapshot backups of the MySQL datadir: stop slave, take snapshot, copy to NAS, be ready for the next snapshot
     •  All backups are done on a dedicated backup slave (B1)
     •  The mylvmbackup script provides various hook points to call a custom script
     •  Note: if you are not validating your backups, do you really care about your data?
  20. Snapshot process: preconnect → premount → backupsuccess → update monitoring
  21. Backupsuccess
     •  Start MySQL on the snapshot data – this performs MySQL recovery and keeps the snapshot ready to use
     •  Validate the backup by running custom queries, counts, statistics, etc., so that you are sure it can be used for server rebuilds and recovery
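The validation pass can be sketched as a small check runner. This is a hypothetical example assuming a DB-API connection to the instance started on the snapshot data; the check queries and table names are illustrative, not the site's actual ones:

```python
# Each check is (name, query, predicate on the single returned value).
# Any failed check means the snapshot should not be trusted for rebuilds.
CHECKS = [
    ("forum posts exist", "SELECT COUNT(*) FROM post", lambda n: n > 0),
    ("users table readable", "SELECT COUNT(*) FROM user", lambda n: n > 0),
]

def validate_snapshot(conn, checks=CHECKS):
    """Run each check against the snapshot instance; return (name, ok) pairs."""
    results = []
    for name, sql, ok in checks:
        cur = conn.cursor()
        cur.execute(sql)
        (value,) = cur.fetchone()
        results.append((name, bool(ok(value))))
    return results
```

A hook like backupsuccess would run this right after MySQL comes up on the snapshot and alert on any failed check.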
  22. Monitoring
     •  We use Zabbix for monitoring – alerting + trending
     •  Close to 100,000 checks
     •  MySQL Performance Monitor plugin for InnoDB
     •  Custom scripts can use the Zabbix trapper – e.g., data size trend
     •  Dbview (PHP-based tool) for realtime processlist and slow query history
     •  Red light is ON – means trouble
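A data-size trend pushed through the Zabbix trapper might look like the following sketch, which shells out to the stock zabbix_sender CLI; the server, host, and item key names are made up for illustration:

```python
# Hypothetical sketch: push one custom trapper value to Zabbix via
# zabbix_sender. The item key must exist as a "Zabbix trapper" item
# on the monitored host.
import subprocess

def build_sender_cmd(server, host, key, value):
    """Return the zabbix_sender invocation for one trapper value."""
    return ["zabbix_sender", "-z", server, "-s", host,
            "-k", key, "-o", str(value)]

def report_data_size(size_bytes, server="zabbix.example.com", host="db1"):
    cmd = build_sender_cmd(server, host, "mysql.datadir.size", size_bytes)
    # subprocess.run(cmd, check=True)  # uncomment on a host with zabbix_sender
    return cmd
```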
  23. Monitoring contd…
  24. DBVIEW
  25. DBVIEW contd…
  26. Benchmarking
     •  To understand the capacity of the hardware and software
     •  To understand what happens when you have a high-traffic day
     •  To understand the impact of configuration changes
     •  To test the storage unit
  27. Benchmarking with LinkBench
     •  LinkBench from Facebook simulates a social-graph workload
     •  The default workload pattern roughly matches our workload
     •  https://github.com/facebook/linkbench
     •  https://www.facebook.com/notes/facebook-engineering/linkbench-a-databasebenchmark-for-the-social-graph/10151391496443920
     •  Read-write operation mix, request throttling
     •  Targets hot nodes
     •  Extensible – you can write plugins (not tried!)
     •  Note: LinkBench partitions the link table. If you want to benchmark without partitioning, create the table without partitions and then load the data.
  28. LinkBench results
     •  Ops/sec per thread: [Thread-122]: 2190000/100000000 requests finished: 2.2% complete at 28311.8 ops/sec 77.4/100000
     •  Ops/sec total: [main]: REQUEST PHASE COMPLETED. 100000000 requests done in 2641 seconds. Requests/second = 37852
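Progress lines like the per-thread one above can be scraped for trending during a run. A small sketch, with the format inferred from the sample line on this slide (verify against your LinkBench version):

```python
# Extract completion and throughput numbers from a LinkBench progress line.
import re

THREAD_LINE = re.compile(
    r"(\d+)/(\d+) requests finished: ([\d.]+)% complete at ([\d.]+) ops/sec")

def parse_progress(line):
    """Return a dict of progress fields, or None if the line doesn't match."""
    m = THREAD_LINE.search(line)
    if not m:
        return None
    done, total, pct, ops = m.groups()
    return {"done": int(done), "total": int(total),
            "pct": float(pct), "ops_per_sec": float(ops)}
```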
  29. Benchmark parameters
     •  Benchmark cases: database size – 40 GB, 175 GB, 400 GB
     •  RAM – 64 GB
     •  innodb_buffer_pool_size – 50 GB for all tests
     •  Storage – Fusion iodrive2
     •  File system – XFS, EXT4
     •  Database server specs – Dell R620, 2 chips x 8 cores with HT => 32 procs, Intel Xeon E5-2680, Debian Linux Squeeze, 2.6.32 kernel
  30. InnoDB settings for Fusion-io
     •  innodb_read_ahead=0
     •  innodb_read_io_threads=8
     •  innodb_adaptive_checkpoint=keep_average
     •  innodb_flush_method=O_DIRECT
     •  innodb_io_capacity=10000
     •  innodb_flush_neighbor_pages=0
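Collected as a my.cnf fragment for reference; these are Percona Server 5.5-era option names and values taken straight from the slide, and several were renamed or removed in later versions, so verify against your server's documentation before reuse:

```ini
[mysqld]
# Flash-oriented InnoDB settings from the talk (Percona Server 5.5 era)
innodb_read_ahead           = 0         # disable read-ahead on flash
innodb_read_io_threads      = 8
innodb_adaptive_checkpoint  = keep_average
innodb_flush_method         = O_DIRECT
innodb_io_capacity          = 10000
innodb_flush_neighbor_pages = 0         # no neighbor flushing on SSD
```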
  31. Benchmark results
     •  200 threads, RW mix 70:30
     [Chart: LinkBench, 175G database on Fusion iodrive2 – ops/sec for Percona 5.1 and Percona 5.5 on XFS and EXT4; bar values 38000, 33000, 30000, 25000]
  32. InnoDB mutex in 5.1
  33. Bigger database
     •  200 threads, RW mix 70:30
     [Chart: Percona 5.5.33, 400G DB – ops/sec roughly 13000 on XFS vs 10000 on EXT4]
  34. Fusion fullness
     •  175G DB + 400G junk, 200 threads, RW mix 70:30
     [Chart: Percona 5.5.33, 175G DB – ops/sec less full vs more full on XFS and EXT4; bar values 38000, 33000, 33000, 21000]
  35. Fragmentation effect
     •  200 threads, RW mix 70:30
     [Chart: Percona 5.5.33, 175G DB – ops/sec roughly 36000 on XFS vs 27000 on EXT4]
  36. Doublewrite effect
     •  No atomicity without doublewrite
     •  Percona supports atomic writes using the DirectFS NVM filesystem
     •  RW mix 70:30, 200 threads
     [Chart: Percona 5.5.33, 175G DB – ops/sec roughly 45000 with doublewrite off vs 38000 with doublewrite on]
  37. READ workload
     •  90% reads, 10% writes, 200 threads
     [Chart: Percona 5.5.33, 175G DB – ops/sec roughly 59500 on XFS vs 59000 on EXT4]
  38. WRITE workload
     •  90% writes, 10% reads, 200 threads
     [Chart: Percona 5.5.33, 175G DB – ops/sec roughly 14000 on XFS vs 6000 on EXT4]
  39. More threads
     •  400 threads, RW mix 70:30
     [Chart: Percona 5.5.33, 175G DB – ops/sec roughly 35000 on XFS vs 18000 on EXT4]
  40. Bufferpool instances
     •  200 threads, 70:30 RW mix
     •  Does NUMA have a role?
     [Chart: Percona 5.5.33, 175G DB – ops/sec vs buffer-pool instances (1, 2, 8), all within roughly 37000–40000]
  41. 1 bufferpool instance
  42. Low concurrency
     •  Important because the workload may not always be high-concurrency
     •  System resources are best utilized under high concurrency, but there are plenty of situations where you have only 8, or even 4, threads running
     •  How does the database behave in these situations?
     •  Could there be a performance regression?
  43. Low concurrency contd…
     [Chart: Percona 5.1 vs Percona 5.5 at 4 and 8 threads – ops/sec bar values 11000, 10500 (8 threads) and 7500, 7300 (4 threads)]
  44. O_DIRECT vs buffered
     [Chart: ops/sec, O_DIRECT vs buffered on XFS and EXT4; bar values 38000, 33000, 30000, 16500]
  45. EXT4 buffered vs XFS buffered
  46. CPU usage
  47. Await
  48. Fusion throughput
  49. InnoDB semaphores
  50. InnoDB data RW
  51. EXT4 O_DIRECT vs XFS O_DIRECT
  52. CPU usage
  53. Fusion throughput
  54. InnoDB data RW
  55. InnoDB semaphores
  56. InnoDB transactions
  57. NoSQL
     •  User feeds, nested comments, personalization
     •  User feeds can be nasty: some users have thousands of friends; millions of feeds
     •  Feeds can have events that cannot share a schema – e.g., somebody updated weight != somebody updated body-fat %
     •  MongoDB: a savior plus a killer
        –  Savior because of its schemaless nature
        –  Killer because of its locking issues and index immaturity
     •  Sharding might help, but this is not a naturally shardable problem.
  58. User feeds – two collections:
     •  Feeds collection: documents with _id, Text, Date, and Event_ids (Id1, Id2, …)
     •  Events collection: documents with event-specific fields, e.g. { _id, Weight } or { _id, BodyFat }
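The two-collection layout can be sketched with MongoDB-style documents as plain Python dicts; field names beyond those on the slide, and all values, are illustrative:

```python
# A feed document stores event ids; event documents carry whatever fields
# that event type needs (schemaless), so a weight update and a body-fat
# update need not share a schema.
weight_event = {"_id": "evt1", "Weight": 82.5}    # hypothetical value
bodyfat_event = {"_id": "evt2", "BodyFat": 14.2}  # hypothetical value

feed_item = {
    "_id": "feed1",
    "Text": "updated their stats",
    "Date": "2013-11-12",
    "Event_ids": ["evt1", "evt2"],
}

def resolve_events(feed, events_by_id):
    """Join a feed item to its event documents (what an $in lookup on the
    events collection would return)."""
    return [events_by_id[e] for e in feed["Event_ids"] if e in events_by_id]
```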
  59. Thank You