MySQL key partition and MongoDB test
 

Presentation Transcript

    • MySQL key partition and MongoDB TEST
      For a business requirement around activation codes, we compared MySQL and MongoDB. On the MySQL side there are two variants, a normal table and a KEY-partitioned table, with 100 million and 1 billion rows respectively; all MySQL access is a direct lookup by PK, and the partition key is the PK. The MySQL table is 90G; the MongoDB collection is 157G.
      [liuyang@yhdem ~]$ cat /proc/cpuinfo | grep processor | wc -l
      24
      [liuyang@yhdem ~]$ cat /etc/issue
      Oracle Linux Server release 5.8
      Kernel \r on an \m
      mysql env:
      mysql> select version();
      +-----------+
      | version() |
      +-----------+
      | 5.5.25a   |
      +-----------+
      1 row in set (0.00 sec)
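The KEY-partitioned table above routes each row to a partition by hashing the partition key (here the PK), so a point lookup by PK touches exactly one partition. A minimal sketch of that routing idea in Python — the modulo hash and the partition count of 64 are illustrative assumptions, not MySQL's actual internal hash function or this test's real partition count:

```python
# Illustrative sketch of hash-based partition routing, the idea behind
# MySQL KEY partitioning. MySQL's real hash function differs; this only
# shows how a PK maps deterministically to one of N partitions.

NUM_PARTITIONS = 64  # hypothetical partition count

def partition_for(pk: int, num_partitions: int = NUM_PARTITIONS) -> int:
    """Map a primary key to a partition index via a simple modulo hash."""
    return hash(pk) % num_partitions

# A lookup by PK needs to consult only a single partition:
keys = [1, 1_000_000, 999_999_999]
routes = {k: partition_for(k) for k in keys}
```

Because the route is a pure function of the PK, the same key always lands on the same partition, which is why the partitioned table can serve random PK lookups without scanning other partitions.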
    • MySQL configuration:
      log_bin = OFF
      innodb_flush_log_at_trx_commit = 2
      query_cache_type = OFF
      max_connect_errors = 10
      max_connections = 214
      max_user_connections = 0
      sync_binlog = 0
      table_definition_cache = 400
      table_open_cache = 400
      thread_cache_size = 8
      open_files_limit = 30000
      innodb_adaptive_flushing = ON
      innodb_adaptive_hash_index = ON
      innodb_buffer_pool_size = 30.234375G
      innodb_file_per_table = ON
      innodb_flush_method = (empty)
      innodb_io_capacity = 200
      innodb_lock_wait_timeout = 100
      innodb_log_buffer_size = 128M
      innodb_log_file_size = 200M
      innodb_log_files_in_group = 2
      innodb_max_dirty_pages_pct = 75
      innodb_open_files = 1600
      innodb_read_io_threads = 4
      innodb_thread_concurrency = 0
      innodb_write_io_threads = 4
    • The charts below all show QPS statistics; TPS tests have not been run yet.
      no partition table with one billion rows –> small random select by pk
    • [nmon disk I/O during the test: sda 1% busy (2.0 KB/s read, 35.9 KB/s write), sdb 0% busy (55.9 KB/s write); totals: Read 0.0 MB/s, Write 0.2 MB/s, 18.0 transfers/sec]
    • partition table with one billion rows –> small random select by pk
    • [nmon disk I/O during the test: sda 0% busy (8.0 KB/s write), sdb 0% busy (201.5 KB/s write); totals: Read 0.0 MB/s, Write 0.4 MB/s, 46.9 transfers/sec]
    • no partition table with one billion rows –> full random select by pk
    • [nmon disk I/O during the test: sdb 100% busy, 86.8 MB/s read; totals: Read 173.6 MB/s, Write 0.4 MB/s, 6448.1 transfers/sec]
    • partition table with one billion rows –> full random select by pk
    • [nmon disk I/O during the test: sdb 100% busy, 89.6 MB/s read; totals: Read 179.2 MB/s, Write 0.3 MB/s, 6539.3 transfers/sec]
    • no partition table with 100 million rows –> full random select by pk
    • Now the MongoDB test, on the same 1-billion-row data set, 157G:
      [root@db-13 tmp]# mongo
      MongoDB shell version: 2.0.8
      connecting to: test
      > db.foo.totalSize();
      157875838416
      > db.foo.find().count();
      1000000000
    • Run 1: full 128G of memory, 16 threads, random query across all 1 billion rows:
      [root@db-13 tmp]# mongo test ./mongodb_benchmark_query.js
      MongoDB shell version: 2.0.8
      connecting to: test
      threads: 16  queries/sec: 126151.69666666667
      Run 2: 128G of memory, 24 threads, random query across the first 100 million of the 1 billion rows:
      [root@db-13 tmp]# mongo test ./mongodb_benchmark_query.js
      MongoDB shell version: 2.0.8
      connecting to: test
      threads: 24  queries/sec: 166527.42333333334
    • Run 3: start mongo as the mysql user, with that user's memory limited to 24G; 24 threads, random query across the first 100 million of the 1 billion rows:
      [mysql@db-13 ~]$ ulimit -a
      core file size          (blocks, -c) 0
      data seg size           (kbytes, -d) unlimited
      scheduling priority             (-e) 0
      file size               (blocks, -f) unlimited
      pending signals                 (-i) 1052672
      max locked memory       (kbytes, -l) 26055452
      max memory size         (kbytes, -m) 26055452
      open files                      (-n) 131072
      pipe size            (512 bytes, -p) 8
      POSIX message queues     (bytes, -q) 819200
      real-time priority              (-r) 0
      stack size              (kbytes, -s) 10240
      cpu time               (seconds, -t) unlimited
      max user processes              (-u) unlimited
      virtual memory          (kbytes, -v) unlimited
      file locks                      (-x) unlimited
      [mysql@db-13 tmp]$ mongo test ./mongodb_benchmark_query.js
      MongoDB shell version: 2.0.8
      connecting to: test
      threads: 24  queries/sec: 161358.03333333333
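The memory cap above is applied with ulimit before launching mongod. The same kind of per-process limit can be set programmatically; a minimal sketch using Python's `resource` module, with the caveat that `ulimit -m` maps to `RLIMIT_RSS` (not enforced on modern Linux), so this sketch caps virtual address space via `RLIMIT_AS` instead:

```python
# Sketch of capping a process's memory the way the test caps the mysql
# user with ulimit (max memory size 26055452 KB, i.e. roughly 24 GB).
# Assumption: we use RLIMIT_AS (virtual memory) rather than RLIMIT_RSS,
# since RSS limits are not enforced on modern Linux kernels.
import resource

CAP_BYTES = 26055452 * 1024  # the ulimit value from the test, in bytes

soft, hard = resource.getrlimit(resource.RLIMIT_AS)
# Only adjust the soft limit; never exceed the current hard limit.
if hard == resource.RLIM_INFINITY or CAP_BYTES <= hard:
    resource.setrlimit(resource.RLIMIT_AS, (CAP_BYTES, hard))

new_soft, _ = resource.getrlimit(resource.RLIMIT_AS)
```

With the limit in place, allocations beyond the cap fail, which is what forces mongod in runs 3 and 4 to serve most of the data set from disk instead of the page cache.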
    • Run 4: start mongo as the mysql user, with memory limited to 24G; 24 threads, random query across all 1 billion rows:
      [mysql@db-13 tmp]$ mongo test ./mongodb_benchmark_query.js
      MongoDB shell version: 2.0.8
      connecting to: test
      threads: 24  queries/sec: 2549.2  ----------------------> physical I/O reads and writes appear here
      The results below were taken under uncached random disk I/O: mongodb started with only 13M of memory in use.
    • The query benchmark script (mongodb_benchmark_query.js):
      ops = [ { op: "findOne", ns: "test.foo",
                query: { _id: { "#RAND_INT": [ 1, 100000000 ] } } } ];
      x = 24;
      res = benchRun( { parallel: x, seconds: 60, ops: ops } );
      print("threads: " + x + "\t queries/sec: " + res.query);
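The benchRun script above fires random findOne-by-_id lookups from parallel threads for a fixed duration and reports aggregate QPS. The same measurement pattern can be sketched in Python — here against an in-memory dict standing in for the collection, with small illustrative sizes so the sketch stays runnable (the real test uses test.foo with 10^9 documents):

```python
import random
import threading
import time

# Stand-in for the collection: _id -> document. Small on purpose;
# the real test queries 10^9 documents in mongod.
store = {i: {"_id": i} for i in range(1, 100_001)}

def worker(duration: float, counts: list, idx: int) -> None:
    """Issue random point lookups for `duration` seconds; count completions."""
    deadline = time.monotonic() + duration
    n = 0
    while time.monotonic() < deadline:
        store.get(random.randint(1, 100_000))  # findOne by random _id
        n += 1
    counts[idx] = n

def bench(parallel: int = 4, seconds: float = 0.2) -> float:
    """Run `parallel` workers for `seconds`, return aggregate queries/sec."""
    counts = [0] * parallel
    threads = [threading.Thread(target=worker, args=(seconds, counts, i))
               for i in range(parallel)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return sum(counts) / seconds

qps = bench()
```

As in benchRun, the reported number is simply total completed operations divided by the wall-clock window, summed over all threads.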
    • END: With everything in memory, PK access on the 1-billion-row normal table shows no degradation versus the 100-million-row normal table; the 1-billion-row partition table loses about 2/3 of its QPS versus the 1-billion-row normal table in memory; but when the full table is out of memory, the partition table actually outperforms the normal table (and note that an activation code is basically accessed only once). As for MongoDB, it can certainly handle this access pattern: with enough memory QPS reached 160K+/s, but with insufficient memory it plunged to 2549, and with the memory cache fully cleared QPS was 149/s. Since this requirement is for a very large table (estimated at 500GB), we decided to use MySQL rather than MongoDB.
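The MongoDB numbers in the conclusion imply roughly a 65x drop once queries spill to disk and a further ~17x drop with a completely cold cache; a quick check of those ratios from the measured figures:

```python
# Degradation ratios implied by the QPS figures measured above.
in_memory_qps = 166527   # 24 threads, working set fully cached (run 2)
disk_bound_qps = 2549    # 24 threads, physical I/O on misses (run 4)
cold_cache_qps = 149     # memory cache fully cleared

spill_factor = in_memory_qps / disk_bound_qps   # ~65x slower
cold_factor = disk_bound_qps / cold_cache_qps   # ~17x slower

print(f"cached -> disk-bound: {spill_factor:.0f}x slower")
print(f"disk-bound -> cold:   {cold_factor:.0f}x slower")
```

That three-orders-of-magnitude spread between the cached and cold cases is the core of the argument for choosing MySQL here: at an estimated 500GB the table cannot be expected to stay resident in memory.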