
Case Studies of VyOS in Kauli SSP



Published in: Technology


  1. Case studies of VyOS in Kauli SSP. Kazuhito Ohkawa, Platform Engineer at Kauli, Inc. (favorite: Flandre Scarlet)
  2. Agenda - Self-introduction - About Kauli SSP - Case studies of VyOS in Kauli SSP - Tuning tips - About microburst traffic (digression)
  3. Self-introduction - Kazuhito Ohkawa (おおかわ かずひと, Twitter: @SatchanP) - Aug 2012: joined Kauli, Inc. as a Platform Engineer - Favorites: THE IDOLM@STER: Yayoi, Mami; Touhou Project: Flandre, Sakuya - Privately a rallyist. This is my co-driver, and my Impreza in a multi-storey car park.
  4. About Kauli SSP
  5. What is an SSP? SSP stands for "Supply Side Platform": a tool that helps publishers (web services, app developers) offering ad inventory in online advertising to sell that inventory and maximize their ad revenue. It mainly provides a mechanism that automatically selects the optimal ad each time an impression occurs, improving profitability; the concrete features differ by service, e.g. unified management of ad networks and ad exchanges, and support for real-time bidding (RTB). (From "DSP, SSP" - SMMLab (Social Media Marketing Lab))
  6. About SSP A supply-side platform or sell-side platform (SSP) is a technology platform with the single mission of enabling publishers to manage their advertising impression inventory and maximize revenue from digital media. As such, they offer an efficient, automated and secure way to tap into the different sources of advertising income that are available, and provide insight into the various revenue streams and audiences. Many of the larger web publishers of the world use a supply-side platform to automate and optimize the selling of their online media space.[1] A supply-side platform on the publisher side interfaces to an ad exchange, which in turn interfaces to a demand-side platform (DSP) on the advertiser side. This system allows advertisers to put online advertising before a selected target audience.[2] Often, real-time bidding (RTB) is used to complete DSP transactions.[3]
  7. About RTB (diagram): the audience browses a media page, which sends an ad request to the SSP (Kauli); Kauli selects DSPs matching the conditions and requests bids from the connected DSPs in parallel; the bid winner is DSP B, and DSP B's ad is displayed.
  8. Many connections for ad delivery. Up to 400 million ads per day. All traffic goes via the VyOS.
  9. The agony of an SSP platform engineer: very, very large amounts of traffic, both internal and external, of many kinds (cookie sync, banners, Flash, movies, JS tags, etc.). About 80% of the traffic is short packets. Complaints about ad delay. An SSP isn't profitable! Media rewards are large!
  10. SSP handmade servers
  11. Infrastructure engineering for an SSP: I cannot recommend it!
  12. Case studies of VyOS in Kauli SSP
  13. Mainly running on physical servers. Gen-1: Intel Core i7 870, 16 GB RAM, Intel 82574L x2 NICs, ASUS M/B, HDD. Gen-2: Intel Xeon E3-1280 v3, 32 GB RAM, Intel I350/I210 NICs, Supermicro M/B, SSD.
  14. Used as the default gateway for all servers (diagram): Internet, L3 core, LVS-DR, real servers running nginx; VyOS sits in the DMZ as the default GW, doing IP masquerade for the LAN; RTB requests reach the SSP servers.
  15. Peak traffic graphs of the default gateway
  16. Logic of LVS-DR (diagram): the client's packet arrives with SRC = client A's IP and DST = the LVS VIP; the director forwards it to a real server by rewriting only the destination MAC address, and the real server accepts it because the VIP is configured on its loopback interface (with the reverse-path filter off); the reply goes out through the VyOS default GW with the VIP as its source address.
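The real-server side of this setup can be sketched as follows. This is the common Linux recipe for LVS-DR, not Kauli's exact configuration; the interface name and VIP address are placeholders.

```shell
# On each real server: hold the LVS VIP on loopback so the server
# accepts packets addressed to the VIP (203.0.113.10 is a placeholder).
ip addr add 203.0.113.10/32 dev lo

# Suppress ARP for the VIP so only the LVS director answers ARP requests.
sysctl -w net.ipv4.conf.all.arp_ignore=1
sysctl -w net.ipv4.conf.all.arp_announce=2

# "RP filter off": disable reverse-path filtering so replies sourced
# from the VIP are not dropped.
sysctl -w net.ipv4.conf.all.rp_filter=0
```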
  17. A router is unnecessary if the servers have global IPs (diagram): the client's packet with DST = LVS VIP is MAC-forwarded to a real server in the DMZ, which holds the VIP on its loopback and replies to the Internet directly.
  18. Scaling VyOS routers by OSPF/ECMP after replacement (diagram): Internet, L3 core, and multiple VyOS routers speaking OSPF; the L3 switch load-balances across them via ECMP and serves as the default GW for the real servers, LVS-DR, and other VLANs.
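A minimal sketch of the OSPF side on one VyOS router, assuming hypothetical addresses and area 0 (verify command paths with tab completion on your VyOS version). When several routers advertise equal-cost routes, the upstream L3 switch spreads traffic across them with ECMP.

```shell
# Run inside "configure" on each VyOS router (addresses are placeholders).
set interfaces ethernet eth1 address 10.0.0.2/24
set protocols ospf parameters router-id 10.0.0.2
set protocols ospf area 0 network 10.0.0.0/24
commit
```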
  19. Checking the new data center application via Cloud Bridge (diagram): Vyatta routers bridge the old and new data centers; each side has Internet access, SSP servers, LVS-DR, and DB/KVS/Index servers.
  20. VPN between the data center and Sakura Cloud (diagram): VyOS routers at the data center and on Sakura Cloud connected via IPsec; the crawlers on Sakura Cloud reach the API server in the data center.
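A site-to-site IPsec tunnel of this kind can be sketched in VyOS 1.x syntax as below. The peer address, prefixes, group names, and secret are all placeholders, and the cipher choices are illustrative; verify the exact paths with "set vpn ipsec ?" on your version.

```shell
# Run inside "configure" on the data-center VyOS (all values hypothetical).
set vpn ipsec ipsec-interfaces interface eth0
set vpn ipsec ike-group IKE-SAKURA proposal 1 encryption aes256
set vpn ipsec ike-group IKE-SAKURA proposal 1 hash sha1
set vpn ipsec esp-group ESP-SAKURA proposal 1 encryption aes256
set vpn ipsec site-to-site peer 198.51.100.1 authentication mode pre-shared-secret
set vpn ipsec site-to-site peer 198.51.100.1 authentication pre-shared-secret MYSECRET
set vpn ipsec site-to-site peer 198.51.100.1 ike-group IKE-SAKURA
set vpn ipsec site-to-site peer 198.51.100.1 tunnel 1 esp-group ESP-SAKURA
set vpn ipsec site-to-site peer 198.51.100.1 tunnel 1 local prefix 10.0.0.0/24
set vpn ipsec site-to-site peer 198.51.100.1 tunnel 1 remote prefix 10.1.0.0/24
commit
```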
  21. Tuning tips
  22. NUMA I/O / NAPI / circular buffer / CPU affinity / conntrack
  23. Use a uniprocessor server (NUMA I/O). Since Sandy Bridge, the PCI Express controller is integrated into the CPU, so access across processors (over QPI) is costly; otherwise use memory mirroring... (diagram: RAM, CPU1, CPU2, PCI Express NIC, QPI)
  24. It is printed on the motherboard.
  25. Reconsider the polling of the buffer (NAPI). The buffer overflows even on Intel's I350 (amazing!), even when set to the maximum value of 4096. Confirmed with ifconfig and ethtool -S. ifconfig: RX packets:1215382409979 errors:0 dropped:9836789 overruns:9836789 frame:0 ethtool -S: rx_no_buffer_count: 220474
  26. Change the NAPI kernel parameters: - net.core.netdev_budget increases the processing queue. - net.core.dev_weight shortens the polling interval; however, CPU usage rises.
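The two knobs above can be adjusted with sysctl. The values below are illustrative, not Kauli's production settings (the usual kernel defaults are netdev_budget=300 and dev_weight=64):

```shell
# Max packets processed across all NICs in one softirq poll cycle.
sysctl -w net.core.netdev_budget=600
# Max packets one device may consume per NAPI poll round.
sysctl -w net.core.dev_weight=128
# Persist the values in /etc/sysctl.conf once the effect is confirmed.
```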
  27. Circular buffer: igb is not set to the maximum value by default, and too large a buffer will cause delay, so consider the balance between CPU cost (NAPI) and the circular buffer size. # ethtool -g eth0 Ring parameters for eth0: Pre-set maximums: RX: 4096 RX Mini: 0 RX Jumbo: 0 TX: 4096 Current hardware settings: RX: 256 RX Mini: 0 RX Jumbo: 0 TX: 256 # ethtool -G eth0 rx 4096 tx 4096
  28. CPU affinity: in the multi-queue case, only specific CPU cores get a high load; adjust these manually. $ cat /proc/interrupts | egrep 'eth|CPU' CPU0 CPU1 CPU2 CPU3 50: 1406514518 0 0 0 PCI-MSI-edge eth0-rx-0 51: 84923776 383727140 0 0 PCI-MSI-edge eth0-tx-0 52: 2951 0 0 0 PCI-MSI-edge eth0 53: 2 31961537 1787069187 0 PCI-MSI-edge eth1-rx-0 54: 1 6218033 0 510452860 PCI-MSI-edge eth1-tx-0 55: 115 0 0 0 PCI-MSI-edge eth1 $ sudo cat /proc/irq/5[0-1,3-4]/smp_affinity 0001 0002 0004 0008
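Pinning each queue's interrupt to its own core, as in the smp_affinity masks shown above, is done by writing a CPU bitmask to /proc/irq. The IRQ numbers here match the sample output but must be taken from your own /proc/interrupts:

```shell
# One core per NIC queue (bitmask: 1=CPU0, 2=CPU1, 4=CPU2, 8=CPU3).
echo 1 > /proc/irq/50/smp_affinity   # eth0-rx-0 -> CPU0
echo 2 > /proc/irq/51/smp_affinity   # eth0-tx-0 -> CPU1
echo 4 > /proc/irq/53/smp_affinity   # eth1-rx-0 -> CPU2
echo 8 > /proc/irq/54/smp_affinity   # eth1-tx-0 -> CPU3
# Stop irqbalance first, or it will overwrite these masks.
```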
  29. conntrack tuning. This is the essential part of IP masquerade; 10G-40G class IP masquerade may also be possible. Established times are very short, and connection open/close processing is costly. The setting values depend on the amount of memory.
  30. conntrack parameters: - hash-size: the number of conntrack hash buckets; hashing makes conntrack scans faster (collisions are resolved by chaining). - table-size: the raw conntrack table entries. - expect-table-size: for helpers such as FTP, SIP, H.323...
  31. Raw conntrack table sample: tcp 6 128 TIME_WAIT src=10.x.x.xx dst=1xx.xx.xx.xx sport=43860 dport=80 packets=6 bytes=698 src=1xx.xx.xx.xx dst=1x.x.x.xx sport=80 dport=43860 packets=4 bytes=419 [ASSURED] mark=0 secmark=0 use=2
  32. Setting the conntrack table and hash size: - table-size: CONNTRACK_MAX = RAMSIZE (bytes) / 16384 / (x / 32), where x = 32 or 64 (the kernel's pointer width in bits). - hash-size: table-size / 8. - expect-table-size: as preferred.
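The sizing rule above can be checked with a few lines of arithmetic; the function name is just for illustration. For the Gen-2 box (32 GB RAM, 64-bit kernel) it gives roughly a one-million-entry table:

```python
def conntrack_sizing(ram_bytes, pointer_bits=64):
    """Sizing rule from the slide:
    CONNTRACK_MAX = RAM (bytes) / 16384 / (pointer_bits / 32)
    hash-size     = table-size / 8
    """
    table_size = ram_bytes // 16384 // (pointer_bits // 32)
    hash_size = table_size // 8
    return table_size, hash_size

# Example: the Gen-2 server with 32 GB RAM on a 64-bit kernel.
table, buckets = conntrack_sizing(32 * 1024**3, 64)
print(table, buckets)  # 1048576 131072
```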
  33. The true upper limit of conntrack: focus on the status of the conntrack table. [ASSURED] entries are not dropped from the conntrack table, so compare the total number of [ASSURED] entries with the maximum value. Sample: tcp 6 23 TIME_WAIT src=10.x.x.xx dst=1xx.xx.xx.xx sport=43708 dport=80 packets=6 bytes=663 src=1xx.xx.xx.xx dst=1x.x.x.xx sport=80 dport=43708 packets=4 bytes=542 [ASSURED] mark=0 secmark=0 use=2
  34. Shorten the conntrack table timeouts. The conntrack table is supposed to be reused recursively, but our traffic involves very many hosts, so entries cannot be kept; set short timeouts so the conntrack table does not overflow. timeout { icmp 3 other 600 tcp { close 10 close-wait 1 established 10 fin-wait 10 last-ack 30 syn-recv 60 syn-sent 5 time-wait 3 } udp { other 30 stream 10 } }
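On the VyOS CLI, that timeout block corresponds to set commands like the following (1.x-style paths; verify them with "set system conntrack timeout ?" on your version):

```shell
# Run inside "configure"; values mirror the timeout block on the slide.
set system conntrack timeout icmp 3
set system conntrack timeout other 600
set system conntrack timeout tcp established 10
set system conntrack timeout tcp close 10
set system conntrack timeout tcp fin-wait 10
set system conntrack timeout tcp time-wait 3
set system conntrack timeout udp other 30
set system conntrack timeout udp stream 10
commit
```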
  35. Microburst traffic (digression)
  36. About microburst traffic: microbursts are not visible in ordinary graphs, but our network has them. They can be confirmed through various phenomena; one example is packet discards on switches.
  37. Read the signs of microbursts: expand the graph to a narrow range and confirm the spikes.
  38. Read the signs of microbursts: this is a poll at a 1-minute interval. An average of 85 packet discards/sec means 85 × 60 = 5100 packets lost in a moment. I have prepared a movie today.
  39. Thank you for your attention!