The document summarizes a talk on ISUCON5 and its benchmark tools. It describes how the tools were built around two conflicting requirements: high performance and integrity checking. It then details the implementation in Java, using the jetty-client library to send requests and check responses in parallel, and covers distributed benchmarking across multiple nodes. Performance figures from ISUCON5 show the tools handling about 194,000 requests per 60 seconds using up to 30 nodes.
7. ISUCON5 Main Topics
• Qualify: "ISUxi"
• Good old SNS
• Friend relations, Footprint, Many N+1 queries, ...
• Final: "AirISU"
• API aggregate server
• Parallel requests, Application processes/threads, Cache based on data/protocol, HTTP/2, ...
8. How to Get a High Score in ISUCON5
• Qualify
• Add index, Cache master data, Remove N+1, ...
• Final
• Massive threads, Cache immutable data, Async/Parallel requests to APIs, If-Modified-Since, HTTP/2, ...
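The If-Modified-Since technique above can be sketched end-to-end with the JDK's built-in `HttpServer` and `HttpClient` (a minimal illustration, not the actual contest stack; the class and handler names are hypothetical): the server answers a conditional GET with 304 and no body, so the client skips re-downloading unchanged content.

```java
import com.sun.net.httpserver.HttpServer;
import java.io.OutputStream;
import java.net.InetSocketAddress;
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class ConditionalGetDemo {
    static final String LAST_MODIFIED = "Wed, 21 Oct 2015 07:28:00 GMT";

    // Tiny server that honors If-Modified-Since with a bodyless 304.
    static HttpServer startServer() throws Exception {
        HttpServer server = HttpServer.create(new InetSocketAddress(0), 0);
        server.createContext("/data", exchange -> {
            String ims = exchange.getRequestHeaders().getFirst("If-Modified-Since");
            if (LAST_MODIFIED.equals(ims)) {
                exchange.sendResponseHeaders(304, -1); // Not Modified: no body
            } else {
                byte[] body = "hello".getBytes();
                exchange.getResponseHeaders().set("Last-Modified", LAST_MODIFIED);
                exchange.sendResponseHeaders(200, body.length);
                try (OutputStream os = exchange.getResponseBody()) { os.write(body); }
            }
            exchange.close();
        });
        server.start();
        return server;
    }

    // Returns the status codes of an unconditional GET followed by a conditional one.
    public static int[] fetchTwice() {
        try {
            HttpServer server = startServer();
            URI uri = URI.create("http://127.0.0.1:" + server.getAddress().getPort() + "/data");
            HttpClient client = HttpClient.newHttpClient();
            // First request: full 200 response carrying a Last-Modified header.
            HttpResponse<String> first = client.send(
                    HttpRequest.newBuilder(uri).GET().build(),
                    HttpResponse.BodyHandlers.ofString());
            String lm = first.headers().firstValue("Last-Modified").orElse("");
            // Second request: conditional; the server answers 304 and skips the body.
            HttpResponse<String> second = client.send(
                    HttpRequest.newBuilder(uri).header("If-Modified-Since", lm).GET().build(),
                    HttpResponse.BodyHandlers.ofString());
            server.stop(0);
            return new int[]{ first.statusCode(), second.statusCode() };
        } catch (Exception e) {
            throw new RuntimeException(e);
        }
    }

    public static void main(String[] args) {
        int[] codes = fetchTwice();
        System.out.println(codes[0] + " then " + codes[1]); // 200 then 304
    }
}
```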
10. Bugs on the Organizer Side
• Qualify
• Nothing serious
• The last bug was fixed at 11:30am on day 1
• Final
• Some serious bugs in the scenario that affected the top N players
12. Why Should the ISUCON Bench Tools Be Written from Scratch?
• Two conflicting requirements:
• high performance
• integrity check
• One request pattern must satisfy both requirements
• otherwise players can cheat by handling performance and check requests differently
13. Requirements in detail
• Performance
• throughput, concurrency, low latency
• Content check
• HTML parser, JSON parser, CSS/JS check, Image, other binaries, ...
• Complex scenario coding
• tools should simulate user behavior
• Protocol handling in detail
• HTTP protocols, HTTP headers, keepalive, cache control, timeouts, ...
• Variable source data
• disable the "cache all requests/responses" strategy
14. Features
• Sending GET/POST requests
• w/ various query params and/or content body
• w/ various HTTP headers
• Sending request series for a session
• Sending request series for several sessions
• Checking response integrity/consistency
• Skipping response check for performance if needed
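A minimal sketch of "request series for a session" using only the JDK (the real tools used jetty-client; `SessionSeriesDemo`, the `/login` and `/me` endpoints, and the cookie value are all illustrative): one `CookieManager` per simulated user carries the session cookie across the request series.

```java
import com.sun.net.httpserver.HttpServer;
import java.net.CookieManager;
import java.net.InetSocketAddress;
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class SessionSeriesDemo {
    // Server: /login sets a session cookie; /me reports whether it came back.
    public static String runSeries() {
        try {
            HttpServer server = HttpServer.create(new InetSocketAddress(0), 0);
            server.createContext("/login", ex -> {
                ex.getResponseHeaders().add("Set-Cookie", "session=abc123; Path=/");
                ex.sendResponseHeaders(200, -1); // no body
                ex.close();
            });
            server.createContext("/me", ex -> {
                String cookie = ex.getRequestHeaders().getFirst("Cookie");
                byte[] body = (cookie != null && cookie.contains("session=abc123")
                        ? "logged-in" : "anonymous").getBytes();
                ex.sendResponseHeaders(200, body.length);
                ex.getResponseBody().write(body);
                ex.close();
            });
            server.start();
            String base = "http://127.0.0.1:" + server.getAddress().getPort();
            // One CookieManager per simulated user keeps sessions independent.
            HttpClient client = HttpClient.newBuilder()
                    .cookieHandler(new CookieManager())
                    .build();
            client.send(HttpRequest.newBuilder(URI.create(base + "/login")).GET().build(),
                    HttpResponse.BodyHandlers.discarding());
            HttpResponse<String> me = client.send(
                    HttpRequest.newBuilder(URI.create(base + "/me")).GET().build(),
                    HttpResponse.BodyHandlers.ofString());
            server.stop(0);
            return me.body();
        } catch (Exception e) {
            throw new RuntimeException(e);
        }
    }

    public static void main(String[] args) {
        System.out.println(runSeries()); // logged-in
    }
}
```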
15. Rough Sketch
• http_load + custom ruby script
• http_load: requests for performance
• ruby script: requests for checks
17. Overview
• Java: jetty-client + Java8 Lambda
• jetty-client for performance
• lambda for content check
• jackson to parse input data
• jsoup to parse response html (CSS selector)
• json-path to parse response json (JsonPath)
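The "lambda for content check" idea can be sketched with plain string predicates (an assumption-laden simplification: the real checks ran jsoup CSS selectors and JsonPath queries against real responses; `LambdaCheckDemo` and the check names are invented for illustration). Each check is a named lambda over the response body, so per-request checks stay small and composable.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.Predicate;

public class LambdaCheckDemo {
    // A check is a named predicate over the response body; in the real
    // tools the lambdas would call jsoup or json-path instead of contains().
    public record Check(String name, Predicate<String> pred) {}

    // Run every check and collect the names of the ones that failed.
    public static List<String> failures(String body, List<Check> checks) {
        List<String> failed = new ArrayList<>();
        for (Check c : checks) {
            if (!c.pred().test(body)) failed.add(c.name());
        }
        return failed;
    }

    public static void main(String[] args) {
        String html = "<html><title>ISUxi</title><body>hello</body></html>";
        List<Check> checks = List.of(
                new Check("title present", b -> b.contains("<title>ISUxi</title>")),
                new Check("greeting present", b -> b.contains("hello")),
                new Check("error banner absent", b -> !b.contains("500")));
        System.out.println(failures(html, checks)); // []
    }
}
```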
22. Distributed Benchmarking
• N nodes for 1 benchmark
• Can a single node generate enough load? (CPU bound)
• Yes -> 1 node vs 1 team; No -> 2 nodes vs 1 team
• "GET /": 5,000 req/thread on localhost -> enough
• N nodes for many benchmarks
• Scale out strategy
• Queue/worker system
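The queue/worker scale-out can be sketched in-process with a shared `BlockingQueue` (a toy model under stated assumptions: workers are threads here rather than separate nodes, and `BenchQueueDemo` and its job strings are invented). Each worker pulls benchmark jobs off the queue until it is empty, so adding workers scales out bench capacity.

```java
import java.util.List;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.CopyOnWriteArrayList;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.TimeUnit;

public class BenchQueueDemo {
    // Each job names a target to benchmark; workers drain the shared queue.
    public static List<String> runJobs(List<String> targets, int workers) {
        BlockingQueue<String> queue = new LinkedBlockingQueue<>(targets);
        List<String> results = new CopyOnWriteArrayList<>();
        ExecutorService pool = Executors.newFixedThreadPool(workers);
        for (int i = 0; i < workers; i++) {
            pool.submit(() -> {
                String job;
                while ((job = queue.poll()) != null) {
                    // A real worker would drive the full HTTP scenario here.
                    results.add(job + ": done");
                }
            });
        }
        pool.shutdown();
        try {
            pool.awaitTermination(10, TimeUnit.SECONDS);
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
        return results;
    }

    public static void main(String[] args) {
        System.out.println(runJobs(List.of("team1", "team2", "team3"), 2).size()); // 3
    }
}
```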
24. Recorded Performance
• about 194,000 requests / 60 sec
• OK: 185,000
• Redirects: 9,300
• Peak 30 nodes (Qualify)
• 11,500 benchmarks in 2 days
25. Far more: Scenario
• Checking for critical issues / non-critical issues
• Critical/non-critical mode of Checker class
• Checks w/ dependencies
• If a check fails, subsequent checks throw NPE :(
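One way to avoid that NPE cascade, sketched under assumptions (the real Checker class is not shown; `CheckerDemo`, its method names, and the abort flag are illustrative): once a critical check fails, skip the dependent checks instead of letting them dereference missing data, while non-critical failures are recorded and checking continues.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.Supplier;

public class CheckerDemo {
    final List<String> criticalErrors = new ArrayList<>();
    final List<String> minorErrors = new ArrayList<>();
    boolean aborted = false; // set once a critical check fails

    // Critical failure: record it and stop all dependent checks.
    void critical(String name, Supplier<Boolean> check) {
        if (aborted) return;
        if (!check.get()) { criticalErrors.add(name); aborted = true; }
    }

    // Non-critical failure: record it, keep going.
    void minor(String name, Supplier<Boolean> check) {
        if (aborted) return;
        if (!check.get()) minorErrors.add(name);
    }

    public static List<String> run(String body) {
        CheckerDemo c = new CheckerDemo();
        c.critical("body exists", () -> body != null);
        // Without the aborted flag this line would NPE on a null body.
        c.minor("has greeting", () -> body.contains("hello"));
        List<String> all = new ArrayList<>(c.criticalErrors);
        all.addAll(c.minorErrors);
        return all;
    }

    public static void main(String[] args) {
        System.out.println(run(null));    // [body exists]
        System.out.println(run("hello")); // []
    }
}
```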
26. Some More Topics
• Async
• Massive numbers of parallel requests
• Keeping 2 or more simultaneous requests under control
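Keeping simultaneous requests "under control" can be sketched with a `Semaphore` gating `CompletableFuture` tasks (a stdlib sketch with a sleep standing in for the HTTP round trip; `BoundedParallelDemo` and its parameters are illustrative, not the talk's actual code): many tasks are fired at once, but only `limit` run concurrently.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Semaphore;
import java.util.concurrent.atomic.AtomicInteger;

public class BoundedParallelDemo {
    // Fire `tasks` async tasks but let only `limit` run at once,
    // returning the observed peak concurrency.
    public static int peakConcurrency(int tasks, int limit) {
        Semaphore gate = new Semaphore(limit);
        AtomicInteger inFlight = new AtomicInteger();
        AtomicInteger peak = new AtomicInteger();
        ExecutorService pool = Executors.newFixedThreadPool(tasks);
        List<CompletableFuture<Void>> futures = new ArrayList<>();
        for (int i = 0; i < tasks; i++) {
            futures.add(CompletableFuture.runAsync(() -> {
                try {
                    gate.acquire(); // blocks once `limit` tasks are in flight
                    int now = inFlight.incrementAndGet();
                    peak.accumulateAndGet(now, Math::max);
                    Thread.sleep(20); // stand-in for an HTTP round trip
                    inFlight.decrementAndGet();
                    gate.release();
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                }
            }, pool));
        }
        CompletableFuture.allOf(futures.toArray(new CompletableFuture[0])).join();
        pool.shutdown();
        return peak.get();
    }

    public static void main(String[] args) {
        System.out.println(peakConcurrency(10, 3) <= 3); // true
    }
}
```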