
WMS Performance Shootout 2011



WMS benchmarking presentation and results from the FOSS4G 2011 event in Denver. Six development teams took part in this exercise, competing to serve a common dataset through the WMS standard as fast as possible. http://2011.foss4g.org/sessions/web-mapping-performance-shootout


WMS Benchmarking 2011
Open Source Geospatial Foundation

Cadcorp GeognoSIS, Constellation-SDI, GeoServer, Mapnik, MapServer, QGIS Server

Executive summary

- Compare the performance of WMS servers (6 teams)
- In a number of different workloads (an example request is sketched below):
  - Vector: projected (Google Mercator, EPSG:3857) street level
  - Raster: EPSG:4326 DEMs projected to Google Mercator (EPSG:3857)
- Data backends:
  - Vector: PostGIS
  - Raster: BIL

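To make the workload concrete, here is a minimal sketch of a single WMS GetMap request of the kind issued against each server. The endpoint URL, layer name and extent are hypothetical placeholders; the SRS and image size follow the vector workload above, and PNG output is an assumption (suggested by the PNG compression discussion later in the deck), not something the summary states.

```python
# Illustration only: one GetMap request shaped like the benchmark workload.
# The host, layer name and BBOX below are placeholders, not the actual
# benchmark configuration.
import requests

params = {
    "SERVICE": "WMS",
    "VERSION": "1.1.1",
    "REQUEST": "GetMap",
    "LAYERS": "osm_colorado",        # hypothetical layer name
    "STYLES": "",
    "SRS": "EPSG:3857",              # Google Mercator, as in the vector workload
    "BBOX": "-11700000,4800000,-11650000,4837500",  # some extent over Colorado
    "WIDTH": "1024",
    "HEIGHT": "768",
    "FORMAT": "image/png",
}

resp = requests.get("http://wms-under-test.example.org/wms", params=params)
resp.raise_for_status()
with open("map.png", "wb") as f:
    f.write(resp.content)
```
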
Benchmarking History

- 5th FOSS4G benchmarking exercise. Past exercises included:
  - FOSS4G 2007: run by Refractions Research, who published the first comparison with the help of GeoServer and MapServer developers. Focus on big shapefiles, PostGIS, minimal styling (Brock Anderson & Justin Deoliveira)
  - FOSS4G 2008: run by OpenGeo, who published the second comparison with some review from the MapServer developers. Focus on simple thematic mapping, raster data access, WFS and tile caching (Justin Deoliveira & Andrea Aime)
  - FOSS4G 2009: MapServer and GeoServer teams in a cooperative benchmarking exercise (Andrea Aime & Jeff McKenna)
- Friendly competition: the goal is to improve all software

Benchmarking 2010

- 8 teams
- Dedicated servers
- Area-specific data set (Spain)

Benchmarking 2011

- 6 teams
- Dedicated hardware
- Area-specific dataset (Colorado)

Rules of engagement

- Each server is tested in its latest version
- Each server performs exactly the same workload:
  - Same set of WMS requests
  - Same data backends
  - Same image output format
- All modifications made to improve performance are to be included in a future release of the software
- Data used cannot be modified for display, other than indexing
- All testing to be done on the same benchmarking machines
  - Windows and Linux servers: 2 separate, identical machines

Datasets Used: Vector

- OpenStreetMap data for Colorado
- Imposm used to import the data into PostGIS
  - Optimized for rendering (an indexing sketch follows below)
- Styling from the MapServer Utils Imposm branch (Googly style)

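Since the rules allow indexing as the only modification to the data, tuning the PostGIS backend mostly means spatial indexes. A minimal sketch of that step, assuming hypothetical connection details and a hypothetical Imposm-produced table and column name (the real benchmark schema lives in the project SVN linked at the end of the deck):

```python
# Illustration only: the kind of spatial indexing permitted by the rules
# ("data cannot be modified for display, other than indexing"). The connection
# string, table and column names are placeholders.
import psycopg2

conn = psycopg2.connect("dbname=osm_colorado user=benchmark host=db")
with conn, conn.cursor() as cur:
    # GiST index on the geometry column so the bbox queries issued by the WMS
    # servers can use an index scan instead of a sequential scan.
    cur.execute("CREATE INDEX osm_roads_geom_idx ON osm_roads USING GIST (geometry)")
    # Refresh planner statistics after loading and indexing.
    cur.execute("ANALYZE osm_roads")
```
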
Datasets Used: Extents

(map slide)

Datasets Used: Raster

- USGS DEMs, NED 1 arc-second
- Approximately 30 m resolution
- 16-bit Band Interleaved by Line (BIL)
- Color range dynamically applied based on elevation (see the sketch below)

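As an illustration of the kind of dynamic elevation-to-color mapping described above (not the code any of the benchmarked servers actually uses), here is a minimal NumPy sketch that stretches a 16-bit DEM tile into a simple two-color ramp scaled to the elevations present in that tile:

```python
# Illustration only: map a 16-bit elevation grid to an RGB ramp, scaling the
# color range to the elevations actually present in the tile.
import numpy as np

def elevation_to_rgb(dem: np.ndarray) -> np.ndarray:
    """dem: 2-D int16 array of elevations in metres."""
    dem = dem.astype(np.float64)
    lo, hi = dem.min(), dem.max()           # dynamic range of this tile
    t = (dem - lo) / max(hi - lo, 1.0)      # normalise to [0, 1]
    low_color = np.array([70, 120, 60], dtype=np.float64)     # greenish lowlands
    high_color = np.array([240, 240, 240], dtype=np.float64)  # light peaks
    rgb = low_color + t[..., None] * (high_color - low_color)
    return rgb.astype(np.uint8)

# Example with synthetic data standing in for a 1 arc-second NED tile:
tile = np.random.randint(1300, 4300, size=(256, 256), dtype=np.int16)
print(elevation_to_rgb(tile).shape)  # (256, 256, 3)
```
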
Hardware specs

- JMeter:
  - Dell Precision Workstation 390 (9/7/2006)
  - Intel Core 2 Duo E6300 (Conroe), 1.86 GHz, 2 MB cache
  - 2 GB RAM, 160 GB 7200 rpm hard drive
  - OS: CentOS 5.5 i386
- WMS (x2):
  - Dell PowerEdge R410 (ship date 7/7/2010)
  - Intel Xeon E5630, 2.53 GHz, 12 MB cache, Turbo, HT, 1066 MHz max memory
  - 8 GB memory (4 x 2 GB)
  - 2 TB 7.2K rpm SATA
  - OS: Windows Server 64-bit / CentOS 5.5 x86-64
- Database:
  - Gateway E6610D, Intel Core 2 Duo E6750, 2.66 GHz
  - 250 GB 7200 rpm hard drive, 4 GB RAM
  - OS: CentOS 5.5 x86-64

Hardware Configuration

(diagram; components labelled: bench machine running JMeter, WMS servers on Linux/Windows, database machine, PostGIS, rasters)

Methodology

- Each test run performs requests with 1, 2, 4, 8, 16, 32, 64, 128 and 256 parallel clients (for a total of 3688 requests)
- Each test uses a random set of requests stored in a CSV file: no two requests in the same run are equal, but all servers perform the same workload (see the sketch below)
- Two separate tests:
  - Normal requests: randomised image size (between 64x64 and 1024x768) and geographic envelope (extent)
  - Seed-type requests: image size fixed at 2248x2248
- Each test is run three times in a row and the results of the third run are used for the comparison: the benchmark assumes full file system caches (a "hot" benchmark)

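A minimal sketch of how such a randomised request set could be produced; the extent, resolution range, column layout and file name below are placeholders, not the actual benchmark scripts (the real request generators and CSV files are kept in the benchmarking SVN linked at the end of the deck):

```python
# Illustration only: generate a CSV of randomised GetMap request parameters of
# the kind fed to JMeter. All concrete values here are placeholders.
import csv
import random

# Rough Colorado bounds in EPSG:3857 (placeholder values)
FULL_EXTENT = (-11915000.0, 4400000.0, -11350000.0, 4990000.0)

def random_request():
    """One randomised request: image size plus a matching geographic envelope."""
    width = random.randint(64, 1024)
    height = random.randint(64, 768)
    minx, miny, maxx, maxy = FULL_EXTENT
    resolution = random.uniform(1.0, 150.0)          # metres per pixel
    span_x, span_y = width * resolution, height * resolution
    cx = random.uniform(minx + span_x / 2, maxx - span_x / 2)
    cy = random.uniform(miny + span_y / 2, maxy - span_y / 2)
    bbox = (cx - span_x / 2, cy - span_y / 2, cx + span_x / 2, cy + span_y / 2)
    return [width, height] + [f"{v:.1f}" for v in bbox]

with open("requests.csv", "w", newline="") as f:
    writer = csv.writer(f)
    for _ in range(500):                             # size of one request set
        writer.writerow(random_request())
```
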
Results

Vector Results – OSM/PostGIS
(results chart)

Vector Seeding Results – OSM/PostGIS
(results chart)

Vector+Raster Results – DEM+OSM
(results chart)

Vector+Raster Seeding Results – DEM+OSM
(results chart)

Team Reports

Benchmarking 2011: Cadcorp GeognoSIS

- The Cadcorp GeognoSIS team had to withdraw at the last minute due to a serious family medical emergency
- They will resume the tests if at all possible, and publish their results in due course
- So watch this space... not that any of you care about the results

Benchmarking 2011: Constellation-SDI

- Had to write a MapFile-to-SLD converter
- Had to write a BIL reader for rasters
  - Work was not completed in time
- Profiling observations:
  - CPU usage between 50 and 80%
  - 2/3 of the time is spent reading the SQL results
    - Byte arrays are sent as text (Base64 encoding)
    - Known PostgreSQL JDBC issue (http://postgresql.1045698.n5.nabble.com/bytea-performance-tweak-td4510843.html)

Benchmarking 2011: GeoServer

- Due to lack of time, decided to participate only in the vector tests
- No shootout-specific improvements this year, although we didn't want to drop out
- Borrowed SLD 1.1 styles from the Constellation team (thanks!)
- Hopefully GeoServer keeps doing well
- Same bottleneck: the Java2D antialiasing rasterizer. Workaround: load-balance multiple instances, each running on two cores

Benchmarking 2011: Mapnik

- Node.js: new async JavaScript server, landspeed.js
  - 8 processes (nginx), thread pool of 16 (libeio)
  - >10 req/s faster than the C++ server (paleoserver)
- Labels:
  - Deferred rendering (avoids extra DB queries)
  - Faster placement and halo rendering
- Raster: new raster reprojection (using AGG)
- Encoding: ability to pass options for zlib/PNG performance
- Line drawing: option for faster rasterization
- Clipping: first attempt at polyline clipping
- Future: parallel DB queries, intelligent feature caching
- Thank you: Thomas Bonfort, Konstantin Kaefer, AJ Ashton, Artem Pavlenko, Alberto Valverde, Hermann Kraus, Rob Coup, Simon Tokumine

Benchmarking 2011: QGIS Server

- QGS project with 349 layers (200 for labelling)
  - Might be the reason for the overhead with small tiles
- Improvements to the rule-based renderer
- New raster provider with reprojection support performed well
- Data preparation needed, because QGIS requires a unique primary key
- mod_fcgid configuration:
  - Best results with FcgidMaxProcesses = 32
- QGIS uses the benchmark server for performance regression testing

Benchmarking 2011: MapServer (Tweaks & Enhancements)

- PNG compression parameters: tradeoff between performance and image size (see the sketch after this list)
- Apache / mod_fcgid configuration:
  - Run on the worker MPM instead of prefork
  - Set FcgidMaxProcessesPerClass to a reasonable value (32) to avoid overwhelming the server with too many processes
    - The default value will eat up PostgreSQL connections and lead to failed requests
- Patch to mapserv so it does not re-parse the (huge!) mapfile on each request
- Patch to MapServer so it does not apply run-time substitutions (CGI)
- Further optimizations (not done): minimize classification cost
  - Order classes by order of occurrence
  - Stop using regexes

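To make the compression tradeoff concrete, here is a small sketch that encodes the same image at several zlib compression levels and reports size and encode time. It uses Pillow purely for illustration; it is not MapServer's PNG code path, which exposes equivalent zlib options through its output format settings.

```python
# Illustration only: measure the size/time tradeoff of PNG compression levels.
import io
import time

import numpy as np
from PIL import Image

# A synthetic 1024x768 "map" with some structure so compression has work to do.
rng = np.random.default_rng(0)
tile = (rng.random((768, 1024, 3)).cumsum(axis=1) % 1.0 * 255).astype("uint8")
img = Image.fromarray(tile)

for level in (1, 3, 6, 9):          # zlib levels: 1 = fastest, 9 = smallest
    buf = io.BytesIO()
    start = time.perf_counter()
    img.save(buf, format="PNG", compress_level=level)
    elapsed = time.perf_counter() - start
    print(f"compress_level={level}: {buf.tell() / 1024:.0f} KiB in {elapsed * 1000:.1f} ms")
```
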
Awards

Last Run Award

- Benchmark run at 12:23pm today

Benchmarking 2011

- Wiki home: http://wiki.osgeo.org/wiki/Benchmarking_2011
- Mailing list: join at http://lists.osgeo.org/mailman/listinfo/benchmarking
- SVN: http://svn.osgeo.org/osgeo/foss4g/benchmarking/wms/2011/

Thank you to all 6 teams!
