WMS Performance Shootout 2009
Presented at FOSS4G 2009 in Sydney, Australia

WMS Performance Shootout 2009 Presentation Transcript

  • 1. WMS Performance Shootout: Andrea Aime (GeoServer), Jeff McKenna (MapServer)
  • 2. Executive summary
    • Compare the performance of WMS servers:
      • GeoServer
      • MapServer
    • In a number of different workloads:
      • Vector: plain polygons, points with graphical icons, road network with cased lines, road network with labels
      • Raster: global imagery
    • Against different data backends:
      • Vector: shapefiles, PostGIS, Oracle Spatial, SDE over Oracle Spatial
      • Raster: ECW and GeoTIFF mosaics
  • 6. Benchmarking History
    • 3rd FOSS4G benchmarking exercise. Past exercises included:
      • FOSS4G 2007: Refractions Research ran and published the first comparison with the help of the GeoServer and MapServer developers. Focus on big shapefiles, PostGIS, minimal styling
      • FOSS4G 2008: OpenGeo ran and published the second comparison with some review from the MapServer developers. Focus on simple thematic mapping, raster data access, WFS and tile caching
    • Friendly competition: the goal is to improve all software
  • 8. Past MapServer improvements
    • Improvements in large shapefile indexing
    • Raster read optimization (single pass for multiple bands)
    • Enhanced polygon label placement
    • EPSG code caching
    • PostGIS improvements
    • Oracle improvements
  • 14. Past GeoServer improvements
    • Overall rendering pipeline improvements
    • GML writing speed-ups
    • Raster data access improvements
    • PostGIS data store optimizations
  • 18. Rules of engagement
    • Each server is tested in its latest version
    • Each server performs exactly the same workload:
      • Same set of WMS requests
      • Same data backends
      • Same image output format
    • All modifications made to improve performance are to be included in a future release of the software
    • Data used cannot be modified for display, other than indexing
    • All testing to be done on the same benchmarking machines
  • 24. Hardware Configuration (diagram): the Bench machine runs JMeter and drives the WMS machine (GeoServer and MapServer, with local shapefiles, ECW and GeoTIFF data), which in turn queries the Database machine (Oracle, PostGIS, SDE)
  • 25. Software Specs
    • MapServer:
      • 5.6.0-beta3, GDAL SVN
      • Apache 2.2
      • FastCGI, 8 static processes
    • GeoServer:
      • 2.0-RC2, Java 6 update 18, GDAL 1.4.5
      • Tomcat 6.0.20, 8 threads
    • Database:
      • Oracle 11.1.0.7
      • SDE 9.3.1
      • PostGIS 1.4.0
  • 31. Hardware specs
    • Bench (2003):
      • Dell PowerEdge 1750
      • 1x Xeon Nocona 3 GHz, 1 GB RAM, OS: RedHat Enterprise 5.3
    • WMS (2004):
      • Dell PowerEdge 2850
      • 4x Xeon Nocona 3.4 GHz, 8 GB RAM
      • 6x 73 GB 15K RPM hard drives, OS: RedHat Enterprise 5.3
    • Database (2007):
      • Dell Optiplex 755 tower
      • 1x Intel Core2 Duo E6750 @ 2.66 GHz (dual core), 4 GB RAM
      • 100 GB SATA 3.0 Gb/s 7200 RPM hard drive, OS: RedHat Enterprise 5.3
  • 37. Hot vs Cold Benchmarks
    • Hot benchmark:
      • The file system caches are fully loaded
      • The software has had a chance to perform whatever pre-processing is necessary (e.g., open connections to the data source)
    • Cold benchmark:
      • The system has just been started
      • There is no existing cache (one way to force this state on Linux is sketched below)
    • Both are unrealistic; production is usually a mix of the two
    • Hot benchmarks are more practical to run
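    • A minimal sketch of one way to force a cold start, assuming a Linux host and root privileges (the scripts actually used for the shootout are in the OSGeo SVN repository listed under More Information and may do this differently):

      # Sketch: force a cold file system cache on Linux before a benchmark run.
      # Requires root; illustrative only, not the shootout's actual harness.
      import os

      def drop_filesystem_caches():
          """Flush pending writes, then drop page cache, dentries and inodes."""
          os.sync()  # flush dirty pages to disk first
          with open("/proc/sys/vm/drop_caches", "w") as caches:
              caches.write("3\n")  # 3 = page cache + dentries + inodes

      if __name__ == "__main__":
          drop_filesystem_caches()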
  • 41. Methodology
    • Each test run performs requests with 1, 10, 20 and 40 parallel clients (for a total of 1500 requests)
    • Each test uses a random set of requests stored in a CSV file: no two requests in the same run are equal, but all servers perform the same workload (a sketch of how such a request list can be generated follows this list)
    • For each request the random factors are:
      • The image size (between 640x480 and 1024x768)
      • The geographic envelope (extent)
    • Each test is run three times in a row; the results of the third run are used for the comparison, so the benchmark assumes full file system caches (a “hot” benchmark)
    • The other GIS server is shut down while the tests are run
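    • A minimal sketch of how such a randomized request list might be produced; the layer, CRS, extent and output format below are placeholders, and the generator scripts actually used are in the OSGeo SVN repository listed under More Information:

      # Sketch: write a CSV of randomized WMS GetMap query strings for a load
      # test client (e.g. JMeter) to replay. Layer, CRS, extent and format are
      # illustrative placeholders, not the shootout's actual values.
      import csv
      import random

      FULL_EXTENT = (-106.6, 25.8, -93.5, 36.5)  # rough Texas bounds, EPSG:4269 (assumed)

      def random_getmap():
          # Random image size between 640x480 and 1024x768
          width = random.randint(640, 1024)
          height = random.randint(480, 768)
          # Random geographic sub-envelope, roughly matching the image aspect ratio
          span_x = random.uniform(0.1, 1.0)
          span_y = span_x * height / width
          minx = random.uniform(FULL_EXTENT[0], FULL_EXTENT[2] - span_x)
          miny = random.uniform(FULL_EXTENT[1], FULL_EXTENT[3] - span_y)
          params = {
              "SERVICE": "WMS", "VERSION": "1.1.1", "REQUEST": "GetMap",
              "LAYERS": "areawater_merge", "STYLES": "", "SRS": "EPSG:4269",
              "BBOX": "%f,%f,%f,%f" % (minx, miny, minx + span_x, miny + span_y),
              "WIDTH": width, "HEIGHT": height, "FORMAT": "image/png",
          }
          return "&".join("%s=%s" % (k, v) for k, v in params.items())

      with open("requests.csv", "w", newline="") as out:
          writer = csv.writer(out)
          for _ in range(1500):  # 1500 requests per test run
              writer.writerow([random_getmap()])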
  • 46. Datasets Used
    • Polygon layer, areawater_merge: the TIGER 2008 set of polygons describing water surfaces for the state of Texas. 380,000 polygons, 195 MB shapefile, EPSG:4269
    • Point layer, gnis_names09: all location and point-of-interest names for the state of Texas in the GNIS database. 103,000 points, 2.8 MB shapefile, EPSG:4269
    • Line layer, edges_merge: all line elements (rivers and roads) from the TIGER 2008 dataset for the state of Texas. Over 5M lines, 1.2 GB shapefile, 1.4 GB dbf, EPSG:4269
    • Raster layer: Blue Marble TNG, 86400 x 43200 pixels, 7 overview levels (TIFF, ECW, 512-tile and 8-tile mosaics, GeoRaster)
  • 50. Equivalent traffic table
    • The presentation charts results as throughput in requests per second
    • This table translates requests per second at sustained peak load into requests per hour and per day
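    • For example (illustrative arithmetic, not a figure from the original table): a sustained 10 requests per second is 10 x 3,600 = 36,000 requests per hour, or 36,000 x 24 = 864,000 requests per day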
  • 52. T1: plain polygon rendering
    • TIGER 2008 “areawater” layer, merged over Texas
    • Simply fill the polygons in blue, no outline
    • Image scales roughly between 1:10,000 and 1:100,000
  • 55. T1: Shapefiles
  • 56. T1: PostGIS
  • 57. T1: Oracle
  • 58. T1: SDE
  • 59. T1: observations
    • GeoServer is between 25% and 50% faster reading shapefiles out of the box; MapServer with FastCGI is slightly faster
    • PostGIS performs roughly 5% better than Oracle at higher loads
    • SDE seems to introduce no overhead compared to direct Oracle access for GeoServer, and a slight overhead for MapServer
    • GeoServer and MapServer have exactly the same performance when the backend is PostGIS or Oracle
  • 63. T2: point rendering
    • GNIS names dataset over Texas
    • Each point is rendered using a small GIF/PNG graphic
  • 65. T2: shapefiles
  • 66. T2: PostGIS
  • 67. T2: Oracle
  • 68. T2: SDE
  • 69. T3: roads map
    • TIGER 2008 edges, just roads (rivers filtered out), three different line styles based on the road class
  • 70. T3: shapefiles
  • 71. T3: PostGIS
  • 72. T3: Oracle
  • 73. T3: SDE
  • 74. T4: labelled roads
    • Same as T3, but at larger scales (between 1:10k and 1:3k) and with labels following the lines
  • 75. T4: shapefiles
  • 76. T4: PostGIS
  • 77. T4: Oracle
  • 78. T4: SDE
  • 79. T5: raster
    • Blue Marble TNG, as a single ECW file, a 512-tile TIFF mosaic, and BigTIFF (MapServer) vs an 8-tile TIFF mosaic (GeoServer)
    • The ECW file is 260 MB
    • The TIFFs are uncompressed, tiled, with overviews; around 16 GB for each mosaic
  • 82. T5: ECW
  • 83. T5: 512-tile mosaic
  • 84. T5: BigTIFF vs 8-tile TIFF. GeoServer cannot handle BigTIFF, so an 8-tile mosaic was used (almost 2 GB per tile)
  • 85. FOSS4G 2009 improvements
    • Several code enhancements were made during this year's exercise:
      • MapServer
        • Fixed a problem with labels following features
        • Fixed a problem with 5.6.0-beta for SDE layers
        • Found a problem with 5.6.0-beta for Oracle under FastCGI and a memory leak
      • GeoServer
        • Fixed a problem with Oracle connection pooling and index scanning
        • Fixed ECW crashes under high load
  • 89. Hopeful Changes for Next Shootout
    • Involvement of more WMS servers
    • Put the servers on a faster internet connection for quicker data transfers and test setup
    • Modify the number of parallel client requests so that the 2, 4 and 8 parallel-client cases are also covered
    • Phases/cycles for benchmarking, to allow teams to interpret results and make software enhancements
    • Reprojection of both vector and raster data
    • Audience suggestions?
  • 95. More Information
    • Scripts, configuration files, results stored in OSGeo SVN: http://svn.osgeo.org/osgeo/foss4g/benchmarking/
    • Wiki home: http://wiki.osgeo.org/wiki/Benchmarking_2009
    • Mailing list: http://lists.osgeo.org/mailman/listinfo/benchmarking
  • 98. Additional MapServer Results (not presented)
  • 99. MapServer Roads/Lines Test Results
  • 100. MapServer Labelled Roads/Lines Test Results
  • 101. MapServer Polygon Test Results
  • 102. MapServer Point Test Results
  • 103. MapServer Raster Test Results
  • 104. FastCGI Configuration Used for MapServer Tests
    • FastCgiServer /var/www/cgi-bin/mapserv.fcgi -processes 8 -initial_env LD_LIBRARY_PATH=/usr/local/lib:/opt/build/instant_client_11_1/lib:/opt/build/arcsde_c_sdk/lib
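    • Here -processes 8 starts eight static mapserv FastCGI processes (matching the 8 worker threads used on the GeoServer side), and -initial_env sets the LD_LIBRARY_PATH so those processes can load the Oracle Instant Client and ArcSDE C SDK libraries needed for the Oracle and SDE tests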