WMS Performance Shootout 2010

WMS Benchmarking presentation and results, from the FOSS4G 2010 event in Barcelona. Eight development teams took part in this exercise, competing to serve common data through the WMS standard the fastest. http://2010.foss4g.org/wms_benchmarking.php


  1. 1. WMS Benchmarking Cadcorp GeognoSIS, Constellation-SDI, ERDAS APOLLO, GeoServer, Mapnik, MapServer, Oracle MapViewer, QGIS MapServer
  2. 2. Executive summary <ul><li>Compare the performance of WMS servers </li><ul><li>8 teams </li></ul><li>In a number of different workloads: </li><ul><li>Vector: native (EPSG:4326) and projected (Google Mercator) street level
  3. 3. Raster: native (EPSG:25831) and projected (Google Mercator) </li></ul><li>Against different data backends: </li><ul><li>Vector: shapefiles, PostGIS, Oracle Spatial
  4. 4. Raster: GeoTiff, ECW Raster </li></ul></ul>
  5. 5. Benchmarking History <ul><li>4th FOSS4G benchmarking exercise. Past exercises included: </li><ul><li>FOSS4G 2007: Refractions Research ran and published the first comparison with the help of GeoServer and MapServer developers. Focus on big shapefiles, PostGIS, minimal styling
  6. 6. FOSS4G 2008: OpenGeo ran and published the second comparison with some review from the MapServer developers. Focus on simple thematic mapping, raster data access, WFS and tile caching
  7. 7. FOSS4G 2009: MapServer and GeoServer teams in a cooperative benchmarking exercise </li></ul><li>Friendly competition: goal is to improve all software </li></ul>
  8. 8. Datasets Used: Vector Used a subset of BTN25, the official Spanish 1:25000 vector dataset <ul><li>6,465,770 buildings (polygon)
  9. 9. 2,117,012 contour lines
  10. 10. 270,069 motorways & roads (line)
  11. 11. 668,066 toponyms (point)
  12. 12. Total: 18 GB worth of shapefiles </li></ul>
  13. 13. Datasets Used: Raster <ul>Used a subset of PNOA images <li>50cm/px aerial imagery, taken in 2008
  14. 14. 56 GeoTIFFs, around Barcelona
  15. 15. Total: 120 GB </li></ul>
  16. 16. Datasets Used: Extents
  17. 17. Datasets Used: Extents Canary Islands are over there, but they are always left out
  18. 18. Datasets Used: Credits <ul>Both BTN25 and PNOA are products of the Instituto Geográfico Nacional . Any non-commercial use of the data (such as benchmarks) is allowed. You too can download the data used in the benchmark by visiting: <ul>http://centrodedescargas.cnig.es/CentroDescargas/ </ul></ul>
  19. 19. Datasets Used: Issues <ul><li>Real data, real problems
  20. 20. .shp files bigger than 2 GB </li><ul><li>Contours had to be split in 7 shapefiles </li></ul><li>.dbf files bigger than 2GB </li><ul><li>Problems accessing the attributes of some features
  21. 21. Caused some servers to not filter features by attribute </li></ul><li>Non-ASCII characters (ó, ñ, ç) not in UTF-8 </li><ul><li>Some servers had problems rendering these characters </li></ul></ul>
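The 2 GB limits above stem from the shapefile and DBF formats addressing file contents with 32-bit values. A minimal sketch of the arithmetic (header and record sizes are hypothetical; only the contour record count comes from the slides):

```python
# The shapefile/DBF formats use 32-bit fields for sizes and offsets,
# so bytes beyond 2^31 (~2.1 GB) cannot be addressed reliably.
INT32_MAX = 2**31 - 1  # largest signed 32-bit offset

def dbf_record_offset(header_size, record_size, record_index):
    """Byte offset of a record in a DBF file (fixed-width records)."""
    return header_size + record_size * record_index

# Hypothetical contour attribute table: 2,117,012 records of ~1 KB each
offset_last = dbf_record_offset(header_size=1_000, record_size=1_024,
                                record_index=2_117_011)
print(offset_last > INT32_MAX)  # True: the last records sit past the
                                # reach of a signed 32-bit offset
```

This is why the contours had to be split across seven shapefiles and why several servers could not read some attributes until large-DBF support was added.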
  22. 22. Rules of engagement <ul><li>Each server is tested in its latest version
  23. 23. Each server performs exactly the same workload </li><ul><li>Same set of WMS requests
  24. 24. Same data backends
  25. 25. Same image output format </li></ul><li>All modifications made to improve performance are to be included in a future release of the software
  26. 26. Data used cannot be modified for display, other than indexing
  27. 27. All testing to be done on the same benchmarking machines </li><ul><li>Windows and Linux servers, 2 separate identical servers </li></ul></ul>
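For readers unfamiliar with the workload being standardized here, the requests replayed against every server are ordinary WMS GetMap calls. A sketch of one such request (host, layer name, and envelope are made up; the SRS, image size range, and PNG output match the benchmark rules):

```python
from urllib.parse import urlencode

# Illustrative WMS 1.1.1 GetMap request of the kind the benchmark replays.
params = {
    "SERVICE": "WMS",
    "VERSION": "1.1.1",
    "REQUEST": "GetMap",
    "LAYERS": "btn25_roads",          # hypothetical layer name
    "STYLES": "",
    "SRS": "EPSG:4326",               # native vector CRS in the benchmark
    "BBOX": "1.95,41.30,2.35,41.60",  # made-up envelope near Barcelona
    "WIDTH": "800",
    "HEIGHT": "600",
    "FORMAT": "image/png",            # PNG24 for the vector tests
}
url = "http://wms.example.org/wms?" + urlencode(params)
print(url)
```

Because every server receives byte-for-byte identical parameter sets, throughput differences can be attributed to the server, not the workload.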
  28. 28. Hardware Configuration Bench WMS Linux/Windows Database JMeter Shapefiles ECW GeoTIFF Oracle PostGIS
  29. 29. Hardware specs <ul><li>JMeter: </li><ul><li>Dell Precision Workstation 390 from 9/7/2006
  30. 30. Processor: Intel Core 2 Duo E6300 (Conroe), 1.86 GHz, 2 MB cache
  31. 31. 2 GB RAM, 160 GB hard drive (7200 rpm), OS: CentOS 5.5 i386 </li></ul><li>WMS(2): </li><ul><li>Dell PowerEdge R410 - Ship Date: 7/7/2010
  32. 32. Processor: Intel® Xeon® E5630, 2.53 GHz, 12M cache, Turbo, HT, 1066 MHz max memory
  33. 33. 8GB Memory (4x2GB)
  34. 34. 2TB 7.2K RPM SATA
  35. 35. OS: Windows Server 64-bit, CentOS 5.5 x86-64 </li></ul><li>Database: </li><ul><li>Gateway E6610D, Intel Core 2 Duo E6750, 2.66 GHz
  36. 36. 250 GB hard drive (7200 rpm), 4 GB RAM
  37. 37. OS: CentOS 5.5 x86-64 </li></ul></ul>
  38. 38. Methodology <ul><li>Each test run performs requests with 1, 2, 4, 8, 16, 32 and 64 parallel clients (for a total of 2152 requests)
  39. 39. Each test uses a random set of requests stored in a CSV file: no two requests in the same run are equal, but all servers perform the same workload
  40. 40. For each request the random factors are: </li><ul><li>The image size (between 640x480 and 1024x768)
  41. 41. The geographic envelope (extent) </li></ul><li>Each test is run three times in a row; the results of the third run are used for the comparison: this benchmark assumes full file system caches (a “hot” benchmark)
  42. 42. The other GIS server is shut down while the tests are run </li></ul>
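The methodology above can be sketched as a small generator for the randomized request CSV that JMeter replays. This is an illustration, not the actual benchmark scripts (those are in the OSGeo SVN linked at the end); the geographic extent below is a made-up envelope, while the image-size range, request total, and fixed-seed "same workload for every server" property come from the slides:

```python
import csv
import random

# Full extent to sample windows from (minx, miny, maxx, maxy, EPSG:4326).
# Hypothetical values; the real benchmark covered the BTN25 area.
FULL_EXTENT = (0.0, 40.5, 3.3, 42.9)

def random_request(rng):
    # Image size varies between 640x480 and 1024x768, as in the benchmark
    width = rng.randint(640, 1024)
    height = rng.randint(480, 768)
    # Random envelope: a window inside the full extent whose aspect
    # ratio matches the requested image, to avoid distorted maps
    minx, miny, maxx, maxy = FULL_EXTENT
    w = (maxx - minx) * rng.uniform(0.01, 0.2)
    h = w * height / width
    x0 = rng.uniform(minx, maxx - w)
    y0 = rng.uniform(miny, maxy - h)
    bbox = f"{x0:.6f},{y0:.6f},{x0 + w:.6f},{y0 + h:.6f}"
    return [width, height, bbox]

rng = random.Random(42)  # fixed seed: every server replays the same workload
with open("requests.csv", "w", newline="") as f:
    writer = csv.writer(f)
    for _ in range(2152):  # total requests across all parallel-client steps
        writer.writerow(random_request(rng))
```

JMeter's CSV Data Set Config can then feed one row per sampled request, so no two requests in a run are equal but every server sees the identical sequence.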
  43. 43. Vector Output <ul><li>Vectors without projection (EPSG:4326), with (EPSG:3857)
  44. 44. PNG24 output AntiAliased, Scale dependent rendering </li></ul>
  45. 45. Raster Output <ul><li>Rasters without projection (EPSG:25831) with (EPSG:3857)
  46. 46. JPEG output (90% quality) </li></ul>
  47. 47. Resource consumption notes <ul><li>During a benchmark, usually a single resource is pushed to its limit and becomes the bottleneck
  48. 48. Common bottlenecks are the CPU, disk access, network access, and remote database access
  49. 49. Looking at system monitoring on the WMS machines during the benchmark runs, the disk was the bottleneck in most tests. For some servers, in some runs, the data became fully cached and performance increased dramatically </li></ul>
  50. 50. Disk-bound vs. unbound scenarios <ul><li>The initial bottleneck in the benchmark tests was disk read access
  51. 51. Some teams managed to turn this disk-bound scenario into a disk-unbound one, allowing servers to run fully in memory, with no disk access and all load on the CPU
  52. 52. Results observed in the disk-unbound scenario were 5 to 10 times better than in the disk-bound scenario
  53. 53. This was possible thanks to the conjunction of the following: </li><ul><li>the Red Hat Enterprise Linux server efficiently caches data blocks at the OS level
  54. 54. the 2000 WMS requests are exactly the same between all runs
  55. 55. the server has 8 GB of RAM </li></ul></ul>
  56. 56. Disk-bound vs. unbound scenarios <ul><li>Unfortunately, the Windows server did not behave the same way as RHEL
  57. 57. The Windows server was down during the last benchmark days, preventing some teams from trying to reproduce the disk-unbound scenario
  58. 58. As apples cannot be compared to oranges, all participants agreed that teams were not required to publish their benchmark results in the final presentation </li></ul>
  59. 59. I/O reading schematics File system cache Operating System <ul><li>File system cache stores the blocks read from disk in memory
  60. 60. Avoids repeated reads to the disk </li></ul>WMS server Disk
  62. 62. Disk bound situation File system cache <ul><li>New reads force older data out of the cache
  63. 63. That data will have to be read again: the next run starts from the beginning </li></ul><ul><li>Older blocks getting purged </li></ul>Operating System WMS server Disk
  69. 69. CPU bound situation File system cache <ul><li>All the data fits in the cache
  70. 70. Subsequent reads will be performed from the cache only </li></ul><ul><li>No more disk access! </li></ul>Operating System <ul><li>No disk reads! </li></ul>WMS server Disk
  75. 75. Server/Team Findings La Rambla, Barcelona, 2010-09-07
  76. 76. GeoServer: Overview <ul>www.geoserver.org <li>Versions tested </li><ul><li>GeoServer 2.1 daily build (similar to 2.1 Beta, but not equal) </li></ul><li>Individuals involved </li><ul><li>Andrea Aime: setup, vector optimizations, raster optimizations
  77. 77. Simone Giannecchini, Daniele Romagnoli: raster optimizations
  78. 78. Ivan Grcic: extensive raster testing
  79. 79. Justin Deoliveira and Jody Garnett: suggestions and more vector tests </li></ul></ul>
  80. 80. GeoServer: Successes <ul><li>Software improvements </li><ul><li>Reading DBF files larger than 2GB
  81. 81. Large dataset/multilayer rendering speedups
  82. 82. Fast rectangular clipping of large geometries (clip before render)
  83. 83. Faster pure raster rendering path
  84. 84. General improvements in the shapefile reader to open fewer files and read less from disk
  85. 85. Improving spatial indexing to get compact yet selective indexes </li></ul></ul>
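The "clip before render" speedup mentioned above rests on a standard idea: cut each geometry down to the view envelope before handing it to the renderer, so huge geometries that barely intersect the map cost almost nothing. A minimal sketch using the classic Liang–Barsky segment clip (an illustration of the technique, not GeoServer's actual code):

```python
def clip_segment(p0, p1, rect):
    """Clip segment p0->p1 to rect=(xmin, ymin, xmax, ymax).

    Returns the clipped segment as a pair of points, or None if the
    segment lies entirely outside the rectangle (Liang-Barsky).
    """
    (x0, y0), (x1, y1) = p0, p1
    xmin, ymin, xmax, ymax = rect
    dx, dy = x1 - x0, y1 - y0
    t0, t1 = 0.0, 1.0  # parametric range of the visible part
    # One (p, q) pair per rectangle edge: left, right, bottom, top
    for p, q in ((-dx, x0 - xmin), (dx, xmax - x0),
                 (-dy, y0 - ymin), (dy, ymax - y0)):
        if p == 0:
            if q < 0:          # parallel to this edge and outside it
                return None
        else:
            t = q / p
            if p < 0:          # entering intersection
                if t > t1:
                    return None
                if t > t0:
                    t0 = t
            else:              # leaving intersection
                if t < t0:
                    return None
                if t < t1:
                    t1 = t
    return ((x0 + t0 * dx, y0 + t0 * dy), (x0 + t1 * dx, y0 + t1 * dy))

# A segment crossing the 10x10 view keeps only its visible portion
print(clip_segment((0, 5), (20, 5), (0, 0, 10, 10)))  # ((0.0, 5.0), (10.0, 5.0))
```

Applied per segment of a long contour or coastline, this keeps the rasterizer working only on coordinates inside (or just touching) the requested BBOX.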
  86. 86. GeoServer: Challenges <ul><li>Disk vs CPU bound </li><ul><li>A lot of work went into optimizing I/O assuming a disk-bound test
  87. 87. The test easily flips from being disk bound to being CPU bound
  88. 88. At some point the optimizations made it possible for all data to sit in the operating system cache </li></ul><li>Raster reprojection </li><ul><li>Other teams use local approximation techniques to speed up transformation (use of local linear transform)
  89. 89. We did not make it in time to implement it this year, but it's in the pipeline </li></ul></ul>
  90. 90. Disk bound, warming up HotSpot Unstable, snapping out of disk bound-ness CPU bound
  91. 91. Java servers’ Achilles’ heel <ul><li>Sun Java2D can rasterize only one shape at a time in the whole process when antialiasing is enabled
  92. 92. OpenJDK can rasterize multiple shapes in parallel, but it's very slow
  93. 93. http://bugs.sun.com/view_bug.do?bug_id=6508591
  94. 94. Affects APOLLO, Constellation, GeoServer, MapViewer
  95. 95. Suggested setup: when you have more than 4 CPUs and are normally CPU bound, set up multiple processes and use a load balancer to distribute the load among them </li></ul>GeoServer 1 GeoServer 2 Load balancer
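The multi-process workaround sketched on the slide could look like the following reverse-proxy configuration. This is a hypothetical sketch (ports, paths, and host are invented; any HTTP load balancer works), showing two GeoServer instances on one machine so each JVM's single-threaded antialiased rasterization lock serializes only half the requests:

```nginx
# Hypothetical nginx front end for two local GeoServer instances.
# Each JVM suffers the Java2D one-shape-at-a-time lock independently,
# so round-robining between them roughly doubles rasterization throughput.
upstream geoserver_pool {
    server 127.0.0.1:8081;   # GeoServer instance 1
    server 127.0.0.1:8082;   # GeoServer instance 2
}

server {
    listen 80;
    location /geoserver/ {
        proxy_pass http://geoserver_pool;
    }
}
```

The same pattern applies to the other affected Java servers (APOLLO, Constellation, MapViewer) when they are CPU bound on rendering.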
  96. 96. MapServer: Overview <ul><li>Background www.mapserver.org </li><ul><li>Tested on Linux and Windows (x64) </li></ul><li>Versions tested </li><ul><li>MapServer 5.7-dev
  97. 97. Apache 2.2 with mod_fcgid
  98. 98. GDAL 1.7-dev </li></ul><li>Individuals involved </li><ul><li>Jeff, Mike, Daniel, Frank, Alan, Julien, SteveL </li></ul></ul>
  99. 99. MapServer: Improvements <ul><li>Previous years </li><ul><li>improvements in large shapefile indexing
  100. 100. raster read optimization (single pass for multiple bands)
  101. 101. enhancing polygon labelling placement
  102. 102. EPSG codes caching
  103. 103. PostGIS improvements
  104. 104. Oracle improvements </li></ul><li>Current </li><ul><li>large DBF support (> 2GB)
  105. 105. improving labels on curved lines
  106. 106. Improved label formatting
  107. 107. discovered raster reprojection sampling parameter </li></ul></ul>
  108. 108. Labelling Contours (1)
  109. 109. Labelling Contours (1)
  110. 110. MapServer: Label Overlap Angle
  111. 111. Labelling Contours (2)
  112. 112. Labelling Contours (2)
  113. 113. MapServer – Raster Resampling <ul><li>2x oversampling - ~ 2.5 requests/sec
  114. 114. MapServer Default </li></ul>
  115. 115. MapServer – Raster Resampling <ul><li>Added PROCESSING "OVERSAMPLE_RATIO=1"
  116. 116. Result: 50 requests/sec, limited only by the disk bottleneck </li></ul>
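The directive named on the slide is set per raster layer in the mapfile. A minimal sketch (layer name, data path, and projection block are illustrative; only the PROCESSING line comes from the slide):

```mapfile
LAYER
  NAME "pnoa_ortho"          # hypothetical layer name
  TYPE RASTER
  DATA "pnoa_mosaic.tif"     # hypothetical path to the GeoTIFF mosaic
  PROJECTION
    "init=epsg:25831"
  END
  # The default resampling oversamples 2x during reprojection
  # (~2.5 requests/sec in the benchmark); capping the ratio at 1
  # removed most of that cost (50 requests/sec).
  PROCESSING "OVERSAMPLE_RATIO=1"
END
```

The trade-off is quality versus speed: a lower oversample ratio reads and resamples fewer source pixels per output pixel.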
  117. 117. MapServer: Challenges <ul><li>Not enough time in the day to run all tests
  118. 118. Debate: speed optimization vs map image output quality
  119. 119. No time spent on a “best run”, although a lot of optimization was performed for “base run”
  120. 120. Having been involved in previous years, we were able to complete more tests than some newcomers, but our runs were still incomplete </li></ul>
  121. 121. Mapnik: Overview <ul><li>Background www.mapnik.org </li><ul><li>Tested on Linux (Windows next year)
  122. 122. Written in C++, boost, agg, cairo </li></ul><li>Versions tested </li><ul><li>Rendering library: Mapnik trunk (aka Mapnik2)
  123. 123. Server: Paleoserver 0.1 (boost::asio multithreaded http)
  124. 124. Server2: mod_mapnik_wms (apache module) </li></ul><li>Individuals involved </li><ul><li>Matt Kenny (mkgeomatics.com/) designed styles
  125. 125. Artem Pavlenko and Robert Coup - features and fixes </li></ul></ul>
  126. 126. Mapnik: Successes <ul><li>Lessons: </li><ul><li>First year – awesome, humbling, excited for next
  127. 127. Collaboration takes commitment and positive attitude </li></ul><li>Software improvements </li><ul><li>New C++ async, threaded WMS server (paleoserver)
  128. 128. More efficient shapefile reading
  129. 129. Caching of features when >1 style applied to layer </li></ul><li>Benefits for team, for users, for community... </li><ul><li>Paleoserver scales excellently, light threading model ++
  130. 130. Mapnik + PostGIS + 32 threads (Cores * 4) = FAST </li></ul></ul>
  131. 131. Mapnik: Challenges <ul><li>Tiles, PostGIS, OSM
  132. 132. Too many threads (Cores*2) with shapefiles == disk bound
  133. 133. Rasters and vector reprojection need attention </li></ul>
  134. 134. Oracle MapViewer: Overview <ul><li>Background oracle.com/technetwork/middleware/mapviewer </li><ul><li>Oracle MapViewer is a J2EE server component, and a part of Oracle's Fusion Middleware offering
  135. 135. Tests were done on Linux, connecting to Oracle 11g R2
  136. 136. MapViewer can run on all J2EE compatible containers </li></ul><li>Versions tested </li><ul><li>MapViewer 11g R2 development builds </li></ul><li>Individuals involved </li><ul><li>LJ Qian, Dan Geringer, and a big “Thank you” to Mike Smith who setup the initial MapViewer/Oracle DB and did all the map layer styling. </li></ul></ul>
  137. 137. Oracle MapViewer: Successes <ul><li>Lessons: </li><ul><li>Overall we found this event extremely helpful </li></ul><li>Software improvements </li><ul><li>Fixed a CASED line bug where features were merging across themes.
  138. 138. Added generic middle-tier CS transformation and user configurable DPI
  139. 139. Completed shapefile support (but did not have time to test it).
  140. 140. Identified some major bottlenecks in MapViewer </li></ul></ul>
  141. 141. Oracle MapViewer: Challenges <ul><li>Problems or difficulties encountered </li><ul><li>How to better utilize large amounts of memory in the middle tier (e.g., caching)
  142. 142. How to make use of GDAL and OGR tools
  143. 143. Plan to continue use of benchmark servers for further testing
  144. 144. Plan to participate next year
  145. 145. No raster results: the DB server was not equivalent to the file servers in performance, so the comparison would not have been fair. We will have an equivalent DB server next year </li></ul></ul>
  146. 146. Oracle MapViewer: Results <ul><li>We averaged 17 requests/s for both the 4326 and 3857 vector tests, connecting to Oracle Spatial via the JDBC thin driver.
  147. 147. MapViewer was running with JDK 6, with 1.2GB of heap memory.
  148. 148. A database connection pool of 16 was used by MapViewer
  149. 149. There is definitely room for improvements! </li></ul>
  150. 150. QGIS Mapserver: Overview <ul>www.qgis.org FCGI WMS server for publishing QGIS projects <li>Focused on integration with the desktop application
  151. 151. QGIS desktop and WMS server use the same rendering library </li><ul><li>Uses libqgis_core / QImage for map symbolisation and rendering
  152. 152. Desktop users benefit from all improvements in QGIS mapserver
  153. 153. Server image looks the same as in desktop GIS (exception: image compression) </li></ul><li>WMS server will be included in QGIS 1.6
  154. 154. QGIS Mapserver is on the OSGeo live DVD </li></ul>
  155. 155. QGIS Mapserver: Successes <ul><li>Versions </li><ul><li>Threading branch (gsoc)
  156. 156. SVN trunk
  157. 157. Symbology-ng (except for contours) </li></ul><li>Software improvements during benchmarks </li><ul><li>Fixed some memory leaks
  158. 158. Adapted scale calculation to be comparable with other servers
  159. 159. Symbol levels in rule based renderer </li></ul></ul>
  160. 160. QGIS Mapserver: Challenges <ul><li>Benchmark problems </li><ul><li>Fallback to svn trunk version
  161. 161. Problems with the contour layer (probably related to geometry clipping)
  162. 162. Not enough time for getting consistent results </li></ul><li>Work to be done after the conference: </li><ul><li>Testing each layer individually to find bottlenecks
  163. 163. More examination on contour layer, with/without labeling
  164. 164. See what can be improved on QGIS mapserver / QGIS level and what needs work on render engine level (Qt libraries) </li></ul></ul>
  165. 165. Constellation-SDI <ul>www.constellation-sdi.org <li>Background </li><ul><li>OGC web services platform: CSW, SOS, WMS, WMTS, WCS...
  166. 166. JEE server deployed on gnu/linux and windows
  167. 167. Core based on Geotoolkit, web apps based on MapFaces
  168. 168. Free software (LGPL) </li></ul><li>Versions tested </li><ul><li>Improvements will land in version 0.7, after formal review </li></ul><li>Individuals involved </li><ul><li>Cedric Briançon, Martin Desruisseaux, Johann Sorel </li></ul></ul>
  169. 169. Constellation-SDI: Successes <ul><li>Lessons: </li><ul><li>First serious work on performance </li></ul><li>Software improvements </li><ul><li>Benchmarking: building test designs and tools
  170. 170. GeoTiff reader, TileManagers for unstructured mosaics
  171. 171. Raster performance: direct readers, fast reprojection
  172. 172. Vector performance: clean pipeline, file traversal issues </li></ul><li>Benefits </li><ul><li>focus on profiling, load testing, stressing, VM/OS environment </li></ul></ul>...not the last
  173. 173. Constellation-SDI: Challenges <ul><li>Good testing is hard </li><ul><li>WMS is a portrayal service. </li><ul><li>WMS output is not identical, cannot be basis of test
  174. 174. Output image size and compression vary
  175. 175. Raster images vary in quality
  176. 176. Vector plots vary in generalization, symbology
  177. 177. => Tried to harmonize styles, forgot to check image sizes and compression levels, avoided discussions of quality although it is key to some participants </li></ul></ul></ul>
  178. 178. Constellation-SDI: Challenges <ul><li>Good testing is hard </li><ul><li>WMS is a portrayal service
  179. 179. WMS servers designed for many scales and uses </li><ul><li>serve small datasets or national data repositories
  180. 180. use existing (ancillary) data or build custom data
  181. 181. => Tests will be different to address performance of these different environments </li></ul></ul></ul>
  182. 182. Constellation-SDI: Challenges <ul><li>Good testing is hard </li><ul><li>WMS is a portrayal service
  183. 183. WMS servers designed for many scales
  184. 184. Testing metrics must satisfy several criteria </li><ul><li>repeatable,
  185. 185. reliable,
  186. 186. specific,
  187. 187. discriminatory
  188. 188. => Need good tests, multiple runs </li></ul></ul></ul>
  189. 189. Constellation-SDI: Challenges <ul><li>Good testing is hard </li><ul><li>WMS is a portrayal service
  190. 190. WMS servers designed for many scales
  191. 191. Testing metrics must satisfy several criteria
  192. 192. Testing takes time, addressing issues takes work </li><ul><li>The 2010 testing regime was imprecise and insufficient; servers were down in the last critical hours; community organization was unclear.
  193. 193. => some teams did not obtain final results
  194. 194. => mean as metric not great, results unstable
  195. 195. => next time around we hope to do better </li></ul></ul></ul>
  196. 196. Constellation-SDI: Results
  197. 197. Open Source Geospatial Foundation Cadcorp GeognoSIS: Overview <ul>www.cadcorp.com <li>Background </li></ul><ul><ul><li>Entered to compete and to learn
  198. 198. Tested on Windows
  199. 199. C++ </li></ul></ul><ul><li>Versions tested </li></ul><ul><ul><li>7.0 (current production release) </li></ul></ul><ul><li>Individuals involved </li></ul><ul><ul><li>Two Martins: Daly and Schäfer </li></ul></ul>
  200. 200. Cadcorp GeognoSIS: Successes <ul><li>Lessons: </li></ul><ul><ul><li>Rock solid stability up to 256 threads (in some unofficial tests) </li></ul></ul><ul><li>Software improvements </li></ul><ul><ul><li>Handle DBF files > 2 GB; Support PostGIS geometry columns with 4D (XYZM) geometry type; Improved DBF string encoding handling; Cache DBF record data for the current SHP record; Allow Label Theme to use fill brush and/or outline pen from other Overlay Themes; Improved Label Theme “along lines” option, to better suit contour data </li></ul></ul><ul><li>Benefits for team, for users, for community... </li></ul><ul><ul><li>Programmers using the product suite for a “real” project led to a few UI/UX improvements in desktop SIS
  201. 201. Miscellaneous bugfixes and improvements, above </li></ul></ul>
  202. 202. Cadcorp GeognoSIS: Challenges <ul><li>Our raster layer (above GDAL) is not as efficient as it could be </li></ul><ul><ul><li>mloskot – behind you! - has already rewritten it since joining, for the next version of SIS: 7.1 </li></ul></ul><ul><li>Very large shapefiles are not a typical use case for us
  203. 203. We could not handle DBF files > 2 GB, so we fixed it
  204. 204. Our Label Theme was poor for contour lines and only supported homogeneous styling, so we fixed those too </li></ul>
  205. 205. Cadcorp GeognoSIS: Results <ul><li>Getting a kicking, as predicted
  206. 206. We should just use SHP .QIX files (viz shptree) and be done with it
  207. 207. We’ve got plenty of work to do
  208. 208. You can help! </li></ul><ul><ul><li>http://blog.lostinspatial.com/2010/09/07/cadcorp-is-recruiting-2/ </li></ul></ul><ul><li>We will be careful not to target this one (shapefiles and TIF mosaic) scenario </li></ul>
  209. 209. 09/09/2010 ERDAS: Overview <ul><li>Background </li></ul><ul><ul><li>First participation
  210. 210. Tested on Windows server
  211. 211. Java based for vector and native code for raster </li></ul></ul><ul><li>Versions tested </li></ul><ul><ul><li>ERDAS APOLLO Essentials SDI & IWS 10.1 </li></ul></ul><ul><li>Individuals involved </li></ul><ul><ul><li>Anne-Sophie Collignon for the configurations, issues follow-up and tests runs
  212. 212. Liège and Perth APOLLO development teams for software improvements
  213. 213. Luc Donea for the overall follow-up </li></ul></ul>
  214. 214. ERDAS: Successes <ul><li>Lessons & benefits: </li><ul><li>Exciting project, good discussion/collaboration between teams
  215. 215. Allowed us to concentrate on software performance and rendering quality improvements.
  216. 216. Allowed us to experiment with different server setups and document the preferred configuration for this kind of use case </li></ul><li>Software improvements </li><ul><li>For APOLLO IWS: </li><ul><li>Upgrade to GDAL 1.7.2
  217. 217. Direct access to GDAL for Geotiff reader
  218. 218. Improved the mosaicing of TIF datasets to remove seam lines </li></ul><li>For APOLLO SDI: </li><ul><li>Tomcat 6 and Java 6: improved performance over Tomcat 5.5 and Java 5. Rendering: </li><ul><li>Clash management for contour labels
  219. 219. Multipath rendering optimisation
  220. 220. Low level code optimisation
  221. 221. New option for shape index in memory </li></ul></ul></ul></ul>
  222. 222. ERDAS: Challenges <ul><li>Problems or difficulties encountered </li></ul><ul><ul><li>Low resources: product release period coupled with holidays
  223. 223. First participation in FOSS4G WMS Shootout
  224. 224. Huge disk-read bottleneck reduced the effect of rendering optimizations on overall performance
  225. 225. Tests started very late because of data availability. As a result, issues in the test design were discovered late, leaving no time to fix them. </li></ul></ul>
  226. 226. ERDAS: Disk bound vs. CPU bound <ul><li>Some teams experienced disk-bound scenarios while others were running in a CPU-bound scenario </li></ul><ul><li>ERDAS experienced the disk-bound scenario on Windows and was unable to investigate a CPU-bound scenario: not enough time, and the Windows server was down in the last days. </li></ul><ul><li>A usual use case would have different requests using random BBOXes. In that case, OS data-block caching would be much less effective, bringing the servers back to a disk-bound scenario </li></ul>
  227. 227. ERDAS: Disk bound vs. CPU bound <ul><li>The use cases ERDAS meets usually involve datasets bigger than the machine's memory (RAM) and a random distribution of request BBOXes. That’s why ERDAS believes the CPU-bound scenario is not representative of ERDAS customers’ needs.
  228. 228. As apples cannot be compared to oranges, the disk-bound scenario cannot be compared to the CPU-bound scenario. </li></ul>
  229. 229. ERDAS: Conclusion <ul><li>As the benchmark participants could not reach a consensus on the validity of the results, they agreed that teams were not required to publish their benchmark results in the final presentation
  230. 230. Given the inconsistencies discussed in the test design, ERDAS is concerned that the differing throughput results between server applications might confuse and mislead the community.
  231. 231. ERDAS plans to conduct a webinar in the future to review the methodology and results of the FOSS4G benchmark. ERDAS will also provide analysis and ideas for future improvements. </li></ul>
  232. 232. Results
  233. 233. Vector – No Reprojection (EPSG:4326)
  234. 234. Vector – Reprojection (EPSG:3857)
  235. 235. Raster – No Reprojection (EPSG:25831)
  236. 236. Raster – Reprojection (EPSG:3857)
  237. 237. Vector/Raster – No Reproj (EPSG:25831)
  238. 238. Vector/Raster – Reproj (EPSG:3857)
  239. 239. Vector – PostGIS – No Reproj (EPSG:4326)
  240. 240. Vector – Oracle – No Reproj (EPSG:4326)
  241. 241. Hopeful Changes for Next Shootout <ul><li>Servers available early in the process (now)
  242. 242. Data available VERY early in the process
  243. 243. Initial test runs performed early in the process
  244. 244. More participation/review throughout entire process
  245. 245. Design of testing parameters agreed upon early on </li></ul>
  246. 246. More Information <ul><li>Scripts, configuration files, results stored in OSGeo SVN: http://svn.osgeo.org/osgeo/foss4g/benchmarking/
  247. 247. Wiki home: http://wiki.osgeo.org/wiki/Benchmarking_2010
  248. 248. Mailing list: http://lists.osgeo.org/mailman/listinfo/benchmarking </li></ul>
  249. 249. Post Presentation Slides
  250. 250. FOSS4G benchmark: Navigation Movie <ul><li>The following screen demo was captured on 8th September when no other testing was taking place. </li></ul><ul><li>The test client is built using OpenLayers widgets
  251. 251. All maps are synchronized to the top left master.
  252. 252. There is no delay in requesting maps so it is a true performance indicator based on when maps are returned
  253. 253. Two servers are running on the Windows box </li></ul><ul><ul><li>MapServer Windows
  254. 254. ERDAS APOLLO </li></ul></ul><ul><li>The two other servers are running on Red Hat Enterprise Linux </li></ul><ul><ul><li>MapServer RHEL
  255. 255. GeoServer </li></ul></ul><ul><li>This movie also demonstrates the efforts made by each team to achieve rendering quality and styling compliance with the benchmark specifications </li></ul>
