Best practices for hosting and deploying Plone

Plone is one of the easiest CMSes to download and install on your local machine, but when you need to deploy it to a production server you need to learn about Apache vhosts, caching proxies, load balancing, logfile rotation, database packing, backups and security issues. That's a lot to learn just to host a Plone site!

The good news is that much of this complexity can be solved by the effective use of buildout recipes and OS packages. We will look at a working buildout example to see how to put these recipes [1] to use in a production environment, including tools [2] to monitor your site and alert you when it's having problems.

Once your site is deployed, you need to perform regular updates to both Plone and the OS. We will discuss how best to deploy changes to your production server in a consistent and failsafe way, including releasing eggs to your own egg server so that you aren't dependent on the Cheese Shop (PyPI).

Imagine being able, with one command, to launch a new virtual machine that installs an entire production environment, including your custom Plone site, in a matter of minutes. We will look at some of these emerging tools [3] and discuss how they simplify the deployment story.

This talk will benefit those who want to learn about various Plone hosting options (own servers, VPS, Amazon EC2) and ways to streamline the monitoring and deployment of their Plone sites. Even Plone hosting veterans may pick up a few new tricks to make the job easier and save time and money.

Usage Rights

CC Attribution-NonCommercial-ShareAlike License


Best practices for hosting and deploying Plone: Presentation Transcript

  • 1. Best practices for Plone hosting and deployment Nate Aune Twitter: @natea www.jazkarta.com Plone Symposium East 2010 May 27, 2010 State College, PA
  • 2. Agenda • Hosting (self-hosting, dedicated, VPS, cloud) • Deploying (hostout, mr.awsome, silverlining) • Development (virtualenv, buildout, version control) • Production (caching, load balancing, ZEO, supervisor) • Maintenance (backups, packing, logfile rotation) • Monitoring (supervisor, Munin, Nagios, ZenOss)
  • 3. Hosting • Self-hosting (on-premise or co-lo) • Shared server (Webfaction) • VPS (virtual private server) • Dedicated machine (physical hardware) • On-demand cloud hosting provider (EC2)
  • 4. Self-hosting • Pros: • Total control • Inexpensive if already paying for bandwidth • Cons: • Bandwidth constrained • Maintenance burden • Security
  • 5. Shared hosting or VPS • Pros: • Inexpensive • Quick to set up • Cons: • resource constrained • might not support high traffic site • potential security risk
  • 6. Dedicated server • Pros: • Can use all machine’s resources • Potentially better performance • Cons: • Paying for resources not used • Availability of additional servers to scale • Disaster recovery usually means restoring from backup = downtime
  • 7. On-demand cloud provider Amazon EC2, Rackspace Cloud, etc. • Pros: • Only pay for what you use • Procure a new machine instantly • Easy to architect fault tolerant system • Can easily scale on demand • Cons: • Learning curve • Support may be limited unless you pay for it
  • 8. Deployment tools • collective.hostout (by Dylan Jay) • mr.awsome (by Florian Schulze) • silverlining (by Ian Bicking)
  • 9. collective.hostout • Control multiple application environments with the minimum amount of effort. • Manage local, staging and deployment environments with one easy tool. • Deploy your buildout to Amazon EC2, Rackspace Cloud or any other provider. http://pypi.python.org/pypi/collective.hostout
  • 10. hostout architecture • collective.hostout • apache-libcloud • hostout.cloud • hostout.supervisor • hostout.ubuntu
  • 12. Libcloud libcloud.org
  • 16. deploy.cfg - Rackspace Cloud

      [buildout]
      extends =
          buildout.cfg
          keys.cfg
      parts += host1

      [host1]
      recipe = collective.hostout
      extends =
          hostout.cloud
          hostout.ubuntu
          hostout.supervisor
      hostname = mynewploneserver
      hosttype = Rackspace
      key = ${keys:rackspace_key}
      secret = ${keys:rackspace_secret}
      parts =
          instance
          supervisor
  • 18. Deploy with one command: $ ./bin/hostout host1 deploy
  • 19. Command line usage:

      bin/hostout host1 [host2...] [all] cmd1 [cmd2...] [arg1 arg2...]

      Valid commands are:
        bootstrap
        buildout            : Run the buildout on the remote server
        create              : Construct node on nominated provider
        deploy              : predeploy, uploadeggs, uploadbuildout, buildout, postdeploy
        destroy
        initcommand
        postdeploy          : Perform any final plugin tasks
        predeploy           : Perform any initial plugin tasks. Call bootstrap if needed
        printnode
        reboot
        restart
        run                 : Execute cmd on remote as login user
        setaccess
        setowners
        start
        status
        stop
        sudo                : Execute cmd on remote as root user
        supervisorboot      : Ensure that supervisor is started on boot
        supervisorctl       : Run remote supervisorctl with given args
        supervisorshutdown  : Shutdown the supervisor daemon
        supervisorstartup   : Start the supervisor daemon
        tail
        uploadbuildout      : Upload pinned version of the buildout to the host
        uploadeggs          : Release developer eggs and send to host
  • 20. Rackspace Cloud dashboard
  • 21. Amazon EC2 Cloud deployment

      [ec2]
      recipe = collective.hostout
      extends =
          hostout.cloud
          hostout.ubuntu
          hostout.supervisor
      key_filename = ${keys:ec2_key_filename}
      key = ${keys:ec2_key}
      secret = ${keys:ec2_secret}
      secure = True
      hosttype = ec2
      hostname = myec2server
      hostos = ubuntu-images/ubuntu-lucid-10.04-i386-server
      hostsize = m1.small
      parts =
          instance
          supervisor
  • 23. Amazon EC2 dashboard
  • 24. pasteweb.joelburton.com
  • 27. Sprint idea? Use zopeskel.webui with collective.hostout to make a point-n-click Plone deployment tool
  • 28. mr.awsome • commandline-tool (aws) to manage and control Amazon EC2 instances. • create, delete, monitor and ssh into instances • perform scripted tasks on them (via fabfiles) • automated software deployments • creating backups - each with just one call from the commandline http://pypi.python.org/pypi/mr.awsome
  • 29. Make a buildout

      [buildout]
      parts = aws

      [aws]
      recipe = zc.recipe.egg
      eggs = mr.awsome
      entry-points =
          aws=mr.awsome:aws
          assh=mr.awsome:aws_ssh
      arguments = configpath="${buildout:directory}/etc/deployment"
  • 31. Make an aws.conf file

      [securitygroup:demo-server]
      description = Our Demo-Server
      connections =
          tcp 22 22 0.0.0.0/0
          tcp 80 80 0.0.0.0/0

      [instance:demo-server]
      keypair = default
      securitygroups = demo-server
      region = us-east-1
      placement = us-east-1a
      # Ubuntu 10.04 Lucid, Canonical, ubuntu@, EBS boot
      image = ami-714ba518
      startup_script = startup-demo-server
      fabfile = fabfile.py
  • 33. startup-demo-server

      #!/bin/bash
      set -e -x
      export DEBIAN_FRONTEND=noninteractive
      apt-get update && apt-get upgrade -y
      apt-get -y install build-essential python2.4-dev python-imaging \
          libjpeg-dev libfreetype6-dev subversion varnish apache2
      svn co ${buildout_url} buildout
      cd buildout
      sudo -u ubuntu python2.4 bootstrap.py
      sudo -u ubuntu ./bin/buildout -v
      sudo -u ubuntu ./bin/supervisord
  • 34. One command to launch a new instance: $ ./bin/aws start demo-server
  • 35. cnx.org
  • 36. rhaptos.org
  • 37. Silverlining • Inspired by Google App Engine • Can create and destroy virtual servers, using a Cloud service API (something supported by libcloud). • Sets up an Ubuntu server to a known configuration. • Deploys Python web applications to these servers. http://cloudsilverlining.org/
  • 38. Getting started

      $ virtualenv -p python2.6 silver
      $ silver/bin/pip install -r http://bitbucket.org/ianb/silverlining/raw/tip/requirements.txt
      $ silver init myapp-app
      $ cd myapp-app
      $ ln -s src/myapp/silver-app.ini app.ini

    Edit the app.ini file: runner = src/myapp/silver-runner.py
    See examples for Django, Pylons, repoze.bfg and even PHP!
    Come to the sprints where we'll get it working with Zope!
  • 39. Common deployment configurations
  • 40. Apache -> Zope
  • 41. Apache -> Varnish -> Zope
  • 42. Apache -> Varnish -> Pound/HAProxy -> Zope
  • 43. Alternatives • Apache vs. Nginx • Varnish vs. Squid • Pound vs. HAProxy
  • 44. Vanilla Plone + Apache [diagram: Internet -> Webserver (Apache) -> Zope] • “Classic” setup • Practical (ports, statistics, etc.) • Configuration: virtualhosts, advanced solutions, etc.
  • 45. Plone + Varnish + Apache [diagram: Internet -> Webserver -> Cache Sys -> Zope] • Reduced response time • More user load • Lower machine load
  • 46. Architecture on a multicore server [diagram: Internet -> Webserver -> Cache Sys -> Load Balancer]
  • 47. Distributed architecture [diagram: Internet -> Webserver -> Cache Sys -> Load Balancer, spread across servers s1-s5]
  • 48. Operating Systems Linux FreeBSD Mac OS X Windows
  • 49. Operating Systems Linux Most popular / most well-supported FreeBSD Mac OS X Windows
  • 50. Operating Systems Linux Most popular / most well-supported FreeBSD Second most popular Mac OS X Windows
  • 51. Operating Systems Linux Most popular / most well-supported FreeBSD Second most popular Mac OS X Mostly used for development Windows
  • 52. Operating Systems Linux Most popular / most well-supported FreeBSD Second most popular Mac OS X Mostly used for development Windows Least popular / Can use Enfold Server
  • 53. Ubuntu is our preferred OS • Python friendly • Debian compatible (apt-get package mgmt) • Great community • Commercially supported (Canonical) • Landscape for easy systems administration • Eucalyptus for private cloud • Official Amazon AMIs from Canonical
  • 54. zc.buildout Make repeatable deployment
  • 55. zc.buildout Make repeatable deployment http://plone.org/documentation/tutorial/buildout
  • 56. Buildout benefits • Provides a way to make repeatable deployments • Manages dependencies and versions • Many recipes available for common development and deployment tasks
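    As a sketch of how version pins and your own egg server fit into this (the version numbers, the my.policy egg and the packages.example.com URL below are placeholders, not part of the talk's buildout):

      [buildout]
      # look on your own egg server first, so deployments don't depend on the cheeseshop
      find-links = http://packages.example.com/eggs
      versions = versions

      [versions]
      # pin every egg so that re-running the buildout is repeatable
      plone.recipe.zope2instance = 3.6
      my.policy = 1.2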
  • 57. Default.cfg file

    Tells buildout to use a local cache of eggs and downloads, so the next time you run a buildout it won't have to download them again.

      $ mkdir ~/.buildout
      $ cd ~/.buildout
      $ mkdir eggs
      $ mkdir downloads
      $ gedit default.cfg

    Create a file /home/natea/.buildout/default.cfg:

      [buildout]
      executable = /usr/bin/python2.4
      eggs-directory = /home/natea/.buildout/eggs
      download-cache = /home/natea/.buildout/downloads
      download-directory = /home/natea/.buildout/downloads

    http://plone.org/documentation/tutorial/buildout/creating-a-buildout-defaults-file/
  • 58. virtualenv
  • 59. virtualenv Create isolated Python environments
  • 60. virtualenv Create isolated Python environments http://pypi.python.org/pypi/virtualenv
  • 61. Virtualenv • Virtualenv is a tool to create isolated Python environments. • Ex: One application needs version 1 of LibFoo, another app needs version 2. • How can you use both these applications?
  • 62. Virtualenv continued... • If you install everything into /usr/lib/ python2.4/site-packages, you end up in a situation where you unintentionally upgrade an app that shouldn't be upgraded. • Or more generally, what if you want to install an application and leave it be? If an application works, any change in its libraries or the versions of those libraries can break the application.
  • 63. Virtualenv continued... • Also, what if you can't install packages into the global site-packages directory? For instance, on a shared host. • creates an environment that has its own installation directories • doesn't share libraries with other virtualenv environments (and optionally doesn't use the globally installed libraries either).
  • 64. Create buildout from scratch • Create a virtualenv • Activate the virtualenv • Download and run the bootstrap.py file • Create the buildout.cfg file • Run the buildout
  • 65. Make virtualenv

      $ sudo easy_install virtualenv
      $ cd ~
      $ virtualenv -p /usr/bin/python2.4 py24env
      $ source py24env/bin/activate
      (py24env)$ mkdir buildout
      (py24env)$ cd buildout
      (py24env)$ wget http://python-distribute.org/bootstrap.py

    Now make a buildout.cfg file from the next slide.
  • 66. buildout.cfg

      [buildout]
      extends = http://dist.plone.org/release/3.3.5/versions.cfg
      versions = versions
      find-links = http://dist.plone.org/thirdparty
      parts =
          zope2
          instance

      [zope2]
      recipe = plone.recipe.zope2install
      url = ${versions:zope2-url}
      fake-zope-eggs = true

      [instance]
      recipe = plone.recipe.zope2instance
      zope2-location = ${zope2:location}
      user = admin:admin
      http-address = 8080
      eggs =
          PIL
          Plone
  • 67. Run the buildout / start the instance

      (py24env)$ python bootstrap.py
      Creating directory '/home/natea/buildout/bin'.
      Creating directory '/home/natea/buildout/parts'.
      Creating directory '/home/natea/buildout/eggs'.
      Creating directory '/home/natea/buildout/develop-eggs'.
      Generated script '/home/natea/buildout/bin/buildout'.
      (py24env)$ bin/buildout -v
      ...
      Generated script '/home/natea/buildout/bin/instance'.
      Generated script '/home/natea/buildout/bin/repozo'.
      (py24env)$ bin/instance fg
  • 68. Version control Bring sanity to your software development team.
  • 69. Version control Bring sanity to your software development team. http://plone.org/documentation/tutorial/best-practices/source-code-management/
  • 70. Typical problem
  • 71. Copy-modify-merge
  • 72. Both users have each others' changes
  • 73. Why use version control?
  • 74. Why use version control? • Know what was changed in the source code
  • 75. Why use version control? • Know what was changed in the source code • Know who changed it and when they did it
  • 76. Why use version control? • Know what was changed in the source code • Know who changed it and when they did it • Know why they changed it (commit msg)
  • 77. Why use version control? • Know what was changed in the source code • Know who changed it and when they did it • Know why they changed it (commit msg) • Roll back to previous version (if breakage)
  • 78. Why use version control? • Know what was changed in the source code • Know who changed it and when they did it • Know why they changed it (commit msg) • Roll back to previous version (if breakage) • Easily merge changes from multiple authors
  • 79. Why use version control? • Know what was changed in the source code • Know who changed it and when they did it • Know why they changed it (commit msg) • Roll back to previous version (if breakage) • Easily merge changes from multiple authors • Complete history of your source code means easier for new developers to get up-to-speed
  • 80. Common version control systems Traditional VCS • CVS • Subversion • Perforce • Visual Sourcesafe
  • 81. Common version control systems
    Traditional VCS: CVS, Subversion, Perforce, Visual SourceSafe
    Distributed VCS: Git, Mercurial, Bazaar, Darcs
  • 82. Subversion • Natural successor to CVS • Open source but commercially supported • Adopted by the Plone community • svn.plone.org (official Plone code) • svn.plone.org/svn/collective • Best support in complementary tools (i.e. buildout)
  • 83. Subversion works on all operating systems • Ubuntu/Debian Linux: • apt-get install subversion • Windows • TortoiseSVN (tortoisesvn.tigris.org) • Mac OS X • SCPlugin (scplugin.tigris.org) • TextMate Subversion bundle
  • 84. Typical work cycle 1. Update your working copy svn update 2. Make changes svn add svn delete svn copy svn move
  • 85. Examine your changes 3. Examine your changes svn status svn diff 4. Possibly undo some changes svn revert
  • 86. Resolve conflict and commit changes 5. Resolve conflicts (merge others' changes) svn update svn resolve 6. Commit your changes svn commit
  • 87. Revisions are snapshots of the code repository
  • 88. Buildout configs Make separate buildout profiles for development, staging, production
  • 89. Create a profiles dir

      buildout/
          buildout.cfg (extends devel, prod or staging)
          profiles/
              base.cfg
              devel.cfg (extends base.cfg)
              prod.cfg (extends base.cfg)
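    A minimal sketch of how the top-level buildout.cfg can select one of these profiles (on a development machine the extends line would point at profiles/devel.cfg instead):

      [buildout]
      # pick the profile for this machine
      extends = profiles/prod.cfg

    profiles/prod.cfg then builds on the shared base and adds the production-only parts used later in the talk, roughly:

      [buildout]
      extends = base.cfg
      parts +=
          zeoserver
          supervisor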
  • 90. Zope Enterprise Objects (ZEO)
  • 91. Zope Enterprise Objects (ZEO) Scale your web application easily with ZEO
  • 92. Zope Enterprise Objects (ZEO) Scale your web application easily with ZEO http://zope2.zope.org/about-zope-2/six-reasons-for-using-zope/zope-is-highly-scalable
  • 93. ZEO • client/server technology • splits the frontend servers from the backend database • simply add more frontend servers to scale • use a load balancer to distribute incoming requests
  • 94. Add ZEO to prod.cfg

      [buildout]
      extends = base.cfg
      parts +=
          instance
          zeoserver

      [instance]
      zeo-client = True
      zeo-address = 8100

      [zeoserver]
      recipe = plone.recipe.zope2zeoserver
      zope2-location = ${zope2:location}
      zeo-address = 8100
  • 95. Supervisor
  • 96. Supervisor Easily monitor and control running processes
  • 97. Supervisor Easily monitor and control running processes http://supervisord.org/
  • 98. Supervisor • one place to start, stop and monitor all processes • provides both command line and web interface • recipes for starting/stopping zope, zeo, pound, varnish, etc.
  • 99. Add supervisor to prod.cfg

      [buildout]
      ...
      parts += supervisor

      [supervisor]
      recipe = collective.recipe.supervisor
      port = 9001
      user = admin
      password = admin
      plugins = superlance
      supervisord-conf = ${buildout:directory}/etc/supervisord.conf
      programs =
          10 zeoserver ${zeoserver:location}/bin/runzeo ${zeoserver:location} true
          20 instance ${instance:location}/bin/runzope ${instance:location} true
      eventlisteners =
          Memmon TICK_60 ${buildout:bin-directory}/memmon [-p instance=400MB]
  • 100. Re-run buildout & start supervisor

      $ bin/buildout -v
      ...
      Generated script '/Users/nateaune/Documents/instances/budapesttraining/buildout/bin/supervisorctl'.
      $ bin/supervisord
      $ bin/supervisorctl start all
      $ bin/supervisorctl status
      Memmon      RUNNING   pid 8856, uptime 0:00:07
      instance    RUNNING   pid 8858, uptime 0:00:07
      zeoserver   RUNNING   pid 8857, uptime 0:00:07
  • 101. Other supervisor commands

      Stop all processes:         $ bin/supervisorctl stop all
      Stop just the instance:     $ bin/supervisorctl stop instance
      Restart all processes:      $ bin/supervisorctl restart all
      Restart just the instance:  $ bin/supervisorctl restart instance
      Shutdown supervisor:        $ bin/supervisorctl shutdown
  • 102. Supervisor web interface Go to http://localhost:9001 user: admin pw: admin
  • 103. Apache2 World's most popular web server
  • 104. Vanilla Plone + Apache [diagram: Internet -> Webserver (Apache) -> Zope] • “Classic” setup • Practical (ports, statistics, etc.) • Configuration: virtualhosts, advanced solutions, etc.
  • 105. Benefits of Apache • Serve other services alongside Zope from the same IP/domain • Complex configurations are easy to express • Apache is mature and well-supported (Nginx is newer and fewer people are familiar with it) • You can support SSL connections
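    The virtualhost configuration mentioned on slides 44/104 is not spelled out in the deck; a minimal sketch of the usual Apache-in-front-of-Zope rewrite, assuming the instance from slide 66 on port 8080, a Plone site with the id "Plone", the placeholder domain www.example.com, and mod_rewrite/mod_proxy enabled:

      <VirtualHost *:80>
          ServerName www.example.com
          RewriteEngine On
          # route everything through Zope's VirtualHostMonster so Plone generates
          # links for www.example.com instead of localhost:8080
          RewriteRule ^/(.*) http://127.0.0.1:8080/VirtualHostBase/http/www.example.com:80/Plone/VirtualHostRoot/$1 [L,P]
      </VirtualHost>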
  • 106. Run Apache bench

      $ ab -c 10 -n 200 http://localhost/
      (please note that you need the trailing slash after "localhost")
      ...
      Server Software:        Zope/(Zope
      Server Hostname:        localhost
      Server Port:            80
      Document Path:          /
      Document Length:        20499 bytes
      Concurrency Level:      10
      Time taken for tests:   25.092 seconds
      Complete requests:      200
      Failed requests:        0
      Write errors:           0
      Total transferred:      4164000 bytes
      HTML transferred:       4099800 bytes
      Requests per second:    7.97 [#/sec] (mean)
      Time per request:       1254.596 [ms] (mean)
      Time per request:       125.460 [ms] (mean, across all concurrent requests)
      Transfer rate:          162.06 [Kbytes/sec] received
  • 107. CacheFu Wonder pack to make your Plone site snappy!
  • 108. CacheFu • Provides common caching components for Zope/Plone • Does Zope-based caching using RAMCache and PageCache • Also sends proper headers to upstream caching proxy • Can send purge requests to caching proxy to remove a modified item from the cache
  • 109. Install CacheFu

    Add Products.CacheSetup (CacheFu) to prod.cfg:

      [buildout]
      extends = base.cfg
      ...

      [instance]
      zeo-client = True
      zeo-address = 8100
      eggs += Products.CacheSetup

    Re-run buildout and restart processes with Supervisor:

      $ bin/buildout -v
      $ bin/supervisorctl restart all
  • 110. Activate CacheFu • Go to http://localhost:8080/Plone • Click on Site Setup • Click on Add/remove products • Click to install CacheSetup • Click on Cache Configuration Tool • Check the Enable CacheFu checkbox • Select Without Caching Proxy • Click Save
  • 111. Run Apache bench again

      $ ab -c 10 -n 200 http://localhost/
      (please note that you need the trailing slash after "localhost")
      ...
      Server Software:        Zope/(Zope
      Server Hostname:        localhost
      Server Port:            80
      Document Path:          /
      Document Length:        20504 bytes
      Concurrency Level:      10
      Time taken for tests:   5.143 seconds
      Complete requests:      200
      Failed requests:        0
      Write errors:           0
      Total transferred:      4202966 bytes
      HTML transferred:       4100800 bytes
      Requests per second:    38.89 [#/sec] (mean)
      Time per request:       257.146 [ms] (mean)
      Time per request:       25.715 [ms] (mean, across all concurrent requests)
      Transfer rate:          798.08 [Kbytes/sec] received
  • 112. ~5x speed improvement with CacheFu: Zope only 7.97 req/sec, with CacheFu 38.89 req/sec
  • 113. Varnish Modern caching proxy.
  • 114. Plone + Varnish + Apache [diagram: Internet -> Webserver -> Cache Sys -> Zope] • Reduced response time • More user load • Lower machine load
  • 115. Edit the prod.cfg file

      [buildout]
      extends = base.cfg
      ...
      parts +=
          ...
          instance
          instance1
          varnish-build
          varnish

      [hosts]
      zeoserver = 127.0.0.1
      instance1 = 127.0.0.1
      varnish = 127.0.0.1
      supervisor = 127.0.0.1

      [ports]
      zeoserver = 8100
      instance1 = 8401
      varnish = 8001
      supervisor = 9001

      [users]
      zope = plone
      varnish = plone
      supervisor = plone

      [downloads]
      varnish = http://downloads.sourceforge.net/varnish/varnish-2.0.4.tar.gz
  • 116. Update instance settings

      [instance]
      zeo-client = true
      zeo-address = ${zeoserver:zeo-address}
      effective-user = ${users:zope}
      zodb-cache-size = 5000
      zeo-client-cache-size = 300MB
      eggs += Products.CacheSetup
  • 117. Add instance1 and zeoserver

      [instance1]
      recipe = collective.recipe.zope2cluster
      instance-clone = instance
      http-address = ${hosts:instance1}:${ports:instance1}

      [zeoserver]
      # See http://pypi.python.org/pypi/plone.recipe.zope2zeoserver
      recipe = plone.recipe.zope2zeoserver
      zope2-location = ${zope2:location}
      zeo-address = ${hosts:zeoserver}:${ports:zeoserver}
  • 118. Add varnish parts

      [varnish-build]
      recipe = zc.recipe.cmmi ==1.2.1
      url = ${downloads:varnish}

      [varnish]
      # http://pypi.python.org/pypi/plone.recipe.varnish
      recipe = plone.recipe.varnish:instance
      daemon = ${buildout:directory}/parts/varnish-build/sbin/varnishd
      backends = ${instance1:http-address}
      bind = ${hosts:varnish}:${ports:varnish}
      cache-size = 50M
      user = ${users:varnish}
      mode = foreground
  • 119. Edit the supervisor part

      [supervisor]
      # http://pypi.python.org/pypi/collective.recipe.supervisor
      recipe = collective.recipe.supervisor
      port = ${ports:supervisor}
      user = admin
      password = admin
      plugins = superlance
      supervisord-conf = ${buildout:directory}/etc/supervisord.conf
      serverurl = http://${hosts:supervisor}:${ports:supervisor}
      programs =
          10 zeoserver ${zeoserver:location}/bin/runzeo ${zeoserver:location} true ${users:zope}
          20 instance1 ${instance1:location}/bin/runzope ${instance1:location} true ${users:zope}
          30 varnish ${buildout:bin-directory}/varnish ${buildout:directory} true ${users:varnish}
      eventlisteners =
          Memmon TICK_60 ${buildout:bin-directory}/memmon [-p instance1=400MB]
  • 120. Enable caching proxy • Go to http://localhost:9001/Plone • Click on Site Setup • Click on Cache Configuration Tool • Select With Caching Proxy • Choose Purge with VHM URLs (Varnish behind Apache) • Use http://localhost:80 for the Site Domains field • Use http://127.0.0.1:8001 for the Proxy Cache domains field • Click Save
  • 121. Re-run buildout and restart supervisor

      $ bin/buildout -v
      ...
      $ bin/supervisorctl shutdown
      $ bin/supervisord
      $ bin/supervisorctl status
      Memmon      RUNNING   pid 7001, uptime 0:19:25
      instance1   RUNNING   pid 7003, uptime 0:19:25
      varnish     RUNNING   pid 7004, uptime 0:19:25
      zeoserver   RUNNING   pid 7002, uptime 0:19:25

    Go to http://localhost:8001/ to see if Varnish proxies to Zope.
  • 122. Run Apache bench again

      $ ab -c 10 -n 200 http://localhost:8001/Plone
      ...
      Server Software:        Zope/(Zope
      Server Hostname:        localhost
      Server Port:            8001
      Document Path:          /Plone
      Document Length:        21003 bytes
      Concurrency Level:      10
      Time taken for tests:   3.021 seconds
      Complete requests:      200
      Failed requests:        0
      Write errors:           0
      Total transferred:      4307001 bytes
      HTML transferred:       4200600 bytes
      Requests per second:    66.21 [#/sec] (mean)
      Time per request:       151.030 [ms] (mean)
      Time per request:       15.103 [ms] (mean, across all concurrent requests)
      Transfer rate:          1392.46 [Kbytes/sec] received
  • 123. ~8x speed improvement with Varnish: Zope only 7.97 req/sec, with CacheFu 38.89 req/sec, with Varnish 66.21 req/sec
  • 124. Load balancing Distributing the load across multiple Zeo clients
  • 125. Architecture on a multicore server [diagram: Internet -> Webserver -> Cache Sys -> Load Balancer]
  • 126. Load balancing options • HAProxy • Nginx • Hardware load balancer • Pound
  • 127. Pound Simple but effective load balancer
  • 128. Why Pound? • Small, written in C • Well-supported by the Zope/Plone community • Lightweight and easy to configure • Buildout recipes available to build and configure it
  • 129. Edit the prod.cfg file

      [buildout]
      extends = base.cfg
      ...
      parts +=
          ...
          instance2
          pound-build
          pound

      [hosts]
      ...
      instance2 = 127.0.0.1
      pound = 127.0.0.1

      [ports]
      ...
      instance2 = 8402
      pound = 8002

      [users]
      ...
      instance2 = plone
      pound = plone

      [downloads]
      ...
      pound = http://www.apsis.ch/pound/Pound-2.4.4.tgz
  • 130. Add the instance2 part

      [instance2]
      recipe = collective.recipe.zope2cluster
      instance-clone = instance
      http-address = ${hosts:instance2}:${ports:instance2}
  • 131. Add the Pound parts

      [pound-build]
      recipe = plone.recipe.pound:build
      url = ${downloads:pound}

      [pound]
      # http://pypi.python.org/pypi/plone.recipe.pound/
      recipe = plone.recipe.pound:config
      # don't run as a daemon, so supervisord can manage the process
      daemon = 0
      dynscale = 1
      timeout = 30
      bind = ${hosts:pound}:${ports:pound}
      balancers =
          one ${pound:bind} ${instance1:http-address} ${instance2:http-address}
  • 132. Tell Varnish to proxy to Pound

      [varnish]
      # http://pypi.python.org/pypi/plone.recipe.varnish
      recipe = plone.recipe.varnish:instance
      daemon = ${buildout:directory}/parts/varnish-build/sbin/varnishd
      backends = ${pound:bind}
      bind = ${hosts:varnish}:${ports:varnish}
      cache-size = 50M
      user = ${users:varnish}
      mode = foreground
  • 133. Update the supervisor part

      [supervisor]
      ...
      programs =
          10 zeoserver ${zeoserver:location}/bin/runzeo ${zeoserver:location} true ${users:zope}
          20 instance1 ${instance1:location}/bin/runzope ${instance1:location} true ${users:zope}
          30 instance2 ${instance2:location}/bin/runzope ${instance2:location} true ${users:zope}
          40 pound ${buildout:bin-directory}/poundrun ${buildout:directory} true ${users:pound}
          50 varnish ${buildout:bin-directory}/varnish ${buildout:directory} true ${users:varnish}
      eventlisteners =
          Memmon TICK_60 ${buildout:bin-directory}/memmon [-p instance1=400MB]
          Memmon TICK_60 ${buildout:bin-directory}/memmon [-p instance2=400MB]
  • 134. Re-run buildout and restart supervisor

      $ bin/buildout -v
      ...
      $ bin/supervisorctl shutdown
      $ bin/supervisord
      $ bin/supervisorctl status
      Memmon      RUNNING   pid 7001, uptime 0:19:25
      instance1   RUNNING   pid 7003, uptime 0:19:25
      instance2   RUNNING   pid 7003, uptime 0:19:25
      varnish     RUNNING   pid 7004, uptime 0:19:25
      zeoserver   RUNNING   pid 7002, uptime 0:19:25

    Go to http://localhost:8002/ to see if Pound proxies to Zope.
  • 135. Multiple server architecture Use multiple machines to handle different processes.
  • 136. Distributed architecture [diagram: Internet -> Webserver -> Cache Sys -> Load Balancer, spread across servers s1-s5]
  • 137. Logical server architecture [diagram: browser clients reach two frontend hosts (plone-fe-0/1.seas.harvard.edu), each running Apache (HTTP:80, HTTPS:443), Varnish (HTTP:8000), Deliverance (HTTP:8889), Pound (HTTP:8220) and Supervisor (HTTP:9999); these proxy to Zope clients (HTTP:8222/8223) on plone-zope-0/1.web.private, which talk to a ZEO server (HTTP:8221) on plone-zeo-0/1.web.private and to LDAP (389), with active and passive failover hosts for each tier]
  • 138. Backups and logfile rotation Repozo
  • 139. Backup strategy • Daily incremental backups • Weekly full backups • Pack database every week before doing full backup • Use the backup script provided by collective.recipe.backup • Be sure to test a restore operation.
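    Done by hand, the weekly pack-plus-full-backup step looks roughly like this (the same pair of scripts is wired into cron on slide 144; zeopack connects to the running ZEO server, so no downtime is needed):

      $ cd /home/natea/buildout
      $ bin/zeopack    # pack the ZODB, dropping old transaction history
      $ bin/backup     # then take the repozo backup of the freshly packed Data.fs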
  • 140. Add to prod.cfg

      [buildout]
      extends = base.cfg
      parts +=
          ...
          backup
          logrotate.conf

      [backup]
      # http://pypi.python.org/pypi/collective.recipe.backup/
      recipe = collective.recipe.backup
  • 141. Add to prod.cfg

      [logrotate.conf]
      recipe = zc.recipe.deployment:configuration
      text =
          rotate 4
          weekly
          create
          compress
          delaycompress

          ${buildout:directory}/var/log/instance1*.log {
              sharedscripts
              postrotate
                  /bin/kill -USR2 $(cat ${buildout:directory}/var/instance1.pid)
              endscript
          }
          ${buildout:directory}/var/log/instance2*.log {
              sharedscripts
              postrotate
                  /bin/kill -USR2 $(cat ${buildout:directory}/var/instance2.pid)
              endscript
          }
          ${buildout:directory}/var/log/zeoserver.log {
              postrotate
                  /bin/kill -USR2 $(cat ${buildout:directory}/var/zeoserver.pid)
              endscript
          }
  • 142. Re-run buildout and try making a backup

      $ cd /home/natea/buildout
      $ bin/buildout -v
      ...
      $ bin/backup
      INFO: Backing up database file: /home/natea/buildout/var/filestorage/Data.fs
            to /home/natea/buildout/var/backups...
      $ ls -l var/backups
      total 8
      -rw-r--r-- 1 plone plone 104 2009-10-27 02:54 2009-10-27-01-54-02.dat
      -rw-r--r-- 1 plone plone  81 2009-10-27 02:54 2009-10-27-01-54-02.fsz
      INFO: Making snapshot backup: /home/natea/buildout/var/filestorage/Data.fs
            to /home/natea/buildout/var/snapshotbackups...
  • 143. Make a snapshot backup

      $ bin/snapshotbackup
      INFO: Making snapshot backup: /home/natea/buildout/var/filestorage/Data.fs
            to /home/natea/buildout/var/snapshotbackups...
      $ ls -l var/snapshotbackups/
      total 8
      -rw-r--r-- 1 plone plone 112 2009-10-27 02:56 2009-10-27-01-56-50.dat
      -rw-r--r-- 1 plone plone  89 2009-10-27 02:56 2009-10-27-01-56-50.fsz
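    To actually test a restore, as slide 139 recommends, collective.recipe.backup also generates restore scripts; assuming the default script names, a test restore looks like:

      $ bin/supervisorctl stop all    # don't restore under a running Zope/ZEO
      $ bin/restore                   # restore Data.fs from the latest backup in var/backups
      $ bin/supervisorctl start all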
  • 144. Backups and logfile rotation

      [backup]
      recipe = collective.recipe.backup
      location = ${zope:datadir}/var/backups
      snapshotlocation = ${zope:datadir}/var/snapshotbackups

      [backup-daily]
      recipe = z3c.recipe.usercrontab
      times = 0 3 * * 0-6
      command = ${buildout:directory}/bin/backup

      [backup-weekly]
      recipe = z3c.recipe.usercrontab
      times = 0 3 * * 6
      command = ${buildout:directory}/bin/zeopack && ${buildout:directory}/bin/backup

      [logrotate]
      recipe = collective.recipe.template
      input = etc/logrotate.conf.tmpl
      output = etc/logrotate.conf

      [logrotate-daily]
      recipe = z3c.recipe.usercrontab
      times = 0 6 * * *
      command = /usr/sbin/logrotate --state ${buildout:directory}/var/logrotate.status ${buildout:direct

      [supervisor-reboot]
      recipe = z3c.recipe.usercrontab
      times = @reboot
      command = ${buildout:directory}/bin/supervisord -c ${buildout:directory}/etc/supervisord.conf
  • 145. Monitoring Munin, Nagios, ZenOss
  • 146. Munin overview • Surveys all your computers and remembers what it saw • Presents all the information in graphs through a web interface • Extensible through many plugins available at http://muninexchange.projects.linpro.no
  • 147. Munin graphs
  • 148. Add these parts to your buildout.cfg file

      [buildout]
      parts +=
          munin-client1
          munin-client2
          munin-node-config

      [munin-client1]
      # http://pypi.python.org/pypi/munin.zope
      recipe = zc.recipe.egg
      eggs = munin.zope
      scripts = munin=munin1
      arguments = http_address='${instance1:http-address}', user='zope'

      [munin-client2]
      # http://pypi.python.org/pypi/munin.zope
      recipe = zc.recipe.egg
      eggs = munin.zope
      scripts = munin=munin2
      arguments = http_address='${instance2:http-address}', user='zope'

      [munin-node-config]
      recipe = collective.recipe.template
      input = ${buildout:directory}/etc/templates/munin-node.conf.in
      output = ${buildout:directory}/etc/munin-node.conf
  • 149. munin-node.conf.in (put this in your buildout/etc/templates directory)

      # Example config-file for munin-node
      log_level 4
      log_file ${buildout:directory}/var/log/munin/munin-node.log
      pid_file ${buildout:directory}/var/run/munin/munin-node.pid
      background 1
      setseid 1
      user zope
      group wheel
      setsid yes

      # Regexps for files to ignore
      ignore_file ~$
      ignore_file .bak$
      ignore_file %$
      ignore_file .dpkg-(tmp|new|old|dist)$
      ignore_file .rpm(save|new)$
      ignore_file .pod$

      # A list of addresses that are allowed to connect. This must be a
      # regular expression, due to brain damage in Net::Server, which
      # doesn't understand CIDR-style network notation. You may repeat
      # the allow line as many times as you'd like
      allow ^127.0.0.1$

      # Which address to bind to;
      host *

      # And which port
      port 4949
  • 150. How to use $ cd buildout $ sudo bin/munin1 install /opt/local/etc/munin/plugins/ customer1 instance1 $ sudo bin/munin2 install /opt/local/etc/munin/plugins/ customer1 instance2 http://localhost:8080/@@munin.zope.plugins/zopethreads
  • 151. Munin resources • Munin tutorial: http://waste.mandragor.org/munin_tutorial/munin.html • MacOSX installation instructions: http://munin.projects.linpro.no/wiki/DarwinInstallation • Linux installation instructions: http://munin.projects.linpro.no/wiki/LinuxInstallation • Munin on Ubuntu: http://github.com/jnstq/munin-nginx-ubuntu