WebLion Hosting: Leveraging Laziness, Impatience, and Hubris

Behind the scenes of WebLion's Plone hosting service, which uses Debian packages and a custom repository to deliver reliable, unattended updates to a cluster of heterogeneous departmental virtual servers. And it's all available for your own use for free.

  • (Don’t say anything; this is just a splash slide.)

    You can think of WL Hosting as…
  • a Plone hosting appliance

    came out of 2 realizations: lots more to a Plone deployment than Zope & Plone. \\ Python, Apache, Squid, cron jobs for DB maint & backups, SNMP for remote monitoring, … Then kernel, libs, etc.

    2nd thing: I realized there’s a strangeness in WebLion’s business model…
  • clients vs. partners: don’t do stuff for them (except multi-dept usefulnesses) \\ advantages: scalability, distribution of knowledge across the organization, keeping our own team lean and agile.

    Didn’t realize: Plone apparently hard to sysadmin
  • started w/just a few partner departments \\ teach individually how to set up production-worthy Plone stack \\ wiki multiplied our efforts \\ {CentOS, Ubuntu, Solaris, Red Hat}, strange problems w/SELinux, slight differences take all day to figure out

    However, as we accumulated more and more partners, this didn’t scale. We don’t have any dedicated sysadmins on the team, and I was spending a huge chunk of my time teaching and debugging people’s setups and not enough coding. I’m really a programmer, after all, and only a sysadmin by necessity. So, bringing a programmer’s point of view to the problem, I thought “How do I do all this stuff once instead of repeating it for every partner?”

    It was evident we’d have to change our don’t-do-anything business model, but how to do it? Well, in an ideal world, everybody’d have…
  • …cookie cutter sites \\ few enormous servers \\ sharing Zope instances (same Products)

    But that ain’t gonna happen. They’re gonna need different… 1 2 (and versions of products) 3 4 5.

    So the question became not “How do we build a gigantic megaserver that can take care of everybody?” but “How do we deploy a bunch of similar-but-not-identical servers?”
  • Gets the stuff on there. Upgrades?
  • Puppet & cfengine definitely contenders \\ couple things I didn’t like

    Command-&-control philosophy. Assume every machine updates in lockstep: cluster-oriented. \\ I want the option of telling control-freak sysadmins “Sure, you can use our stuff. Just set up your own box, and hit ‘update’ manually when you see fit.” without running into situations where a config file assumes a certain version of the software and is surprised. (Sketch of that manual mode after this list.)

    Cross-OS abstraction: major feature: manage Windows & UNIXes & Mac from 1 conf file \\ invent own language \\ we’ll pick 1 OS \\ Don’t need cross-OS abstraction. \\ Don’t pay for another language in learning time. \\ Want people to hack on this system as easily as possible.

    Non-concurrent: updates to config not synced with updates to the software it configures, which could conceivably cause problems, for example if a new version of a package changes the meaning of a config directive.
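
    To be concrete about that opt-in manual mode: since everything ends up riding on APT (as described later in the talk), “hit ‘update’ manually” is nothing more than the stock commands, run whenever the local admin sees fit. A minimal sketch, nothing WebLion-specific in it:

        # Manual mode: the local admin updates on their own schedule.
        sudo apt-get update        # refresh package lists from all configured repos
        sudo apt-get dist-upgrade  # apply pending upgrades, resolving dependencies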
  • Considered buildout. Popular in Plone because Jim Fulton (Zope fame) wrote it. For building and configuring Zope instances. But people extended to build & config Apache, Squid, Varnish, cron jobs.

    buildout’s a fine development tool. I use it myself all the time. But it doesn’t work in my mass-deployment situation.
    Redoes existing work. There are already excellent packages of these, QA’d by thousands of Debian users. And wherever you stop, you’re going to have some kind of dependency impedance mismatch—are you going to repackage the kernel?
    At least 3 network points of failure for a default Plone buildout. About half a dozen times a week, I rescue some poor user who can’t run buildout because PyPI is down, plone.org is down, zope.org is down, or PSC is broken. You can mirror it all yourself, but geez.
    On failure, breaks the site. If any of the above—or any other kind of error—happens after buildout’s begun to change things, there’s no turning back. You can’t let local admins write to buildout.cfg, because they can make it run arbitrary, crashing code during nightly unattended updates.
    Package QA is lacking. There’s no vetting process for putting up new versions either; all the QA is the developer’s responsibility. Martin Aspeli recognizes this problem, saying “publishing known good sets of versions is quite painful”. (Ironically, he solved this problem by introducing yet another network service, good-py, which went down several days later.)
    Not truly repeatable. I’ve seen people put up new versions on PyPI with the same version numbers as old ones. So even if you pin your versions (see the sketch below), you’re hosed.

    So buildout wasn’t really suitable for unattended deployment. But what about…
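
    For reference, here’s what version pinning looks like in a buildout config; the package names and versions are illustrative, not a real WebLion buildout. The point of the “not truly repeatable” complaint is that these pins match version strings only, so a re-uploaded distribution with a recycled version number sails right through them:

        [buildout]
        versions = versions

        [versions]
        # Illustrative pins. If someone re-uploads “somepackage 1.2” to PyPI
        # with different contents, this pin still matches and you get the new code.
        zope.interface = 3.4.1
        somepackage = 1.2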
  • …Debian packages?

    We need them anyway to manage the kernel, libraries, and basic services.
    Unbeatable QA. Just outrageous. Debian has 3 QA tiers: unstable, testing, stable (packages land immediately, after about 10 days, and roughly once every year and a half, respectively). We run stable. Actually, we’re one behind, but we still get another year of full security support.
    Nearly atomic. High-level stuff like Apache gets updated at darn close to the same time as low-level stuff like the libraries it depends on, making for fewer states. And fewer states means fewer unexpected behaviors.
    Tolerance of local changes. APT has been around since 1998 and is very mature. It has a sophisticated framework for tolerating local config changes during upgrades. No paving over your edits; it asks you what to do instead.
    Reliable. Downloads everything before changing anything. If something’s unreachable, the stuff that depends on it doesn’t happen. And if anything unexpected happens during installation…
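
    One common recipe for making that fully unattended (the dpkg options are real; that WebLion used exactly these flags is my assumption):

        # Nightly, non-interactive upgrade. --force-confold keeps any conffile
        # the local admin has edited; --force-confdef takes dpkg’s default action
        # where one exists, so no prompt ever blocks the run.
        apt-get update
        apt-get -y \
            -o Dpkg::Options::=--force-confdef \
            -o Dpkg::Options::=--force-confold \
            dist-upgrade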
  • …there are a whole bunch of bailout points that return things to a working state.

    This is a breakdown of how the APT system installs or upgrades a package. Each smiley face marks a point where something might go wrong, and there’s a remediation step to return things to a working state. (A simplified textual version of the sequence follows below.)

    And it’s not until way down here at this big red line that you’re committed to the upgrade; it can roll back at any point before that.

    Imagine if buildout did this! Imagine how many fewer people we’d have showing up in the #plone channel screaming about how it broke their install!
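
    For readers without the slide, here’s a rough textual version of that diagram, simplified from Debian Policy’s maintainer-script flowcharts (the real one has more branches):

        # dpkg upgrade sequence, heavily simplified. Every failure above the
        # commit line unwinds via the abort-* calls to a working old version.
        old prerm upgrade        # fails? old postinst abort-upgrade; old pkg keeps running
        new preinst upgrade      # fails? new postrm abort-upgrade, old postinst abort-upgrade
        unpack new files         # old files held as backups until the commit point
        old postrm upgrade
        # ---- point of no return: backups discarded ----
        new postinst configure   # a failure here leaves the package half-configured,
                                 # to be re-tried, rather than rolled back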
  • So, we went with Debian packages \\ University mirror \\ Local repo for our own stuff \\ GPG signed \\ Bootstrapping:
    hosting-node: Everything that should be on every box \\ Kerberos, ssh, ntp, kernel upgrader, sudo, snmpd \\ Want something on all the boxes? Add it to this thing’s dependencies. (Sketch below.)
    auto-update: Don’t want it? Don’t install it. Nightly automatic updates, 4–5am.
    plone-3.1-stack: All the rest \\ Packaged Plone.
    center: “config packages” \\ shiny new way to package config using framework \\ Tim Abbott @ MIT
    massdeploy \\ I mean…
    config-package-dev
    0:20
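
    A sketch of those moving parts; every URL, path, and dependency here is hypothetical (the real hosting-node dependency list is WebLion’s own):

        # /etc/apt/sources.list.d/weblion.list
        # the university Debian mirror:
        deb http://mirror.example.edu/debian lenny main
        # the local, GPG-signed repository for WebLion’s own packages:
        deb http://weblion.example.edu/debian lenny main

        # debian/control stanza for the hosting-node metapackage:
        # want something on every box? add it to Depends and let APT do the rest.
        Package: hosting-node
        Architecture: all
        Depends: openssh-server, krb5-user, ntp, sudo, snmpd
        Description: baseline services for every WebLion hosting box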
  • framework for building Debian packages that replace existing configuration safely \\ for example, override stock Squid conf \\ divert-and-symlink (sketched below) \\ supports local changes (but you give up auto updates) \\ even if you try to keep auto updates, unattended upgrade fails safe \\ Diverted file can then continue to receive upstream updates from Debian Stable so that, if we were to remove a config package, operations would resume with an up-to-date upstream config. \\ In Lenny at least. \\ dpkg bug before that.
    wanted common caretaking system for Plone, Apache, Squid; kernel, libraries\\ buildout power users worked toward but couldn’t take the whole way \\ config-package-dev brings final piece
    Frankly, starting with Debian \\ already packages everything \\ adding Plone \\ easier than starting with buildout \\ packages Plone \\ trying to add everything else in the world.
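
    The divert-and-symlink trick itself is plain dpkg machinery. A hand-rolled sketch of what a config package arranges in its maintainer scripts (package name and paths hypothetical):

        # Push the stock conffile aside. From now on dpkg applies upstream
        # updates to the diverted name, so the original keeps tracking stable.
        dpkg-divert --package weblion-squid-config --rename \
            --divert /etc/squid/squid.conf.squid-orig --add /etc/squid/squid.conf

        # Install our config in its place as a symlink to the packaged copy;
        # removing the config package can then cleanly restore the original.
        ln -s /usr/share/weblion-squid-config/squid.conf /etc/squid/squid.conf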
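    For the curious, this is roughly what the divert-and-symlink move looks like at the dpkg level. config-package-dev generates packaging that does this for you; the package name and file paths below are illustrative, not our exact ones:

        # Move Squid's stock conffile aside. dpkg keeps shipping upstream updates
        # to the diverted path, so removing the config package later restores a
        # current upstream conf.
        dpkg-divert --package weblion-squid-config --rename \
            --divert /etc/squid/squid.conf.stock /etc/squid/squid.conf
        # Point the canonical path at the config our package ships.
        ln -s /etc/squid/squid.conf.weblion /etc/squid/squid.conf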
  • An overview of what we use config-package-dev for:

    auto-update: tweaks cron-apt's configuration (a sketch of the nightly run follows this note)
    squid-config: one conf file to rule them all
    plone-site-config: listens on localhost, hooks up to ZEO, restarts leaky Zope, packs the database

    Not on this diagram:
    weblion-krb5-config
    weblion-snmpd-config
    weblion-ssh-server-config

    Crown jewel: apache-config
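    As an illustration of that nightly run, here is the kind of cron job the auto-update package might install. The file path and exact minute are assumptions; the dist-upgrade invocation is the one shown on the config-package-dev slides:

        # Hypothetical /etc/cron.d/weblion-auto-update: upgrade in the 4-5 a.m.
        # window, keeping locally modified conffiles instead of prompting (a
        # prompt would hang an unattended run).
        30 4 * * * root aptitude update -q && aptitude dist-upgrade -y -o Dpkg::Options::=--force-confold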
  • apache-config is the crown jewel not because it uses config-package-dev in some fancy way, but because it is a fancy inversion-of-control framework: while the Squid conf and zope.conf stay static, Apache configuration is customized per partner.

    The “primary” vhost is full of Include directives; partners fill out the tiny conffiles the vhost Includes. Those Includes act as contracts, and the whole thing is made out of them. (A sketch of filling in one such conffile follows this note.)

    Example fixes shipped through it so far: closing the HTTP_REMOTE_USER hole and routing authenticated traffic through Squid. The pattern has worked really well; I recommend it.

    It also supports additional vhosts and alias vhosts.
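    A minimal sketch of honoring that contract, using a snippet path from the vhost listing reproduced later in the transcript; the hostname is a placeholder:

        # Fill in the one-directive conffile the packaged vhost Includes, then
        # check syntax and reload Apache.
        echo "ServerName www.example.psu.edu" > /etc/weblion-apache-config/servername.conf
        apache2ctl configtest && /etc/init.d/apache2 reload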
  • and wait for Zenoss…
  • …which totally rocks as a monitoring and trend-graphing system, btw \\ to send you a mail screaming about how the servers are down \\ I swear, that thing has ponies everywhere.
  • Three dists on the server, mirroring Debian's structure, except that they are all Etch. (A sources.list sketch follows this note.)

    New stuff enters at etch-unstable after as much testing as possible; once we've tested for a clean upgrade, it moves to etch-testing; etch-testing then moves as a whole into etch, our stable.

    When we get to Lenny, etch will feed lenny-unstable, and packages will work their way up the same ladder.
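    A sketch of how a box picks its rung on that ladder: one sources.list line per machine, using the dist names from the release-process slides. Which class of box tracks which dist is our own policy, stated here as an assumption:

        # Partner production boxes track stable:
        deb http://deb.weblion.psu.edu/debian etch main non-free contrib
        # Our staging and bleeding-edge test boxes would instead use one of:
        # deb http://deb.weblion.psu.edu/debian etch-testing main non-free contrib
        # deb http://deb.weblion.psu.edu/debian etch-unstable main non-free contrib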
  • How we manage the project: Trac, with one milestone per release of stable.
  • WebLion Hosting: Leveraging Laziness, Impatience, and Hubris

    1. WebLion Hosting Leveraging laziness, impatience, and hubris ErikRose@psu.edu http://weblion.psu.edu/wiki/ErikRose
    2. What is WebLion Hosting?
    3. What is WebLion Hosting? A Plone hosting appliance
    4. The Dark Secret of WebLion We don’t actually do anything.*
    5. The Dark Secret of WebLion We don’t actually do anything.* *Ssh, don’t tell my boss.
    6. A scalable solution To save consulting effort
    7. A scalable solution To save consulting effort College of Business Dairy and Animal Science The Huck Institutes Teaching and Learning with Technology
    8. A scalable solution To save consulting effort [a word cloud of Penn State partner units, including the College of Business, Dairy and Animal Science, The Huck Institutes, Teaching and Learning with Technology, the College of Education, the College of Communications, the Alumni Association, the Population Research Institute, ITS, and many more]
    9. Cookie cutter sites
    10. Partners need different…
    11. Partners need different… Plone versions
    12. Partners need different… Plone versions Products
    13. Partners need different… Plone versions Products Apache configs
    14. Partners need different… Plone versions Products Apache configs Firewall settings
    15. Partners need different… Plone versions Products Apache configs Firewall settings Other services
    16. Mass-installation tools Disk images Fully Automatic Installation (FAI)
    17. Mass-installation tools ☹ Disk images Fully Automatic Installation (FAI) Upgrades?
    18. Configuration management tools
    19. Configuration management tools Puppet and cfengine
    20. Configuration management tools Puppet and cfengine ☹ Command-and-control philosophy
    21. Configuration management tools Puppet and cfengine ☹ Command-and-control philosophy ☹ A new language
    22. Configuration management tools Puppet and cfengine ☹ Command-and-control philosophy ☹ A new language ☹ Non-concurrent with software updates
    23. Buildout
    24. Buildout The right tool for the wrong job
    25. Buildout The right tool for the wrong job Redoes existing work…worse
    26. Buildout The right tool for the wrong job Redoes existing work…worse Every server is a point of failure.
    27. Buildout The right tool for the wrong job Redoes existing work…worse Every server is a point of failure. On failure, breaks the site
    28. Buildout The right tool for the wrong job Redoes existing work…worse Every server is a point of failure. On failure, breaks the site Package QA is lacking.
    29. Buildout The right tool for the wrong job Redoes existing work…worse Every server is a point of failure. On failure, breaks the site Package QA is lacking. “Publishing known good sets of versions is quite painful.” —Martin Aspeli
    30. Buildout The right tool for the wrong job Redoes existing work…worse Every server is a point of failure. On failure, breaks the site Package QA is lacking. “Publishing known good sets of versions is quite painful.” —Martin Aspeli Not repeatable
    31. Advanced Packaging Tool Or “APT”
    32. Advanced Packaging Tool Or “APT” We need them anyway.
    33. Advanced Packaging Tool Or “APT” We need them anyway. Excellent QA record
    34. Advanced Packaging Tool Or “APT” We need them anyway. Excellent QA record High-level, low-level, and config stuff are close to atomic.
    35. Advanced Packaging Tool Or “APT” We need them anyway. Excellent QA record High-level, low-level, and config stuff are close to atomic. Tolerance of local changes
    36. Advanced Packaging Tool Or “APT” We need them anyway. Excellent QA record High-level, low-level, and config stuff are close to atomic. Tolerance of local changes [screenshot of dpkg’s conffile prompt: Configuration file `/etc/my-bologna-conf.d/firstname' ==> File on system created by you or by a script. ==> File also in package provided by package maintainer. What would you like to do about it? Your options are: Y or I: install the package maintainer’s version; N or O: keep your currently-installed version; D: show the differences between the versions; Z: background this process to examine the situation. The default action is to keep your current version. *** firstname (Y/I/N/O/D/Z) [default=N] ?]
    37. Advanced Packaging Tool Or “APT” We need them anyway. Excellent QA record High-level, low-level, and config stuff are close to atomic. Tolerance of local changes
    38. Advanced Packaging Tool Or “APT” We need them anyway. Excellent QA record High-level, low-level, and config stuff are close to atomic. Tolerance of local changes Reliable. Reliablereliablereliable.
    39. Advanced Packaging Tool A case study in failing gracefully [the slide reproduces Debian Policy’s upgrade procedure: dpkg calls old-prerm upgrade new-version; on failure it tries new-prerm failed-upgrade old-version; on further failure it unwinds with old-postinst abort-upgrade new-version, leaving the old version installed or in a “Failed-Config” state; parallel deconfigure and abort-deconfigure hooks cover conflicting and broken packages]
    40. Advanced Packaging Tool A case study in failing gracefully, continued [more of the same policy text: the new-postrm failed-upgrade and abort-upgrade unwind paths, the “Half-Installed” and “Unpacked” states, the points of no return once old files are being replaced, and the handling of disappearing and conflicting packages]
    41. Package Hierarchy [the full dependency graph: the weblion-* packages (weblion-squid-config, weblion-apache-config, weblion-plone-3.1-stack, weblion-plone-site-config, weblion-plone-3.1-site, weblion-zope-hosting-policy, weblion-zope-cachefu, weblion-zope-webserverauth) atop squid, apache2, libapache2-mod-cosign, zope-common, plone3-site, python-imaging, openssl, and dozens of other stock Debian packages]
    42. Package Hierarchy weblion-hosting-node weblion-squid-config squid apache2 weblion-apache-config libapache2-mod-cosign weblion-plone-3.1-stack weblion-plone-site-config weblion-plone-3.1-site weblion-zope-cachefu weblion-auto-update weblion-zope-hosting-policy weblion-zope-webserverauth
    43. Package Hierarchy weblion-hosting-node weblion-squid-config squid apache2 weblion-apache-config libapache2-mod-cosign weblion-plone-3.1-stack weblion-plone-site-config weblion-plone-3.1-site weblion-zope-cachefu weblion-auto-update weblion-zope-hosting-policy weblion-zope-webserverauth
    44. Package Hierarchy weblion-hosting-node weblion-squid-config squid apache2 weblion-apache-config libapache2-mod-cosign weblion-plone-3.1-stack weblion-plone-site-config weblion-plone-3.1-site weblion-zope-cachefu weblion-auto-update weblion-zope-hosting-policy weblion-zope-webserverauth
    45. Package Hierarchy weblion-hosting-node weblion-squid-config squid apache2 weblion-apache-config libapache2-mod-cosign weblion-plone-3.1-stack weblion-plone-site-config weblion-plone-3.1-site weblion-zope-cachefu weblion-auto-update weblion-zope-hosting-policy weblion-zope-webserverauth
    46. Package Hierarchy weblion-hosting-node weblion-squid-config squid apache2 weblion-apache-config libapache2-mod-cosign weblion-plone-3.1-stack weblion-plone-site-config weblion-plone-3.1-site weblion-zope-cachefu weblion-auto-update weblion-zope-hosting-policy weblion-zope-webserverauth
    47. Package Hierarchy weblion-hosting-node weblion-squid-config squid apache2 weblion-apache-config libapache2-mod-cosign weblion-plone-3.1-stack weblion-plone-site-config weblion-plone-3.1-site weblion-zope-cachefu weblion-auto-update weblion-zope-hosting-policy weblion-zope-webserverauth massdeploy
    48. Package Hierarchy weblion-hosting-node weblion-squid-config squid apache2 weblion-apache-config libapache2-mod-cosign weblion-plone-3.1-stack weblion-plone-site-config weblion-plone-3.1-site weblion-zope-cachefu weblion-auto-update weblion-zope-hosting-policy weblion-zope-webserverauth
    49. Package Hierarchy weblion-hosting-node weblion-squid-config squid apache2 weblion-apache-config libapache2-mod-cosign weblion-plone-3.1-stack weblion-plone-site-config weblion-plone-3.1-site weblion-zope-cachefu weblion-auto-update weblion-zope-hosting-policy weblion-zope-webserverauth config-package-dev
    50. config-package-dev conffile packaging for Debian
    51. config-package-dev conffile packaging for Debian Override bundled confs by divert-and-symlink
    52. config-package-dev conffile packaging for Debian Override bundled confs by divert-and-symlink Supports local changes
    53. config-package-dev conffile packaging for Debian Override bundled confs by divert-and-symlink Supports local changes Unattended upgrade-safe
    54. config-package-dev conffile packaging for Debian Override bundled confs by divert-and-symlink Supports local changes Unattended upgrade-safe aptitude dist-upgrade -y -o Dpkg::Options::= --force-confold
    55. config-package-dev conffile packaging for Debian Override bundled confs by divert-and-symlink Supports local changes Unattended upgrade-safe aptitude dist-upgrade -y -o Dpkg::Options::= --force-confold Completes dependency unification!
    56. config-package-dev Examples weblion-hosting-node weblion-squid-config squid apache2 weblion-apache-config libapache2-mod-cosign weblion-plone-3.1-stack weblion-plone-site-config weblion-plone-3.1-site weblion-zope-cachefu weblion-auto-update weblion-zope-hosting-policy weblion-zope-webserverauth
    57. weblion-apache-config Crown jewel of config-package-dev-ery [the packaged primary vhost, first half]:

        # AUTOMATIC UPDATES MIGHT BREAK YOUR MACHINE if you don't read
        # https://weblion.psu.edu/wiki/ConfigPackageOverrides before editing this file.
        #
        # We intend that you can perform the customizations you need without editing
        # this file. Instead, edit any of the files in /etc/weblion-apache-config
        # Included herein. This way, we can update this file unattended without paving
        # over your work.
        #
        # If you find you need even more flexibility, please file a ticket, and we'll
        # revise the design or advise you to use an entirely custom vhost and include
        # what files you can from
        # /usr/share/weblion-apache-config/config-snippets/public.

        # We don't put this in conf.d because, if dpkg puts a global.conf.dpkg-new or
        # something there, Apache will load it, too. This isn't a problem in other
        # folders, where Apache is careful to load only files with the extension
        # ".conf".
        Include /etc/weblion-apache-config/global.conf

        <VirtualHost *:80>
            Include /etc/weblion-apache-config/servername.conf

            # If you want your site to answer to more than one domain (for example,
            # www.example.com and example.com), don't use ServerAlias. Instead, make a
            # new virtual host, following the directions in
            # /usr/share/doc/weblion-apache-config/examples/alias-vhost.

            Include /etc/weblion-apache-config/serveradmin.conf

    58. weblion-apache-config Crown jewel of config-package-dev-ery [the same listing]
    59. weblion-apache-config Crown jewel of config-package-dev-ery [the listing again, with an overlay showing servername.conf]:

        # This file should consist of a single ServerName directive specifying the
        # FQDN of the primary vhost.
        ServerName #example.psu.edu#

    60. weblion-apache-config Crown jewel of config-package-dev-ery [the same listing]
    61. weblion-apache-config Crown jewel of config-package-dev-ery [the rest of the primary vhost]:

            Include /etc/weblion-apache-config/log.conf
            Include /usr/share/weblion-apache-config/config-snippets/public/prepare-to-proxy.conf

            # Most of your custom configuration, including rewrites, should go in this
            # file and in before-proxy-to-plone-https.conf, below:
            Include /etc/weblion-apache-config/before-proxy-to-plone.conf

            Include /etc/weblion-apache-config/proxy-to-plone.conf
        </VirtualHost>

        <VirtualHost *:443>
            Include /etc/weblion-apache-config/servername.conf
            Include /etc/weblion-apache-config/serveradmin.conf
            Include /etc/weblion-apache-config/log.conf

            Include /etc/weblion-apache-config/enable-ssl.conf
            Include /etc/weblion-apache-config/ssl-certificate-files.conf

            # Require authN for SSL access to the Plone site:
            <Location />
                Include /usr/share/weblion-apache-config/config-snippets/public/require-cosign-auth.conf
                Include /etc/weblion-apache-config/cosign-host-parameters.conf
            </Location>

            Include /usr/share/weblion-apache-config/config-snippets/public/prepare-to-proxy-https.conf

            # Most of your custom configuration, including rewrites, should go in this
            # file and in before-proxy-to-plone.conf, above:
            Include /etc/weblion-apache-config/before-proxy-to-plone-https.conf

            Include /etc/weblion-apache-config/proxy-to-plone-https.conf
        </VirtualHost>
    62. Updation O(1) for the fun of it
    63. Updation O(1) for the fun of it Update the package repository
    64. Updation O(1) for the fun of it Update the package repository Visit each machine
    65. Updation O(1) for the fun of it Update the package repository Visit each machine Spin the chamber with buildout
    66. Updation O(1) for the fun of it Update the package repository Visit each machine Spin the chamber with buildout Go home
    67. Release Process Distributions
    68. Release Process Distributions etch-unstable
    69. Release Process Distributions etch-unstable etch-testing
    70. Release Process Distributions etch-unstable etch-testing etch
    71. Release Process Distributions etch-unstable etch-testing etch lenny-unstable
    72. Release Process Distributions etch-unstable etch-testing etch lenny-unstable lenny-testing
    73. Release Process Distributions etch-unstable etch-testing etch lenny-unstable lenny-testing lenny
    74. Release Process Project Management
    75. Release Process Project Management http://weblion.psu.edu/wiki/WebLionHosting
    76. Release Process Documentation http://weblion.psu.edu/wiki/WebLionHostingAdminGuide
    77. Try it Hardware options
    78. Try it Hardware options Dedicated
    79. Try it Hardware options Dedicated Homegrown virtualization
    80. Try it Hardware options Dedicated Homegrown virtualization EC2
    81. Try it Hardware options Dedicated Homegrown virtualization EC2 Toasters
    82. Try it I’m so rone-ry Details: http://weblion.psu.edu/wiki/BootstrapServers
    83. Try it I’m so rone-ry 1. echo "deb http://deb.weblion.psu.edu/debian etch main non-free contrib" >> /etc/apt/sources.list Details: http://weblion.psu.edu/wiki/BootstrapServers
    84. Try it I’m so rone-ry 1. echo "deb http://deb.weblion.psu.edu/debian etch main non-free contrib" >> /etc/apt/sources.list 2. aptitude update Details: http://weblion.psu.edu/wiki/BootstrapServers
    85. Try it I’m so rone-ry 1. echo "deb http://deb.weblion.psu.edu/debian etch main non-free contrib" >> /etc/apt/sources.list 2. aptitude update 3. aptitude install --without-recommends -y weblion-hosting-vmware-node weblion-auto-update weblion-plone-3.1-stack Details: http://weblion.psu.edu/wiki/BootstrapServers
    86. Future
    87. Future Newer Plones
    88. Future Newer Plones Factor out Penn-State–specific stuff
    89. Future Newer Plones Factor out Penn-State–specific stuff Monitor Zope
    90. Future Newer Plones Factor out Penn-State–specific stuff Monitor Zope
    91. Try WebLion Hosting http://weblion.psu.edu/wiki/BootstrapServers ErikRose@psu.edu #weblion on irc.freenode.net
    92. Try WebLion Hosting http://weblion.psu.edu/wiki/BootstrapServers ErikRose@psu.edu #weblion on irc.freenode.net
