Plone at Harvard School of Engineering and Applied Sciences

The Harvard School of Engineering and Applied Sciences (SEAS) wanted to launch a dynamic network of websites that attracts prospective students and promotes academic activities both internally and externally. SEAS engaged Jazkarta, a Boston-based open source technology consultancy specializing in Plone, on a project to build a set of websites that achieve these goals. Jazkarta redesigned SEAS' existing public website, constructed an intranet site that allows SEAS to provide up-to-date information to their community of faculty, staff and students, and developed a facility for deploying faculty and lab subsites within the site infrastructure.

Mike Trachtman, Project Manager at Jazkarta, will present a case study of the project that covers development processes, designing highly available and scalable Plone site architectures, integrating/creating Plone components to satisfy functional requirements of .EDU websites, and repeatable deployment of customized Plone software solutions.

  • <br />
  • <br />
  • <br />
  • <br />
  • <br />
  • <br />
  • <br />
  • <br />
  • <br />
  • students: find available programs of study, research opportunities <br /> faculty & researcher: connect with research opps <br /> harvard comm: understand functional structure, key aims and mission <br /> seas comm: understand functional structure, key aims and mission <br />
  • students: find available programs of study, research opportunities <br /> faculty & researcher: connect with research opps <br /> harvard comm: understand functional structure, key aims and mission <br /> seas comm: understand functional structure, key aims and mission <br />
  • students: find available programs of study, research opportunities <br /> faculty & researcher: connect with research opps <br /> harvard comm: understand functional structure, key aims and mission <br /> seas comm: understand functional structure, key aims and mission <br />
  • students: find available programs of study, research opportunities <br /> faculty & researcher: connect with research opps <br /> harvard comm: understand functional structure, key aims and mission <br /> seas comm: understand functional structure, key aims and mission <br />
  • <br />
  • <br />
  • <br />
  • <br />
  • <br />
  • <br />
  • <br />
  • <br />
  • <br />
  • <br />
  • <br />
  • <br />
  • <br />
  • <br />
  • <br />
  • <br />
  • <br />
  • <br />
  • <br />
  • <br />
  • <br />
  • <br />
  • <br />
  • <br />
  • <br />
  • <br />
  • <br />
  • <br />
  • <br />
  • <br />
  • <br />
  • <br />
  • <br />
  • <br />
  • <br />
  • <br />
  • <br />
  • <br />
  • <br />
  • <br />
  • <br />
  • <br />
  • <br />
  • <br />
  • <br />
  • <br />
  • <br />
  • <br />
  • <br />
  • <br />
  • <br />
  • <br />
  • Server architecture: 4 physical hosts, each with 16 cores and 64 GB of RAM, running a VMware virtual environment with 6 virtual servers (1 CPU and 4 GB of RAM per server). Web servers are load balanced in hardware, Varnish caches Plone content, software load balancers distribute requests across the ZEO clients, Heartbeat handles failover, and Supervisor provides process control (see the sketch after these notes).
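
Since Supervisor handles process control for the ZEO clients, a short example may help. This is a minimal Python sketch, assuming Supervisor's inet_http_server is enabled on localhost:9001 (it is off by default and must be configured in supervisord.conf); it polls Supervisor's XML-RPC API and flags any supervised process that is not running.

    from xmlrpc.client import ServerProxy

    # Assumes Supervisor's inet_http_server listens on localhost:9001.
    server = ServerProxy("http://localhost:9001/RPC2")

    # getAllProcessInfo() returns one dict per supervised process,
    # including its name and state (RUNNING, STOPPED, FATAL, ...).
    for proc in server.supervisor.getAllProcessInfo():
        if proc["statename"] != "RUNNING":
            print("%(name)s is %(statename)s" % proc)

A check like this can run from cron or a monitoring host; Heartbeat covers host failover, while a process-level probe catches ZEO clients that Supervisor has marked FATAL.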

Plone at Harvard School of Engineering and Applied Sciences Presentation Transcript

  • 1. Plone at Harvard SEAS Michael Trachtman, Project Manager, Jazkarta PLONE SYMPOSIUM EAST PENN STATE open source technology solutions 2009
  • 2. Plone at Harvard SEAS • Overview • Requirements and Solutions • Implementation • Status and Takeaways • Q&A
  • 3. Overview PLONE SYMPOSIUM EAST PENN STATE open source technology solutions 2009
  • 4. About Jazkarta • Open source technology consultancy • Working with nonprofits and .EDUs • Oxfam • CMRLS • Harvard • We like chowda
  • 5. About Harvard SEAS • SEAS - School of Engineering and Applied Sciences • Part of FAS • Founded 1847/1950 • 350 graduate students, 300 undergrads, ~70 faculty
  • 6. the seas community: Administration, Finance, Academic Office, Facilities, Computing & IT, HR, Communications
  • 7. seas user community: Faculty & Researchers, Students, Harvard Community, SEAS Community
  • 8. Current Setup • www.seas.harvard.edu • HTML hand-edited, backed by dynamic scripting • Site stats, October 2007: • www: 530,000 page serves per day (5-10 rps) • subsites: 99,000 page serves per day (2-5 rps)
  • 9. Business Objectives • Develop with a flexible CMS that is easy for a non-technical community to use • Provide an integrated directory • Offer a robust site and directory search tool • Use familiar open source tools
  • 10. Team Roles • SEAS: Dean's Office, Communications and IT; sponsor and stakeholder; resource procurement • Jazkarta: project management; information design; visual design; software architecture and development
  • 11. Process • Agile management and development principles (iterative, transparent, adaptive) • Weekly status and bi-weekly on-sites • ClueMapper (“Super Trac”) for planning/ documentation/ticketing • Google Docs for shared resources • Functional test plans • LDAP schema references
  • 12. Cluemapper • “Super-TRAC” • Multi-project, single instance, TTW project onboarding • Shared authentication system for Trac and Subversion • Integrated time-tracking, pastebin • WYSIWYG wiki editing • http://www.cluemapper.org
  • 13. Timeline January 2008 - Kickoff April 2008 - Designs Completed November 2008 - BETA January 2009 - Intranet Launched April 2009 - First Subsites Launched July 2009 - Public Site Launch
  • 14. Requirements and Solutions PLONE SYMPOSIUM EAST PENN STATE open source technology solutions 2009
  • 15. help! How do I obtain a digital copy of the SEAS logo/seal? How do I plan an event at SEAS? How do I get a website for myself or my lab? What research is happening in the Applied Physics department?
  • 16. Choosing a Platform • University supportive of open source • Familiar to IT office - Drupal and Plone • Required easy content editing, workflow, access control, news and event management • Integration capabilities with LDAP-based directory (authentication and non-biographical information)
  • 17. Choosing Plone: public site, intranet, subsites
  • 18. Intranet • Repository for shared information • Targeted at internal users • Directory and site search • Internal news, events and important announcements • Public and protected information • Department landing pages - who does what? • FAQs, How-tos, policies and procedures
  • 19. Public Site
  • 20. Public Site • Site redesign with a focus on research • Organized resources by research area • User-targeted content (prospective students, alumni, partners) • Highlight activity via news and events • Directory and site search
  • 21. Subsites
  • 22. Subsites • Relieve load for communications and IT • Provide microsites for faculty, research groups and special events like conferences • Accessible for the technically challenged; easy = fresh • Separate visual theme with some customizability, while adhering to university standards • Distinct access control specifications • Shared infrastructure and online procurement
  • 23. Implementation PLONE SYMPOSIUM EAST PENN STATE open source technology solutions 2009
  • 24. Implementation • Story development • Information and Visual Design • Software Architecture • Development and Deployment • Content Migration • Testing and Acceptance
  • 25. Story development • Defined stories, ran a card sort • Grouped stories into high-level groups for classification and prioritization • Developed iteration plans based on task estimation
  • 26. Information and Visual Design • Focus on Research - strip out the marketing speak • Reinforce Harvard brand • Information architecture pre-established • Delivered wireframes and comps • Iterative
  • 27. design process [Two sample wireframes: a second-level navigation page layout and a user-centered navigation page layout, annotated with page specifications and functionality notes. The wireframes convey only the functional elements required on each page, not design concepts. © 2008 Jazkarta, Inc., 5 April 2008]
  • 28. software architecture [Diagram: client browser → front end (presentation web servers, caching, Deliverance theming, pound load balancing) → four app servers → back end (directory server, application and directory database servers)]
  • 29. Server Architecture • VMWare virtual environment • Web servers load balanced with hardware • Varnish for caching Plone content • Software load balancers for ZEO clients • Heartbeat for failover • Supervisor for process control
  • 30. Development and Deployment • Deployment configurations via Buildout • Separation of theming from application programming using Deliverance • Repeatable deployment across server infrastructure via Fabric (see the fabfile sketch after the transcript) • Repeatable load testing setup (jMeter)
  • 31. Key Plone Customizations • FacultyStaffDirectory: schema extension, synchronization with LDAP (see the schema extension sketch after the transcript) • Collage: custom viewlets, optimized views • DeliveranceController: workarounds for Deliverance + subsites • Plone4ArtistsCalendar: calendar views
  • 32. Subsite Machinery • seas.siterequest: request form and workflow; content templates • JazMiniSite: site-within-a-Plone-site; navigation root (see the navigation-root sketch after the transcript)
  • 33. Content Migration • Handled exclusively by the SEAS team • New designs and different approach to targeting users required new content • Plone training - general user training + train the trainer
  • 34. Transition • Executed functional test plans • Moved ClueMapper and source code repositories to SEAS • Acceptance/sign-off
  • 35. Project Status and Takeaways PLONE SYMPOSIUM EAST PENN STATE open source technology solutions 2009
  • 36. Current Status • Launches: Intranet (January 2009), Subsites (April 2009), Public Site (July 2009) • Upcoming Enhancements: news management tools, alternate subsite themes, improved multimedia integration, bulk file upload
  • 37. Areas of Improvement • User experience: content editing improvements (consider customization of editor), file upload (consider an SWF-based multi-file upload solution) • Subsites: JazMiniSite (consider migration opportunity to a collective-based solution), Deliverance upgrade
  • 38. Questions? PLONE SYMPOSIUM EAST PENN STATE open source technology solutions 2009
  • 39. Resources • ClueMapper - http://www.cluemapper.org • Plone4ArtistsCalendar - http://plone.org/products/plone4artistscalendar • Collage - http://plone.org/products/collage • FacultyStaffDirectory - http://plone.org/products/faculty-staff-directory • Fabric - http://docs.fabfile.org
  • 40. Thanks! •Harvard SEAS •PSU/WebLion
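
To make slide 30's deployment story concrete, here is a minimal Fabric 1.x-style fabfile sketch. The host names, the buildout path, and the use of Subversion and supervisorctl are illustrative assumptions, not details from the talk.

    # fabfile.py - minimal Fabric 1.x sketch; hosts and paths are hypothetical.
    from fabric.api import cd, env, run

    env.hosts = ["app1.example.edu", "app2.example.edu"]

    def deploy():
        """Update the buildout checkout and rebuild on every app server."""
        with cd("/opt/plone/buildout"):
            run("svn up")                         # era-appropriate: Subversion
            run("bin/buildout -N")                # non-interactive buildout run
            run("bin/supervisorctl restart all")  # bounce the ZEO clients

Running "fab deploy" repeats identical steps on every host in env.hosts, which is what makes the deployment repeatable across the server infrastructure.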
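Slide 31 mentions extending the FacultyStaffDirectory schema. A common pattern in Plone of that era was archetypes.schemaextender; the sketch below shows the general shape. The IPerson import path and the officeHours field are assumptions for illustration, not the actual SEAS customization.

    # Sketch of an archetypes.schemaextender extender (Plone 3 era, Python 2).
    from archetypes.schemaextender.field import ExtensionField
    from archetypes.schemaextender.interfaces import ISchemaExtender
    from Products.Archetypes import atapi
    from zope.component import adapts
    from zope.interface import implements

    # Assumed import path for FSD's person marker interface.
    from Products.FacultyStaffDirectory.interfaces import IPerson

    class _StringField(ExtensionField, atapi.StringField):
        """A StringField that schemaextender can inject into a schema."""

    class PersonExtender(object):
        """Adds an extra (hypothetical) field to every FSD Person."""
        adapts(IPerson)
        implements(ISchemaExtender)

        _fields = [
            _StringField("officeHours",
                         widget=atapi.StringWidget(label="Office hours")),
        ]

        def __init__(self, context):
            self.context = context

        def getFields(self):
            return self._fields

The extender is registered as an adapter in ZCML; once registered, the extra field appears on the Person edit form alongside the stock FSD fields.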
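Slide 32's site-within-a-Plone-site hinges on Plone's navigation root concept, which scopes navigation, breadcrumbs and searches to a folder. The sketch below shows that underlying mechanism; JazMiniSite's actual code is not reproduced here, and the helper name is hypothetical.

    # Marking a folder as a navigation root is the core of a subsite.
    from plone.app.layout.navigation.interfaces import INavigationRoot
    from zope.interface import alsoProvides

    def make_subsite(folder):
        """Turn an ordinary folder into the root of a subsite."""
        alsoProvides(folder, INavigationRoot)
        folder.reindexObject()  # refresh the catalog so the marker is indexed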