OpenAthens Service infrastructure and usage



Neil Drage, Eduserv's Service Delivery Manager for Access & Identity, takes a look at the infrastructure that supports the OpenAthens service, and the range of support options for customers.


Speaker notes:
  • Service Delivery Manager. I'm responsible for managing everything about the OpenAthens infrastructure, which includes the hardware, where it's located, the network connecting the servers, and so on. Through managing this we try to provide continuous improvements to performance, stability and capacity. Today I'm going to tell you about some of the changes to the service infrastructure we've made over the last year, and what we're planning for the coming months. I'll then cover service incident communications and a change that we've just implemented. Finally, for some light relief, I'd like to show you some interesting service statistics...
  • To give you an idea of the makeup of the current OpenAthens service. The purpose of having more than one data centre is to reduce the risk of the entire service going down due to issues with one particular data centre. The single sign-on service is designed to detect and exclude parts of the infrastructure that are no longer working or are uncontactable, which reduces the risk of users trying to access a server that isn't working. This has enabled us to achieve an uptime of 99.98%, though we'd like to improve on that...
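The detect-and-exclude behaviour described above can be sketched in a few lines. This is a minimal illustration only — the server names, the health-check callback, and the routing policy are all hypothetical, not OpenAthens internals:

```python
# Hypothetical sketch of health-check-based failover: servers that
# fail their health check are excluded from the routing pool, so a
# user request is never sent to an uncontactable host.

def healthy_servers(servers, is_healthy):
    """Return only the servers that currently pass their health check."""
    return [s for s in servers if is_healthy(s)]

def pick_server(servers, is_healthy):
    """Route a request to the first healthy server, or fail loudly."""
    pool = healthy_servers(servers, is_healthy)
    if not pool:
        raise RuntimeError("no healthy authentication servers available")
    return pool[0]

# Example: one host is down, so its entry is skipped.
status = {"swindon-1": True, "swindon-2": True, "london-1": False}
print(pick_server(list(status), status.get))  # swindon-1
```

In a real deployment the health check would be an active probe (and the pool a load balancer), but the effect is the same: broken infrastructure is removed from the pool rather than served to users.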
  • Commissioned to replace the Bath data centre; online Q3 2009. A huge facility with over 37,000 square feet of floor space. It has multiple redundant connections to the internet via different providers (e.g. Cable and Wireless, Virgin), an uninterruptible power supply and a generator capable of powering the entire site, and a high-tech cooling system that can use cold air from outside when it is cool enough. The OpenAthens servers that were located at our Bath data centre were moved to Swindon in November in two batches.
  • In early 2009 the infrastructure was reviewed with these objectives: increase service resilience while reducing complexity, without having a negative impact on performance. Out of this review we made the following changes. We re-organised our server infrastructure, which enabled us to improve performance AND resilience. Core service authentication point hardware was upgraded. We've started to migrate the service to SQL Server as a database platform, as we have better experience within the company of SQL Server; this migration will be completed within the next couple of months. Finally, we are migrating the service from a three data centre configuration to two, at Swindon and London, because these data centres provide the highest level of resilience and will enable us to maintain or improve on service availability.
  • Incident communications. When things do go wrong with parts of the service, or when we're planning to do some maintenance work, until now we have bombarded administrators with information via email that is of no interest to most of them. It also takes an hour or two to send emails to all administrators, which makes service messages potentially out of date when they arrive. With this in mind we've now created a new Service Status page that presents the health of each major part of the service. If part of the service is performing slowly or is inaccessible, you will be able to find out why there.
  • Also, on the same page you can find out what maintenance is planned for the near future. We always endeavour to let our customers know about upcoming service-affecting maintenance: at least 2 working days in advance for work carried out on a Tuesday between 07.00 and 09.00, or 5 working days' notice if it's happening at any other time.
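The notice-period rule above is simple enough to express as a function. This is an illustrative sketch of the stated policy — the function name and signature are invented, and real scheduling would need to account for public holidays when counting working days:

```python
from datetime import datetime, time

def required_notice_days(start, end):
    """Working days of notice required for a maintenance window, per
    the policy described above: 2 working days if the work falls on a
    Tuesday between 07.00 and 09.00, otherwise 5 working days.
    (Illustrative only; holiday handling is omitted.)"""
    in_early_tuesday_window = (
        start.weekday() == 1                 # Monday == 0, so Tuesday == 1
        and start.date() == end.date()       # window must not span days
        and start.time() >= time(7, 0)
        and end.time() <= time(9, 0)
    )
    return 2 if in_early_tuesday_window else 5

# Tuesday 07.30-08.30 falls inside the short-notice window.
print(required_notice_days(datetime(2010, 8, 3, 7, 30),
                           datetime(2010, 8, 3, 8, 30)))   # 2
# Thursday afternoon work needs the full notice period.
print(required_notice_days(datetime(2010, 8, 5, 14, 0),
                           datetime(2010, 8, 5, 15, 0)))   # 5
```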
  • Statistics. Usage of the service has always followed seasonal patterns of user activity, mainly driven by the UK HE/FE community. Two graphs: the top one represents the authentications that OpenAthens processed through 2009; the bottom one shows the number of accounts created. On the top graph you can see the summer dip in usage, the busy autumn term, and fairly flat winter and spring terms. The spiky look is due to the natural weekend drop in usage. The three tall spikes are occasions when the service was subjected to denial-of-service attacks; we have automatic ways of reducing the impact of these attacks, ensuring that users don't notice when they're happening. The bottom graph shows the frenetic activity associated with the start of the new academic year. We try to make sure that large changes to the service and its infrastructure happen over the summer period. Authentication monthly high: October, 8.6 million. Account creation monthly high: September, 157 thousand.
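For a sense of scale, the monthly highs quoted above translate into these back-of-the-envelope average rates (averages only — as the graphs show, real traffic is spiky, with weekend dips and term-time peaks):

```python
# Rough average rates implied by the quoted monthly highs:
# 8.6 million authentications in October (31 days),
# 157 thousand accounts created in September (30 days).

oct_auths = 8_600_000
avg_auths_per_second = oct_auths / (31 * 24 * 60 * 60)
print(round(avg_auths_per_second, 1))   # 3.2 authentications/second on average

sep_accounts = 157_000
print(round(sep_accounts / 30))         # 5233 new accounts per day on average
```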
  • So who are these people that are logging into OpenAthens? A common misconception is that Athens is a UK-only service, probably because it was funded to a large degree by the JISC for HE/FE for a long time. The map shows a representation of the numbers of successful and unsuccessful logins for January. We have customers all over the world, but particularly in the US, Israel, and Australasia.
  • UK: 2.3 million; Israel: 72 thousand; Ireland: 37 thousand.
  • Users of the service can connect to many OpenAthens authenticated services. This map shows where these service providers are located.

    1. The OpenAthens Service: Infrastructure and usage. Neil Drage
    2. Who and why? Service Delivery Manager; OpenAthens service infrastructure; Service incident communications; Statistics
    3. OpenAthens infrastructure - overview: 21 servers; located at multiple data centres; Linux and Windows based; high service availability (99.98% in 2009); 24/365 infrastructure support
    4. Data centre changes: Swindon Data Centre; online Q3 2009; massive capacity; highly resilient; low environmental impact; servers from Bath migrated in November 2009
    5. Consolidation and upgrades: infrastructure reviewed early 2009; consolidation and re-organisation of servers; Authentication Point hardware upgraded; database technology migration partially completed; moving to two data centres in August 2010
    6. Service status
    7. Service status - scheduled maintenance. Available at:
    8. Service usage pattern - 2009
    9. Typical daily authentication pattern
    10. (image-only slide)
    11. (image-only slide)
    12. (image-only slide)
    13. (image-only slide)