Research influences the teaching; teaching generates the talent which benefits industry; engagement informs the research.
Having the platform allows us to collaborate with many people around the University on novel research into real-world problems. There is other research in the group too, such as machine learning, simulation and social media analysis, but it is the e-SC platform and similar technologies that are the biggest enabler.
That in turn influences teaching within the CDT – what the students are taught is based on our research.
The teaching develops talent which is acquired by industry…
And the engagement informs the research as we are solving the problems that really exist rather than made up ones.
Ultimately this is one recipe for making Cloud Computing work for a research group.
Cloud-agnostic platform – drives academic collaborations; vehicle for CS research
Industrial links – learn best practice (DevOps etc.); they drive the research
Education – train and influence; develop new talent
Wider remit than most research groups.
Sit in the middle of what is hopefully a virtuous circle.
Today I’m going to focus on some of the research, particularly around Cloud and what we’ve done on Windows Azure.
I’ll talk about some work that we did to port an existing application to Azure.
Then I'll look at some of the non-functional requirements that came out of that and how we've used them to drive our research forwards.
Hard to make use of large scale computing resources
Keeping track of data and results
People are more accustomed to programming for their own problem; few have a distributed systems development background.
Not many tools around to help – those that exist are mainly targeted at business/consumer applications.
Getting better with new tools available but these still require low level programming skills that application scientists often don’t have.
One of the often-touted benefits of the Cloud is the transfer of CapEx to OpEx. Whilst in business this is considered preferable and a good thing, is it the same in academia?
In academia this has some interesting side effects when it comes to the end of the project – who pays for machines to be kept running? Previously this cost was hidden by central IT, who would keep machines running free of charge for a period after the end of the project – usually until they failed. Now that is no longer the case.
In some projects this is less of an issue – classical research projects which run, generate data, analyse the data, publish the results and then move onto another project.
For many of our projects it is an issue for two reasons. Firstly, the members of the project often wish to demo the system after the official end – to secure more funding, enhance publications, etc.
The bigger issue is the mandate to store research data for N years after the end of the project. Who pays for this? The project can’t as the budget has ended.
I don’t have any answers but it’s one of the interesting features of the OpEx model.
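The shape of the problem can be sketched with some arithmetic. This is a minimal illustration, not real project data: every figure below (server price, VM and storage rates, project length, retention period) is hypothetical and chosen only to show why the OpEx tail is awkward once the grant ends.

```python
# Illustrative CapEx vs OpEx comparison for a funded project.
# ALL figures are hypothetical, for illustration only.

CAPEX_SERVER = 6000        # one-off purchase; owned after the project ends
CLOUD_VM_PER_MONTH = 150   # hypothetical pay-as-you-go compute instance
STORAGE_PER_MONTH = 40     # hypothetical cost of keeping research data online

project_months = 36        # a typical 3-year grant
retention_years = 10       # mandated data-retention period after the project

capex_total = CAPEX_SERVER                                      # paid up front
opex_during = (CLOUD_VM_PER_MONTH + STORAGE_PER_MONTH) * project_months

# After the grant ends only storage must keep running - but the
# project budget that would pay for it no longer exists.
opex_after = STORAGE_PER_MONTH * 12 * retention_years

print(f"CapEx model, total:         £{capex_total}")
print(f"OpEx model, during project: £{opex_during}")
print(f"OpEx model, after project:  £{opex_after}  <- unfunded")
```

Under these made-up numbers the two models cost roughly the same during the project; the difference is the unfunded £4,800 retention tail, which is exactly the question raised above.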
One of the challenges is that we, as a group, are walking a fine line between providing a service and performing our own research. Ultimately, we're measured on both of these things – the number of collaborations that we engage in and the number of papers we write. Therefore, e-SC has two functions:
Provide facilities to research projects so that they don't have to be built from scratch; act as a vehicle for our academic research.
Both of these functions are important and necessary for success.
One of the downsides of these types of collaborations is that you can be seen as a service role within the project and not a fully fledged research partner. The likelihood of this depends on the project, the PI, etc.
One of the other things that we have learnt is that not every assumption you make will turn out to be right. For instance…
When we started we envisaged one e-SC to rule them all. In practice, the most I've had to manage is about 15 at the same time. It's not far off a full-time job.
You end up doing a lot of system administration
You may be able to get central IT to do some of the work for you but in our experience it’s not that easy.
Over the past few years many of the large software companies have been making large strides, often out of necessity, in the way they manage their infrastructure and services. My view is that academia often hasn't followed their lead because of the lack of need, but we can learn a great deal from them.
One of the keys is automating the process from development to a new version going live. A few years ago this cycle would be measured in months or even years: version next of the software will be available on date X. With SaaS provision this can be vastly sped up – for instance, on a weekday, Amazon updates its deployments every 12 seconds.
They are able to do this by leveraging tools such as:
Continuous integration and delivery – every check-in to version control is automatically checked out and tested using automated tools.
Configuration management – make sure that the development environment exactly matches the production one and that any changes are managed, usually using tools such as Chef/Puppet/Ansible.
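The CI gate described above can be sketched in a few lines. This is a toy illustration, not any real pipeline: `run_tests` and `deploy` are hypothetical stand-ins for the test runner and deployment step a real system (Jenkins, GitHub Actions, etc.) would invoke on every check-in.

```python
# Minimal sketch of a CI gate: every check-in is tested
# automatically, and only a passing (green) build is promoted.
# run_tests() and deploy() are hypothetical stand-ins; a real
# pipeline would invoke the VCS, build tool and test runner.

def run_tests() -> bool:
    """Stand-in for the automated test suite run on every check-in."""
    return 2 + 2 == 4   # a trivially passing 'test'

def deploy() -> str:
    """Stand-in for promoting a green build to production."""
    return "deployed"

def ci_pipeline() -> str:
    """Run the tests; deploy only if every test passes."""
    if run_tests():
        return deploy()          # only a green build goes live
    return "tests failed - deployment skipped"

print(ci_pipeline())
```

The point is the shape, not the code: the human decision ("shall we release?") is replaced by an automated check, which is what makes deployments every few seconds possible.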
Engagement arm – links to industry, both small and large; we aim to learn from them and to inform them of new research.
Cloud agnostic platform important – OpenStack private cloud and Azure Public Cloud
RCUK Cloud Workshop
Supporting research in the Cloud:
Our experiences and future
• In 2001, NEReSC was formed to help researchers
– £17M funding
• Bioinformatics, Neuroscience, Aging & Health, Chemical Engineering, Transport, Video archiving
– Core UK e-Science funding programme
• Became the Digital Institute
– Similar remit but more diverse: Humanities, Medical Science,
– £40M funding
• Provided computing aspect of many research projects
• Frequently same requirements
Having 100s of machines available to process data doesn't solve the problem by itself.
Building a data management and processing platform:
• An environment to store, manage and process data
– Every project needed this, volumes growing
– Open Source
• A platform that can operate in a number of different locations
– On Premise
– On a cloud provider (Amazon and MS)
• An expandable system
– API to connect other software
– Data processing code can be added
• A platform for our academic research
– Scalability, data management, provenance
Features of e-SC
• Data storage
– Cloud: effectively unlimited scale
• Data processing
– Best-of-breed open source tools (R / Octave)
– Audit of everything performed
– Easy to run at large scale
• External communications
– Rich APIs
All our projects have significant storage, processing and collaboration needs
Use of the platform
Care home design
Learning from Industry -
The Phoenix Project: A Novel About IT, DevOps, and Helping Your Business Win.
Gene Kim, Kevin Behr, George Spafford. ISBN 0988262509
Cloud Innovation Centre
• £1M funding from DCMS
– Architecture Reviews
– Cloud time
• £700k Private and Public Cloud infrastructure
Cloud Computing for Big Data
• £7M funding from EPSRC
• 60 Students
• 5 years
• High level of industry engagement
– Cost of on-boarding
– Researchers doing Sys Admin
– Manage the relationship with Central IT
– Staff costs/time