5 Reasons Now's the Time to Move Learning to the Cloud
1. 5 Reasons Now is the Right Time to Move Learning to the Cloud
2. Enterprise Thinking vs Cloud Thinking
Over 60% of higher education institutions report that they are currently reviewing, converting to, or running their LMS in the Cloud.
- The Campus Computing Project: 2014 National Survey of Computing and Information Technology in US Higher Education
3. Enterprise Thinking vs Cloud Thinking
Enterprise:
• Dedicated, high-availability environments
• Infrastructure failure is challenging and avoided
• Designed for solidity
• Manual processes
• Infrequently updated
• Monolithic change
vs.
Cloud:
• Commodity, fungible environments
• Infrastructure failure is expected and embraced
• Designed for resilience
• Automated processes
• Frequently updated
• Granular change
4. Why Cloud? Why Now?
Flexibility
Scalability & Elasticity
Reliability
Innovation & Change Management
Big Data
5. Flexibility
Overview
Rapid Provisioning
• Heavy virtualization
• New resources can be spun up (or spun down) in minutes or even seconds
Resource Sharing
• Cloud services designed for highly scaled, shared management of resources
• Redundancy leads to durability
7. Scalability & Elasticity
Overview
Auto-Scaling & Elasticity
• Dynamic provisioning of resources in response to system performance
• Scale either horizontally or vertically
Content Delivery Networks
• Move the content to the “edges” of the network
• Reduce latency
9. Reliability
Overview
Fault Tolerant Application Design
• Built for resilience
• Increased reliance on monitoring
Deployment Automation
• Vulnerability detection
• Self-healing environments
10. Reliability
Blackboard Cloud Architecture
• Self-healing environment
• End-to-end automation ensures consistency and quality
• Zero-downtime updates with very high success rates
• Native denial-of-service protection
11. Innovation & Change Management
Overview
Software as a Service
• Always current
• Painless updates
Continuous Integration
• Test-driven development
• Ready to deploy at any time
Change Management
• Smaller releases = less change management
• Feature toggles
12. Innovation & Change Management
Blackboard Cloud Architecture
• Rapid delivery of updates
– Incremental updates on a regular schedule
– Less change management
– Faster bug fixes
– Appropriate feature controls to manage change
13. Big Data
Overview
Characteristics
• The three “V’s” – high Volume, high Velocity, high Variety
• Modern technologies designed for big data sets
Why Big Data?
• Student retention
• Data-driven decision making
14. Big Data
Blackboard Cloud Architecture
• Re-tooling our collection of data and our instrumentation
• Cross-product data stores
• Individualized reporting for faculty and students where they need it, when they need it
– No more “reporting backwater”
15. Critical Factors to Consider in Managing Your Online Learning Environment
Physical & Network Security
To maintain the system if the physical surroundings are compromised in any way
Data Security
To protect your infrastructure & the mission-critical information it contains
Redundancy
To recover lost data and restore courses
Reliability
To limit the effects of “anything” going wrong
Scalability
To enable and support growth
People & Processes
To effectively & efficiently support the environment 24x7x365
Monitoring Practices
To track and report what is happening on the network and ensure responsiveness
Change Management
To ensure ongoing successful utilization of the technology
16. Is someone available every night and weekend to handle problems that may arise?
Do you have enough people on staff with the expertise needed to successfully support this mission-critical environment?
If any one person is out sick or leaves, is there someone else who is trained, ready, and able to perform his or her duties?
Are there more strategic or mission-critical projects that you / your team could be working on if you had more time?
Do you have peace of mind when it comes to managing your learning environment, or are you bogged down in minutiae?
18. Rethink Teaching and Learning and the Business of Education
Program Selection & Marketing
Student Support & Retention
Learning Delivery
Career Placement
19. Learning Core
Traditional learning delivery
Learning Environments to meet varied student and institutional needs
Learning Insight
Measuring effectiveness of online and traditional programs
Learning Essentials
Offering online courses and programs
Learning Insight & Student Retention
Developing strategy and tactics to maximize student retention
21. Blackboard and the Cloud
Already a major provider of cloud services to education:
Blackboard Collaborate, Blackboard Connect, Blackboard Engage, K12 Central, xpLor, SafeAssign, MyEdu, Blackboard Transact, eAccounts
Our seven data centers on four continents represent the largest private cloud in education worldwide.
28. How to Summarize & Communicate the Benefits
Improved customer experience – Zero-downtime updates
Better innovation and market responsiveness – Get enhancements and new features more quickly
Higher quality – Get fixes and maintenance to you more quickly
Easier change management – Small updates instead of monolithic releases
Delightful user experience – Learner-centric interface coming to the Cloud
Better support – Less variety of versions and environments
Editor's Notes
Thanks for joining us today for a discussion on Learning in the Cloud….
I started with telling you about Chaos Monkey because it is a great example of what I call “cloud thinking.” When you move from designing and deploying applications for your own data centers – what I call “enterprise thinking” – to designing and deploying applications in the cloud, you have to unlearn a lot of what you think you know.
The way that you build and deploy software to run on dedicated hardware in your own data center drives a different set of behaviors and perspectives than building software to run on a cloud architecture where you have less direct control and higher variance. The good news is that these new behaviors and perspectives that are part of cloud thinking bring a lot of advantages, too.
So let’s talk about some of these differences.
Dedicated, high-powered, high availability environments vs. Commodity, fungible environments
In a cloud environment, to achieve that scale at low cost, the infrastructure is built on off-the-shelf components that are interchangeable – or fungible – and cloud environments rely heavily on virtualization. The benefit here is that the infrastructure costs are kept down, but, as we’ve talked about, the challenge is the higher unpredictability of that kind of environment.
Infrastructure failure is challenging & avoided vs. Infrastructure failure is expected and embraced
If you’re managing the hardware in your own data center you have much more finite resources than a service like AWS. It can be expensive and painful to have machines fail, so you probably opt for the most solid and reliable infrastructure that you can find. In a cloud service like Amazon – where you don’t control the infrastructure – this changes your expectations.
Designed for solidity vs. Designed for resilience (monitoring, automation)
That leads to a different approach to developing software. With enterprise architecture you design with the assumption that the architecture is going to be rock solid and unchanging. In the cloud you design for resilience. If the infrastructure or services falter, the application needs to be able to bounce back.
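The "bounce back" idea above can be sketched in a few lines. This is a minimal illustration of designing for resilience, not Blackboard's implementation; the function names are invented, and `ConnectionError` stands in for whatever transient failures a real platform sees:

```python
import time

# Sketch of "design for resilience": instead of assuming a dependency is
# always up, the call retries with exponential backoff and then degrades
# gracefully rather than crashing.
def resilient_call(fn, retries: int = 3, base_delay: float = 0.01):
    for attempt in range(retries):
        try:
            return fn()
        except ConnectionError:
            time.sleep(base_delay * 2 ** attempt)  # back off before retrying
    return None  # degrade gracefully after exhausting retries

calls = {"n": 0}
def flaky():
    """Hypothetical dependency that fails twice, then recovers."""
    calls["n"] += 1
    if calls["n"] < 3:
        raise ConnectionError("transient infrastructure failure")
    return "ok"

print(resilient_call(flaky))  # "ok" — recovered after two transient failures
```

The enterprise-thinking equivalent would be to let the first `ConnectionError` propagate and page a human; the cloud-thinking version absorbs the expected failure.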
Manual processes vs. Automation
That need to bounce back drives an increased emphasis on monitoring and automation as opposed to relying on human hands to intervene and fix problems that occur. That automation can also help improve quality by decreasing the amount of human error or delay introduced into responses.
Infrequently updated vs Frequently updated
As your deployment processes become more automated to deal with the different infrastructure, and the risk of downtime or problems associated with deployment decreases, more frequent updates become possible. If deployment is fast, easy, and automated, you don’t have to wait until the “big release” to push out bug fixes – you can do it as soon as the bugs are fixed.
Monolithic changes vs. Granular change
This also affects the way we think about change. When deployment is difficult and requires downtime, you update as infrequently as possible, which results in big, monolithic releases that have everything and the kitchen sink in them. When deployment is automated and requires no downtime, you can move to more frequent but smaller updates that provide higher quality and more responsiveness, and allow a customer to consume change management in smaller chunks.
Also:
state vs. stateless
So let’s talk about some of these topics in more detail. I’m going to talk about a handful of characteristics of the cloud and tell you a little about how Blackboard is implementing these in our new cloud solutions and cloud architectures.
We’ll talk about Flexibility – how does the application leverage the flexibility of the cloud architecture to provision resources quickly to enhance the platform
We’ll talk about Scalability & Elasticity – how does the application behave under variable load
We’ll talk about Reliability -- how does the application behave under failure scenarios (which are inevitable and need to be planned for)
We’ll talk about Innovation & Change Management – How do we leverage that cloud architecture to bring more innovation to our customers
And we’ll talk about Big Data – taking advantage of the power of the cloud to turn data into actionable information
Applications deployed in the cloud have a lot more flexibility than those deployed in a traditional enterprise architecture.
Part of this is because cloud services like Amazon Web Services are built on a scale that most enterprises can’t achieve on their own, so resources are plentiful and cheap instead of scarce and expensive.
A lot of this comes about through the heavy use of virtualization in cloud environments. Combined with automation, this allows new resources – new servers, copies of databases, etc. – to be spun up (or spun down) in minutes or even seconds.
Cloud services by their nature are designed around sharing of resources, so they are very effective for leveraging those shared resources to allow application developers to do things that would be prohibitively expensive in a traditional enterprise environment.
And the ease and low cost of provisioning resources means that back-up and redundancy is easier and more efficient and frequently baked into the service. For example, with Amazon S3 – their file storage service – they provide eleven nines of durability; that’s 99.9 followed by eight more nines. If you store 10,000 objects with Amazon S3, you can on average expect to incur a loss of a single object once every 10,000,000 years.
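The durability arithmetic behind that claim can be checked directly. Eleven nines of durability leaves an annual loss probability of 1e-11 per object, so:

```python
# Sanity-checking the S3 durability figure quoted above.
# Eleven nines of durability = 99.999999999%, i.e. an annual loss
# probability of 1e-11 for any single object.
annual_loss_prob = 1e-11
objects = 10_000

expected_losses_per_year = objects * annual_loss_prob  # ~1e-7 objects/year
years_per_lost_object = 1 / expected_losses_per_year

print(f"{years_per_lost_object:,.0f}")  # about 10,000,000 years per lost object
```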
In our SaaS Deployment for Blackboard Learn in Amazon Web Services, we leverage a lot of these capabilities of the cloud architecture.
You see here a very simplified diagram of our architecture. At the bottom we leverage services like Elasticache and Logstash to take resource intensive components of the Learn application out of the app and deploy them as highly scaled multi-tenant services. We leverage Amazon S3 – with all of that durability we just talked about – for file storage.
At the application tier, the rapid provisioning and automation we’ve put in place allows us to scale Learn both horizontally – adding more machines to a customer’s load-balanced cluster – or vertically – swapping out virtual machines for ones with greater memory or more CPU power as usage demands.
That flexibility – particularly around rapid provisioning – leads to great advantages in scalability
You’ll often hear people talk about the cloud being elastic – that means dynamically expanding the infrastructure by spinning up more virtual instances to deal with changes in load or usage of the application. This is achieved through automated monitoring that recognizes when the application is under load and triggers automatically adding new machines to support that load.
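In pseudocode, the threshold-driven scaling decision just described looks something like this. The CPU thresholds and instance bounds are made-up illustrative values, not AWS defaults or Blackboard policy:

```python
# Minimal sketch of metric-driven auto-scaling: scale out when average
# CPU crosses a high-water mark, scale in below a low-water mark, and
# always stay within a configured fleet size. All numbers are illustrative.
def desired_instances(current: int, avg_cpu: float,
                      high: float = 70.0, low: float = 30.0,
                      min_n: int = 2, max_n: int = 10) -> int:
    if avg_cpu > high:
        current += 1  # under load: add a machine
    elif avg_cpu < low:
        current -= 1  # idle: release a machine
    return max(min_n, min(max_n, current))  # clamp to fleet bounds

print(desired_instances(2, 85.0))  # 3 — load spike triggers scale-out
print(desired_instances(3, 20.0))  # 2 — quiet period scales back in
```

In a real deployment the monitoring system evaluates this kind of rule continuously, which is what lets the fleet follow hourly or even minute-level usage peaks.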
When we look at scaling performance one way that cloud applications manage this is through content delivery networks or CDN. The one you might be familiar with is Akamai. Amazon has their own called CloudFront. A CDN pushes static content – images, javascript, and other things that don’t change – out to the servers distributed around the country, so those bandwidth intensive resources are closer to the end user and create a perception of reduced latency. This also takes the burden off the main application of having to serve those resources itself, allowing it to operate more efficiently.
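In application terms, the CDN idea reduces to a routing decision: static assets go to an edge host, dynamic requests stay with the app server. A toy sketch (the domain name is hypothetical):

```python
# Sketch: serve static assets from a CDN edge domain instead of the
# application server. The cdn_host value is hypothetical.
STATIC_SUFFIXES = (".js", ".css", ".png", ".jpg")

def asset_url(path: str, cdn_host: str = "cdn.example.edu") -> str:
    """Rewrite static-asset paths to the CDN; leave dynamic pages alone."""
    if path.endswith(STATIC_SUFFIXES):
        return f"https://{cdn_host}{path}"
    return path  # dynamic requests still hit the application server

print(asset_url("/images/logo.png"))    # served from the edge
print(asset_url("/course/123/grades"))  # still served by the app
```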
When looking at scalability we looked at two key elements:
Blackboard is implementing elasticity in the SaaS deployment of Learn by leveraging Amazon’s elastic load balancers and auto-scaling technology. Through auto-scaling, the application can communicate with the cloud fabric’s architecture, give it metric data – information about its health – and tell it when the application needs more resources or is using too many resources. This allows the environment to expand or contract with the utilization of the application. This isn’t just adding more resources during a back-to-school time frame … that’s a very coarse view of scaling. Auto-scaling allows us to address peaks that happen during the week or even during certain periods of the day, so we can spin up new resources down to the hour or the minute to address those needs.
We’re also leveraging the CloudFront CDN. Static assets in the LMS – like images or JavaScript files – are moved out to the edge of the network. This improves latency: the call doesn’t have to go across the continent to grab those assets, so pages load faster. It also improves the performance of the application server, because the application server doesn’t have to spend any resources to serve up that static content. By moving it out of the application server, the application can focus on the business logic instead of on serving static content.
As applications move into the cloud, the design of the application starts to change. The application has to be built for resilience so it can recover quickly from problems. This demands increased monitoring coverage.
The goal here is to get to a self-healing environment – one that detects a problem and recovers from it automatically without manual intervention. To do that, the deployment process – the way new instances of part of the application stack get spun up and deployed – has to be heavily automated.
That same auto-scaling technology we talked about a minute ago is helpful from a reliability perspective.
If the application notices that a server has crashed, auto-scaling will see it doesn’t have enough resources and thus spin up a new one. This “self healing” behavior is really valuable for availability of the application.
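That self-healing loop can be sketched as a reconciliation step: compare the desired fleet size with the healthy fleet, drop failed nodes, and provision replacements. A simplified model, with invented instance names:

```python
# Sketch of "self-healing": a reconciliation loop compares the desired
# instance count with the healthy count and replaces failures automatically,
# with no human intervention. Instance ids are illustrative.
def reconcile(servers: dict[str, bool], desired: int) -> dict[str, bool]:
    """servers maps instance id -> healthy?  Returns the repaired fleet."""
    healthy = {sid: True for sid, ok in servers.items() if ok}  # drop crashed nodes
    n = 0
    while len(healthy) < desired:  # spin up replacements to fill the gap
        n += 1
        healthy[f"replacement-{n}"] = True
    return healthy

fleet = {"app-1": True, "app-2": False, "app-3": True}
print(sorted(reconcile(fleet, desired=3)))  # ['app-1', 'app-3', 'replacement-1']
```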
We’ve also implemented end-to-end automation throughout the platform that ensures a much higher degree of consistency and quality for the platform. This consistency assures that all the application servers are the same and there’s less human intervention that can result in human error. And when there are errors, we can easily replace those servers with newer ones.
We’ve also implemented zero-downtime updates which allow us to give more frequent updates. This means the updates are smaller, demanding less change management, and it means that we can be more responsive to issues, since problems can be resolved without interrupting the service. This results in higher quality software and a better experience for your users.
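A zero-downtime rolling update follows the same shape regardless of platform: drain one node from the load balancer, update it, health-check it, rejoin it, repeat. This sketch models only that ordering, not Blackboard's actual tooling:

```python
# Sketch of a zero-downtime rolling update: each server leaves the load
# balancer, is updated, passes a health check, and rejoins before the next
# one starts, so serving capacity never drops to zero.
def rolling_update(servers: list[str], update, health_ok) -> list[str]:
    in_service = list(servers)
    for s in servers:
        in_service.remove(s)  # drain this node from the balancer
        assert in_service, "never drain the last healthy node"
        update(s)             # apply the new version
        if health_ok(s):
            in_service.append(s)  # only healthy nodes rejoin
    return in_service

updated = []
result = rolling_update(["a", "b", "c"], updated.append, lambda s: True)
print(result)  # ['a', 'b', 'c'] — all three rejoined, one at a time
```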
Finally, we’ve also introduced Denial Of Service protection as a native part of the platform, so all the servers in the environment are protected from denial of service attacks. With Bb Learn being a mission critical system on campus, preventing such attacks is important to delivering this sort of service reliably.
These characteristics of the cloud lend themselves to new ways to manage innovation – and the flip side of that – helping our customers manage the change that comes from innovation
With the emphasis on rapid provisioning and automation, the cloud lends itself to software as a service.
SaaS brings benefits like always being on the current version of the software (which is important for today’s demanding users), higher quality through rapid deployment of fixes, and painless updates that can occur with no downtime or no interruption of service for your end users.
Building SaaS technologies for the cloud changes the way that an organization manages software. If you need to be able to push a bug fix on any given day, then you need to be ready to deploy at a moment’s notice.
This leads vendors down the path of continuous integration, where changes to the software are automatically incorporated into the mainline code. Continuous integration demands much more automated testing, including test-driven development, a process by which automated test cases are developed first and the code is then developed against them – that way, no new code can get into production until it has already been tested.
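Test-driven development in miniature: the test case exists first, and the implementation is written only to satisfy it. The grading function here is purely illustrative:

```python
import unittest

# In TDD, this test is written BEFORE the function below exists;
# the implementation is then developed against it.
def weighted_grade(scores: dict[str, float], weights: dict[str, float]) -> float:
    """Illustrative implementation, written to make the test pass."""
    return sum(scores[k] * weights[k] for k in weights)

class TestWeightedGrade(unittest.TestCase):
    def test_weights_apply(self):
        self.assertAlmostEqual(
            weighted_grade({"quiz": 80, "exam": 90}, {"quiz": 0.4, "exam": 0.6}),
            86.0)

# Run the test suite explicitly (as a CI system would on every commit).
suite = unittest.defaultTestLoader.loadTestsFromTestCase(TestWeightedGrade)
result = unittest.TextTestRunner(verbosity=0).run(suite)
print("tests passed:", result.wasSuccessful())
```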
Pushing out code on a regular basis requires more flexibility in how features are delivered:
– feature toggles
– old and new versions at the same time
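A feature toggle can be as simple as a per-tenant flag consulted at the branch point where old and new code diverge. The flag and tenant names below are hypothetical:

```python
# Sketch of a feature toggle: new code ships dark and is switched on per
# institution when they are ready. Flag and tenant names are invented.
TOGGLES = {"new_gradebook": {"university-a"}}  # tenants with the flag enabled

def enabled(flag: str, tenant: str) -> bool:
    return tenant in TOGGLES.get(flag, set())

def gradebook_view(tenant: str) -> str:
    # old and new implementations coexist behind the toggle
    return "new gradebook" if enabled("new_gradebook", tenant) else "classic gradebook"

print(gradebook_view("university-a"))  # new gradebook
print(gradebook_view("university-b"))  # classic gradebook
```

Because the toggle is flipped per tenant rather than per release, an institution can adopt a change on its own schedule even though the code is already deployed everywhere.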
Gartner describes big data as data that is “high volume, high velocity, high variety”
Volume = quantity of data
Velocity = speed of generation and processing
Variety = diverse, different data elements
Modern technologies for managing big data sets -- Hadoop, Google MapReduce
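The MapReduce model those technologies share can be shown in miniature with plain Python: map emits key-value pairs, a shuffle groups them by key, and reduce aggregates each group. Real frameworks run this same shape across whole clusters:

```python
from collections import defaultdict

# The MapReduce idea in miniature: map emits (key, 1) pairs, shuffle
# groups values by key, reduce sums each group — here, a word count.
def map_phase(doc: str):
    for word in doc.lower().split():
        yield word, 1  # emit (key, value)

def shuffle(pairs):
    groups = defaultdict(list)
    for key, value in pairs:  # group all values by key
        groups[key].append(value)
    return groups

def reduce_phase(groups):
    return {key: sum(values) for key, values in groups.items()}

docs = ["big data big volume", "big velocity"]
pairs = [p for d in docs for p in map_phase(d)]
print(reduce_phase(shuffle(pairs)))  # {'big': 3, 'data': 1, 'volume': 1, 'velocity': 1}
```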
So that’s a lot of information we’ve covered
When considering a move to the cloud, institutions naturally evaluate how effective they currently are in managing their current learning environment and responding to the demands of today’s end users. As part of that work, they analyze what it currently costs them vs. the Cloud alternative. If you haven’t done this type of analysis before, we wanted to share the key factors institutions typically evaluate when making this kind of decision. Because, as you know, the cost of licensing Blackboard is just one part of the total cost to run an online learning environment 24x7x365 – and some of the costs are not as explicit or well documented as others. I have already discussed many of the factors you see here on this slide, but there are a few things to consider that we have not yet discussed. For example…
… let me ask you a few questions. [Go through the questions on the slide and either ask the group to give you a check mark or an X for each, or ask them to answer them in their head or write down their answers if they have a piece of paper in front of them]. These are just a few of the reasons that institutions trust Blackboard to manage their learning environment for them.
We are focused on addressing the needs of the whole learner. And thinking differently about how we serve students in all aspects of their educational experience
To be truly learner-centric, an institution needs to re-think every aspect of how they do business… An institution needs to look to their student population and learn what drives them, how they behave, and where they want to go. Then use that information to inform every decision, service, and program design – starting with recruitment and lasting through graduation, job placement, and even career advancement.
It’s how we engage students when they are prospects – how we acknowledge what they’ve done before they come to us, how we have the right programs available for them, and how we set them up on a good pathway going forward
It’s how we support them along their journey - not just answering their questions and providing help when needed, but proactively encouraging them along the way, prompting and guiding them to the next right step for them
It’s how we support their learning – not just giving them flexible, multiple models and pathways, but by putting data in their hands (and the hands of faculty) so they can ask better questions and achieve better outcomes
And it’s how we link them to their end goal – connections are made early and often to the learner’s goal (post-grad, employment, career advancement)
We need to address the needs of the whole learner, and we need to do it in a learner-centric way.
Holistic solutions at the best value
Before we dive into talking about our cloud approach for Blackboard Learn, let me take a moment to remind you that Blackboard is already a very experienced software as a service provider. If your primary interaction with Blackboard has been through our LMS products, you might not think of us that way.
But a number of other Blackboard products – from Blackboard Collaborate to the xpLor learning object repository to the K12-oriented Blackboard Engage -- are already delivered as cloud-based software as a service. Blackboard has a deep experience in building SaaS products.
And Blackboard has the largest private cloud business in the education market – seven data centers on four continents host many of our cloud solutions and petabytes of LMS data.
How do we become learner-centric?
The new learning experience
Centered around the learner
Not just a re-skinning of Learn; we’re going back and looking at the interaction design for every use case in the product and asking how we can make it simpler, more effective for students and faculty, and more delightful – our goal is to have students and faculty excited to interact with their online learning environment.
And because we’re taking a mobile-first approach, this new learning experience is totally responsive. It adapts to the device that it’s viewed on, the layout and navigation controls changing to adjust to those screen dimensions. You can see that here – the same page adapting to different screen sizes.
It’s also worth pointing out that this is built from the ground up with cloud technologies in mind. The new learner experience is delivered as a single-page JavaScript application. This creates a more distinct separation between the UI layer and the business logic layer of the application, and moves a lot of processing out of the server and into the browser client, improving performance and allowing us to leverage content delivery networks for better latency.
Responsive design is important, but we know that native apps matter on mobile devices, too.
We’re introducing new persona-specific mobile apps. The first of these Bb Student is already available as a Blackboard Labs preview release – you can download it through the iTunes App Store or Google Play Store – with the full release later this year. This will be followed up by a dedicated app for Faculty.
Looking at the screenshots, you can tell the native app also shares the same user experience as the web interface. As we move into this next generation of Blackboard products, you will be delighted by this kind of consistency across all of our applications.
Another example of this is the next iteration of the Blackboard Collaborate platform.
The new user experience for Blackboard Collaborate leverages an emerging technology called WebRTC – web real time communication – that is entirely browser-based. No more java downloads. It has higher quality audio and video and leverages that same user experience you see in the web interface and the native app interface.
Since it’s also more deeply integrated into the
That new user experience is leveraging the big data in new ways as well.
As we build instrumentation to store more and better data about student activity, we have the ability to expose that data to the faculty and the students themselves in the form of actionable information
For instructors, this means being able to quickly and efficiently see a student’s work – grades, activity, etc.
And this happens in context – where you need it, when you need it – as the instructor is grading Sarah Johnson’s quiz he can pull up individualized reports on Sarah’s performance in comparison to the rest of the class or even past instances of the same course.
For students, this means being able to monitor their progress in the course and to compare that progress to anonymized data on their classmates’ progress. Students being able to see how they are doing in the context of others is a great motivator.
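That "me vs. the anonymized class" view boils down to a couple of aggregates: the student's own score, the class average, and a percentile, with no classmate identities exposed. A minimal sketch with invented scores:

```python
# Sketch of the "how am I doing vs. the class" view: a student sees their
# own score against anonymized aggregates, never classmates' identities.
# All scores here are made-up example data.
def class_context(my_score: float, all_scores: list[float]) -> dict:
    below = sum(1 for s in all_scores if s < my_score)
    return {
        "my_score": my_score,
        "class_average": round(sum(all_scores) / len(all_scores), 1),
        "percentile": round(100 * below / len(all_scores)),  # % scoring below me
    }

print(class_context(88, [62, 71, 88, 93, 76]))
```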