Introduction – Jonathan Chiang, IT Chief Engineer and Technical Project Manager for Mars Public Outreach Web Applications

With 2.5 billion dollars, 8 years of preparation, and the future of JPL at stake, we knew the world would be watching the night of August 6, 2012. Curiosity was also the most technologically advanced robotic probe ever imagined. The landing sequence consisted of techniques that had never been attempted before and were fraught with risk – especially during the final 7 minutes it took for the rover to descend through the Martian atmosphere and land on the surface of Mars. We knew from our previous experience with the Mars Exploration Rovers (Spirit and Opportunity) that hundreds of thousands of fascinated people would be visiting our websites. The experience with MER was highly successful, but costly. That was also nearly 10 years ago. Since then, the Internet has grown immensely, largely due to mobile devices and global connectivity. Events like the final Space Shuttle launch brought millions of concurrent visitors to the nasa.gov web page. We had to prepare for countless challenges and knew that the answer was in the cloud!
Three years ago, our CIO, Jim Rinaldi, gave my team and me the mandate to stop buying servers and storage. No more hardware! We took this as a challenge to explore how to utilize different cloud vendors, leverage their capabilities, and understand the costs to develop and deploy JPL applications.

What is the cloud to JPL? To us, the cloud is a natural extension of our data centers. It provides us seemingly endless compute and storage capabilities. It offers a level of availability and resiliency that would be impossible to duplicate. The cloud also extends the JPL network to the edges of the globe with geographically dispersed compute, storage, and networks. And if done correctly, it does all of this at a fraction of traditional IT costs. Our missions are utilizing the cloud for embarrassingly parallel compute jobs, image processing, content and software distribution, multimedia… the list continues to grow. We also believe with great confidence that the cloud can be as secure as or more secure than some of our own data centers.

In the past year, JPL has been focused on operationalizing the cloud. These efforts include implementing additional security controls, large-scale configuration and property management capabilities, and improved forensics and auditing capabilities. In many cases, we have more insight into what is running in the cloud than in our own data centers. Currently, JPL has granted Authorities to Operate in three different cloud venues. Presently, we are the only FFRDC or NASA Center that has granted ATOs to run workloads in the cloud. But we can’t limit ourselves to three cloud venues. The cloud landscape is constantly changing, so we continue to evaluate new providers, technologies, and services. With leadership from our CTO, Tom Soderstrom, we created the Cloud Computing Commodity Board at JPL, whose mission is to evaluate and rapidly on-board new cloud vendors so JPL missions can gain benefit at the time they need it most.
So it was an obvious decision to utilize the cloud to deliver to the world the multimedia content of our greatest endeavor yet: landing Curiosity on Mars. We had some very challenging requirements. The media being presented would be consumed by millions of people around the globe. In order to increase performance and reduce immense loads on our infrastructure, we needed a robust Content Distribution Network to help deliver our message. The infrastructure needed to be highly scalable and elastic. We would only use the capacity we needed for landing night, then gradually reduce our infrastructure to meet demand. The solution would also need to provide very large storage capacity for images from the spacecraft, telemetry data, and high-resolution video streaming. We also had to prepare for the unimaginable. The availability, scalability, and performance of the Mars websites were of the utmost importance, especially during the landing event.
We evaluated a number of cloud providers, but chose Amazon based on its capabilities and cost. Currently, Amazon by far offers the most robust set of tools for engineers and application developers. AWS is available in multiple regions in the US, Europe, Asia, and South America. This allowed us to design a highly available system that is geographically dispersed in the event of outages. Another great feature is that all AWS services can be accessed over HTTP using REST and SOAP protocols. All AWS services are also utility based, providing the elasticity we need to optimize cost. Here’s a brief overview of how these services were assembled to meet our requirements.

Amazon CloudFront, a content delivery network (CDN), distributes objects to so-called "edge locations" near the requester. It allowed us to extend our static content – videos, images, downloads, etc. – to storage resources in Europe, Asia, and South America.

Amazon Elastic Compute Cloud (EC2) provides scalable virtual private servers using Xen. EC2 provided us highly scalable server and networking capabilities to implement our infrastructure and content platform.

Amazon Simple Storage Service (S3) provides web-service-based storage. We utilized S3 to store and serve static content including images, telemetry, and videos. This removed a great amount of load from our system, since a large portion of our visits were for the newest images and videos.

Amazon Elastic Block Store (EBS) provides persistent block-level storage volumes for EC2.

Amazon Relational Database Service (RDS) provides a scalable database server with MySQL.

Amazon Route 53 provides a highly available and scalable Domain Name System (DNS) web service. It allowed us to dynamically route traffic to AWS regions in the event of outages.
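To make the S3-plus-CloudFront pattern concrete, here is a minimal sketch of how a newly downlinked image might be published to S3 with cache headers that let CloudFront edge locations absorb the load. The bucket name, key layout, and TTL values are illustrative assumptions, not the actual JPL configuration; the AWS calls use the standard boto3 SDK.

```python
def s3_key_for(sol, camera, filename):
    """Build a deterministic S3 key so images group by sol and camera.
    (Hypothetical layout for illustration.)"""
    return f"msl/images/sol{sol:05d}/{camera}/{filename}"

def cache_control_for(key):
    """Published images and videos never change, so let edges cache them
    for a day; JSON telemetry feeds update often, so keep the TTL short."""
    if key.endswith((".jpg", ".png", ".mp4")):
        return "max-age=86400"
    return "max-age=60"

def publish(s3_client, bucket, sol, camera, filename, body):
    """Upload one object; CloudFront then serves it from edge caches."""
    key = s3_key_for(sol, camera, filename)
    s3_client.put_object(
        Bucket=bucket,
        Key=key,
        Body=body,
        CacheControl=cache_control_for(key),
    )
    return key

if __name__ == "__main__":
    import boto3  # AWS SDK; needs real credentials to run against S3
    s3 = boto3.client("s3")
    publish(s3, "example-msl-media", 44, "mastcam", "0044MR000123.jpg", b"...")
```

Because the origin object carries the Cache-Control header, most landing-night requests for a popular image would be answered entirely from edge locations rather than the origin servers.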
The legacy Mars outreach websites were built long before the advent of cloud computing. They were designed to be hosted on physical servers, attached to local storage and database servers in the same data center. To meet the paradigm shift, we leveraged cloud design principles to re-engineer the system. To avoid large licensing costs in a scalable and elastic environment, we had to port the legacy software to the open source equivalent of ColdFusion – Railo. We also had to migrate to a file system that could be distributed at cloud scale – so we chose Gluster, an open source file storage software platform. We utilized RDS to dynamically scale our MySQL databases – and did so during the landing event. We utilized CloudFront and S3 buckets to distribute our static content, and Elastic Load Balancers and a large farm of application servers to deliver our dynamic content.
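The elasticity described above boils down to sizing the application-server farm to the current request rate, then shrinking it after landing night. The sketch below shows one way such a policy could look; the capacity figure, instance bounds, and Auto Scaling group name are assumptions made for illustration, not the values JPL used.

```python
import math

def desired_instances(requests_per_sec, per_instance_capacity=200,
                      minimum=2, maximum=100):
    """Size the farm to demand: enough instances to absorb the request
    rate, clamped between a small always-on floor and a cost ceiling.
    The 200 req/s per-instance figure is a placeholder assumption."""
    needed = math.ceil(requests_per_sec / per_instance_capacity)
    return max(minimum, min(maximum, needed))

if __name__ == "__main__":
    import boto3  # AWS SDK; needs real credentials and an existing group
    autoscaling = boto3.client("autoscaling")
    autoscaling.set_desired_capacity(
        AutoScalingGroupName="mars-app-servers",  # hypothetical name
        DesiredCapacity=desired_instances(10_000),
    )
```

Raising and lowering the desired capacity this way is what lets the infrastructure expand for the landing event and then be gradually reduced to match demand, paying only for what is used.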
Using Cloud Computing to Share Curiosity’s Landing with the World
Jonathan Chiang, IT Chief Engineer
Agenda
o What is the Cloud to JPL?
o Requirements for Curiosity Landing
o Re-engineered for the Cloud
o Comparison with Mars Exploration Rovers
What is the Cloud to JPL?
o The cloud is a natural extension of our own data centers
o The cloud can be as secure or more secure than some of our own data centers
o The landscape is changing – we continuously monitor new cloud providers, technologies and services
Outreach for the Curiosity Landing
Requirements for sharing the landing with the world:
o A robust Content Distribution Network (CDN)
o Scalability and elasticity of resources
o High availability and resiliency
Why Amazon Web Services?
o CloudFront – Content Delivery Network
o EC2 – Elastic Compute Cloud
o S3 – Simple Storage Service
o EBS – Elastic Block Store
o RDS – Relational Database Service
o Route 53 – Dynamic DNS
Re-engineered for the Cloud
[Diagram: Legacy System vs. Cloud System]
Comparison with MER
[Chart: Peak Throughput (Gbps) – MER 0.7, MSL 150; Total Data Served (TB) – MER 31, MSL 154]
Comparison with MER

                               MSL     MER     Increase
Total Data Served (TB)         154     31      5x
  Streaming                    123     -
  Mars Sites                   9       -
  Eyes on the Solar System     22      -
Peak request rate (/sec)       80
Peak throughput (Gbps)         150     0.7     214x
Peak hits per minute (M)       8
Peak hits per hour (M)         50