This document surveys cloud computing research and examples of using cloud infrastructure for scientific and technical workloads. It covers key cloud concepts such as elastic on-demand infrastructure and pay-as-you-go pricing, along with data processing examples on Amazon Web Services. Specific research examples include genome analysis, biomarker data warehousing, and high energy physics simulations. Security aspects of cloud computing, such as shared responsibility models and access controls, are also summarized.
Risk Management and Particle Accelerators: Innovating with New Compute Platfo... (Amazon Web Services)
What does risk modeling and analytics in financial services have in common with large scale computing in high energy physics? Come to this session to hear how financial services customers like Aon are taking advantage of new approaches like predictive analytics and AI/deep learning on AWS to perform risk modeling, and how Brookhaven National Laboratory is using tens of thousands of cores to do large scale grid computing for Monte Carlo simulations in high energy physics. We will also showcase how the CSIRO eHealth team in Australia is innovating with serverless architectures using AWS Lambda for personalized medicine and genomics.
Speaker: Adrian White, Sr SciCo Technical Manager, Amazon Web Services
NRP Engagement webinar - Running a 51k GPU multi-cloud burst for MMA with Ic... (Igor Sfiligoi)
NRP Engagement webinar: Description of the 380 PFLOP32s, 51k GPU multi-cloud burst that used HTCondor to run the IceCube photon propagation simulation.
Presented January 27th, 2020.
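For flavor, here is a minimal sketch of what a GPU job submission through the HTCondor Python bindings can look like; the executable name, resource requests, and job count are illustrative placeholders, not IceCube's actual configuration.

```python
# Minimal sketch of submitting GPU jobs via the HTCondor Python bindings.
# The executable name and resource numbers are illustrative placeholders.
import htcondor

sub = htcondor.Submit({
    "executable": "propagate_photons",   # hypothetical simulation binary
    "arguments": "$(ProcId)",            # each job gets its own index
    "request_gpus": "1",
    "request_cpus": "1",
    "request_memory": "4GB",
    "output": "job.$(ProcId).out",
    "error": "job.$(ProcId).err",
    "log": "burst.log",
})

schedd = htcondor.Schedd()               # talk to the local scheduler
result = schedd.submit(sub, count=1000)  # queue 1000 independent jobs
print("Submitted cluster", result.cluster())
```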
Learn from Accubits Technologies
High Performance Computing (HPC) most generally refers to aggregating computing power to deliver much higher performance than a typical desktop computer or workstation can provide, in order to solve large problems in science, engineering, or business.
AWS provides an artificial intelligence service development platform with cloud-based machine learning and deep learning technologies. With the AWS Deep Learning AMI you can run deep learning workloads, develop sophisticated custom AI models, and experiment with new algorithms. This session shows how to operate AMIs with open-source deep learning engines (such as Apache MXNet) cost-effectively on GPU-based instances and clusters using Spot Instances.
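As a concrete illustration of the cost-efficient setup described above, the boto3 sketch below requests a single GPU Spot Instance from a Deep Learning AMI; the AMI ID, key pair name, and price cap are placeholders to adapt to your own account and region.

```python
# Sketch: launch one GPU Spot Instance from a Deep Learning AMI via boto3.
# The AMI ID, key pair, and max price are placeholders, not real values.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

response = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",  # placeholder Deep Learning AMI ID
    InstanceType="p3.2xlarge",        # GPU-backed instance type
    MinCount=1,
    MaxCount=1,
    KeyName="my-key-pair",            # placeholder key pair name
    InstanceMarketOptions={
        "MarketType": "spot",
        "SpotOptions": {
            "MaxPrice": "1.00",            # USD/hour bid cap
            "SpotInstanceType": "one-time",
        },
    },
)
print(response["Instances"][0]["InstanceId"])
```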
Running a GPU burst for Multi-Messenger Astrophysics with IceCube across all ... (Igor Sfiligoi)
The San Diego Supercomputer Center (SDSC) and the Wisconsin IceCube Particle Astrophysics Center (WIPAC) at the University of Wisconsin–Madison successfully completed a computational experiment, as part of a multi-institution collaboration, that marshalled all GPUs (graphics processing units) available for sale worldwide across Amazon Web Services, Microsoft Azure, and Google Cloud Platform.
In all, some 51,500 GPUs were used during the approximately two-hour experiment, conducted on November 16, 2019 and funded under a National Science Foundation EAGER grant.
The experiment – completed just prior to the opening of the International Conference for High Performance Computing, Networking, Storage, and Analysis (SC19) in Denver, CO – was coordinated by Frank Würthwein, SDSC Lead for High-Throughput Computing, and Benedikt Riedel, Computing Manager for the IceCube Neutrino Observatory and Global Computing Coordinator at WIPAC. Igor Sfiligoi, SDSC’s lead scientific software developer for high-throughput computing, and David Schultz, a production software manager with IceCube, conducted the actual run.
This presentation was given at several booths during SC19 by Frank Würthwein.
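A rough back-of-the-envelope check, combining the 380 fp32 PFLOPS figure quoted for this run with the GPU count above:

$$\frac{380 \times 10^{15}\ \text{FLOPS}}{51{,}500\ \text{GPUs}} \approx 7.4\ \text{TFLOPS (fp32) per GPU,}$$

a plausible average for the mix of cloud GPU models available at the time.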
Characterizing network paths in and out of the Clouds (Igor Sfiligoi)
Cloud computing is becoming mainstream, with funding agencies moving beyond prototyping and starting to fund production campaigns, too. An important aspect of any production computing campaign is data movement, both incoming and outgoing. And while the performance and cost of VMs are relatively well understood, network performance and cost are not.
We thus embarked on a network characterization campaign, documenting traceroutes, latency, and throughput in various regions of the Amazon AWS, Microsoft Azure, and Google GCP clouds, both between cloud resources and major DTNs (data transfer nodes) in the Pacific Research Platform, including OSG data federation caches in the network backbone, and inside the clouds themselves. We also documented the cost incurred while doing so.
Presented at CHEP 2019.
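To give a flavor of the measurements involved, the sketch below times TCP connection setup to a handful of endpoints using only the Python standard library; the hostnames are placeholders, and the real campaign also covered traceroutes and sustained throughput.

```python
# Sketch: measure TCP connect latency to a few endpoints, as a stand-in
# for the latency part of the characterization campaign described above.
# The hostnames below are placeholders, not the endpoints from the paper.
import socket
import time

ENDPOINTS = [
    ("aws-endpoint.example.com", 443),
    ("azure-endpoint.example.com", 443),
    ("gcp-endpoint.example.com", 443),
]

def connect_latency_ms(host, port, timeout=3.0):
    """Return TCP connection setup time in milliseconds."""
    start = time.perf_counter()
    with socket.create_connection((host, port), timeout=timeout):
        return (time.perf_counter() - start) * 1000.0

for host, port in ENDPOINTS:
    try:
        print(f"{host}: {connect_latency_ms(host, port):.1f} ms")
    except OSError as exc:
        print(f"{host}: unreachable ({exc})")
```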
In this deck from CHEP 2019, Igor Sfiligoi from UCSD presents: Characterizing Network Paths in and out of the Clouds.
"Cloud computing is becoming mainstream, with funding agencies moving beyond prototyping and starting to fund production campaigns, too. An important aspect of any production computing campaign is data movement, both incoming and outgoing. And while the performance and cost of VMs is relatively well understood, the network performance and cost is not. We thus embarked on a network characterization campaign, documenting traceroutes, latency and throughput in various regions of Amazon AWS, Microsoft Azure and Google GCP Clouds, both between Cloud resources and major DTNs in the Pacific Research Platform, including OSG data federation caches in the network backbone, and inside the clouds themselves. We also documented the incurred cost while doing so. Along the way we discovered that network paths were often not what the major academic network providers thought they were, and we helped them in improving the situation, thus improving peering between academia and commercial cloud. In this talk we present the observed results, both during the initial test runs and the latest state of the art, as well as explain what it took to get there."
Watch the video: https://wp.me/p3RLHQ-lbO
Learn more: https://indico.cern.ch/event/773049/contributions/3473824/
Sign up for our insideHPC Newsletter: http://insidehpc.com/newsletter
"Building and running the cloud GPU vacuum cleaner"Frank Wuerthwein
This talk, describing the "Largest Cloud Simulation in History" (Jensen Huang at SC19), was given at the MAGIC meeting on Dec. 4th 2019. MAGIC stands for "Middleware and Grid Interagency Cooperation", and is a group within NITRD. Current federal agencies that are members of MAGIC include DOC, DOD, DOE, HHS, NASA, and NSF.
Time to Science, Time to Results: Accelerating Scientific Research in the Cloud (Amazon Web Services)
This session features ways customers have accelerated scientific research using the AWS Cloud.
Speaker: Brendan Bouffler, Scientific Computing, Amazon Web Services
(CMP303) ResearchCloud: CfnCluster and Internet2 for Enterprise HPC (Amazon Web Services)
Biogen built ResearchCloud for large-scale processing of research data. This extension of our infrastructure capability allows us to be more nimble, process more data, scale as needed, and collaborate with external organizations. In this session, learn about the design choices we took into account when building ResearchCloud. We cover our implementation of Internet2 and AWS Direct Connect, and the challenges we encountered when scaling to speeds of 10 gigabits per second. We also discuss the architecture of InstantHPC, which combines CfnCluster with GlusterFS using secure templates.
Advanced Container Management and Scheduling - DevDay Los Angeles 2017 (Amazon Web Services)
Amazon EC2 Container Service (ECS) is a highly scalable, high-performance container management service. You can use Amazon ECS to schedule the placement of containers across your cluster. You can also integrate your own scheduler or a third-party scheduler to meet business- or application-specific requirements.
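As a small example of custom placement, the boto3 sketch below runs tasks with a binpack-by-memory strategy; the cluster and task definition names are placeholders.

```python
# Sketch: run tasks on an ECS cluster with a binpack placement strategy,
# which packs tasks onto the fewest instances by remaining memory.
# Cluster and task definition names are placeholders.
import boto3

ecs = boto3.client("ecs", region_name="us-east-1")

response = ecs.run_task(
    cluster="my-cluster",          # placeholder cluster name
    taskDefinition="my-task:1",    # placeholder task definition and revision
    count=4,
    placementStrategy=[
        {"type": "binpack", "field": "memory"},
    ],
)
for task in response["tasks"]:
    print(task["taskArn"])
```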
In this video from ChefConf 2014 in San Francisco, Cycle Computing CEO Jason Stowe outlines the biggest challenge facing us today, Climate Change, and suggests how Cloud HPC can help find a solution, including ideas around Climate Engineering, and Renewable Energy.
"As proof points, Jason uses three use cases from Cycle Computing customers, including from companies like HGST (a Western Digital Company), Aerospace Corporation, Novartis, and the University of Southern California. It’s clear that with these new tools that leverage both Cloud Computing, and HPC – the power of Cloud HPC enables researchers, and designers to ask the right questions, to help them find better answers, faster. This all delivers a more powerful future, and means to solving these really difficult problems."
Watch the video presentation: http://insidehpc.com/2014/09/video-hpc-cluster-computing-64-156000-cores/
In this deck from the 2017 MVAPICH User Group, Adam Moody from Lawrence Livermore National Laboratory presents: MVAPICH: How a Bunch of Buckeyes Crack Tough Nuts.
"High-performance computing is being applied to solve the world's most daunting problems, including researching climate change, studying fusion physics, and curing cancer. MPI is a key component in this work, and as such, the MVAPICH team plays a critical role in these efforts. In this talk, I will discuss recent science that MVAPICH has enabled and describe future research that is planned. I will detail how the MVAPICH team has responded to address past problems and list the requirements that future work will demand."
Watch the video: https://wp.me/p3RLHQ-hp6
Session 2 - Virtual machines for compute scalability to speed up ... (Jürgen Ambrosi)
The CRUI Foundation and Microsoft are organizing a series of webinars under the Education Transformation Agreement framework. The initiative aims to address specific research topics through the most advanced Microsoft technologies already available to the universities and research institutions that have joined the agreement.
In this second session, we explain how the Microsoft Azure cloud platform can meet the need for scalability, cutting costs and reducing execution times in parallel computing. The session uses tools available in the cloud together with the software most commonly used in research.
High Performance Computing (HPC) has been driving technology advancements for many decades. HPC enables performance-demanding applications and workloads to solve complex problems while dramatically reducing time to solution. With a history of requiring very large data centers, HPC is now on the edge of a paradigm shift: the AWS Cloud gives customers access to near-infinite compute and storage resources without the overhead of running their own data centers. A vast number of HPC segments and verticals are already seeing great success running their workloads on AWS; Life Sciences, Financial Services, Energy & Geo Sciences, and Manufacturing are all successfully deploying their applications on AWS. In these two sessions we will discuss how AWS can help you run HPC workloads in the cloud. The first session is a general introduction to HPC on AWS.
High Performance Computing on AWS: Accelerating Innovation with virtually unl... (Amazon Web Services)
In this session, learn how you can innovate without limits, reduce costs, and get your results to market faster by moving your HPC workloads to AWS. Learn how HPC on AWS lets your research needs dictate your HPC architecture requirements, not the other way around. Understand how to create, operate, and tear down secure, well-optimized HPC clusters in minutes.
Join the product and cloud computing leaders of Netflix to discuss why and how the company moved to Amazon Web Services. From early experiments in media transcoding, to building the operational skills to optimize costs, to the creation of the Simian Army, this session guides business leaders through real-world examples of evaluating and adopting cloud computing.
Slides from the High Performance Cloud Computing tutorial at Supercomputing 2011 in Seattle. Additional materials available from: cloudsupercomputing.net.
Architecture talk aimed at a well-informed developer audience (the QConSF Real Use Cases for NoSQL track), focused mainly on availability. It skips the Netflix cloud migration material covered in other talks.
Web Scale Applications using the NetflixOSS Cloud Platform (Sudhir Tonse)
Web Scale Applications using the NetflixOSS Cloud Platform. Infographics on IaaS, PaaS, and SaaS. Commandments of developing a cloud-based distributed application.
Securing your Kubernetes cluster: a step-by-step guide to success! (KatiaHIMEUR1)
Today, after several years of existence, with an extremely active community and an ultra-dynamic ecosystem, Kubernetes has established itself as the de facto standard in container orchestration. Thanks to a wide range of managed services, it has never been easier to set up a ready-to-use Kubernetes cluster.
However, this ease of use means that the subject of security in Kubernetes is often left for later, or even neglected. This exposes companies to significant risks.
In this talk, I'll show you step-by-step how to secure your Kubernetes cluster for greater peace of mind and reliability.
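One typical early step in such a guide is applying a default-deny ingress NetworkPolicy per namespace. Here is a minimal sketch using the official Kubernetes Python client; the namespace name is an assumption.

```python
# Sketch: apply a default-deny-ingress NetworkPolicy to one namespace,
# a common first hardening step. Requires a reachable cluster and the
# 'kubernetes' Python client; the namespace name is a placeholder.
from kubernetes import client, config

config.load_kube_config()  # authenticate with the local kubeconfig

policy = client.V1NetworkPolicy(
    metadata=client.V1ObjectMeta(name="default-deny-ingress"),
    spec=client.V1NetworkPolicySpec(
        pod_selector=client.V1LabelSelector(),  # empty selector = all pods
        policy_types=["Ingress"],               # no ingress rules -> deny all
    ),
)

client.NetworkingV1Api().create_namespaced_network_policy(
    namespace="my-app",  # placeholder namespace
    body=policy,
)
```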
Dev Dives: Train smarter, not harder – active learning and UiPath LLMs for do... (UiPathCommunity)
💥 Speed, accuracy, and scaling – discover the superpowers of GenAI in action with UiPath Document Understanding and Communications Mining™:
See how to accelerate model training and optimize model performance with active learning
Learn about the latest enhancements to out-of-the-box document processing – with little to no training required
Get an exclusive demo of the new family of UiPath LLMs – GenAI models specialized for processing different types of documents and messages
This is a hands-on session specifically designed for automation developers and AI enthusiasts seeking to enhance their knowledge in leveraging the latest intelligent document processing capabilities offered by UiPath.
Speakers:
👨🏫 Andras Palfi, Senior Product Manager, UiPath
👩🏫 Lenka Dulovicova, Product Program Manager, UiPath
Encryption in Microsoft 365 - ExpertsLive Netherlands 2024 (Albert Hoitingh)
In this session I delve into the encryption technology used in Microsoft 365 and Microsoft Purview, including the concepts of Customer Key and Double Key Encryption.
Accelerate your Kubernetes clusters with Varnish Caching (Thijs Feryn)
A presentation about the usage and availability of Varnish on Kubernetes. This talk explores the capabilities of Varnish caching and shows how to use the Varnish Helm chart to deploy it to Kubernetes.
This presentation was delivered at K8SUG Singapore. See https://feryn.eu/presentations/accelerate-your-kubernetes-clusters-with-varnish-caching-k8sug-singapore-28-2024 for more details.
GDG Cloud Southlake #33: Boule & Rebala: Effective AppSec in SDLC using Deplo... (James Anderson)
Effective Application Security in Software Delivery lifecycle using Deployment Firewall and DBOM
The modern software delivery process (the CI/CD process) includes many tools, distributed teams, open-source code, and cloud platforms. A constant focus on speed to release software to market, combined with traditionally slow and manual security checks, has caused gaps in continuous security, an important piece of the software supply chain. Today, organizations feel more susceptible to external and internal cyber threats due to the vast attack surface of their application supply chains and the lack of end-to-end governance and risk management.
The software team must secure its software delivery process to avoid vulnerability and security breaches. This needs to be achieved with existing tool chains and without extensive rework of the delivery processes. This talk will present strategies and techniques for providing visibility into the true risk of the existing vulnerabilities, preventing the introduction of security issues in the software, resolving vulnerabilities in production environments quickly, and capturing the deployment bill of materials (DBOM).
Speakers:
Bob Boule
Robert Boule is a technology enthusiast with a passion for making things work, along with a knack for helping others understand how things work. He brings around 20 years of solution engineering experience in application security, software continuous delivery, and SaaS platforms. He is known for his dynamic presentations on CI/CD and application security integrated into the software delivery lifecycle.
Gopinath Rebala
Gopinath Rebala is the CTO of OpsMx, where he has overall responsibility for the machine learning and data processing architectures for Secure Software Delivery. Gopi also has a strong connection with our customers, leading design and architecture for strategic implementations. Gopi is a frequent speaker and well-known leader in continuous delivery and integrating security into software delivery.
Transcript: Selling digital books in 2024: Insights from industry leaders - T... (BookNet Canada)
The publishing industry has been selling digital audiobooks and ebooks for over a decade and has found its groove. What’s changed? What has stayed the same? Where do we go from here? Join a group of leading sales peers from across the industry for a conversation about the lessons learned since the popularization of digital books, best practices, digital book supply chain management, and more.
Link to video recording: https://bnctechforum.ca/sessions/selling-digital-books-in-2024-insights-from-industry-leaders/
Presented by BookNet Canada on May 28, 2024, with support from the Department of Canadian Heritage.
Kubernetes & AI - Beauty and the Beast !?! @KCD Istanbul 2024Tobias Schneck
As AI technology pushes into IT, I found myself wondering, as an "infrastructure container Kubernetes guy", how this fancy AI technology gets managed from an infrastructure operations view. Is it possible to apply our beloved cloud native principles as well? What benefits could the two technologies bring to each other?
Let me take these questions and provide a short journey through existing deployment models and use cases for AI software. Using practical examples, we discuss what cloud/on-premises strategy we may need to apply them to our own infrastructure and make them work from an enterprise perspective. I give an overview of infrastructure requirements and technologies, and of what could benefit or limit your AI use cases in an enterprise environment. An interactive demo will give you some insights into the approaches I already have working for real.
37. BLAT @ U. PENN: Map 100 million 100-base paired-end reads. A quad-core machine with 5 GB of RAM would take 16 days; 30 high-memory instances finished the job in 32 hours for $195.
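The speedup implied by these slide figures is easy to verify:

$$16\ \text{days} \times 24\ \tfrac{\text{h}}{\text{day}} = 384\ \text{h}, \qquad \frac{384\ \text{h}}{32\ \text{h}} = 12\times\ \text{faster, for about \$195.}$$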
38. HEAVY-ION COLLISIONS @ RHIC: Problem: a quark physics conference was imminent, but no compute resources were handy. Solution: the Nimbus context broker allowed researchers to provision 300 nodes and get the simulations done.