National Center for Manufacturing Sciences (NCMS) and UberCloud are excited to announce the Michigan “round” of the HPC Experiment as part of NCMS’ Grid Initiative. Like previous UberCloud Experiment rounds, this community-driven effort helps engineers explore the end-to-end process of using technical computing for product design and development.
During this webinar you can learn more about the Michigan 2.0 Grid Initiative and how to apply for the program.
Dennis Nagy, Principal at BeyondCAE, talks about the future of the MCAE engineering simulation industry and the impact of Cloud computing on engineering simulations.
This webinar was recorded and made available as an UberCloud University TechTalk.
Engineering Simulation: Where are we going? (hpcexperiment)
Engineering Simulations is a two-part webinar series discussing the engineering simulation market's roots, its current status, and its future.
Engineering Simulation Meets the Cloud (Burak Yenier)
Dennis Nagy talks about the impact of Cloud computing on the evolution of the engineering simulations market. He shares his insight on how and why Cloud computing will change how engineering simulations are done.
How Do I Understand Deep Learning Performance? (NVIDIA)
Introduced at GTC 2018, PLASTER outlines critical problems in machine learning. Learn how to tackle these problems to better deliver AI-based services.
See the latest in acceleration, deep and machine learning, and more by clicking through our curated experience of International Supercomputing 2016. Through an OpenPOWER lens, we show you the best news and conversations that took place at ISC, June 20-23, 2016 in Frankfurt, Germany.
Building big data pipelines—lessons learned (Profinit)
What is the power of business departments? What is missing in communication between layers responsible for building big data solutions? What mistakes can happen when IT departments are too proactive in creating solutions for big data?
In this deck from the HPC User Forum in Milwaukee, Tim Barr from Cray presents: Perspective on HPC-enabled AI.
"Cray’s unique history in supercomputing and analytics has given us front-line experience in pushing the limits of CPU and GPU integration, network scale, tuning for analytics, and optimizing for both model and data parallelization. Particularly important to machine learning is our holistic approach to parallelism and performance, which includes extremely scalable compute, storage and analytics."
Watch the video: https://wp.me/p3RLHQ-hpw
Learn more: http://cray.com
Sign up for our insideHPC Newsletter: http://insidehpc.com/newsletter
Enterprise-level Green ICT: Using virtualization to balance energy economics (IJARIDEA Journal)
Abstract— The computing industry has been a significant contributor to global warming ever since its inception. Performance maximization per unit cost has remained the prime focus of academic and industrial research alike, ignoring environmental impacts in the process. However, the global energy crisis has inevitably pushed power and energy management up the priority list of computing design and management activities, today for purely economic reasons. Green IT lays emphasis on including the dimensions of environmental sustainability, the offsets of energy efficiency, and the total cost of disposal and recycling. A green computing initiative must be adaptive and flexible enough to address problems that keep increasing in size and complexity with time. Cloud computing concepts can be applied to reduce e-waste generation. The service-oriented architecture lends itself to incorporating green computing as a process rather than a product. Reusability, extensibility, and flexibility are some of the key characteristics inherent to the cloud that directly help address vertical-specific challenges to reducing energy consumption in the long run.
Keywords— Cloud computing, Electronic waste, Green Information Technology, Service oriented architecture.
SPARK USE CASE- Distributed Reinforcement Learning for Electricity Market Bi... (Impetus Technologies)
SPARK SUMMIT SESSION -
A majority of the electricity in the U.S. is traded in independent system operator (ISO) based wholesale markets. ISO-based markets typically function in a two-step settlement process with day-ahead (DA) financial settlements followed by physical real-time (spot) market settlements for electricity. In this work, we focus on obtaining equilibrium bidding strategies for electricity generators in DA markets. Electricity prices in DA markets are determined by the ISO, which matches competing supply offers from power generators with demand bids from load serving entities. Since there are multiple generators competing with one another to supply power, this can be modeled as a competitive Markov decision problem, which we solve using a reinforcement learning approach. For power networks of realistic sizes, the state-action space could explode, making the RL procedure computationally intensive. This has motivated us to solve the above problem over Spark. The talk provides the following takeaways:
1. Modeling the day-ahead market as a Markov decision process
2. Code sketches to show the Markov decision process solution over Spark and Mahout over Apache Tez
3. Performance results comparing Mahout over Apache Tez and Spark.
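The MDP-plus-RL formulation above can be sketched in miniature. The toy Python example below (all names and numbers are illustrative assumptions, not from the talk, and it runs on a single machine rather than over Spark) treats a generator's day-ahead bid as the action in a one-state Markov decision problem against a randomly bidding rival, and learns a bidding policy with epsilon-greedy Q-learning:

```python
import random

# Toy day-ahead bidding game: our generator picks a bid price (the action),
# a rival bids at random, the lower bid is dispatched, and the dispatched
# generator is paid the marginal (higher) bid. All numbers are made up.
BIDS = [20.0, 30.0, 40.0]  # candidate bid prices ($/MWh)
COST = 25.0                # our marginal cost of generation ($/MWh)

def clear(our_bid, rival_bid):
    """ISO market clearing: lower bid wins and is paid the higher bid."""
    if our_bid <= rival_bid:
        return rival_bid - COST  # dispatched: profit per MWh
    return 0.0                   # not dispatched: no profit

def learn_policy(episodes=30000, eps=0.2, seed=0):
    """Epsilon-greedy Q-learning; with one state, Q reduces to a value per bid."""
    rng = random.Random(seed)
    q = {b: 0.0 for b in BIDS}  # estimated expected profit of each bid
    n = {b: 0 for b in BIDS}    # times each bid was tried
    for _ in range(episodes):
        explore = rng.random() < eps
        bid = rng.choice(BIDS) if explore else max(q, key=q.get)
        reward = clear(bid, rng.choice(BIDS))  # rival bids uniformly at random
        n[bid] += 1
        q[bid] += (reward - q[bid]) / n[bid]   # incremental sample-average update
    return q
```

In this toy, underbidding at 20 risks being dispatched below cost, so the middle bid maximizes expected profit, and the learned Q-values reflect that. A Spark version of this idea, as the talk describes, would distribute the simulation of episodes and the value updates across the cluster to cope with the state-action explosion of realistic power networks.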
Machine learning at the edge is gaining traction. Tech companies like Google, Amazon, Nvidia, Intel and ARM all have their solutions and are betting on the technology. Communities have recently started forming around this movement.
Applications that require privacy, speed, autonomy and power efficiency greatly benefit from these new inference accelerators. This will drive decision making at the edge.
The Hive Think Tank: Rendezvous Architecture Makes Machine Learning Logistics... (The Hive)
Think Tank Event 10/23/2017, hosted by The Hive and presented by Ted Dunning, Chief Application Architect of MapR Technologies and Ellen Friedman of MapR Technologies.
Digital Transformation - #StrataData London 2017 - Data101 (Ellen Friedman)
Presented at Strata Data London conference May 2017 in the Data 101 track, this presentation explores what is needed in planning, architecture, and cultural organization for effective digital transformation.
7 Habits for Big Data in Production - keynote Big Data London Nov 2018 (Ellen Friedman)
You can improve your chances for success with data-intensive, large-scale applications (AI, machine learning, and analytics) in production.
This keynote presentation from Big Data London shows you how.
The RECAP Project: Large Scale Simulation Framework (RECAP Project)
In this presentation, Sergej Svorobej (DCU) gave a brief overview of RECAP and introduced the large scale simulation framework used in the project. The event was held in conjunction with the National Conference on Cloud Computing and Commerce (http://2018.nc4.ie/) and took place April 10, 2018 in Dublin, Ireland.
Learn more about RECAP: https://recap-project.eu/
What Makes Machine Learning Work? Berlin Buzzwords 2018 #bbuzz talk (Ellen Friedman)
What matters for getting real business value from machine learning and AI? It's not just the algorithm or the model that's important - you also need to handle the logistics of data and model management, meet SLAs, have a way to act on insights, and use flexible organization at the human level. This talk gives you key tips for success.
UberCloud HPC Experiment Introduction for Beginners (hpcexperiment)
UberCloud HPC Experiment Introduction for Beginners.
What is the HPC Experiment
How the HPC Experiment works
How to participate in the HPC Experiment
And an example project
This whitepaper details the use of High Performance Computing (HPC) in Aerospace & Defense, Earth Sciences, Education and Research, and Financial Services, among others...
The UberCloud - From Project to Product - From HPC Experiment to HPC Marketpl... (Wolfgang Gentzsch)
The UberCloud online marketplace for engineers and scientists to discover, try, and buy compute power on demand, in the cloud. Starting with free experiments in the cloud, including application software, cloud hardware, and expertise. Learning by doing how to use your application in the cloud.
info.theubercloud.com/case-studies-and-resources
Presentation by Bruno Schulze, Senior Researcher / Professor at Laboratório Nacional de Computação Científica (LNCC) at Cloudscape Brazil 2017 & WCN 2017
Extending open source and hybrid cloud to drive OT transformation - Future Oi... (John Archer)
A look at ESG concerns and the agility needed to address pressures to transform energy organizations through decarbonization. Presented at the Future Oil and Gas conference, November 2021.
BUILDING A PRIVATE HPC CLOUD FOR COMPUTE AND DATA-INTENSIVE APPLICATIONS (ijccsa)
Traditional HPC (High Performance Computing) clusters are best suited for well-formed calculations. The orderly, batch-oriented HPC cluster offers maximal potential for performance per application, but limits resource efficiency and user flexibility. An HPC cloud can host multiple virtual HPC clusters, giving scientists unprecedented flexibility for research and development. With the proper incentive model, resource efficiency will be automatically maximized. In this context, there are three new challenges. The first is virtualization overhead. The second is the administrative complexity for scientists managing the virtual clusters. The third is the programming model: existing HPC programming models were designed for dedicated, homogeneous parallel processors, while the HPC cloud is typically heterogeneous and shared. This paper reports on the practice and experience of building a private HPC cloud using a subset of a traditional HPC cluster. We report our evaluation criteria using Open Source software, and performance studies for compute-intensive and data-intensive applications. We also report the design and implementation of a Puppet-based virtual cluster administration tool called HPCFY. In addition, we show that even though virtualization overhead is present, efficient scalability for virtual clusters can be achieved by understanding the effects of virtualization overheads on various types of HPC and Big Data workloads. We aim to provide a detailed experience report to the HPC community, to ease the process of building a private HPC cloud using Open Source software.
CWIN16 UK Event - The Future of Infrastructure (Gunnar Menzel)
What technologies made the biggest impact, and which ones will impact us in the future? Will technology advances slow down, stay the same, or speed up? What trends and technologies should I consider?
The digital agenda, shifting business models, and the need for speed at lower cost are impacting, shaping, and forming new technologies, creating new opportunities at an ever-increasing pace.
During this 30-minute presentation, Gunnar outlines the key infrastructure-related trends and technologies that are, and will be, key going forward.
CAE Simulations for Automotive in the Cloud (The UberCloud)
Automotive CAE simulations can be complex, compute-intensive, and heavily reliant on parallel computing. The advent of cloud computing has made access to compute resources fast and on-demand. But what are the challenges of using the cloud for CAE? These slides explore this topic, look at case studies from the automotive industry, and show how automotive engineers can benefit from the cloud to speed up their product development cycle.
Virtual Human Brain Simulations with Abaqus in the Cloud (The UberCloud)
UberCloud, Dassault Systèmes Simulia, and Advania Data Centers presentation about the award-winning project: HPC Cloud Simulation of Neuromodulation in Schizophrenia. Learn how simulation and high performance computing in the cloud play a key role in accelerating personalized healthcare.
The Brain Neuromodulation project represents a breakthrough in demonstrating the high value of computational modeling and simulation in improving the clinical application of non-invasive electro-stimulation of the human brain in schizophrenia and the potential to apply this technology to the treatment of other neuropsychiatric disorders such as depression and Parkinson’s disease. With the addition of HPC, clinicians can now precisely and non-invasively target regions of the brain without disrupting nearby healthy brain regions.
2018 Hyperion HPC Innovation Excellence Award: UberCloud, the National Institute of Mental Health & Neuro Sciences (NIMHANS) in Bangalore, Dassault Systèmes Simulia, Advania Data Centers, Hewlett Packard Enterprise and Intel won the 2018 Hyperion HPC Innovation Excellence Award for their Neuromodulation Project, based on computer simulations of non-invasive transcranial electro-stimulation of the human brain in schizophrenia.
High Performance Computing (HPC) and Engineering Simulations in the Cloud (The UberCloud)
UberCloud customer workshop for engineers and scientists and their software providers, discussing cloud challenges and their solutions, based on novel UberCloud software container technology that provides access to cloud resources, engineering applications, and data, on demand, at your fingertips.
info.theubercloud.com/case-studies-and-resources
UberCloud: From Experiment to Marketplace (The UberCloud)
In this deck, Wolfgang Gentzsch presents: UberCloud - From Experiment to Marketplace.
"UberCloud is the online community and marketplace where engineers and scientists discover, try, and buy Computing Power as a Service, from Clouds and even from Supercomputing Centers around the world. Engineers and scientists can explore and discuss how to use this computing power to solve their demanding problems. The UberCloud was launched in July 2012 by Burak Yenier and Wolfgang Gentzsch in Silicon Valley. The early idea was to explore the roadblocks of Cloud Computing and find solutions, with a crowd-sourcing approach, together with our engineering and scientific community. Why did we later call it The UberCloud? Because, after just one year in operation, more than 50 HPC Cloud providers and Supercomputing Centers of all flavors had joined the UberCloud initiative, providing free computing cycles for your experiments and commercial computing cycles for your business: UberCloud – the community and marketplace platform to discover, try, and buy computing services."
Learn more: https://www.theubercloud.com/about-ubercloud/
info.theubercloud.com/case-studies-and-resources
Dennis Nagy talks about the impact of Cloud computing on the evolution of the engineering simulations market. He shares his insight on how and why Cloud computing will change how engineering simulations are done.
www.theubercloud.com
We discuss engineering and scientific computing in the Cloud. Users today have three major computing choices: workstations, servers, and the cloud. We compare the benefits and challenges of each, and present a solution: the online UberCloud community, experiment, and marketplace for engineers and scientists to discover, try, and buy compute power on demand, in the cloud. Our approach of application containerization and tight software/hardware integration removes many of the known cloud roadblocks.
www.theubercloud.com
Kubernetes & AI - Beauty and the Beast!?! @KCD Istanbul 2024 (Tobias Schneck)
As AI technology pushes into IT, I wondered, as an “infrastructure container Kubernetes guy”, how this fancy AI technology gets managed from an infrastructure operations point of view. Is it possible to apply our lovely cloud-native principles as well? What benefits could both technologies bring to each other?
Let me take these questions and guide you on a short journey through existing deployment models and use cases for AI software. Using practical examples, we discuss what cloud/on-premises strategy we may need to apply AI to our own infrastructure and make it work from an enterprise perspective. I want to give an overview of infrastructure requirements and technologies, and of what could benefit or limit your AI use cases in an enterprise environment. An interactive demo will give you some insights into the approaches I have already got working for real.
Let's dive deeper into the world of ODC! Ricardo Alves (OutSystems) will join us to tell all about the new Data Fabric. After that, Sezen de Bruijn (OutSystems) will get into the details on how to best design a sturdy architecture within ODC.
PHP Frameworks: I want to break free (IPC Berlin 2024) (Ralf Eggert)
In this presentation, we examine the challenges and limitations of relying too heavily on PHP frameworks in web development. We discuss the history of PHP and its frameworks to understand how this dependence has evolved. The focus will be on providing concrete tips and strategies to reduce reliance on these frameworks, based on real-world examples and practical considerations. The goal is to equip developers with the skills and knowledge to create more flexible and future-proof web applications. We'll explore the importance of maintaining autonomy in a rapidly changing tech landscape and how to make informed decisions in PHP development.
This talk encourages a more independent use of PHP frameworks, moving towards more flexible and future-proof PHP development.
Connector Corner: Automate dynamic content and events by pushing a button (DianaGray10)
Here is something new! In our next Connector Corner webinar, we will demonstrate how you can use a single workflow to:
Create a campaign using Mailchimp with merge tags/fields
Send an interactive Slack channel message (using buttons)
Have the message received by managers and peers along with a test email for review
But there’s more:
In a second workflow supporting the same use case, you’ll see:
Your campaign sent to target colleagues for approval
If the “Approve” button is clicked, a Jira/Zendesk ticket is created for the marketing design team
But—if the “Reject” button is pushed, colleagues will be alerted via Slack message
Join us to learn more about this new, human-in-the-loop capability, brought to you by Integration Service connectors.
And...
Speakers:
Akshay Agnihotri, Product Manager
Charlie Greenberg, Host
Smart TV Buyer Insights Survey 2024 by 91mobiles.pdf (91mobiles)
91mobiles recently conducted a Smart TV Buyer Insights Survey in which we asked over 3,000 respondents about the TV they own, aspects they look at on a new TV, and their TV buying preferences.
The Art of the Pitch: WordPress Relationships and Sales (Laura Byrne)
Clients don’t know what they don’t know. What web solutions are right for them? How does WordPress come into the picture? How do you make sure you understand scope and timeline? What do you do if something changes?
All these questions and more will be explored as we talk about matching clients’ needs with what your agency offers without pulling teeth or pulling your hair out. Practical tips, and strategies for successful relationship building that leads to closing the deal.
Software Delivery At the Speed of AI: Inflectra Invests In AI-Powered Quality (Inflectra)
In this insightful webinar, Inflectra explores how artificial intelligence (AI) is transforming software development and testing. Discover how AI-powered tools are revolutionizing every stage of the software development lifecycle (SDLC), from design and prototyping to testing, deployment, and monitoring.
Learn about:
• The Future of Testing: How AI is shifting testing towards verification, analysis, and higher-level skills, while reducing repetitive tasks.
• Test Automation: How AI-powered test case generation, optimization, and self-healing tests are making testing more efficient and effective.
• Visual Testing: Explore the emerging capabilities of AI in visual testing and how it's set to revolutionize UI verification.
• Inflectra's AI Solutions: See demonstrations of Inflectra's cutting-edge AI tools like the ChatGPT plugin and Azure Open AI platform, designed to streamline your testing process.
Whether you're a developer, tester, or QA professional, this webinar will give you valuable insights into how AI is shaping the future of software delivery.
JMeter webinar - integration with InfluxDB and Grafana (RTTS)
Watch this recorded webinar about real-time monitoring of application performance. See how to integrate Apache JMeter, the open-source leader in performance testing, with InfluxDB, the open-source time-series database, and Grafana, the open-source analytics and visualization application.
In this webinar, we will review the benefits of leveraging InfluxDB and Grafana when executing load tests and demonstrate how these tools are used to visualize performance metrics.
Length: 30 minutes
Session Overview
-------------------------------------------
During this webinar, we will cover the following topics while demonstrating the integrations of JMeter, InfluxDB and Grafana:
- What out-of-the-box solutions are available for real-time monitoring JMeter tests?
- What are the benefits of integrating InfluxDB and Grafana into the load testing stack?
- Which features are provided by Grafana?
- Demonstration of InfluxDB and Grafana using a practice web application
To view the webinar recording, go to:
https://www.rttsweb.com/jmeter-integration-webinar
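The out-of-the-box integration the webinar covers is JMeter's built-in Backend Listener with its InfluxDB client. A minimal sketch of how the pieces fit together (the plan file name, application name, and InfluxDB URL below are illustrative placeholders, not values from the webinar):

```shell
# Run the test plan in non-GUI mode; a Backend Listener inside the plan
# streams live metrics to InfluxDB while raw results land in results.jtl.
# Grafana then visualizes the same InfluxDB database on a dashboard.
jmeter -n -t loadtest.jmx -l results.jtl

# Typical Backend Listener settings inside loadtest.jmx:
#   Backend Listener implementation:
#     org.apache.jmeter.visualizers.backend.influxdb.InfluxdbBackendListenerClient
#   influxdbUrl:  http://localhost:8086/write?db=jmeter   # assumed local InfluxDB
#   application:  practice-web-app                        # tag used in Grafana queries
#   measurement:  jmeter
#   summaryOnly:  false                                   # send per-sampler metrics too
#   percentiles:  90;95;99                                # response-time percentiles
```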
UberCloud at UCC Dresden
1. The UberCloud
Paving the way to
High Performance Computing as a Service
International Conference on Utility and Cloud Computing
December 9 – 12, Dresden, Germany
Wolfgang Gentzsch and Burak Yenier
2. What is High Performance Computing?
Modeling the World on the computer …
… in order to allow for Virtual Product Development …
F = ma
E = mc²
∂U/∂t + U ∂U/∂x = 0
iℏ ∂ψ/∂t = Hψ
3. What is High Performance Computing?
… or for scientific insights
Combustion
Cosmology
Climate
Environment
4. HPC is needed to stay competitive
The digital manufacturing engineer has
several options to use High Performance
Computing (HPC):
HPC on the Desktop: over 90% of engineers
HPC on the Server: about 5% of engineers
HPC as a Service: in the Cloud; less than 1%
5. Workstations: limited capacity
Low-end workstations and PCs are important for
daily design and development work, but
50+ % of users are dissatisfied with their computing
capacity*
Too slow, e.g. jobs run over night or a whole week
Too small, detailed geometry and physics don’t fit into
memory
The number of jobs is limited, which affects the quality of
the final result
* Source: http://www.compete.org/
6. Servers: expensive and complex
For SMEs, buying and using large-scale HPC systems is
expensive and complex …
7. HPC as a Service: benefits & challenges
HPC as a Service (in the Cloud) offers flexibility, business
agility, scaling up and down, and pay-per-use (OPEX instead
of CAPEX), but
It’s a new business and working paradigm
Security, privacy, trust in the service provider
Intellectual property
Software licensing
Heavy data transfers
9. The UberCloud HPC Experiment
An open, voluntary, collaborative community
Objective:
Making HPC as a Service available, for everybody, on demand
How?
SMEs and their engineering applications
explore the end-to-end process
of using remote computing resources,
as a service, on demand, at their fingertips,
and learn how to resolve the roadblocks.
10. The End-User’s Benefits
Free HPC Experiment, on-demand access to hardware, software,
and expertise, with a one stop resource access experience
No hunting for resources in the complex emerging cloud market
Professional match-making of end-users with service providers
Perfectly tuned end-to-end, step-by-step process to HPC Cloud
Lowering barriers & risks for frictionless entry into HPC Cloud
Crowdsourcing: End-Users build relationships with other
community members who actively contribute to improvements
11. How does the Experiment work?
The End-User joins the experiment
A Software Vendor joins
We select a Team Expert
We suggest a Resource Provider
The team is ready to go
… 22 steps in Basecamp's virtual team office
Finally, the team writes the case study
12. Where are we with the experiment?
Started in August 2012; today there are 700+ participating organizations and individuals
Participants come from 66 countries
Round 4 started September 1, with 42 new teams already
124 teams have been formed in Rounds 1–4
Registration at:
www.hpcexperiment.com
www.cfdexperiment.com
www.compbioexperiment.com
www.bigdataexperiment.com
13. UberCloud community website
With social network and crowdsourcing features
Forums, Q&A, discussions, newsletters,…
Feature stories: HPCwire, Desktop Engineering, Bio-IT…
UberCloud University with free and paid lectures
UberCloud HPC Experiment free trial service
UberCloud Exhibit services directory
List of currently ongoing team projects, their status, and the organizations involved
List of upcoming conferences with a 'meet me there' button
16. Finally: The UberCloud Marketplace
Crowdsourcing and social networking: provide the community web platform to discover and try HPCaaS, address pain points, facilitate adoption, and harness collective intelligence
Marketplace for computing-related services: 20+ million engineers, scientists, and their service providers can list, discover, try, and sell/purchase
IaaS: HPC centers and public cloud providers
SaaS: open source and commercial technical software
Expertise: specialized HPC, software, technical know-how
17. Cloud Computing Reality Check
UberCloud Compendium, sponsored by Intel
25 selected use cases from 60 teams in Rounds 1 & 2
Google: "ubercloud compendium"
18. Team 2: Simulating a new probe design for a medical device
Front End + 2 GPU Solvers In Action
22. Team 26: Simulating Stent Deployment
Using SIMULIA's Abaqus/Standard and RemoteViz software from NICE to run CAE on SGI Cyclone™
Assessment of a fictitious balloon-expandable stent design: deployment, physiological pulsatile loading, and both axial and radial compression
23. Team 26 – Team Members
End User: anonymous global designer and manufacturer of sterile medical products
CAE Software Provider: Matt Dunbar, Chief Architect, SIMULIA
RemoteViz Software: NICE Desktop Cloud Visualization (DCV)
HPC/CAE Expert: Scott Shaw, Senior Applications Engineer, SGI
Resource Provider: SGI Cyclone. Tony DeVarco, Senior Manager for Strategic Partners and Cloud Computing at SGI; Eugene Kremenetsky, Systems Engineering Technical Lead at SGI
Team Mentor: Gregory Shirin from the HPC Experiment team
24. Challenges and Solutions
Information security and privacy: protecting the user's intellectual property and guarding raw data, processing models, and resulting information. Document your security requirements and select the right provider.
Internet too slow for heavy data transfers: don't ship every result, just the important ones; use remote visualization; if necessary, FedEx the data overnight on a USB hard drive.
Incompatible software licensing models: ISVs have to develop compatible on-demand software licensing models.
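The trade-off between uploading heavy result data and shipping a drive overnight can be checked with a back-of-the-envelope calculation. The link speed, data sizes, and 24-hour courier time below are illustrative assumptions, not figures from the experiment:

```python
# Rough check: when does overnight shipping of a USB drive beat
# uploading results over the internet? All numbers are assumptions.

def transfer_hours(data_gb: float, mbps: float) -> float:
    """Hours needed to move data_gb gigabytes over an mbps-megabit/s link."""
    bits = data_gb * 8e9              # gigabytes -> bits
    return bits / (mbps * 1e6) / 3600 # seconds -> hours

SHIPPING_HOURS = 24                   # assumed overnight-courier delivery time

for data_gb in (10, 100, 1000):
    hours = transfer_hours(data_gb, mbps=50)  # assumed 50 Mbit/s uplink
    choice = "ship the drive" if hours > SHIPPING_HOURS else "upload"
    print(f"{data_gb:>5} GB: {hours:6.1f} h over the wire -> {choice}")
```

On these assumptions, a terabyte of results takes roughly 44 hours to upload, so the slide's "FedEx the data" advice wins well before that point.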
25. Challenges and Solutions, cont'd
Reliability and availability of resource providers: seek information on the reliability and availability of each vendor before partnering.
Lack of easy registration and administration: HPC resources were not originally designed for the masses; use automated, rules-based instant decision-making capabilities.
Costs: pay-per-use billing can result in unpredictable costs, and a project can easily run out of budget; use automated, policy-driven monitoring of usage and billing.
High expectations, disappointing results: we are in a transition to the cloud computing paradigm; set goals that are incrementally better than your current capabilities.
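The "automated, policy-driven monitoring of usage and billing" recommended above can be sketched as a small guard that blocks pay-per-use jobs before they overrun the project budget. The budget, core-hour rate, and warning threshold are made-up illustrations, not part of any real provider's API:

```python
# Minimal sketch of policy-driven cost monitoring for pay-per-use HPC:
# warn when spending nears the budget, reject jobs that would exceed it.

class BudgetMonitor:
    def __init__(self, budget: float, warn_at: float = 0.8):
        self.budget = budget      # total project budget (e.g. USD)
        self.spent = 0.0
        self.warn_at = warn_at    # fraction of budget that triggers a warning

    def charge(self, core_hours: float, rate: float) -> str:
        """Apply a pay-per-use charge and return the policy decision."""
        cost = core_hours * rate
        if self.spent + cost > self.budget:
            return "reject"       # job would overrun the budget: block it
        self.spent += cost
        if self.spent >= self.warn_at * self.budget:
            return "warn"         # approaching the limit: notify the user
        return "ok"

monitor = BudgetMonitor(budget=1000.0)
print(monitor.charge(core_hours=2000, rate=0.10))  # $200 spent -> "ok"
print(monitor.charge(core_hours=7000, rate=0.10))  # $900 total -> "warn"
print(monitor.charge(core_hours=2000, rate=0.10))  # would exceed -> "reject"
```

A real deployment would hook such a policy into the provider's accounting feed, but even this simple rule removes the "project can easily run out of budget" surprise.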
26. Why join?
HPC as a Service is the next big thing, and the benefits are obvious
HPC is complex; together it is easier to tackle that complexity
A low entry barrier to HPC as a Service through an experiment
Learning by doing: experimenting with no risk and no penalty for failure
Becoming an active part of this growing community
Exploring the end-to-end process and learning how it fits into your research and/or business direction in the near future