NCMS Michigan 2.0
Introduction to
UberCloud Experiment
Presented By:
Danielle Jones, NCMS
Burak Yenier, UberCloud
About the presenters
 Danielle Jones
 Business Development Manager at NCMS
 Oversees SMM integration into the NCMS Grid effort
 Burak Yenier
 Co-founder of UberCloud
 Industrial Engineer
 Experience in software development and data center management
The Engineer’s computing tools:
workstations, servers, CAE clouds
3 options to use technical compute power: workstation, CAE server, CAE cloud
The Engineer’s computing tools:
The challenges
The digital manufacturing engineer faces
three main challenges with these resources:
 Workstation: slow, limited capacity
 CAE server: expensive, complex
 CAE Service in the Cloud:
security, licensing, data transfer, expertise
What is the UberCloud Experiment?
To explore tech cloud challenges & potential solutions:
 We started the experiment in mid-2012 as a voluntary effort
 Demonstrate the potential of Technical Computing in the Cloud
 Now with over 1,300 participants, 55 cloud providers, 80+ software
providers, and hundreds of experts
 Round 5 is in progress, 147 teams as of today
 The 2nd Compendium of use-case reports will appear in 2 weeks
Please submit your project ideas at the end of this webinar!
Why This Experiment?
 Foster use of HPC in Digital Manufacturing
 Focus on: remote resources in HPC Centers & HPC Clouds
 Support initiatives from Intel, NCMS, and many others to
uncover and support the ‘missing middle’ (SMEs)
 Observation: business clouds are becoming widely accepted, but
acceptance of simulation clouds in industry is still at the early-adopter
stage (CAE, Bio, Finance, Oil & Gas, …)
 Some Barriers: Complexity, IP, data transfer, software
licenses, performance, specific system requirements,
data security, interoperability, cost, etc.
How the Experiment works…
 End-User joins the experiment
 ISV joins
 We select a Team Expert
 We suggest a Resource Provider
 Team is ready to go
 … 25 steps in the Basecamp virtual team office
 Finally, writing the Case Study
Our recipe for success
 Find the best matches for the end-user and form the team
 Step by step process of accessing and using remote HPC
 Collaborative tools such as Basecamp
 Experienced team mentors
 UberCloud Exhibit – list of services to pick from
 UberCloud University – specific lectures
 UberCloud Community – help is available when needed
Step-by-step process
Basecamp Management Platform:
Team 8: Multiphase flows
within the cement and mineral industry
End User:
 A leading one-source supplier of equipment and services to
the global minerals and cement industries
 FLSmidth supplies everything from single machine units to
complete minerals and cement flow sheets including
associated services
 FLSmidth primarily focuses on the following industries: coal,
iron ore, fertilizers, copper, gold, and cement, and has the
ambition of being among the leading and preferred suppliers
in each of these industries
Application
 FLSmidth’s largest flash dryer to date is located in
Morocco
 Dries wet phosphate cake
 Hot combustion products mixed with cooler air
 Moist solids (21% moisture) added in the venturi, ~700 tph
 Moisture evaporated by hot gas
 Remaining moisture content in solids ~6% (a rough mass balance follows below)
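As a quick check on these figures, here is a back-of-envelope mass balance. It is a sketch only: it assumes the ~700 tph value is the wet feed rate and that both moisture figures are wet-basis percentages, neither of which is stated on the slide.

```python
# Back-of-envelope mass balance for the flash dryer, using the slide's figures.
# Assumptions (not stated on the slide): ~700 tph is the wet feed rate, and both
# moisture percentages are on a wet basis.
wet_feed_tph = 700        # moist solids fed to the venturi
moisture_in = 0.21        # 21% moisture in the feed
moisture_out = 0.06       # ~6% moisture in the dried product

dry_solids_tph = wet_feed_tph * (1 - moisture_in)    # ~553 tph of bone-dry solids
product_tph = dry_solids_tph / (1 - moisture_out)    # ~588 tph of dried product
water_evaporated_tph = wet_feed_tph - product_tph    # ~112 tph of water into the gas

print(f"Water evaporated: {water_evaporated_tph:.0f} tph")
```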
Challenges
 Computationally expensive: multiphase,
Lagrangian, mass transfer
 In-house hardware: Intel Xeon Processor X5667, 12 MB
cache, 3.06 GHz, 6.40 GT/s, 24 GB RAM
 Geometry: 1.4 million cells, 5 species and a time step of
1 millisecond for a total time of 2 seconds
 One transient run takes 5 days (see the runtime arithmetic below)
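To put the 5-day figure in perspective, here is a minimal runtime sketch derived only from the numbers quoted above; the per-step cost is an implied average, not a reported measurement.

```python
# Rough runtime arithmetic from the slide's figures: a 1 ms time step over 2 s of
# simulated time, and a quoted 5-day wall-clock time on the in-house workstation.
# The per-step figure below is an implied average, not a measured value.
total_time_s = 2.0
time_step_s = 1e-3
wallclock_days = 5

n_steps = int(total_time_s / time_step_s)              # 2000 transient time steps
minutes_per_step = wallclock_days * 24 * 60 / n_steps  # ~3.6 wall-clock minutes per step

print(f"{n_steps} steps, ~{minutes_per_step:.1f} min per step on the in-house machine")
```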
Experience
 Easy to upload data
 Intuitive web interface,
support for extra script commands
 Easy run monitoring using web-based TurboVNC,
though not very responsive for OpenGL applications
 Great support, even when the whole cluster went down
 1-2 day turnaround
 Large amounts of data still need to be transferred
Benefits
 Hardware know-how and expense (to some extent) are
outsourced
 Focus on the results instead
of the process
 Faster turnaround for our
application
 Usable remote visualisation
Lessons Learned
 Positive experience with cloud HPC
 Some concerns about remote visualisation
 Data transfer at project end needs to be considered
 A financial analysis still needs to be undertaken
 Licensing
 Our expectations were met
Any questions?
Q & A Session
After the event, you can email:
Danielle Jones at daniellej@ncms.org
More information
About Team 8:
www.theubercloud.com/on-cloud-nine-ansys-advantage-magazine-volume-vii-issue-3-2013/
25 more project reports:
www.theubercloud.com/ubercloud-compendium-2013/
Create your project:
www.theubercloud.com/create-join-teams/?how_did_you_hear=NCMS
The UberCloud Experiment
Thank You
Register at
http://www.TheUberCloud.com


Editor's Notes

  • #8 We got asked this question a few times: how do you put the teams together? Our approach is simple. "The End-User is King": it all starts with the end-user and his/her application & requirements. Aim: find the optimal Match of Four: end-user, resource provider, software provider, team expert. Matching workflow: (end-user + software) => team expert => resource provider = perfect team. We take geography and time zone into account as well. As the teams make progress, we sometimes add other providers' services, for example for further computation, cost analysis, or visualization.
  • #11 The Team. End-user: FLSmidth, the leading supplier of complete plants, equipment and services to the global minerals and cement industries. Software provider: ANSYS, which develops, markets and supports engineering simulation software. Resource provider: Bull, manufacturer of HPC computers, through its extreme factory (XF) HPC-on-demand service. Team expert: science+computing, which provides IT services and solutions in HPC and technical computing environments.