NCMS UberCloud Experiment Webinar

National Center for Manufacturing Sciences (NCMS) and UberCloud are excited to announce the Michigan “round” of the HPC Experiment as part of NCMS’ Grid Initiative. Like previous UberCloud Experiment rounds, this community-driven effort will support engineers in exploring the end-to-end process of using technical computing for product design and development.

During this webinar you can learn more about the Michigan2.0 Grid Initiative and how to apply for the program.

Published in: Technology, Business

Speaker notes
  • We got asked this question a few times: how do you put the teams together? Our approach is simple. “The end-user is king”: it all starts with the end-user and his or her application and requirements. The aim is to find the optimal “Match of Four”: end-user, resource provider, software provider, and team expert. The matching workflow is (end-user + software) => team expert => resource provider = perfect team. We take geography and time zone into account as well. As the teams make progress, we sometimes add other providers’ services, for example for further computation, cost analysis, or visualization.
  • The team. End-user: FLSmidth, the leading supplier of complete plants, equipment, and services to the global minerals and cement industries. Software provider: ANSYS, which develops, markets, and supports engineering simulation software. Resource provider: Bull, a manufacturer of HPC computers, through its extreme factory (XF) HPC-on-demand service. Team expert: science+computing, which provides IT services and solutions in HPC and technical computing environments.
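The matching workflow described in the notes above can be sketched in code. This is a hypothetical illustration only (the `Participant` class, the domain/time-zone fields, and the matching criteria are all assumptions), not UberCloud's actual matching system:

```python
from dataclasses import dataclass

@dataclass
class Participant:
    name: str
    domain: str      # application area, e.g. "CFD"
    time_zone: str   # e.g. "CET"

def form_team(end_user, software, experts, providers):
    """(end-user + software) => team expert => resource provider."""
    # Prefer an expert in the end-user's domain and time zone,
    # falling back to domain alone (geography is a soft constraint).
    expert = next((e for e in experts
                   if e.domain == end_user.domain
                   and e.time_zone == end_user.time_zone), None)
    if expert is None:
        expert = next((e for e in experts
                       if e.domain == end_user.domain), None)
    # The chosen expert then drives the resource choice; here it is
    # modeled as a provider matching the software's domain.
    provider = next((p for p in providers
                     if p.domain == software.domain), None)
    if expert and provider:
        return (end_user, software, expert, provider)  # "Match of Four"
    return None
```

As a usage example, pairing an end-user and ISV with one matching expert and provider returns the four-way team; with no suitable expert, no team is formed.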
  • Transcript

    • 1. NCMS Michigan2.0: Introduction to the UberCloud Experiment. Presented by: Danielle Jones, NCMS; Burak Yenier, UberCloud
    • 2. About the presenters • Danielle Jones: Business Development Manager at NCMS; oversees SMM integration into the NCMS Grid effort • Burak Yenier: co-founder of UberCloud; industrial engineer; experience in software development and data center management
    • 3. The engineer’s computing tools: workstation, CAE server, and CAE cloud. Three options to use technical compute power
    • 4. The engineer’s computing tools: the challenges. The digital manufacturing engineer faces three main challenges with these resources: • Workstation: slow, limited capacity • CAE server: expensive, complex • CAE service in the cloud: security, licensing, data transfer, expertise
    • 5. What is the UberCloud Experiment? To explore technical cloud challenges and potential solutions: • We started the experiment in mid-2012 as a voluntary effort • Demonstrate the potential of Technical Computing in the Cloud • Now with over 1,300 participants, 55 cloud providers, 80+ software providers, and hundreds of experts • Round 5 is in progress, with 147 teams as of today • The 2nd Compendium of use-case reports will appear in 2 weeks. Please submit your project ideas at the end of this webinar!
    • 6. Why this experiment? • Foster the use of HPC in digital manufacturing • Focus on remote resources in HPC centers and HPC clouds • Support initiatives from Intel, NCMS, and many others to uncover and support the ‘missing middle’ (SMEs) • Observation: business clouds are becoming widely accepted, but acceptance of simulation clouds in industry (CAE, bio, finance, oil & gas, …) is still at the early-adopter stage • Some barriers: complexity, IP, data transfer, software licenses, performance, specific system requirements, data security, interoperability, cost, etc.
    • 7. How the Experiment works • The end-user joins the experiment • The ISV joins • We select a team expert • We suggest a resource provider • The team is ready to go • … 25 steps in the Basecamp virtual team office • Finally, writing the case study
    • 8. Our recipe for success • Find the best matches for the end-user and form the team • A step-by-step process for accessing and using remote HPC • Collaborative tools such as Basecamp • Experienced team mentors • UberCloud Exhibit: a list of services to pick from • UberCloud University: specific lectures • UberCloud Community: help is available when needed
    • 9. Step-by-step process: the Basecamp management platform
    • 10. Team 8: Multiphase flows within the cement and mineral industry
    • 11. The end-user • FLSmidth is a leading one-source supplier of equipment and services to the global minerals and cement industries • FLSmidth supplies everything from single machine units to complete minerals and cement flow sheets, including associated services • FLSmidth primarily focuses on the following industries: coal, iron ore, fertilizers, copper, gold, and cement, and has the ambition of being among the leading and preferred suppliers in each of these industries
    • 12. Application • FLSmidth’s largest flash dryer to date is located in Morocco • It dries wet phosphate cake • Hot combustion products are mixed with cooler air • Moist solids (21%) are added in a venturi, ~700 tph • Moisture is evaporated by the hot gas • Remaining moisture content in the solids is ~6%
    • 13. Challenges • Computationally expensive: multi-phase, Lagrangian, mass transfer • In-house hardware: Intel Xeon Processor X5667, 12M cache, 3.06 GHz, 6.40 GT/s, 24 GB RAM • Geometry: 1.4 million cells, 5 species, and a time step of 1 millisecond for a total time of 2 seconds • One transient run takes 5 days
    • 14. Experience • Easy to upload data • Intuitive web interface, with support for extra script commands • Easy run monitoring using web-based TurboVNC, though not so responsive for OpenGL applications • Great support, even when the whole cluster went down • 1-2 day turnaround • Large amounts of data still need to be transferred
    • 15. Benefits • Hardware know-how and expense are (to some extent) outsourced • Focus on the results instead of the process • Faster turnaround for our application • Usable remote visualisation
    • 16. Lessons learned • Positive experience with cloud HPC • Some concerns about remote visualisation • Data transfer at project end needs to be considered • A financial analysis still needs to be undertaken • Licensing • Our expectations were met
    • 17. Any questions? Q&A session. After the event, you can email Danielle Jones at
    • 18. More information • About Team 8: 2013/ 25 • More project reports: • Create your project:
    • 19. The UberCloud Experiment. Thank you! Register at
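The runtime figures reported on slide 13 (2 s of simulated time at a 1 ms time step, with one transient run taking 5 wall-clock days in-house) can be sanity-checked with a quick back-of-the-envelope calculation:

```python
# Sanity-check of the transient-run figures reported on slide 13:
# 2 s of simulated time, 1 ms time step, 5 wall-clock days per run.
SIM_TIME_S = 2.0       # total simulated time (seconds)
DT_S = 1e-3            # time step (1 millisecond)
WALL_DAYS = 5.0        # reported in-house runtime

n_steps = round(SIM_TIME_S / DT_S)               # number of time steps
secs_per_step = WALL_DAYS * 24 * 3600 / n_steps  # wall time per step

print(n_steps)          # 2000 time steps
print(secs_per_step)    # 216.0 s, i.e. about 3.6 minutes per step
```

At several minutes of wall time per step on the in-house workstation, any speed-up from a larger remote cluster translates directly into days saved per run, which is the motivation for the faster turnaround reported on slide 15.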