Taking High Performance Computing to the Cloud: Windows HPC and Windows Azure – Saptak Sen
High Performance Computing (HPC) is expected to be the single largest workload on Windows Azure. This session discusses how Windows HPC Server 2008 R2 SP2 enables our customers to easily run their HPC applications on Windows Azure. It covers different usage scenarios (“bursting” to Windows Azure vs. running everything in Windows Azure), differences between running HPC applications on-premises vs. in Azure, best practices, limitations, etc. Real-world customers and their scenarios are highlighted. The key points are illustrated with live demos of HPC applications running in Windows Azure. This session is a must for everyone who wants to know about HPC and Windows Azure.
We present applications of Azure services such as Azure IaaS/PaaS and Azure RemoteApp in computational fluid dynamics and sparse linear algebra. We also present Microsoft Machine Learning Studio for predicting the heating load of buildings.
The MEW Workshop is now established as a leading national event dedicated to distributed high performance scientific computing. The principal objective is to encourage close contact between the research communities from the Mathematics, Chemistry, Physics and Materials Programmes of EPSRC and the major vendors.
Accumulo and the Convergence of Machine Learning, Big Data, and Supercomputing – Accumulo Summit
Machine learning, big data, and simulation challenges have led to a proliferation of computing hardware and software solutions. Hyperscale data centers, accelerators, and programmable logic can deliver enormous performance via a wide range of analytic environments and data storage technologies. Apache Accumulo is a unique technology with the potential to enable all of these fields. Effectively exploiting Accumulo in these fields requires mathematically rigorous interfaces that allow users to focus on their domains. Mathematically rigorous interfaces are at the core of the MIT Lincoln Laboratory Supercomputing Center (LLSC) and enable the LLSC to deliver Apache Accumulo to thousands of scientists and engineers. This talk discusses the rapidly evolving computing landscape and how mathematically rigorous interfaces are the key to exploiting Apache Accumulo's advanced capabilities.
– Speaker –
Jeremy Kepner
Fellow, MIT
Dr. Jeremy Kepner is an MIT Lincoln Laboratory Fellow. He founded the Lincoln Laboratory Supercomputing Center and pioneered the establishment of the Massachusetts Green High Performance Computing Center. He has developed novel big data and parallel computing software used by thousands of scientists and engineers worldwide. He has led several embedded computing efforts, which earned him a 2011 R&D 100 Award. Dr. Kepner has chaired SIAM Data Mining, the IEEE Big Data conference, and the IEEE High Performance Extreme Computing conference. Dr. Kepner is the author of two bestselling books, Parallel MATLAB and Graph Algorithms in the Language of Linear Algebra. His peer-reviewed publications include works on abstract algebra, astronomy, cloud computing, cybersecurity, data mining, databases, graph algorithms, health sciences, signal processing, and visualization. Dr. Kepner holds a BA degree in astrophysics from Pomona College and a PhD degree in astrophysics from Princeton University.
— More Information —
For more information see http://www.accumulosummit.com/
Some weeks ago, our ML6 agent Karel Dumon gave a talk at a Nexxworks Bootcamp. During this week-long event, several speakers are invited to take the floor to inspire a heterogeneous group of (senior) business people from a wide range of industries. The third day was dedicated to Artificial Intelligence. A broad intro to AI and ML was given by prof. dr. Eric Mannens, after which Karel provided the audience with some hands-on insights through use cases.
Machine learning at scale with Google Cloud Platform – Matthias Feys
Machine Learning typically involves big datasets and many model iterations. This presentation shows how to use GCP to speed up that process with ML Engine and Dataflow. The focus of the presentation is on tooling, not on models or business cases.
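As a hedged illustration of the Dataflow side of that tooling, the sketch below shows a minimal Apache Beam preprocessing pipeline of the kind typically run before training; the bucket paths, project id, and parse_row() schema are placeholders, not details from the talk.

```python
# Minimal Apache Beam pipeline sketch: the kind of preprocessing job that can be
# handed to the Dataflow runner before training on ML Engine. Bucket paths,
# project id and the parse_row() logic are hypothetical placeholders.
import json
import apache_beam as beam
from apache_beam.options.pipeline_options import PipelineOptions

def parse_row(line):
    """Turn one CSV line into a feature dict (assumed schema: label,f1,f2)."""
    label, f1, f2 = line.split(",")
    return {"label": float(label), "features": [float(f1), float(f2)]}

options = PipelineOptions(
    runner="DataflowRunner",          # swap for "DirectRunner" to test locally
    project="my-gcp-project",         # placeholder project id
    temp_location="gs://my-bucket/tmp",
    region="europe-west1",
)

with beam.Pipeline(options=options) as p:
    (p
     | "Read" >> beam.io.ReadFromText("gs://my-bucket/raw/data.csv")
     | "Parse" >> beam.Map(parse_row)
     | "ToJson" >> beam.Map(json.dumps)
     | "Write" >> beam.io.WriteToText("gs://my-bucket/prepared/part"))
```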
High Performance Computing in the Cloud is viable in numerous use cases. Common to all successful use cases for cloud-based HPC is the ability to embrace latency. Not surprisingly, then, early successes were achieved with embarrassingly parallel HPC applications involving minimal amounts of data - in other words, there was little or no latency to be hidden. Over the fullness of time, however, the HPC-cloud community has become increasingly adept in its ability to ‘hide’ latency and, in the process, support increasingly sophisticated HPC use cases in public and private clouds. Real-world use cases, deemed relevant to remote sensing, will illustrate aspects of this sophistication in hiding latency when handling large volumes of data, when passing messages between simultaneously executing components of distributed-memory parallel applications, and when running (processing) workflows/pipelines. Finally, the impact of containerizing HPC for the cloud will be considered through the relatively recent creation of the Cloud Native Computing Foundation.
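To make the latency-hiding idea concrete, here is a minimal sketch (not from the talk) using mpi4py: each rank posts a non-blocking halo exchange, computes on local data while the message is in flight, and only then waits for completion.

```python
# Sketch of "hiding" communication latency in a distributed-memory code:
# each rank starts a non-blocking exchange, does useful local work while
# the message is in flight, then completes the exchange.
# Run with e.g. `mpirun -n 2 python overlap.py`; the local work is a stand-in.
import numpy as np
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank = comm.Get_rank()
size = comm.Get_size()

halo = np.full(1024, rank, dtype="d")
recv = np.empty_like(halo)
neighbor = (rank + 1) % size

# Post the exchange without waiting for it to finish.
requests = [comm.Isend(halo, dest=neighbor),
            comm.Irecv(recv, source=(rank - 1) % size)]

# Overlap: compute on interior data while the halo is in transit.
interior = np.random.rand(1000, 1000)
local_result = np.linalg.norm(interior @ interior.T)

MPI.Request.Waitall(requests)          # now the received halo data is safe to use
print(f"rank {rank}: local norm {local_result:.3e}, halo from {int(recv[0])}")
```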
Scientific Computing With Amazon Web Services – Jamie Kinney
Researchers from around the world are increasingly using AWS for a wide array of use cases. This presentation describes how AWS facilitates scientific collaboration and powers some of the world's largest scientific efforts, including real-world examples from NASA JPL, the European Space Agency (ESA) and CERN's CMS particle detector.
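As a rough sketch of the building blocks behind such workloads, the snippet below uses boto3 to fetch shared input data from S3 and launch a small fleet of compute instances; the bucket, object key, AMI id, and instance type are placeholders rather than values from the presentation.

```python
# Hedged sketch of cloud-based scientific computing plumbing on AWS:
# pull shared data from S3 and start a batch of compute instances with boto3.
import boto3

s3 = boto3.client("s3")
s3.download_file("my-collaboration-bucket", "runs/input-0001.nc", "/tmp/input-0001.nc")

ec2 = boto3.resource("ec2")
instances = ec2.create_instances(
    ImageId="ami-0123456789abcdef0",   # placeholder AMI with the solver baked in
    InstanceType="c5.18xlarge",
    MinCount=1,
    MaxCount=8,                        # small fleet for a parameter sweep
    TagSpecifications=[{
        "ResourceType": "instance",
        "Tags": [{"Key": "project", "Value": "reprocessing-demo"}],
    }],
)
print("launched:", [i.id for i in instances])
```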
Dr. Konstantinos Giannoutakis presents the CloudLightning simulator, a bespoke cloud simulation engine built for modelling and simulating heterogeneous resources as well as self-organising systems.
This presentation was given at the CloudLightning Conference held in conjunction with NC4 2017 in Dublin City University on 11th April 2017.
For the full video of this presentation, please visit:
https://www.embedded-vision.com/platinum-members/embedded-vision-alliance/embedded-vision-training/videos/pages/may-2017-embedded-vision-summit-bordoloi
For more information about embedded vision, please visit:
http://www.embedded-vision.com
Unmesh Bordoloi, Senior Researcher at General Motors, presents the "Collaboratively Benchmarking and Optimizing Deep Learning Implementations" tutorial at the May 2017 Embedded Vision Summit.
For car manufacturers and other OEMs, selecting the right processors to run deep learning inference for embedded vision applications is a critical but daunting task. One challenge is the vast number of options in terms of neural network models, frameworks (such as Caffe, TensorFlow, Torch), and libraries such as CUDA and OpenCL. Another challenge is the large number of network parameters that can affect the computation requirements, such as the choice of training data sets, precision, and batch size. These challenges also complicate efforts to optimize implementations of deep learning algorithms for deployment.
In this talk, Bordoloi presents a methodology and open-source software framework for collaborative and reproducible benchmarking and optimization of convolutional neural networks. General Motors' software framework, CK-Caffe, is based on the Collective Knowledge framework and the Caffe framework. GM invites the community to collaboratively evaluate, design and optimize convolutional neural networks to meet the performance, accuracy and cost requirements of a variety of applications – from sensors to self-driving cars.
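CK-Caffe itself is not reproduced here, but the following standalone sketch illustrates the kind of parameter sweep being benchmarked: timing inference across batch sizes and numeric precisions. The run_inference() function is a hypothetical stand-in for a real Caffe/TensorFlow forward pass.

```python
# Illustrative (not CK-Caffe) benchmarking harness: time a model across the
# batch sizes and numeric precisions that the talk lists as key parameters.
import time
import numpy as np

def run_inference(batch, precision):
    """Stand-in workload: a single dense layer applied to the batch."""
    weights = np.random.rand(batch.shape[1], 1000).astype(precision)
    return batch.astype(precision) @ weights

def benchmark(batch_sizes=(1, 8, 32), precisions=("float32", "float16"), repeats=5):
    results = []
    for b in batch_sizes:
        for p in precisions:
            batch = np.random.rand(b, 4096)            # dummy input features
            start = time.perf_counter()
            for _ in range(repeats):
                run_inference(batch, p)
            latency = (time.perf_counter() - start) / repeats
            results.append((b, p, latency))
            print(f"batch={b:3d} precision={p:8s} {latency * 1000:8.2f} ms/iter")
    return results

if __name__ == "__main__":
    benchmark()
```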
I presented "Cloudsim & Green Cloud" at the First National Workshop on Cloud Computing at Amirkabir University on 31st October and 1st November 2012.
Enjoy it!
Cloud2Sim - An Elastic Middleware Platform for Concurrent and Distributed Cloud and MapReduce Simulations.
This presentation partially describes the work-in-progress Cloud2Sim, as presented at MASCOTS 2014 in Paris.
Learn from Accubits Technologies
High Performance Computing (HPC) most generally refers to the practice of aggregating computing power in a way that delivers much higher performance than one could get out of a typical desktop computer or workstation in order to solve large problems in science, engineering, or business.
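A toy sketch of that aggregation idea: estimate pi by Monte Carlo sampling and split the work across all local cores; real HPC applies the same pattern across many machines and far larger problems.

```python
# Minimal illustration of aggregating compute: a Monte Carlo estimate of pi
# with the samples divided across every core on the machine.
import random
from multiprocessing import Pool, cpu_count

def hits_inside_circle(n_samples):
    """Count random points in the unit square that fall inside the quarter circle."""
    rng = random.Random()
    return sum(rng.random() ** 2 + rng.random() ** 2 <= 1.0 for _ in range(n_samples))

if __name__ == "__main__":
    total = 4_000_000
    workers = cpu_count()
    chunk = total // workers
    with Pool(workers) as pool:
        inside = sum(pool.map(hits_inside_circle, [chunk] * workers))
    print(f"pi ~ {4 * inside / (chunk * workers):.5f} using {workers} workers")
```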
Technical computing (high-performance computing) used to be the domain of specialists using expensive, proprietary equipment. Today, technical computing is going mainstream, becoming an indispensable competitive tool for research scientists and businesses alike.
Here's a look at Dell’s pioneering role in the evolution of technical computing, with a focus on the key industry trends and technologies that will bring the next generation of tools and functionality to research and development organizations around the world.
High Performance Computing (HPC) and Engineering Simulations in the Cloud – The UberCloud
UberCloud Customer Workshop for engineers and scientists and their software providers, discussing cloud challenges and their solutions, based on the novel UberCloud software container technology which allows access to and use of cloud resources, engineering applications, and data, on demand, at your fingertips.
info.theubercloud.com/case-studies-and-resources
High Performance Computing (HPC) and Engineering Simulations in the Cloud – Wolfgang Gentzsch
UberCloud Customer Workshop for engineers and scientists and their software providers, discussing cloud challenges and their solutions, based on the novel UberCloud software container technology which allows access to and use of cloud resources, engineering applications, and data, on demand, at your fingertips.
The presentation covers the competences and technologies provided by the Technical Computing department of the Microsoft Innovation Center Rapperswil, such as simulation of electrical arcs and wind turbines, thermal simulations of buildings, and cloud computing using Microsoft Azure.
Applying Cloud Techniques to Address Complexity in HPC System Integrations – inside-BigData.com
In this video from the HPC User Forum at Argonne, Arno Kolster from Providentia Worldwide presents: Applying Cloud Techniques to Address Complexity in HPC System Integrations.
"The Oak Ridge Leadership Computing Facility (OLCF) and technology consulting company Providentia Worldwide recently collaborated to develop an intelligence system that combines real-time updates from the IBM AC922 Summit supercomputer with local weather and operational data from its adjacent cooling plant, with the goal of optimizing Summit’s energy efficiency. The OLCF proposed the idea and provided facility data, and Providentia developed a scalable platform to integrate and analyze the data."
Watch the video: https://wp.me/p3RLHQ-kOg
Learn more: http://www.providentiaworldwide.com/
and
http://hpcuserforum.com
Sign up for our insideHPC Newsletter: http://insidehpc.com/newsletter
In today’s world, the growing demand for knowledge has made cloud computing a center of attraction. Cloud computing provides utility-based services to users worldwide. It enables the delivery of applications from consumer, scientific, and business domains. However, data centers created for cloud computing applications consume huge amounts of energy, contributing to high operational costs and large carbon dioxide emissions. As data centers grow, power consumption is increasing at such a rate that it has become a key concern, ultimately leading to energy shortages and global climate change. Therefore, we need green cloud computing solutions that not only save energy but also reduce operational costs.
Supporting bioinformatics applications with hybrid multi-cloud services – Ahmed Abdullah
ElasticHPC supports the creation and management of cloud computing resources across multiple public cloud providers, including Amazon, Azure, Google, and clouds supporting OpenStack.
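ElasticHPC's own API is not shown in this summary; as a rough sketch of the multi-provider pattern it automates, the snippet below provisions a worker node through Apache Libcloud, whose drivers cover EC2, Azure, Google, and OpenStack clouds. Credentials, AMI id, and instance size are placeholders.

```python
# Generic multi-cloud provisioning sketch with Apache Libcloud (not ElasticHPC's API).
from libcloud.compute.types import Provider
from libcloud.compute.providers import get_driver

# One driver per cloud; OpenStack/Google/Azure drivers follow the same shape.
ec2 = get_driver(Provider.EC2)("ACCESS_KEY", "SECRET_KEY", region="eu-west-1")

size = next(s for s in ec2.list_sizes() if s.id == "c5.4xlarge")
images = ec2.list_images(ex_image_ids=["ami-0123456789abcdef0"])  # placeholder AMI

node = ec2.create_node(
    name="bioinfo-worker-1",
    size=size,
    image=images[0],
)
print("started", node.name, "on", node.driver.name)
```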
Unlocking the data treasure and increasing time and energy efficiency: mathematics and... – Joachim Schlosser
In a society where the collection of personal data has become commonplace, it is hardly surprising that innovative machine builders also collect data wherever they can. Product data, machine data, statistics – an average production plant already generates gigabytes of data every day. "Big Data" has become one of the buzzwords of Industrie 4.0.
But what do we expect to gain from it? What information is hidden in the recorded machine and product data? And how is it analyzed?
The talk shows how companies can develop, test, and roll out their analysis algorithms on an established platform such as MATLAB®. The continuous analysis itself then runs either on a plant server or in real time directly on the machine. This is illustrated with examples from industrial practice.
Beyond the collected data, however, the control units in production also take on greater importance in Industrie 4.0.
When workpieces soon know for themselves where they want to go in the production flow and which processing step should be applied to them, this also means more functionality for the individual components and modules in production and logistics, since they must react to these inputs as well.
How do you ensure that this additional functionality does not come at the expense of the energy balance? How do you operate the motors and other active components of your production line so that they react flexibly to changed workpiece routes and still run in their optimal range?
More than ever, you need controlled and regulated components and modules. That should have been in place since Industrie 3.0, but here too there is still a great deal of concrete potential for increasing productivity and saving energy and production time.
The talk shows how to better control your components so that the networked, dynamic requirements of Industrie 4.0 can be implemented efficiently at the local level.
The Impact of Cloud Computing on Predictive Analytics 7-29-09 v5 – Robert Grossman
This is a talk I gave in San Diego on July 29, 2009, explaining some of the impact of cloud computing on predictive analytics and some of the opportunities it creates.
Top 31 Cloud Computing Interview Questions and Answers – Ecare Technologies
Here we provide the top 31 Cloud Computing interview questions. eCare Technologies is one of the best Cloud Computing training institutes in Bangalore, with 100% placement support. Cloud Computing certification training in Bangalore is provided by certified cloud computing experts and real-time working professionals.
We discuss engineering and scientific computing in the Cloud. Users today have three major choices of computing: workstations, servers, and cloud. We compare benefits and challenges of each, and present a solution: the online UberCloud community, experiment, and marketplace for engineers and scientists to discover, try, and buy compute power on demand, in the cloud. Our approach of application containerization and tight software/hardware integration removes many of the known cloud roadblocks.
www.theubercloud.com
Similar to Cloud Roundtable at Microsoft Switzerland
Sachpazis: Terzaghi Bearing Capacity Estimation in simple terms with Calculati... – Dr. Costas Sachpazis
Terzaghi's soil bearing capacity theory, developed by Karl Terzaghi, is a fundamental principle in geotechnical engineering used to determine the bearing capacity of shallow foundations. This theory provides a method to calculate the ultimate bearing capacity of soil, which is the maximum load per unit area that the soil can support without undergoing shear failure. The calculation HTML code is included.
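For readers who want the formula itself, here is a small worked sketch of Terzaghi's strip-footing equation q_ult = c·Nc + γ·Df·Nq + 0.5·γ·B·Nγ; the bearing-capacity factors and soil parameters below are illustrative inputs, not values from the article or its HTML calculator.

```python
# Worked sketch of Terzaghi's ultimate bearing capacity for a strip footing.
# The factors Nc, Nq, Ngamma must be taken from Terzaghi's tables/charts for
# the soil friction angle; the example numbers are illustrative only.
def terzaghi_strip_qult(c, gamma, Df, B, Nc, Nq, Ngamma):
    """Ultimate bearing capacity (kPa) of a strip footing per Terzaghi.

    c      -- soil cohesion (kPa)
    gamma  -- soil unit weight (kN/m^3)
    Df     -- footing depth (m), giving a surcharge of gamma * Df
    B      -- footing width (m)
    N*     -- Terzaghi bearing-capacity factors for the friction angle
    """
    return c * Nc + gamma * Df * Nq + 0.5 * gamma * B * Ngamma

# Example: phi = 30 deg (Terzaghi factors roughly Nc=37.2, Nq=22.5, Ngamma=19.7)
print(terzaghi_strip_qult(c=10.0, gamma=18.0, Df=1.5, B=2.0,
                          Nc=37.2, Nq=22.5, Ngamma=19.7), "kPa")
```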
Water scarcity is the lack of fresh water resources to meet the standard water demand. There are two types of water scarcity: physical and economic.
Immunizing Image Classifiers Against Localized Adversary Attacks – gerogepatton
This paper addresses the vulnerability of deep learning models, particularly convolutional neural networks (CNNs), to adversarial attacks and presents a proactive training technique designed to counter them. We introduce a novel volumization algorithm, which transforms 2D images into 3D volumetric representations. When combined with 3D convolution and deep curriculum learning optimization (CLO), it significantly improves the immunity of models against localized universal attacks by up to 40%. We evaluate our proposed approach using contemporary CNN architectures and the modified Canadian Institute for Advanced Research (CIFAR-10 and CIFAR-100) and ImageNet Large Scale Visual Recognition Challenge (ILSVRC12) datasets, showcasing accuracy improvements over previous techniques. The results indicate that the combination of the volumetric input and curriculum learning holds significant promise for mitigating adversarial attacks without necessitating adversarial training.
1. Microsoft Innovation Center for Technical
Computing
MICROSOFT AZURE IN HPC SCENARIOS
Lukasz Miroslaw, Ph.D.
lukasz.miroslaw@hsr.ch
07.12.2015, MICROSOFT SWITZERLAND
2. Challenges
2
57 % of users are dissatisfied with their desktop
computing capacity*
* Source: US Council of Competitiveness: http://www.compete.org, theubercloud.com
Computing: too slow
Memory: too small
Fig. Getting better PCs is sometimes hard.
3. Typical Cloud Scenarios
3
Use Case #1: CFD simulations on powerful VM.
Use Case #2: HPC Cluster in the cloud.
Use Case #3: Prediction of Energy Efficiency in the buildings.
4. Typical Cloud Scenarios
4
Use Case #1: CFD simulations on powerful VM.
Microsoft Azure IaaS, Remote App, Azure Batch
Use Case #2: Scale out physical simulations in the cloud.
Microsoft HPC Pack, SimplyHPC
Use Case #3: Prediction of Energy Efficiency in the buildings.
Microsoft AzureML
5. Motivation: Collaborative Simulations of Electrical Arcs
Goal #1: Develop a Cloud-based algorithm for electrical arc simulation
Microsoft Azure Research Award in 2014
Contact: Kenji Takeda (Microsoft Research)
Goal #2: Provide a simulation tool to partners in Brazil and Germany
Partners
Streamer International (RU)
Panasonic (JP)
Fraunhofer SCAI (DE)
WEG (BR)
5
6. Use Case #1: VM with ANSYS
6
Cloud Infrastructure:
D14 with 16 core CPU,
112 GB RAM, Windows
Server 2012
MpCCI, ANSYS
preinstalled
Storage: locally
redundant,
automatically scalable
License Server (LS) on
A0 in Germany or in
Switzerland.
Customer
VM
LS
7. Use Case #1: VM with ANSYS
7
No Installation. No configuration. No up-front costs.
Access to powerful VMs with ANSYS already
preinstalled and preconfigured.
Access to redundant and highly available
storage.
Disaster Recovery and 99.5% SLA.
Connection to on-premise infrastructure with
IPSec VPN.
More info: www.msic.ch/Downloads/Software
10h of ANSYS CFD with
HPC Pack (8 cores) costs
about 330 CHF.
8. Use Case #1: INSTANT ANSYS
8
IaaS DEMO (Windows + Linux)
9. Use Case #1: VM with OpenFOAM
9
The UberCloud: Making Technical
Computing available in the Cloud
UberCloud Community:
+2500 companies and
individuals:
+60 cloud providers,
+80 software providers,
several hundred consulting
firms and individual experts.
OpenFOAM added to Azure
Marketplace
Docker containerization
10. Use Case #1: Docker Containers
10
Virtual Machines Docker Containers
Run as isolated process in userspace.
Are not tied to any specific
infrastructure.
11. Use Case #1: Docker Containers
11
Docker Swarm
Scale out your application on
the cluster natively.
X 1000
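A hedged sketch of that container pattern using the Docker SDK for Python: pull a solver image and launch one detached container per case. The image name and command are placeholders, not the actual UberCloud containers.

```python
# Sketch of running containerized solver cases with the Docker SDK for Python.
import docker

client = docker.from_env()

image = "openfoam/openfoam10-paraview510"   # placeholder solver image
client.images.pull(image)

containers = []
for case in ("cavity-1", "cavity-2", "cavity-3"):
    containers.append(client.containers.run(
        image,
        command="sleep 5",                   # stand-in for the real solver command
        name=f"cfd-{case}",
        detach=True,                         # keep launching; don't block
    ))

for c in containers:
    c.wait()                                 # block until each case finishes
    print(c.name, "exited")
    c.remove()
```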
16. 16
Deploy compute nodes in compute-
intensive VMs (IaaS)
Deploy compute intensive
worker role instances (PaaS)
Use Case #2: HPC Cluster in the cloud.
17. 17
Use Case #2: HPC Cluster in the cloud.
Linux Cluster is also possible (with SLURM)
18. HPC Pack IaaS Demo
Configure the location, virtual
network, domain and data base.
Configure the head node.
Configure the compute nodes.
Configure the certificate.
20. Performance and Scalability
Example #1: Solving linear systems with PETSc and HPCG
20
Fig. Performance in GFlops of PETSc solving ruep (right) matrix
system and HPCG Benchmark (left) on different Microsoft Azure
nodes and a private HSR cluster.
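For context, here is a small petsc4py sketch of the kind of sparse solve behind such benchmarks: assemble a 1-D Laplacian and solve it with conjugate gradients. Run it under mpirun to distribute the rows across ranks. This is an illustration, not the benchmark code used for the figure.

```python
# petsc4py sketch: distributed sparse linear solve with a Krylov method.
from petsc4py import PETSc

n = 1000
A = PETSc.Mat().createAIJ([n, n], nnz=3)     # tridiagonal 1-D Laplacian
start, end = A.getOwnershipRange()
for i in range(start, end):
    A.setValue(i, i, 2.0)
    if i > 0:
        A.setValue(i, i - 1, -1.0)
    if i < n - 1:
        A.setValue(i, i + 1, -1.0)
A.assemble()

b = A.createVecRight(); b.set(1.0)
x = A.createVecLeft()

ksp = PETSc.KSP().create()
ksp.setOperators(A)
ksp.setType("cg")
ksp.getPC().setType("jacobi")
ksp.solve(b, x)
print("iterations:", ksp.getIterationNumber(), "residual:", ksp.getResidualNorm())
```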
22. Azure Batch
22
Batch is a managed service for batch
processing or batch computing - running a
large volume of similar tasks to get some
desired result.
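A hedged outline of that pattern with the azure-batch Python SDK: submit one job with many similar tasks to a pre-created pool. Account values and the solver command line are placeholders, and the exact constructor argument name (batch_url vs. base_url) varies between SDK versions, so treat this as a sketch rather than copy-paste code.

```python
# Outline of submitting a large volume of similar tasks to Azure Batch.
from azure.batch import BatchServiceClient
from azure.batch.batch_auth import SharedKeyCredentials
import azure.batch.models as batchmodels

credentials = SharedKeyCredentials("mybatchaccount", "ACCOUNT_KEY")   # placeholders
client = BatchServiceClient(
    credentials,
    batch_url="https://mybatchaccount.westeurope.batch.azure.com")

job_id = "arc-sweep"
client.job.add(batchmodels.JobAddParameter(
    id=job_id,
    pool_info=batchmodels.PoolInformation(pool_id="hpc-pool"),  # pre-created pool
))

tasks = [
    batchmodels.TaskAddParameter(
        id=f"case-{i:03d}",
        command_line=f"solver.exe --case {i}",   # placeholder solver invocation
    )
    for i in range(100)                          # 100 similar tasks
]
client.task.add_collection(job_id, tasks)
print(f"submitted {len(tasks)} tasks to job {job_id}")
```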
23. Short Summary
23
+ Scaling properties of Microsoft Azure are comparable to the on-premises
cluster.
HSR Cluster: 7.3 days (176 hours), limited availability.
Microsoft Azure: 4.9 days (118 hours), ca. 40% faster, 100% availability.
+ Dynamic scaling (up- / downscaling) and instant access to the newest
hardware reduces the costs.
+ (Un)limited computing at competitive price.
Cluster composed of 32 x A8 nodes (=256 cores) costs
32 x 2.11 CHF/h = ca. 68 CHF/h
- Upscaling > 100 cores should be planned in advance and consulted with
Microsoft.
24. Microsoft Azure Machine Learning Studio
Three types of knowledge:
Know-What (facts)
Know-How (processes)
Know-Why (reasons)
24
Image credit: Univ. Hamburg
25. AzureML Studio
Key goals of Machine Learning:
Prediction
Classification
Clustering
Collaborative Filtering
25
Image credits: OpenCV, Snipview, Stanford
Examples:
+ Predictive Analytics
+ Anomaly Detection
+ Prediction of energy consumption
+ Classification of sensor readings
26. AzureML Example: Heating Load Prognosis
26
Image credit: SAB Magazine
Input:
- Roof area
- Overall height
- Glazing area
- Surface area
- ...
Output:
- Heating load
prediction
27. AzureML Workflow
27
Machine Learning Workflow
1. Hypothesis
2. Data Preparation
3. Model
4. Test
5. Evaluate
A. Tsanas, A. Xifara: 'Accurate quantitative estimation of energy performance of
residential buildings using statistical machine learning tools', Energy and Buildings,
Vol. 49, pp. 560-567, 2012
- 8 physical characteristics
from 768 buildings
- Goal: predict buildings’
heating load and cooling
load
- Architects need to compare
several building designs
before selecting the final
approach
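The Studio experiment is built visually, but the same train/test/evaluate workflow can be sketched in a few lines of Python (which AzureML also supports for custom R/Python modules). The CSV path and column names below are placeholders standing in for the Tsanas & Xifara energy-efficiency data.

```python
# Heating-load regression sketch mirroring the AzureML workflow:
# data -> model -> test -> evaluate. File and column names are placeholders.
import pandas as pd
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.metrics import mean_absolute_error
from sklearn.model_selection import train_test_split

df = pd.read_csv("energy_efficiency.csv")            # 8 building features + loads
X = df.drop(columns=["heating_load", "cooling_load"])
y = df["heating_load"]

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)

model = GradientBoostingRegressor(random_state=0)     # one of several candidates to compare
model.fit(X_train, y_train)

pred = model.predict(X_test)
print("MAE on held-out buildings:", round(mean_absolute_error(y_test, pred), 2))
```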
29. AzureML: Short Summary
29
Very fast prototyping. Load the system with data, test different
Machine Learning methods.
Platform for Internet of Things: Event Hubs, Stream Analytics.
Share the models & results.
Deploy web services fast.
Develop own methods in Python and R Statistics.
30. Modus operandi
30
What can we do for you?
Cloud design and planning.
Deploying your software (or ANSYS) in the
cloud.
Health Check (+40 questions to find out if your
infrastructure is ready for the cloud).
Machine Learning Analysis of your data.
IoT and Big Data.
CTI Projects (50% of the project budget comes
from the state)
BA/MA thesis: +140h/+300h can be allocated.
31. Modus operandi
31
Individual Projects
Price: 130 CHF per hour
1st meeting is free of charge.
Estimate: we will prepare deliverables variants with
cost estimates, timelines and proof of concept if
feasible.
Billing can be sub-divided as follows:
Initial cost-estimate study (mandatory step, billed
as the equivalent of 10 hrs. A report is delivered.)
Project hours: as agreed with the user
Adjustments: due to a change in user request or
mis-estimate of effort. Only upon prior agreement
with the user.
Inquiry
1st meeting
Optional
Meeting
Work
Deliverables
Estimate
Agree
ment
32. Why Cloud Computing?
32
Reduced costs.
You pay for the time you use the VM. You
save on power, management, support and
... floor space.
Greater business agility.
You respond quickly to business
opportunities and threats. You can scale
quickly to the cloud.
Flexibility.
You can move, resize, consolidate, and
make choices to optimize any business
metric.
Editor's Notes
My name is Lukasz Miroslaw and I am responsible for the cloud in our group. I will briefly present how the Microsoft Azure platform can help with simulations and HPC. We are a Microsoft partner and work closely with them. Our group has about 15 members, and all of them run simulations with commercial and open-source simulation software. Quite often the simulations take a very long time, not just minutes but hours or days.
We use an internal cluster with about 380 cores, or powerful workstations. From time to time we have to compete for resources, because the queue grows, especially when the students also want to compute.
The queue grows and we get frustrated. Now and then the hardware breaks down, and then we are even more frustrated.
No wonder that 57% of users are not satisfied with their computers.
For a few months we have been evaluating how the cloud can help us, and also our partners.
I will present three use cases. First, how to run flow simulations on virtual machines; second, how to deploy a cluster in the cloud; and third, how to use Azure Machine Learning to carry out various predictions. Here I will show an example in the area of energy efficiency of buildings.
Various Azure services are used for this:
We have an ongoing collaboration with WEG Brazil and the Fraunhofer SCAI institute in Germany. They need access to a VM with ANSYS where they can continue developing the model.
We created a VM for them in the Brazil South data center; the license server was located in Rapperswil, optionally also in Germany.
This project motivated us to offer a so-called INSTANT ANSYS service. We collaborate with CADFEM, and it is now possible to run ANSYS computations by the hour. Billing is based on the hours actually used.
We worked with UberCloud to apply Docker containers to two applications, among them OpenFOAM. OpenFOAM is now officially available in MS Azure. I will now show how a Linux VM works.
http://novnctest.cloudapp.net:6080/vnc.html
hsruser, HSR123hsrxxxmore /proc/cpuinfo
Compared to VMs, which are quite heavyweight and many GBs in size, containers hold the application and its dependencies but share the OS with other containers. They run as a separate process and are not tied to any infrastructure. A Docker container runs on any computer or infrastructure.
They run as an isolated process in userspace on the host operating system. They’re also not tied to any specific infrastructure – Docker containers run on any computer, on any infrastructure and in any cloud.
Existing containers can be deployed on the cluster using the so-called Docker Swarm.
Azure RemoteApp is a special service that makes it possible to offer an arbitrary application to multiple users.
The user only needs to install a client.
Username: mictc-hsr@outlook.com
Password: H$R@micTC
We tested two scenarios: in the first case the cluster is located on-premises and the cluster nodes are either local or in the cloud. In the second case, the whole cluster is in the cloud. For this case I have prepared an example.
The cluster can be configured as desired using an XML file.
The deployment has two steps. In the first step, it is checked whether the cluster can be deployed in the chosen data center; in the second, the cluster is created. The whole process takes about one hour.
Here it is quite clear that the performance on the Azure A8 nodes is much better. The reason for this was,
We also ran ANSYS CFX on up to 256 cores. Here you can see the computation times and the efficiency. Obviously, performance on the A8 machines looks better. We suspect that slower RAM is responsible for this.
I would like to draw your attention to Azure Batch. This service enables scaling of your own applications. We will now test it with our solver for electrical arc simulations.
Scaling is much better on the Azure cluster, since the newest hardware is always installed there. What I particularly liked is the ability to scale the cluster up and down as needed. An example: a cluster costs about 70 CHF per hour.
There are three types of knowledge. Know-what knowledge consists of facts that can be stored in lexicons, encyclopedias, and also databases. Know-how knowledge is about understanding processes, and know-why knowledge explains why something works. From know-what one can draw conclusions about know-how, and that is where machine learning helps.
The most important application areas of ML are:
Prediction: based on data from the past, one can forecast the values of a given quantity in advance.
One can also classify data into different classes, and one can also cluster the data.
Examples: predictive maintenance, anomaly detection.
Here is an example of heating load prediction. There is a building that can be described by many characteristics. The goal is to predict the heating load.
Thermal loads are the quantity of heating and cooling energy that must be added or removed from the building to keep people comfortable. Thermal loads come from heat transfer from within the building during its operation (internal, or core loads) and between the building and the external environment (external, envelope, or fabric loads).
These thermal loads can be translated to heating loads (when the building is too cold) and cooling loads (when the building is too hot). These heating and cooling loads aren’t just about temperature (sensible heat), they also include moisture control (latent heat). (See Infiltration & Moisture Control)
Heating and cooling loads are met by the building’s HVAC system, which uses energy to add or remove heat and condition the space. This energy use translates to the HVAC component of a building’s equipment loads (met by fuel or electricity). Other building loads include plug loads (electricity used for computers and appliances) and lighting loads (electricity used for lights).
- See more at: http://sustainabilityworkshop.autodesk.com/buildings/building-energy-loads#sthash.OstBuGSN.dpuf
Resources: both VMs and cluster
method of operation
You can also carry out individual projects with us. The first meeting is free of charge; then we prepare a cost estimate, and after the project the customer receives the deliverables and, of course, an invoice.
With the cloud you can save costs. You only pay for the time the VM is actually used.
Hardware costs, but also technical support costs, can be reduced. Normally there are only operational costs, paid per month. You no longer have to reckon with high up-front costs.
Better business agility. With the cloud, the organization can be more flexible, active, and adaptable. For example, a competitor wants to test a new prototype immediately, or our customer/partner urgently needs new results; this creates peaks in production. Normally the procurement of new computers is a complicated and lengthy process: you have to convince management, order the new machines, wait for delivery, then install, configure, and test. With the cloud, scaling is simpler and faster.
Flexibility: you can deploy, migrate, delete, and save VMs and cloud resources as needed.
https://cloud.google.com/files/esg-whitepaper.pdf
• Reduced costs. Cloud computing enables organizations to pay only for what they use. Instead of standing up dedicated infrastructure to run each application, you spin up virtual machines on infrastructure owned and managed by a provider. You pay for the time you use the VMs, instead of setting up servers on your own infrastructure. You save on power, cooling, and floor space; you save on management since you don’t have to install, operate, and troubleshoot it yourself. And you’re not depreciating the equipment—someone else is. The ability to start small and grow organically as your business requires it, instead of having to guess at what you’ll need next week, next month, and next year, lets you match your costs with actual usage. In addition, your computing costs in the cloud are usually operational expenses paid monthly rather than hefty up-front capital expenses.
• Greater business agility. Agility is really about responsiveness. Cloud computing lets you respond quickly to business opportunities and threats. What if your product team suddenly figures out how to make the ultimate widget? Or a competitor suddenly starts gaining on your market share? You can scale quickly in the cloud, adding VMs to cover spikes in production and ramp up sales. With physical infrastructure, scaling is often a lengthy process that starts with requisition, justification to senior management, and purchase, followed by waiting for delivery, and then managing deployment, testing, re-configuration, and, finally, production. Equally important, in the cloud you can scale back down when a utilization spike has passed. With strictly physical infrastructure, you’ve made an investment that likely sits idle waiting for another spike.
• Flexibility. Flexibility gives you choice. With the cloud, you can instantiate or destroy VM instances as you need to, move workloads around, and change your mind and revert—without wasting already purchased resources. You can move, resize, consolidate, and make choices to optimize any business metric.