This webinar showcases the latest GPU-acceleration technologies available to AMBER users and discusses features, recent updates and future plans. Go through the slides to learn how to obtain the latest accelerated versions of AMBER, which features are supported, the simplicity of its installation and use, and how it performs with Kepler GPUs. To run AMBER on GPUs for free, register here: www.Nvidia.com/GPUTestDrive
Listen to Professor Ross Walker and Adrian Roitberg explain the new GPU features of AMBER version 14. With these features, AMBER is now the world's fastest molecular dynamics package.
It Does What You Say, Not What You Mean: Lessons From A Decade of Program Repair (Claire Le Goues)
In this talk we present lessons learned, good ideas, and thoughts on the future, with an eye toward informing junior researchers about the realities and opportunities of a long-running project. We highlight some notions from the original paper that stood the test of time, some that were not as prescient, and some that became more relevant as industrial practice advanced. We place the work in context, highlighting perceptions from software engineering and evolutionary computing, then and now, of how program repair could possibly work. We discuss the importance of measurable benchmarks and reproducible research in bringing scientists together and advancing the area. We give our thoughts on the role of quality requirements and properties in program repair. From testing to metrics to scalability to human factors to technology transfer, software repair touches many aspects of software engineering, and we hope a behind-the-scenes exploration of some of our struggles and successes may benefit researchers pursuing new projects.
A National Big Data Cyberinfrastructure Supporting Computational Biomedical R... (Larry Smarr)
Invited Presentation
Symposium on Computational Biology and Bioinformatics:
Remembering John Wooley
National Institutes of Health
Bethesda, MD
July 29, 2016
In the age of Big Data, what role for Software Engineers? (CS, NcState)
ABSTRACT:
Consider the premise of Big Data:
better conclusions = same algorithms + more data + more CPU
If this were always true, then there would be no role for human analysts
who reflect on the domain to offer insights that produce better solutions
(since all such insight would be generated automatically by the CPUs).
This talk proposes a marriage of sorts between Big Data and software
engineering. It reviews over a decade of work by the author in exploring
user goals using CPU-intensive methods. It will be shown that analyst insight was
useful for building “better” tools (where “better” means generating
more succinct recommendations, running faster, and scaling to much larger problems).
The conclusion will be that in the age of Big Data, human analysis is still
useful and necessary. But a new kind of software engineering analyst is required: one
who knows how to take full advantage of the power of Big Data.
ABOUT THE AUTHOR:
Tim Menzies (Ph.D., UNSW) is a Professor in CS at WVU; the author of
over 230 refereed publications; and is one of the 50 most cited
authors in software engineering (out of 50,000+ researchers, see
http://goo.gl/wqpQl). At WVU, he has been a lead researcher on
projects for NSF, NIJ, DoD, NASA, USDA, as well as joint research work
with private companies. He teaches data mining and artificial
intelligence and programming languages.
Prof. Menzies is the co-founder of the PROMISE conference series
devoted to reproducible experiments in software engineering (see
http://promisedata.googlecode.com). He is an associate editor of IEEE
Transactions on Software Engineering, Empirical Software Engineering
and the Automated Software Engineering Journal. In 2012, he served as
co-chair of the program committee for the IEEE Automated Software
Engineering conference. In 2015, he will serve as co-chair for the
ICSE'15 NIER track. For more information, see his web site
http://menzies.us or his vita at http://goo.gl/8eNhY or his list of
pubs at http://goo.gl/0SWJ2p.
Engineers play critical roles in astronomy, from building telescopes, to designing scientific instruments, to operating observatories. Working together, engineers and scientists answer fundamental questions about our universe. In this session, you'll hear from women engineers making contributions to astronomy by developing a new high resolution optical spectrograph, adapting telescope control software for remote operations, architecting document management and managing critical systems for the next generation of telescopes. You will learn about the different engineering disciplines involved in astronomy, key concepts and technologies shaping astronomy today, and how to find job opportunities in astronomy as an engineer.
GALE: Geometric active learning for Search-Based Software Engineering (CS, NcState)
Multi-objective evolutionary algorithms (MOEAs) help software engineers find novel solutions to complex problems. When automatic tools explore too many options, they are slow to use and hard to comprehend. GALE is a near-linear time MOEA that builds a piecewise approximation to the surface of best solutions along the Pareto frontier. For each piece, GALE mutates solutions towards the better end. In numerous case studies, GALE finds comparable solutions to standard methods (NSGA-II, SPEA2) using far fewer evaluations (e.g. 20 evaluations, not 1,000). GALE is recommended when a model is expensive to evaluate, or when some audience needs to browse and understand how an MOEA has made its conclusions.
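As a concrete illustration of the Pareto-frontier idea this abstract relies on, here is a minimal Python sketch of the dominance test and non-dominated filtering (illustrative only; GALE's actual near-linear algorithm is far more sophisticated):

```python
def dominates(a, b):
    """True if candidate a Pareto-dominates b (minimizing every objective)."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(points):
    """Return the non-dominated subset of a list of objective vectors."""
    return [p for p in points if not any(dominates(q, p) for q in points if q != p)]

# Six candidates scored on two objectives (both to be minimized):
candidates = [(1, 9), (3, 3), (2, 7), (5, 5), (4, 2), (6, 6)]
print(pareto_front(candidates))  # only the non-dominated trade-offs remain
```

A full MOEA would evaluate thousands of such candidates; GALE's contribution is reaching a comparable frontier with a few dozen evaluations.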
Blue Waters and Resource Management - Now and in the Future (inside-BigData.com)
In this presentation from Moabcon 2013, Bill Kramer from NCSA presents: Blue Waters and Resource Management - Now and in the Future.
Watch the video of this presentation: http://insidehpc.com/?p=36343
Opening Keynote Lecture
15th Annual ON*VECTOR International Photonics Workshop
Calit2’s Qualcomm Institute
University of California, San Diego
February 29, 2016
The Royal Society of Chemistry hosts large-scale data collections and provides the chemistry community with access to the data. The largest RSC data set of wide interest to the community offers access to tens of millions of compounds. The host platform, ChemSpider, is limited in that it is a structure-centric hub only. A new architecture, the RSC data repository, has been developed that extends support to reactions, spectral data, crystallography data and related property data. It is also the architecture underlying a series of exemplar projects for managing data for a number of diverse laboratories. The adoption of data standards for the integration and distribution of data has been essential. Specific standards include molecular structure formats such as molfiles and InChIs, and spectral data formats such as JCAMP. This presentation will report on our development of the data repository, the importance of utilizing standards for data integration, the flexible nature of the architecture to deliver solutions for various laboratories, and our efforts to develop new large data collections. This includes text-mining efforts to extract large spectrum-structure collections from large corpora.
Science Engagement: A Non-Technical Approach to the Technical Divide (Cybera Inc.)
A presentation for the Future of Networking session at the 2014 Cyber Summit by Jason Zurawski, Science Engagement Engineer, ESnet (Lawrence Berkeley National Laboratory).
Scalable and Efficient Algorithms for Analysis of Massive, Streaming Graphs (Jason Riedy)
Graph-structured data in network security, social networks, finance, and other applications are not only massive but also under continual evolution. The changes often are scattered across the graph, permitting novel parallel and incremental analysis algorithms. We discuss analysis algorithms for streaming graph data to maintain both local and global metrics with low latency and high efficiency.
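A toy illustration of the incremental flavor of such algorithms: maintaining connected components under a stream of edge insertions with union-find, so each edge arrival updates the metric without reprocessing the whole graph (a simplified stand-in, not the authors' code):

```python
class UnionFind:
    """Disjoint-set structure: near-constant-time incremental connectivity."""
    def __init__(self):
        self.parent = {}

    def find(self, x):
        self.parent.setdefault(x, x)
        while self.parent[x] != x:
            self.parent[x] = self.parent[self.parent[x]]  # path halving
            x = self.parent[x]
        return x

    def union(self, a, b):
        ra, rb = self.find(a), self.find(b)
        if ra != rb:
            self.parent[ra] = rb

uf = UnionFind()
stream = [("a", "b"), ("c", "d"), ("b", "c"), ("e", "f")]
for u, v in stream:
    uf.union(u, v)   # each edge arrival is an O(α(n)) incremental update
print(uf.find("a") == uf.find("d"))  # True
print(uf.find("a") == uf.find("e"))  # False
```

Real streaming-graph systems additionally handle edge deletions and batched updates, which is where most of the research difficulty lies.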
Mobile data traffic has quadrupled since 2013. In order to cope with a newly diversified device landscape, engineers have embraced responsive design. Implementing “responsive images” is the most important thing that you can do for a responsive site’s performance.
In this session, we discuss the past, present, and future of responsive images.
For image optimization, reducing quality doesn't always degrade the visual experience. In fact, precise adjustment of the compression level and fine-tuning of the encoding settings can significantly reduce file size without any noticeable degradation. But there is no standard quality setting that works for all images: it depends on the compression algorithm, image format and content. And manual experimentation is not scalable.
In this webinar we cover how to find the best compression quality level and optimal encoding settings, in order to produce a perceptually fine image while minimizing the file size.
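The per-image search described above can be sketched as a binary search over encoder quality, assuming a perceptual score that is non-decreasing in quality. The `fake_score` metric below is a hypothetical stand-in for a real measure such as SSIM computed on the re-encoded image:

```python
def smallest_acceptable_quality(score_at, threshold, lo=1, hi=100):
    """Binary-search the lowest encoder quality whose perceptual score
    still meets `threshold`, assuming score_at(q) is non-decreasing in q."""
    while lo < hi:
        mid = (lo + hi) // 2
        if score_at(mid) >= threshold:
            hi = mid          # mid is acceptable; try an even lower quality
        else:
            lo = mid + 1      # too degraded; need a higher quality
    return lo

# Hypothetical stand-in for an SSIM-like score, monotone in quality q:
fake_score = lambda q: 0.80 + 0.002 * q
print(smallest_acceptable_quality(fake_score, 0.949))  # 75
```

With roughly log2(100) ≈ 7 re-encodes per image instead of 100, this kind of search is what makes per-image tuning scale.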
B2B Product Marketing. What is the role of Product Marketing in organizations? What are the most important skills to be a good product marketing manager?
Challenges and Advances in Large-scale DFT Calculations on GPUs using TeraChem (Can Ozdoruk)
Recent advances in reformulating electronic structure algorithms for stream processors such as graphical processing units have made DFT calculations on systems comprising up to O(10^3) atoms feasible. Simulations on such systems that previously required half a week on traditional processors can now be completed in only half an hour. Listen to Professor Heather Kulik, Massachusetts Institute of Technology, as she discusses how she leverages these GPU-accelerated quantum chemistry methods in the code TeraChem to investigate large-scale quantum mechanical features in applications ranging from protein structure to mechanochemical depolymerization. In each case, large-scale and rapid evaluation of electronic structure properties is critical for unearthing previously poorly understood properties and mechanistic features of these systems. Professor Kulik also discusses outstanding challenges in the use of Gaussian localized-basis-set codes on GPUs pertaining to limitations in basis set size, and how she circumvents such challenges to computational efficiency with systematic, physics-based error corrections to basis set incompleteness.
Slides by VMD lead developer Mr. John Stone, a pioneer in the field of MD Visualization. Visualization is essential to unlocking key insights from the results of MD simulations. Mr. Stone explains the many GPU-accelerated features of VMD. You can learn how these features can help you speed up a wide range of simulation preparation, analyses, and visualization tasks.
Molecular Shape Searching on GPUs: A Brave New World (Can Ozdoruk)
Shape is a fundamental three-dimensional molecular property and a powerful descriptor for molecular comparison and similarity assessment; similarity in shape has proven to be a very effective method for predicting similarity in biology. As such, shape-based virtual screening has become an integral part of computational drug discovery, due to both its speed and efficacy. OpenEye’s recent port of their shape similarity application, ROCS, to the GPU has resulted in a virtual screening tool of unprecedented power – FastROCS. FastROCS’ speed allows it to perform large-scale calculations of a kind inaccessible in the past, and has accelerated more routine shape searching to the point that it has become competitive with more traditional, but less effective, two-dimensional methods. Go through the slides to learn more. Try GPUs for free here: www.Nvidia.com/GPUTestDrive
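The shape Tanimoto score underlying such comparisons can be illustrated on voxelised molecular volumes (a deliberately simplified stand-in; ROCS itself compares continuous Gaussian volume overlaps, which is what makes the GPU port worthwhile):

```python
def shape_tanimoto(vox_a, vox_b):
    """Shape Tanimoto on voxel occupancy sets:
    T = |A ∩ B| / (|A| + |B| - |A ∩ B|), in [0, 1]."""
    inter = len(vox_a & vox_b)
    return inter / (len(vox_a) + len(vox_b) - inter)

# Two hypothetical molecules as sets of occupied grid cells:
mol_a = {(0, 0, 0), (1, 0, 0), (2, 0, 0)}
mol_b = {(1, 0, 0), (2, 0, 0), (3, 0, 0)}
print(shape_tanimoto(mol_a, mol_b))  # 0.5
```

A virtual screen simply ranks a library by this score against a query shape; the arithmetic is embarrassingly parallel, which is why it maps so well to GPUs.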
Introduction to SeqAn, an Open-source C++ Template Library (Can Ozdoruk)
SeqAn (www.seqan.de) is an open-source C++ template library (BSD license) that implements many efficient and generic data structures and algorithms for Next-Generation Sequencing (NGS) analysis. It contains gapped k-mer indices, enhanced suffix arrays (ESA) or an FM-index, as well as algorithms for fast and accurate alignment and read mapping. Based on these data types and fast I/O routines, users can easily develop tools that are extremely efficient and easy to maintain. Besides multi-core support, the research team at Freie Universität Berlin has started adding generic support for accelerators such as NVIDIA GPUs. Go through the slides to learn more. For your own development you can try GPUs for free here: www.Nvidia.com/GPUTestDrive
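A minimal sketch of the k-mer index idea (shown in Python for brevity; SeqAn's C++ indices are vastly more memory-efficient and support gapped shapes):

```python
from collections import defaultdict

def build_kmer_index(text, k):
    """Map every k-mer of `text` to the list of its 0-based start positions."""
    index = defaultdict(list)
    for i in range(len(text) - k + 1):
        index[text[i:i + k]].append(i)
    return index

ref = "ACGTACGTGA"
idx = build_kmer_index(ref, 4)
print(idx["ACGT"])  # [0, 4]
```

Read mappers use exactly this lookup to find candidate alignment positions (seeds) in constant time per k-mer, then extend and verify each seed.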
Uncovering the Elusive HIV Capsid with Kepler GPUs Running NAMD and VMD (Can Ozdoruk)
Computational scientists at the University of Illinois at Urbana–Champaign and the University of Pittsburgh have now resolved the HIV capsid's chemical structure. As reported recently on the cover of Nature, the researchers combined NMR structure analysis, electron microscopy and data-guided molecular dynamics simulations, utilizing VMD to prepare and analyze simulations performed using NAMD on NVIDIA GPUs in Blue Waters, one of the most powerful computers worldwide, to obtain and characterize the HIV-1 capsid. The discovery can now guide the design of novel drugs for enhanced antiviral therapy. Also learn how NAMD performs with the latest Kepler GPUs, as well as details about GPU Test Drive (www.nvidia.com/GPUTestDrive) and how to try NAMD on Kepler GPUs for free.
ACEMD: High-throughput Molecular Dynamics with NVIDIA Kepler GPUs (Can Ozdoruk)
Acellera Founder Gianni De Fabritiis, and CTO Matt Harvey talk about the latest developments of high-throughput molecular dynamics both in terms of applications and methodological advances. Examples are in the context of ACEMD, a highly efficient, best-in-class graphical processing units (GPUs) centric code for running MD simulations, and its protocols. In particular, attendees will learn how the high arithmetic performance and intrinsic parallelism of the latest NVIDIA Kepler GPUs can offer a technological edge for molecular dynamics simulations. Try GPUs for free via: www.Nvidia.com/GPUTestDrive
Securing your Kubernetes cluster: a step-by-step guide to success! (KatiaHIMEUR1)
Today, after several years of existence, backed by an extremely active community and an ultra-dynamic ecosystem, Kubernetes has established itself as the de facto standard in container orchestration. Thanks to a wide range of managed services, it has never been easier to set up a ready-to-use Kubernetes cluster.
However, this ease of use means that the subject of security in Kubernetes is often left for later, or even neglected. This exposes companies to significant risks.
In this talk, I'll show you step-by-step how to secure your Kubernetes cluster for greater peace of mind and reliability.
Accelerate your Kubernetes clusters with Varnish Caching (Thijs Feryn)
A presentation about the usage and availability of Varnish on Kubernetes. This talk explores the capabilities of Varnish caching and shows how to use the Varnish Helm chart to deploy it to Kubernetes.
This presentation was delivered at K8SUG Singapore. See https://feryn.eu/presentations/accelerate-your-kubernetes-clusters-with-varnish-caching-k8sug-singapore-28-2024 for more details.
Elevating Tactical DDD Patterns Through Object Calisthenics (Dorra BARTAGUIZ)
After immersing yourself in the blue book and its red counterpart, attending DDD-focused conferences, and applying tactical patterns, you're left with a crucial question: How do I ensure my design is effective? Tactical patterns within Domain-Driven Design (DDD) serve as guiding principles for creating clear and manageable domain models. However, achieving success with these patterns requires additional guidance. Interestingly, we've observed that a set of constraints initially designed for training purposes remarkably aligns with effective pattern implementation, offering a more ‘mechanical’ approach. Let's explore together how Object Calisthenics can elevate the design of your tactical DDD patterns, offering concrete help for those venturing into DDD for the first time!
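To make the pairing concrete, here is a small, hypothetical Python sketch of two Object Calisthenics rules ("wrap primitives" and "first-class collections") applied to a tactical DDD value object; the `Money` and `OrderLines` names are illustrative, not from the talk:

```python
class Money:
    """Object Calisthenics rule: wrap primitives that carry domain meaning."""
    def __init__(self, amount_cents: int):
        if amount_cents < 0:
            raise ValueError("money cannot be negative")
        self._cents = amount_cents

    def add(self, other: "Money") -> "Money":
        return Money(self._cents + other._cents)   # immutable: returns a new value

    def cents(self) -> int:
        return self._cents

class OrderLines:
    """Rule: first-class collections -- the list lives in its own class,
    so behaviour like totalling stays next to the data."""
    def __init__(self):
        self._lines = []

    def add(self, price: Money):
        self._lines.append(price)

    def total(self) -> Money:
        total = Money(0)
        for price in self._lines:
            total = total.add(price)
        return total

lines = OrderLines()
lines.add(Money(1250))
lines.add(Money(399))
print(lines.total().cents())  # 1649
```

Note how the invariant (no negative money) is enforced once, at construction, rather than scattered across the domain model: exactly the "clear and manageable domain model" the tactical patterns aim for.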
Encryption in Microsoft 365 - ExpertsLive Netherlands 2024 (Albert Hoitingh)
In this session I delve into the encryption technology used in Microsoft 365 and Microsoft Purview. Including the concepts of Customer Key and Double Key Encryption.
Epistemic Interaction - tuning interfaces to provide information for AI support (Alan Dix)
Paper presented at SYNERGY workshop at AVI 2024, Genoa, Italy. 3rd June 2024
https://alandix.com/academic/papers/synergy2024-epistemic/
As machine learning integrates deeper into human-computer interactions, the concept of epistemic interaction emerges, aiming to refine these interactions to enhance system adaptability. This approach encourages minor, intentional adjustments in user behaviour to enrich the data available for system learning. This paper introduces epistemic interaction within the context of human-system communication, illustrating how deliberate interaction design can improve system understanding and adaptation. Through concrete examples, we demonstrate the potential of epistemic interaction to significantly advance human-computer interaction by leveraging intuitive human communication strategies to inform system design and functionality, offering a novel pathway for enriching user-system engagements.
Builder.ai Founder Sachin Dev Duggal's Strategic Approach to Create an Innova... (Ramesh Iyer)
In today's fast-changing business world, companies that fail to adapt and embrace new ideas often struggle to keep up with the competition. Fostering a culture of innovation, however, takes much work. It takes vision, leadership, and a willingness to take risks in the right proportion. Sachin Dev Duggal, co-founder of Builder.ai, has perfected the art of this balance, creating a company culture where creativity and growth are nurtured at every stage.
The Art of the Pitch: WordPress Relationships and Sales (Laura Byrne)
Clients don’t know what they don’t know. What web solutions are right for them? How does WordPress come into the picture? How do you make sure you understand scope and timeline? What do you do if something changes?
All these questions and more will be explored as we talk about matching clients’ needs with what your agency offers without pulling teeth or pulling your hair out. Practical tips, and strategies for successful relationship building that leads to closing the deal.
Key Trends Shaping the Future of Infrastructure (Cheryl Hung)
Keynote at DIGIT West Expo, Glasgow on 29 May 2024.
Cheryl Hung, ochery.com
Sr Director, Infrastructure Ecosystem, Arm.
The key trends across hardware, cloud and open-source; exploring how these areas are likely to mature and develop over the short and long-term, and then considering how organisations can position themselves to adapt and thrive.
Kubernetes & AI - Beauty and the Beast !?! @KCD Istanbul 2024 (Tobias Schneck)
As AI technology pushes into IT, I was wondering, as an “infrastructure container Kubernetes guy”, how does this fancy AI technology get managed from an infrastructure operations point of view? Is it possible to apply our beloved cloud-native principles as well? What benefits could both technologies bring to each other?
Let me take these questions and give you a short journey through existing deployment models and use cases for AI software. Using practical examples, we discuss what cloud/on-premise strategy we may need to apply it to our own infrastructure and get it to work from an enterprise perspective. I want to give an overview of infrastructure requirements and technologies, and of what could benefit or limit your AI use cases in an enterprise environment. An interactive demo will give you some insights into the approaches I already have working in practice.
GDG Cloud Southlake #33: Boule & Rebala: Effective AppSec in SDLC using Deplo... (James Anderson)
Effective Application Security in Software Delivery lifecycle using Deployment Firewall and DBOM
The modern software delivery process (or the CI/CD process) includes many tools, distributed teams, open-source code, and cloud platforms. Constant focus on speed to release software to market, along with the traditional slow and manual security checks has caused gaps in continuous security as an important piece in the software supply chain. Today organizations feel more susceptible to external and internal cyber threats due to the vast attack surface in their applications supply chain and the lack of end-to-end governance and risk management.
The software team must secure its software delivery process to avoid vulnerability and security breaches. This needs to be achieved with existing tool chains and without extensive rework of the delivery processes. This talk will present strategies and techniques for providing visibility into the true risk of the existing vulnerabilities, preventing the introduction of security issues in the software, resolving vulnerabilities in production environments quickly, and capturing the deployment bill of materials (DBOM).
Speakers:
Bob Boule
Robert Boule is a technology enthusiast with a passion for making things work, along with a knack for helping others understand how things work. He comes with around 20 years of solution engineering experience in application security, software continuous delivery, and SaaS platforms. He is known for his dynamic presentations on CI/CD and application security integrated into the software delivery lifecycle.
Gopinath Rebala
Gopinath Rebala is the CTO of OpsMx, where he has overall responsibility for the machine learning and data processing architectures for Secure Software Delivery. Gopi also has a strong connection with our customers, leading design and architecture for strategic implementations. Gopi is a frequent speaker and well-known leader in continuous delivery and integrating security into software delivery.
LF Energy Webinar: Electrical Grid Modelling and Simulation Through PowSyBl -... (DanBrown980551)
Do you want to learn how to model and simulate an electrical network from scratch in under an hour?
Then welcome to this PowSyBl workshop, hosted by Rte, the French Transmission System Operator (TSO)!
During the webinar, you will discover the PowSyBl ecosystem as well as handle and study an electrical network through an interactive Python notebook.
PowSyBl is an open source project hosted by LF Energy, which offers a comprehensive set of features for electrical grid modelling and simulation. Among other advanced features, PowSyBl provides:
- A fully editable and extendable library for grid component modelling;
- Visualization tools to display your network;
- Grid simulation tools, such as power flows, security analyses (with or without remedial actions) and sensitivity analyses;
The framework is mostly written in Java, with a Python binding so that Python developers can access PowSyBl functionalities as well.
What you will learn during the webinar:
- For beginners: discover PowSyBl's functionalities through a quick general presentation and the notebook, without needing any expert coding skills;
- For advanced developers: master the skills to efficiently apply PowSyBl functionalities to your real-world scenarios.
AMBER and Kepler GPUs
1. AMBER and Kepler GPUs
Julia Levites, Sr. Product Manager, NVIDIA
Ross Walker, Assistant Research Professor and NVIDIA CUDA Fellow
San Diego Supercomputer and Department of Chemistry & Biochemistry
SAN DIEGO SUPERCOMPUTER CENTER
2. Walker Molecular Dynamics Lab
http://www.wmd-lab.org/
Focus areas: GPU Acceleration; Lipid Force Field Development; QM/MM MD; Automated Refinement
Researchers / Postdocs: Andreas Goetz, Romelia Salomon, Jian Yin
Graduate Students: Ben Madej (UCSD/SDSC), Justin McCallum (Imperial College), Age Skjevik (UCSD/SDSC/Bergen), Davide Sabbadin (SDSC)
Undergraduate Researchers: Robin Betz, Matthew Clark, Mike Wu
3. What is Molecular Dynamics?
• In the context of this talk: the simulation of the dynamical properties of condensed-phase biological systems.
  • Enzymes / Proteins
  • Drug Molecules
  • Biological Catalysts
• Classical energy function: force fields, parameterized (bonds, angles, dihedrals, VDW, charges…)
• Integration of Newton's equations of motion.
• Atoms modeled as points; electrons included implicitly within the parameterization.
4. Why Molecular Dynamics?
• Atoms move!
  – Life does NOT exist at the global minimum.
  – We may be interested in studying time-dependent phenomena, such as molecular vibrations, structural reorganization, diffusion, etc.
  – We may be interested in studying temperature-dependent phenomena, such as free energies, anharmonic effects, etc.
• Ergodic Hypothesis
  – The time average over a trajectory is equivalent to an ensemble average.
  – Allows the use of MD for statistical mechanics studies.
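The ergodic hypothesis can be illustrated with a toy numerical experiment (a sketch in plain Python, not AMBER code; the potential, step size, and seed are all made up for illustration): for a 1-D harmonic "bond" sampled by a long random-walk Metropolis trajectory, the time average of the potential energy approaches the analytic ensemble average kT/2.

```python
import math
import random

def metropolis_trajectory(n_steps, beta=1.0, k=1.0, step=0.5, seed=42):
    """Random-walk Metropolis sampling of U(x) = 0.5*k*x^2.

    Returns the time series of potential energies along the trajectory.
    """
    rng = random.Random(seed)
    x, traj = 0.0, []
    for _ in range(n_steps):
        x_new = x + rng.uniform(-step, step)
        dU = 0.5 * k * (x_new**2 - x**2)
        # Metropolis acceptance: always accept downhill, sometimes uphill.
        if dU <= 0 or rng.random() < math.exp(-beta * dU):
            x = x_new
        traj.append(0.5 * k * x * x)
    return traj

traj = metropolis_trajectory(200_000)
time_avg = sum(traj) / len(traj)
ensemble_avg = 0.5  # equipartition: <U> = kT/2 for one quadratic d.o.f. at beta = 1
print(f"time average = {time_avg:.3f}, ensemble average = {ensemble_avg:.3f}")
```

With a long enough trajectory the two averages agree to a few percent, which is exactly the property that lets MD trajectories stand in for ensemble averages.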
6. What is AMBER?
• An MD simulation package: 12 versions as of 2012, independent from the accompanying forcefields, distributed in two parts:
  – AmberTools: preparatory and analysis programs, free under the GPL
  – Amber: the main simulation programs, under academic licensing
• A set of MD forcefields, in the public domain:
  – fixed-charge biomolecular forcefields: ff94, ff99, ff99SB, ff03, ff11, ff12
  – experimental polarizable forcefields, e.g. ff02EP
  – parameters for general organic molecules, solvents, carbohydrates (Glycam), etc.
7. The AMBER Development Team
A Multi-Institutional Research Collaboration
Principal contributors to the current codes:
David A. Case (Rutgers University), Tom Darden (NIEHS), Thomas E. Cheatham III (University of Utah), Carlos Simmerling (Stony Brook), Junmei Wang (UT Southwestern Medical Center), Robert E. Duke (NIEHS and UNC-Chapel Hill), Ray Luo (UC Irvine), Mike Crowley (NREL), Ross Walker (SDSC), Wei Zhang (TSRI), Kenneth M. Merz (Florida), Bing Wang (Florida), Seth Hayik (Florida), Adrian Roitberg (Florida), Gustavo Seabra (Florida), Kim F. Wong (University of Utah), Francesco Paesani (University of Utah), Xiongwu Wu (NIH), Scott Brozell (TSRI), Thomas Steinbrecher (TSRI), Holger Gohlke (J.W. Goethe-Universität), Lijiang Yang (UC Irvine), Chunhu Tan (UC Irvine), John Mongan (UC San Diego), Viktor Hornak (Stony Brook), Guanglei Cui (Stony Brook), David H. Mathews (Rochester), Celeste Sagui (North Carolina State), Volodymyr Babin (North Carolina State), Peter A. Kollman (UC San Francisco)
8. AMBER Usage
• Approximately 850 site licenses (per version) across most major countries.
9. What can we do with Molecular Dynamics?
• Simulate time-dependent properties:
  • Protein domain motions
  • Small protein folds
  • Spectroscopic properties
• Simulate ensemble properties:
  • Binding free energies
  • Drug design
  • Biocatalyst design
  • Reaction pathways
  • Free energy surfaces
10. Why do we need Supercomputers?
(Complex Equations)

U(R) = \sum_{\text{bonds}} K_r (r - r_{eq})^2
     + \sum_{\text{angles}} K_\theta (\theta - \theta_{eq})^2
     + \sum_{\text{dihedrals}} \frac{V_n}{2} \left[ 1 + \cos(n\phi - \gamma) \right]
     + \sum_{i<j}^{\text{atoms}} \left( \frac{A_{ij}}{R_{ij}^{12}} - \frac{B_{ij}}{R_{ij}^{6}} \right)
     + \sum_{i<j}^{\text{atoms}} \frac{q_i q_j}{R_{ij}}
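The cost lives mostly in the last two terms: every atom interacts with every other atom, so the nonbonded double sum grows as O(N²) with atom count. A toy sketch in plain Python (not AMBER code; the A, B constants and charges are made-up illustrative numbers, not real force-field parameters):

```python
import itertools
import math

def nonbonded_energy(coords, charges, A=1e5, B=60.0):
    """Toy Lennard-Jones + Coulomb double sum over all atom pairs.

    A and B are single illustrative constants; a real force field
    tabulates A_ij, B_ij per atom-type pair.
    """
    U = 0.0
    for (i, ri), (j, rj) in itertools.combinations(enumerate(coords), 2):
        R = math.dist(ri, rj)
        U += A / R**12 - B / R**6          # van der Waals term
        U += charges[i] * charges[j] / R   # Coulomb term
    return U

coords = [(0.0, 0.0, 0.0), (3.0, 0.0, 0.0), (0.0, 3.0, 0.0), (3.0, 3.0, 3.0)]
charges = [0.4, -0.4, 0.4, -0.4]
print(f"U_nonbonded = {nonbonded_energy(coords, charges):.4f}")
```

Four atoms give 6 pairs; a million atoms give ~5 × 10¹¹ pairs per step, which is why clever algorithms (cutoffs, PME) and fast hardware are both essential.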
11. Why do we need Supercomputers?
Lots of Atoms
12. Why do we need Supercomputers?
Lots of Time Steps
• The maximum time per step is limited by the fastest motion in the system (the vibration of bonds): 2 femtoseconds (0.000000000000002 seconds). Light travels 0.0006 mm in 2 fs.
• Biological activity occurs on the nanosecond to microsecond timescale: 1 microsecond = 0.000001 seconds.
SO WE NEED 500 million steps to reach 1 microsecond!
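The step-count arithmetic above is easy to verify in a few lines of plain Python (nothing AMBER-specific here):

```python
# Timestep limited by bond vibrations: 2 fs; target: 1 microsecond of dynamics.
DT_FS = 2                     # femtoseconds per MD step
TARGET_US = 1                 # microseconds of simulated time
FS_PER_US = 1_000_000_000     # 1 us = 10^9 fs

steps = TARGET_US * FS_PER_US // DT_FS
print(f"{steps:,} steps")     # 500,000,000 steps to reach 1 us

# Distance light travels during one 2 fs step:
C_M_PER_S = 299_792_458
dist_mm = C_M_PER_S * DT_FS * 1e-15 * 1000  # metres -> millimetres
print(f"light travels {dist_mm:.4f} mm per step")
```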
14. Just build bigger supercomputers
Beliefs in moving to the exascale (R = node speed, P = number of nodes):
• Luddite: −R, −P
• Atheist: +++R, −P
• Heretic: ++R, P
• Believer: +R, +P (today)
• True Believer: R, ++P
• Fanatic: −R, +++P
The problem: the immediate future lies somewhere in this range.
15. The Problem(s)
• Molecular dynamics is inherently serial: to compute step t+1 we must have computed all previous steps.
• We cannot simply make the system bigger, since bigger systems need more sampling (although many people conveniently forget this).
  • 100M atoms = 300M degrees of freedom (d.o.f.)
  • 10 ns = 5,000,000 time steps = 60x fewer time steps than d.o.f.
• We can run ensembles of calculations, but these present their own challenges (both practical and political).
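The sampling mismatch quoted above checks out numerically (a quick sketch in plain Python):

```python
atoms = 100_000_000
dof = 3 * atoms                  # x, y, z per atom = 300M degrees of freedom

sim_time_fs = 10 * 1_000_000     # 10 ns expressed in femtoseconds
steps = sim_time_fs // 2         # at a 2 fs timestep

print(f"{dof:,} d.o.f. vs {steps:,} steps "
      f"-> {dof // steps}x more d.o.f. than steps")
```

Sixty times more degrees of freedom than time steps means the bigger system is, statistically, far *less* well sampled, which is the point of the slide.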
16. Better Science?
• Bringing the tools a researcher needs into their own lab:
  • Can we make a researcher's desktop look like a small cluster (remove the queue wait)?
  • Can we make MD truly interactive (real-time feedback / experimentation)?
  • Can we find a way for a researcher to cheaply increase the power of all their graduate students' workstations?
    • Without having to worry about available power (power cost)?
    • Without having to worry about applying for cluster time.
    • Without needing a full-time 'student' to maintain the group's clusters?
• GPUs offer a possible cost-effective solution.
17. Requirements
• Any implementation that expects to gain widespread support must:
  • Be simple / transparent to use.
    • Scientists want science first: technology is the enabler, NOT the science.
    • Whichever path is easiest will be the one taken.
  • Make no additional approximations.
  • Have broad support.
  • Have longevity (5+ years minimum).
18. The Project
• Develop a GPU-accelerated version of AMBER's PMEMD.
• Funded as a pilot project (1 year) under the NSF SSE Program and renewed for 3 more years.
• San Diego Supercomputer Center: Ross C. Walker
• NVIDIA: Scott Le Grand, Duncan Poole
19. Project Info
• AMBER Website: http://ambermd.org/gpus/
Publications
1. Goetz, A.W.; Williamson, M.J.; Xu, D.; Poole, D.; Le Grand, S.; Walker, R.C. "Routine microsecond molecular dynamics simulations with AMBER - Part I: Generalized Born", J. Chem. Theory Comput., 2012, 8 (5), pp 1542-1555, DOI: 10.1021/ct200909j
2. Pierce, L.C.T.; Salomon-Ferrer, R.; de Oliveira, C.A.F.; McCammon, J.A.; Walker, R.C. "Routine access to millisecond timescale events with accelerated molecular dynamics", J. Chem. Theory Comput., 2012, 8 (9), pp 2997-3002, DOI: 10.1021/ct300284c
3. Salomon-Ferrer, R.; Case, D.A.; Walker, R.C. "An overview of the Amber biomolecular simulation package", WIREs Comput. Mol. Sci., 2012, in press, DOI: 10.1002/wcms.1121
4. Le Grand, S.; Goetz, A.W.; Walker, R.C. "SPFP: Speed without compromise - a mixed precision model for GPU accelerated molecular dynamics simulations", Comput. Phys. Commun., 2013, 184, pp 374-380, DOI: 10.1016/j.cpc.2012.09.022
5. Salomon-Ferrer, R.; Goetz, A.W.; Poole, D.; Le Grand, S.; Walker, R.C. "Routine microsecond molecular dynamics simulations with AMBER - Part II: Particle Mesh Ewald", J. Chem. Theory Comput., 2013 (in review)
20. Original Design Goals
• Transparent to the user.
  • Easy to compile / install.
  • AMBER input, AMBER output.
  • Simply requires a change in executable name.
• Cost-effective performance.
  • A C2050 should be equivalent to 4 or 6 standard IB nodes.
• Focus on accuracy.
  • Should NOT make any additional approximations we cannot rigorously defend.
  • Accuracy / precision should be directly comparable to the standard CPU implementation.
21. Version History
• AMBER 10 - Released Apr 2008
  • Implicit solvent GB GPU support released as a patch, Sept 2009.
• AMBER 11 - Released Apr 2010
  • Implicit and explicit solvent supported internally on a single GPU.
  • Oct 2010 - Bugfix.9 doubled performance on a single GPU and added multi-GPU support.
• AMBER 12 - Released Apr 2012
  • Added umbrella sampling support, REMD, simulated annealing, aMD, IPS and extra points.
  • Aug 2012 - Bugfix.9: new SPFP precision model, support for Kepler I, GPU-accelerated NMR restraints, improved performance.
  • Jan 2013 - Bugfix.14: support for CUDA 5.0, Jarzynski on GPU, GBSA, Kepler II support.
22. Supported Features Summary
• Supports 'standard' MD
  • Explicit solvent (PME): NVE/NVT/NPT
  • Implicit solvent (Generalized Born)
• AMBER and CHARMM classical force fields
• Thermostats: Berendsen, Langevin, Andersen
• Restraints / Constraints
  • Standard harmonic restraints
  • SHAKE on hydrogen atoms
New in AMBER 12:
• Umbrella Sampling
• REMD
• Simulated Annealing
• Accelerated MD
• Isotropic Periodic Sum
• Extra Points
23. Precision Models
• SPSP - Single precision for the entire calculation, with the exception of SHAKE, which is always done in double precision.
• SPDP - A combination of single precision for the calculation and double precision for accumulation (default < AMBER 12.9).
• DPDP - Double precision for the entire calculation.
• SPFP - New!¹ A single / double / fixed-precision hybrid, designed for optimum performance on Kepler I. Uses fire-and-forget atomic ops; fully deterministic, faster and more precise than SPDP, with minimal memory overhead (default >= AMBER 12.9). Q24.40 for forces, Q34.30 for energies / virials.
1. Scott Le Grand, Andreas W. Goetz, Ross C. Walker, "SPFP: Speed without compromise - a mixed precision model for GPU accelerated molecular dynamics simulations", Comp. Phys. Comm., 2012, in review.
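The fixed-point idea behind SPFP can be sketched in a few lines of Python (illustrative only; the real code uses 64-bit integer atomics in CUDA, and the example values are made up). Accumulating force contributions as Q24.40 integers makes the total independent of addition order, unlike floating point, which is what makes GPU runs bitwise deterministic:

```python
FRACTION_BITS = 40  # Q24.40: 24 integer bits, 40 fraction bits

def to_fixed(x: float) -> int:
    """Round a force contribution to a Q24.40 fixed-point integer."""
    return round(x * (1 << FRACTION_BITS))

def from_fixed(q: int) -> float:
    return q / (1 << FRACTION_BITS)

# One tiny and one large contribution, plus a cancelling term.
contributions = [2.0**-35, 2.0**22, -(2.0**22)]

# The double-precision result depends on summation order...
fwd = sum(contributions)
rev = sum(reversed(contributions))
print(fwd, rev)  # the tiny term is lost in one order but survives in the other

# ...while fixed-point accumulation is exact and order-independent:
# integer addition is associative, so any thread scheduling gives one answer.
fixed_fwd = sum(to_fixed(c) for c in contributions)
fixed_rev = sum(to_fixed(c) for c in reversed(contributions))
print(from_fixed(fixed_fwd), from_fixed(fixed_rev))
```

This order-independence is why SPFP can use "fire and forget" atomic adds from many GPU threads and still reproduce results run-to-run.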
25. Running on GPUs
• Details provided on: http://ambermd.org/gpus/
• Compile (assuming nvcc >= 4.2 is installed):
  cd $AMBERHOME
  ./configure -cuda gnu
  make install
  make test
• Running on a GPU: just replace the executable pmemd with pmemd.cuda:
  $AMBERHOME/bin/pmemd.cuda -O -i mdin …
• If process-exclusive mode is set on each GPU, pmemd just 'does the right thing'™.
32. Interactive MD?
• Single nodes are now fast enough that GPU-enabled cloud nodes actually make sense as a back end.
36. Recommended Hardware
• See the following page for continuous updates: http://ambermd.org/gpus/recommended_hardware.htm#hardware
37. DIY 4 GPU System
• Antec P280 Black ATX Mid Tower Case - http://tinyurl.com/a2wtkfr - $126.46
• SILVERSTONE ST1500 1500W ATX12V/EPS12V Power Supply - http://tinyurl.com/alj9w93 - $299.99
• AMD FX-8350 Eight Core CPU - http://tinyurl.com/b9teunj - $189.15
• Corsair Vengeance 16GB (2x8GB) DDR3 1600 MHz Desktop Memory - http://tinyurl.com/amh4jyu - $96.87
• GIGABYTE GA-990FXA-UD7 AM3+ AMD Motherboard - http://tinyurl.com/b8yvykv - $216.00
• Seagate Barracuda 7200 3 TB 7200RPM Internal Bare Drive ST3000DM001 - http://tinyurl.com/a4ccfvj - $139.99
• 4x EVGA GeForce GTX 680 4096 MB GDDR5 (or K20X or GTX Titan) - http://tinyurl.com/d82lq8d - $534.59 each
Total price: $3,206.82
Note: cards in this system run at x8, so you can only run single-GPU AMBER runs (but you can run 4 simultaneously at full speed). If you want to be able to run MPI 2-GPU runs, then only place 2 cards, in the x16 slots.
38. Single Workstation
Based on Exxact Model Quantum TXR410-512R (available as an Amber MD Workstation with AMBER 12 preinstalled)
(A) SuperServer tower / 4U convertible chassis; supports up to 3x 5.25-inch bays, 8x 3.5-inch hot-swap HDD bays, up to 4x double-width GPUs; 1620W redundant power supplies
(B) SuperServer Intel Patsburg-based motherboard; supports up to 2x Sandy Bridge EP (Socket R) series CPUs; 2x 10/100/1000 NIC; dedicated IPMI port; 4x PCIe 3.0 x16 slots, 2x PCIe 3.0 x8 slots; up to 512GB DDR3 1600MHz ECC/REG memory
(C) Intel Xeon E5-2620 2.00 GHz 15MB cache 7.20GT/sec LGA 2011 6-core processor (2)
(D) Certified 4GB 240-pin DDR3 SDRAM ECC Registered 1600 MHz server memory (8)
(E) Certified 2TB 7200RPM 64MB cache 3.5-inch SATA enterprise-class HDD in a RAID 1 configuration (2)
(G) GeForce GTX 680 4GB, GTX Titan, or K20X 6GB 384-bit GDDR5 PCI Express 3.0 accelerator (4)
(H) CentOS 6
Price: ~$6,500 (GTX 680); $8,500 (GTX Titan); $20,000 (K20X)
39. Clusters
• # of CPU sockets: 2
• Cores per CPU socket: 4+ (1 CPU core drives 1 GPU)
• CPU speed: 2.0+ GHz
• System memory per node: 16 to 32 GB
• GPUs: Kepler K10, K20, K20X; Fermi M2090, M2075, C2075
• # of GPUs per CPU socket: 1-2 (4 GPUs on 1 socket is good for 4 fast serial GPU runs)
• GPU memory preference: 6 GB
• GPU-to-CPU connection: PCIe 2.0 x16 or higher
• Server storage: 2 TB+
• Network configuration: InfiniBand QDR or better (optional)
Scale to multiple nodes with the same single-node configuration.
40. Acknowledgements
San Diego Supercomputer Center / University of California San Diego
National Science Foundation
• NSF Strategic Applications Collaboration (AUS/ASTA) Program
• NSF SI2-SSE Program
NVIDIA Corporation (hardware + people)
People: Romelia Salomon, Andreas Goetz, Scott Le Grand, Mike Wu, Matthew Clark, Robin Betz, Jason Swails, Ben Madej, Duncan Poole, Mark Berger, Sarah Tariq
41. AMBER User Survey - 2011
GPU Momentum is Growing!
AMBER machines: GPUs and CPUs 49%; CPUs only 50%; don't know 1%.
GPU experience: less than 6 months 30%; almost 1 year 34%; 1-2 years 22%; 2-3 years 6%; don't know 8%.
49% of AMBER machines have GPUs; 85% of users have up to 2 years of GPU experience.
42. Testimonials
"The whole lab loves the GPU cluster. Students are now able to run AMBER simulations that would not have been feasible on our local CPU-based resources before. Research throughput of the group has been enhanced significantly."
- Jodi Hadden, Chemistry Graduate Student, Woods Computing Lab, Complex Carbohydrate Research Center, University of Georgia
43. GPU Accelerated Apps Momentum
Key codes are GPU accelerated!
Molecular Dynamics: Abalone (GPU-only code), ACEMD (GPU-only code), AMBER, CHARMM, DL_POLY, GROMACS, HOOMD-Blue (GPU-only code), LAMMPS, NAMD
Quantum Chemistry: ABINIT, BigDFT, CP2K, GAMESS, Gaussian (in development), NWChem (in development), Quantum Espresso, TeraChem (GPU-only code), VASP
Check many more apps at www.nvidia.com/teslaapps
44. Test Drive K20 GPUs!
Experience the Acceleration
Run AMBER on a Tesla K20 GPU today. Sign up for a FREE GPU Test Drive on remotely hosted clusters: www.nvidia.com/GPUTestDrive
45. Registration is Open!
March 18-21, 2013 | San Jose, CA
Four days, three keynotes, 400+ sessions, one day of preconference developer tutorials, 150+ research posters, and lots of networking events and opportunities.
Visit www.gputechconf.com for more info.