This document summarizes Robert Henschel's presentation about optimizing parallel applications to accelerate scientific discoveries. It discusses Indiana University's high performance computing resources like BigRed and Quarry as well as the TeraGrid. It provides examples of how the High Performance Applications group helped researchers integrate HPC systems into an electron microscope workflow, migrate simulations of gas giant planets to the TeraGrid, and develop computational models to predict drug interactions. The presentation aims to make researchers aware of computing resources and how to get research done more efficiently.
Why Data Science Matters - 2014 WDS Data Stewardship Award Lecture - Xiaogang (Marshall) Ma
A presentation reviewing technical trends in data management, publication and citation, and methodologies for data interoperability, provenance of research, and semantic eScience.
Significance Of Hadoop For Data Science - Robert Smith
Hadoop is an important tool for data science when the volume of data exceeds the system memory or when the business case requires data to be distributed across multiple servers.
Ecosystem data and TERN: Genes to geosciences workshop 19 May 2014 - TERN Australia
PowerPoint presentation used to support the 'Ecosystem data and TERN' workshop on 19 May 2014, held at Macquarie University in Sydney as part of the Genes to Geosciences seminar series.
Optique aims to provide a semantic end-to-end connection between users and data sources, enabling users to rapidly formulate intuitive queries using familiar vocabularies and conceptualisations, and returning timely answers from large-scale and heterogeneous data sources.
Reusable Software and Open Data To Optimize Agriculture - David LeBauer
Abstract:
Humans need a secure and sustainable food supply, and science can help. We have an opportunity to transform agriculture by combining knowledge of organisms and ecosystems to engineer ecosystems that sustainably produce food, fuel, and other services. The challenge is that the information we have, the measurements, theories, and laws found in publications, notebooks, software, and human brains, is difficult to combine. We homogenize, encode, and automate the synthesis of data and mechanistic understanding in a way that links understanding at different scales and across domains. This allows extrapolation, prediction, and assessment. Reusable components allow automated construction of new knowledge that can be used to assess, predict, and optimize agro-ecosystems.
Developing reusable software and open-access databases is hard, and examples will illustrate how we use the Predictive Ecosystem Analyzer (PEcAn, pecanproject.org), the Biofuel Ecophysiological Traits and Yields database (BETYdb, betydb.org), and ecophysiological crop models to predict crop yield, decide which crops to plant, and determine which traits can be selected for the next generation of data-driven crop improvement. A next step is to automate the use of sensors mounted on robots, drones, and tractors to assess plants in the field. The TERRA Reference Phenotyping Platform (TERRA-Ref, terraref.github.io) will provide an open-access database and computing platform on which researchers can use and develop tools that use sensor data to assess and manage agricultural and other terrestrial ecosystems.
TERRA-Ref will adopt existing standards and develop modular software components and common interfaces, in collaboration with researchers from iPlant, NEON, AgMIP, USDA, rOpenSci, ARPA-E, and many scientists and industry partners. Our goal is to advance science by enabling efficient use, reuse, exchange, and creation of knowledge.
---
Invited talk for the "Informatics for Reproducibility in Earth and Environmental Science Research" session at the American Geophysical Union Fall Meeting, Dec 17 2015.
How to Win the Moment in Real Time Events - Tammy Gordon
Case studies on real time social media responses that won the Internet and beyond, plus a guided discussion on what Red Lobster should have done in their #Formation response. Prepared for Georgetown University School of Professional Studies class on February 8, 2016.
Nick Holway from the Novartis Institutes for Biomedical Research (NIBR) presented this deck at the Switzerland HPC Conference.
"Focused on High Performance and Scientific Computing at the Novartis Institutes for Biomedical Research (NIBR) in Basel, Switzerland, Nick Holway and his team provide HPC resources and services, including programming and consultancy, for the innovative research organization. Supporting more than 6,000 scientists, physicians and business professionals from around the world focused on developing medicines and devices that can produce positive real-world outcomes for patients and healthcare providers, Nick also contributes expertise in bioinformatics, image processing and data science in support of the researchers and their work."
Watch the video: http://wp.me/p3RLHQ-gJG
Learn more: https://www.nibr.com/
and
http://www.hpcadvisorycouncil.com/events/2017/swiss-workshop/agenda.php
Sign up for our insideHPC Newsletter: http://insidehpc.com/newsletter
High Performance Data Analytics and a Java Grande Run Time - Geoffrey Fox
There is perhaps a broad consensus as to important issues in practical parallel computing as applied to large scale simulations; this is reflected in supercomputer architectures, algorithms, libraries, languages, compilers and best practice for application development.
However, the same is not true for data-intensive computing, even though commercial clouds devote many more resources to data analytics than supercomputers devote to simulations.
Here we use a sample of over 50 big data applications to identify characteristics of data intensive applications and to deduce needed runtime and architectures.
We propose a big data version of the famous Berkeley dwarfs and NAS parallel benchmarks.
Our analysis builds on the Apache software stack that is well used in modern cloud computing.
We give some examples including clustering, deep-learning and multi-dimensional scaling.
One suggestion from this work is the value of a high-performance Java (Grande) runtime that supports both simulations and big data.
Dell High-Performance Computing solutions: Enable innovations, outperform exp... - Dell World
Businesses and organizations depend on high-performance computing (HPC) solutions to help engineers, data analysts, researchers, developers and designers more effectively drive innovation and increase overall performance and competitiveness. Learn how Dell’s latest powerful and comprehensive HPC solutions for healthcare and life sciences, manufacturing and engineering, energy, finance, research and big-data analytics can provide your team with new ways to get more done—faster and better than ever before.
Using The Hadoop Ecosystem to Drive Healthcare Innovation - Dan Wellisch
Presentation delivered to the Chicago Technology For Value-Based Healthcare Meetup (https://www.meetup.com/Chicago-Technology-For-Value-Based-Healthcare-Meetup/)
Challenges and Issues of Next Cloud Computing Platforms - Frederic Desprez
Cloud computing has now crossed the frontiers of research to reach industry. It is used every day, whether to exchange emails or to make reservations on websites. However, much research remains to be done to improve the performance and functionality of these platforms of tomorrow. In this talk, I will give an overview of some of the theoretical and applied research done at INRIA, particularly around Cloud distribution, energy monitoring and management, massive data processing and exchange, and resource management.
OpenStack at SJTU: Predictive Data Mining in Clinical Medicine with Dynamical... - Shuquan Huang
Shanghai Jiao Tong University (SJTU) is building an OpenStack-based HPC Cloud for clinicians from various hospitals and institutes to improve the efficiency of diagnostic, therapeutic, and monitoring tasks. Clinicians can take advantage of cloud-based data mining technology to deal with the huge amounts of research data obtained from molecular medicine, such as genetic or genomic signatures, and apply predictive data analytics with learning models in clinical medicine for patients' health. Predictive data mining is a typical HPC workload that is not easy to manage in an OpenStack cloud. There are many challenges, trade-offs, and gaps in this cloud journey.
In this session, we’ll share:
How to build an HPC platform upon OpenStack infrastructure
The key considerations of the architecture design
How to dynamically provision a fully optimized HPC cluster with OpenHPC ingredients within OpenStack
How to guarantee data mining workload performance in a cloud environment
The Transformation of HPC: Simulation and Cognitive Methods in the Era of Big... - inside-BigData.com
In this Deck from the 2018 Swiss HPC Conference, Dave Turek from IBM presents: The Transformation of HPC: Simulation and Cognitive Methods in the Era of Big Data.
"There is a shift underway where HPC is beginning to be addressed with novel techniques and technologies including cognitive and analytic approaches to HPC problems and the arrival of the first quantum systems. This talk will showcase how IBM is merging cognitive, analytics, and quantum with classic simulation and modeling to create a new path for computational science."
Watch the video: https://wp.me/p3RLHQ-ik7
Learn more: http://ibm.com
and
http://www.hpcadvisorycouncil.com/events/2018/swiss-workshop/agenda.php
Sign up for our insideHPC Newsletter: http://insidehpc.com/newsletter
How HPC and large-scale data analytics are transforming experimental science - inside-BigData.com
In this deck from DataTech19, Debbie Bard from NERSC presents: Supercomputing and the scientist: How HPC and large-scale data analytics are transforming experimental science.
"Debbie Bard leads the Data Science Engagement Group at NERSC. NERSC is the mission supercomputing center for the US Department of Energy, and supports over 7,000 scientists and 700 projects with supercomputing needs. A native of the UK, her career spans research in particle physics, cosmology and computing on both sides of the Atlantic. She obtained her PhD at Edinburgh University, and has worked at Imperial College London as well as the Stanford Linear Accelerator Center (SLAC) in the USA, before joining the Data Department at NERSC, where she focuses on data-intensive computing and research, including supercomputing for experimental science and machine learning at scale."
Watch the video: https://wp.me/p3RLHQ-kLV
Sign up for our insideHPC Newsletter: http://insidehpc.com/newsletter
Data-intensive applications on cloud computing resources: Applications in lif... - Ola Spjuth
Presentation at the de.NBI 2017 symposium “The Future Development of Bioinformatics in Germany and Europe” held at the Center for Interdisciplinary Research (ZiF) of Bielefeld University, October 23-25, 2017.
https://www.denbi.de/symposium2017
High Performance Computing and the Opportunity with Cognitive Technology - IBM Watson
With the ability to reduce “time to insight” and accelerate research breakthroughs by providing immense computational power, high performance computing is becoming increasingly important in the marketplace. Meanwhile, cognitive technology has risen to prominence, similarly accelerating new insight, but through a very different approach - by analyzing previously ignored unstructured data, which accounts for 80% of new data created today.
By combining the immense computing power of the HPC market with the machine learning, natural language processing, and even computer vision techniques found within cognitive technology, there is a huge opportunity to accelerate breakthroughs and enable better decision making than ever before.
Watch the replay of the webinar: https://www.youtube.com/watch?v=Hxgieboj3W0
My invited talk given at the LISA 2014 Conference in Seattle, WA, Nov 12, 2014. I describe some of the lessons learned while tracking usage of the HPC computing facilities of the Lattice Quantum Chromodynamics (LQCD) collaborators. The reports cover compute clusters at Fermi National Accelerator Laboratory.
In this deck from the 2014 HPC User Forum in Seattle, Jack Collins from the National Cancer Institute presents: Genomes to Structures to Function: The Role of HPC.
Watch the video presentation: http://wp.me/p3RLHQ-d28
UiPath Test Automation using UiPath Test Suite series, part 4 - DianaGray10
Welcome to UiPath Test Automation using UiPath Test Suite series part 4. In this session, we will cover Test Manager overview along with SAP heatmap.
The UiPath Test Manager overview with SAP heatmap webinar offers a concise yet comprehensive exploration of the role of a Test Manager within SAP environments, coupled with the utilization of heatmaps for effective testing strategies.
Participants will gain insights into the responsibilities, challenges, and best practices associated with test management in SAP projects. Additionally, the webinar delves into the significance of heatmaps as a visual aid for identifying testing priorities, areas of risk, and resource allocation within SAP landscapes. Through this session, attendees can expect to enhance their understanding of test management principles while learning practical approaches to optimize testing processes in SAP environments using heatmap visualization techniques.
What will you get from this session?
1. Insights into SAP testing best practices
2. Heatmap utilization for testing
3. Optimization of testing processes
4. Demo
Topics covered:
Execution from the test manager
Orchestrator execution result
Defect reporting
SAP heatmap example with demo
Speaker:
Deepak Rai, Automation Practice Lead, Boundaryless Group and UiPath MVP
State of ICS and IoT Cyber Threat Landscape Report 2024 preview - Prayukth K V
The IoT and OT threat landscape report has been prepared by the Threat Research Team at Sectrio using data from Sectrio's cyber threat intelligence farming facilities spread across over 85 cities around the world. In addition, Sectrio also runs AI-based advanced threat and payload engagement facilities that serve as sinks to attract and engage sophisticated threat actors and newer malware, including new variants and latent threats that are at an earlier stage of development.
The latest edition of the OT/ICS and IoT security Threat Landscape Report 2024 also covers:
State of global ICS asset and network exposure
Sectoral targets and attacks as well as the cost of ransom
Global APT activity, AI usage, actor and tactic profiles, and implications
Rise in volumes of AI-powered cyberattacks
Major cyber events in 2024
Malware and malicious payload trends
Cyberattack types and targets
Vulnerability exploit attempts on CVEs
Attacks on counties – USA
Expansion of bot farms – how, where, and why
In-depth analysis of the cyber threat landscape across North America, South America, Europe, APAC, and the Middle East
Why are attacks on smart factories rising?
Cyber risk predictions
Axis of attacks – Europe
Systemic attacks in the Middle East
Download the full report from here:
https://sectrio.com/resources/ot-threat-landscape-reports/sectrio-releases-ot-ics-and-iot-security-threat-landscape-report-2024/
Slack (or Teams) Automation for Bonterra Impact Management (fka Social Soluti... - Jeffrey Haguewood
Sidekick Solutions uses Bonterra Impact Management (fka Social Solutions Apricot) and automation solutions to integrate data for business workflows.
We believe integration and automation are essential to user experience and the promise of efficient work through technology. Automation is the critical ingredient to realizing that full vision. We develop integration products and services for Bonterra Case Management software to support the deployment of automations for a variety of use cases.
This video focuses on the notifications, alerts, and approval requests using Slack for Bonterra Impact Management. The solutions covered in this webinar can also be deployed for Microsoft Teams.
Interested in deploying notification automations for Bonterra Impact Management? Contact us at sales@sidekicksolutionsllc.com to discuss next steps.
Smart TV Buyer Insights Survey 2024 - 91mobiles
91mobiles recently conducted a Smart TV Buyer Insights Survey in which we asked over 3,000 respondents about the TV they own, aspects they look at on a new TV, and their TV buying preferences.
Epistemic Interaction - tuning interfaces to provide information for AI support - Alan Dix
Paper presented at SYNERGY workshop at AVI 2024, Genoa, Italy. 3rd June 2024
https://alandix.com/academic/papers/synergy2024-epistemic/
As machine learning integrates deeper into human-computer interactions, the concept of epistemic interaction emerges, aiming to refine these interactions to enhance system adaptability. This approach encourages minor, intentional adjustments in user behaviour to enrich the data available for system learning. This paper introduces epistemic interaction within the context of human-system communication, illustrating how deliberate interaction design can improve system understanding and adaptation. Through concrete examples, we demonstrate the potential of epistemic interaction to significantly advance human-computer interaction by leveraging intuitive human communication strategies to inform system design and functionality, offering a novel pathway for enriching user-system engagements.
Kubernetes & AI - Beauty and the Beast !?! @KCD Istanbul 2024 - Tobias Schneck
As AI technology pushes into IT, I was wondering, as an “infrastructure container Kubernetes guy”, how this fancy AI technology gets managed from an infrastructure operations point of view. Is it possible to apply our lovely cloud-native principles as well? What benefits could both technologies bring to each other?
Let me take these questions and provide you a short journey through existing deployment models and use cases for AI software. Using practical examples, we will discuss what cloud/on-premise strategy we may need to apply it to our own infrastructure and get it to work from an enterprise perspective. I want to give an overview of infrastructure requirements and technologies, and what could be beneficial for, or limiting, your AI use cases in an enterprise environment. An interactive demo will give you some insights into which approaches I have already gotten working for real.
"Impact of front-end architecture on development cost", Viktor Turskyi - Fwdays
I have heard many times that architecture is not important for the front-end. Also, many times I have seen how developers implement features on the front-end just following the standard rules for a framework and think that this is enough to successfully launch the project, and then the project fails. How to prevent this and what approach to choose? I have launched dozens of complex projects and during the talk we will analyze which approaches have worked for me and which have not.
Let's dive deeper into the world of ODC! Ricardo Alves (OutSystems) will join us to tell all about the new Data Fabric. After that, Sezen de Bruijn (OutSystems) will get into the details on how to best design a sturdy architecture within ODC.
Key Trends Shaping the Future of Infrastructure - Cheryl Hung
Keynote at DIGIT West Expo, Glasgow on 29 May 2024.
Cheryl Hung, ochery.com
Sr Director, Infrastructure Ecosystem, Arm.
The key trends across hardware, cloud and open-source; exploring how these areas are likely to mature and develop over the short and long-term, and then considering how organisations can position themselves to adapt and thrive.
GraphRAG is All You Need? LLM & Knowledge Graph - Guy Korland
Guy Korland, CEO and Co-founder of FalkorDB, will review two articles on the integration of language models with knowledge graphs.
1. Unifying Large Language Models and Knowledge Graphs: A Roadmap.
https://arxiv.org/abs/2306.08302
2. Microsoft Research's GraphRAG paper and a review paper on various uses of knowledge graphs:
https://www.microsoft.com/en-us/research/blog/graphrag-unlocking-llm-discovery-on-narrative-private-data/
The Art of the Pitch: WordPress Relationships and Sales - Laura Byrne
Clients don’t know what they don’t know. What web solutions are right for them? How does WordPress come into the picture? How do you make sure you understand scope and timeline? What do you do if something changes?
All these questions and more will be explored as we talk about matching clients’ needs with what your agency offers without pulling teeth or pulling your hair out. Practical tips and strategies for successful relationship building that lead to closing the deal.
Accelerate your Kubernetes clusters with Varnish Caching - Thijs Feryn
A presentation about the usage and availability of Varnish on Kubernetes. This talk explores the capabilities of Varnish caching and shows how to use the Varnish Helm chart to deploy it to Kubernetes.
This presentation was delivered at K8SUG Singapore. See https://feryn.eu/presentations/accelerate-your-kubernetes-clusters-with-varnish-caching-k8sug-singapore-28-2024 for more details.
Statewide IT - Robert Henschel
1. Tuning Parallel Applications to
Accelerate Scientific Discoveries
Robert Henschel
rhensche@indiana.edu
October 2009
2. Contents
• PTI / High Performance Applications
• Performance of Scientific Codes
• IU and TeraGrid Compute Resources
• Optimizing for IU's HPC Systems
• Using TeraGrid HPC Systems
• HPA is Here to Help
Robert Henschel
3. What this talk will be about
• Making you aware of compute resources that you
can use for your work, to make you more
productive.
• Introducing the High Performance Applications
group and how we can help get research done
faster.
• Give you examples of what we have done for
researchers to make them more competitive in their
field.
4. PTI and High Performance Applications
• Pervasive Technology Institute
– Develop and deliver innovative information technology
to advance research, education, industry and society.
– School of Informatics
– School of Law
– University Information Technology Services
• High Performance Applications
– Part of the Digital Science Center of PTI
– Part of the Research Technologies of UITS
– Seven people who help IU researchers make efficient
use of IU and TeraGrid compute resources
5. Performance of Scientific Codes
• Supercomputing, or High Performance Computing (HPC),
is not just for computer geeks!
• Performance for computer scientists
– Amdahl's law and scalability
– Efficient usage of functional units of processors
– Optimally using memory bandwidth
– Trying to avoid I/O as much as possible
• Performance for researchers
– When do I get the answer to my problem?
– When does my job run and when is it done?
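The distinction on this slide between the computer scientist's and the researcher's view of performance rests on Amdahl's law: the serial fraction of a code caps its speedup no matter how many cores a system provides. A minimal illustrative sketch (not part of the original talk):

```python
def amdahl_speedup(parallel_fraction, n_workers):
    """Predicted speedup when only a fraction of the program parallelizes
    (Amdahl's law): 1 / (serial + parallel/n)."""
    serial_fraction = 1.0 - parallel_fraction
    return 1.0 / (serial_fraction + parallel_fraction / n_workers)

# Even on 1024 cores, a 95%-parallel code stays under the 1/0.05 = 20x ceiling:
print(round(amdahl_speedup(0.95, 1024), 1))  # → 19.6
```

This is why optimization work often targets the serial portions of a code before adding more cores.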
6. IU and TeraGrid Compute Resources
• Two HPC systems at IU
– BigRed 30 TFLOPS (3000 cores)
– Quarry 7 TFLOPS (1000 cores)
• Several special purpose systems
– Small Cell B.E. Cluster
– MDGRAPE-2 machine
• Several storage resources
– IU Data Capacitor
– GPFS, RFS, HPSS
• Policy of open access to compute resources
8. IU and TeraGrid Compute Resources cont'd
• TeraGrid
– NSF funded HPC systems and support infrastructure
– 11 resource providers
– More than 1,500 TFLOPS (150,000 cores)
• Central allocation and support structure
9. Optimizing for IU's HPC Systems
• Help researchers access the central systems and
determine what system to use
• Install and optimize applications
• Provide guidance on compiler and library optimization
• Help with job submission, especially running many
thousands of jobs
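One common pattern behind "running many thousands of jobs" is grouping a large parameter sweep into batches, so the scheduler sees hundreds of submissions instead of tens of thousands. A hypothetical sketch (the batch size and parameter names are illustrative, not a tool the talk describes):

```python
def make_batches(param_sets, batch_size):
    """Split a long list of per-task parameter sets into scheduler-sized
    batches, each of which would be submitted as one cluster job."""
    return [param_sets[i:i + batch_size]
            for i in range(0, len(param_sets), batch_size)]

# 10,000 independent tasks become 100 job submissions of 100 tasks each:
sweep = [{"seed": s} for s in range(10_000)]
batches = make_batches(sweep, 100)
print(len(batches), len(batches[0]))  # → 100 100
```

Each batch could then be written into one scheduler script, keeping queue overhead manageable.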
10. Using TeraGrid HPC Systems
• Low barrier of entry
• Identify if a problem and workflow will work on the
TeraGrid
• Get a startup allocation
• Use it and identify if it is worth pursuing this further
• Submit a full allocation request
11. Contents – HPA is Here to Help
• HPA is Here to Help
– What We Do
• Recent Examples
– Integrating HPC Systems into an Electron
Microscope Workflow
– Migrating Research in Gas Giant Planets from IU
to TeraGrid HPC Systems
– Developing Computational Models to Predict
Drug-Drug Interactions
12. What We Do
• Consulting about HPC system usage
– From start to finish
– Optimize source code for architectures
• Help with TeraGrid allocation proposals
• Adapting and creating workflows for new environments
• Consulting for grant proposals
13. HPC Systems and an Electron Microscope
General Case
– Users have an instrument that produces a lot of data
on a daily basis
– This data needs to be stored and analyzed
Electron Microscope in Simon Hall (IU Bloomington)
– Microscope stores data on a Windows workstation
– Researcher does quality checks on local workstation
– IU Data Capacitor links workstations, IU HPC systems
and the IU long term archive together
14. Gas Giant Planets on the TeraGrid
General Case
– Users have a set workflow for analyzing data
– Locally available compute resources are not big
enough to keep up with demand
Understanding Gas Giant Planets
– IDL is used to visualize simulation data
• Commercial software, IU Astronomy has a license
– Simulation application needs to run on a large shared
memory system
– TeraGrid and IU Data Capacitor tie this workflow
together
15. Predicting Drug-Drug Interactions
General Case
– Researchers implement proof of concept research
algorithms
– Scaling from proof of concept to production science is
difficult
– The ability to add HPC expertise to grant proposals will
make the proposal more competitive
Computational Models to Predict Drug-Drug Interactions
– Drug exposure model developed in R
– Scaling to real world data sets not possible without
using HPC systems
– Porting to C and running on UITS hardware
16. What this talk was about
• Made you aware of compute resources that you can
use for your work, to make you more productive.
• Introduced the High Performance Applications
group and how we can help get research done
faster.
• Gave you examples of what we have done for
researchers to make them more competitive in their
field.
17. Acknowledgments
This material is based upon work supported by the National Science
Foundation under Grant Numbers 0116050 and 0521433. Any
opinions, findings and conclusions or recommendations expressed in
this material are those of the author and do not necessarily reflect the
views of the National Science Foundation (NSF).
This work was supported in part by the Indiana Metabolomics and
Cytomics Initiative (METACyt). METACyt is supported in part by Lilly
Endowment, Inc.
This work was supported in part by the Indiana Genomics Initiative. The
Indiana Genomics Initiative of Indiana University is supported in part by
Lilly Endowment, Inc.
This work was supported in part by Shared University Research grants
from IBM, Inc. to Indiana University.