Vídeo: https://www.youtube.com/watch?v=8cFqNwhQ7uE
A key factor for the competitiveness of the country, its science, and its industry.
Talk given during Intel Innovation Week 2015.
HPC DAY 2017 | Accelerating tomorrow's HPC and AI workflows with Intel Architecture (HPC DAY)
HPC DAY 2017 - http://www.hpcday.eu/
Accelerating tomorrow's HPC and AI workflows with Intel Architecture
Atanas Atanasov | HPC solution architect, EMEA region at Intel
Fujitsu World Tour 2017 - Compute Platform For The Digital World (Fujitsu India)
A significant performance increase combined with a rich feature set based on cutting-edge technology results in compelling benefits across a broad variety of application scenarios.
AWS & Intel Webinar Series - Accelerating AI Research (Intel® Software)
Scale your research workloads faster with Intel on AWS. Learn how the performance and productivity of Intel Hardware and Software help bridge the gap between ideation and results in Data Science. Get started on your AI Developer Journey @ software.intel.com/ai.
Healthcare has become one of the most important aspects of everyone's life. Its importance has surged with recent outbreaks, and the latest pandemic has made it imperative to collaborate on improving healthcare for everyone as soon as possible.
IBM has reacted quickly, sharing not only its knowledge but also its Artificial Intelligence supercomputers around the world.
Those supercomputers are helping to overcome this outbreak, and will help with future ones.
Their features differ markedly from competing offerings in the supercomputer market.
We will take a quick look at what sets these AI-focused supercomputers apart and how they can help in the R&D of healthcare solutions for everyone, from organizations with access to a large IBM AI supercomputer to those with only a single small IBM AI-focused server.
HPC + AI: Machine Learning Models in Scientific Computing (inside-BigData.com)
In this video from the 2019 Stanford HPC Conference, Steve Oberlin from NVIDIA presents: HPC + AI: Machine Learning Models in Scientific Computing.
"Most AI researchers and industry pioneers agree that the wide availability and low cost of highly-efficient and powerful GPUs and accelerated computing parallel programming tools (originally developed to benefit HPC applications) catalyzed the modern revolution in AI/deep learning. Clearly, AI has benefited greatly from HPC. Now, AI methods and tools are starting to be applied to HPC applications to great effect. This talk will describe an emerging workflow that uses traditional numeric simulation codes to generate synthetic data sets to train machine learning algorithms, then employs the resulting AI models to predict the computed results, often with dramatic gains in efficiency, performance, and even accuracy. Some compelling success stories will be shared, and the implications of this new HPC + AI workflow on HPC applications and system architecture in a post-Moore’s Law world considered."
Watch the video: https://youtu.be/SV3cnWf39kc
Learn more: https://nvidia.com
and
http://hpcadvisorycouncil.com/events/2019/stanford-workshop/
Sign up for our insideHPC Newsletter: http://insidehpc.com/newsletter
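The simulation-to-surrogate workflow Oberlin describes — run a numeric simulation to generate synthetic training data, then train a model that predicts the computed results cheaply — can be sketched in a few lines. The "simulation" function and the polynomial surrogate below are illustrative assumptions, not from the talk:

```python
import numpy as np

def expensive_simulation(x):
    """Stand-in for a costly numeric solver (illustrative only)."""
    return np.sin(2 * x) + 0.5 * x**2

# 1. Run the simulator to build a synthetic training set.
x_train = np.linspace(-2, 2, 50)
y_train = expensive_simulation(x_train)

# 2. Fit a cheap surrogate model (degree-8 least-squares polynomial).
coeffs = np.polyfit(x_train, y_train, deg=8)
surrogate = np.poly1d(coeffs)

# 3. Use the surrogate instead of the simulator for new inputs.
x_new = np.linspace(-2, 2, 500)
y_pred = surrogate(x_new)
max_err = np.max(np.abs(y_pred - expensive_simulation(x_new)))
print(f"max surrogate error: {max_err:.4f}")
```

In real deployments the surrogate is typically a neural network and the inputs are high-dimensional simulation states, but the train-then-replace pattern is the same.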
Large-Scale Optimization Strategies for Typical HPC Workloads (inside-BigData.com)
In this deck from PASC 2019, Liu Yu from Inspur presents: Large-Scale Optimization Strategies for Typical HPC Workloads.
"Ensuring performance of applications running on large-scale clusters is one of the primary focuses of HPC research. In this talk, we will show our strategies for performance analysis and optimization of applications in different fields of research using large-scale HPC clusters. Our strategies are designed to comprehensively analyze the runtime features of applications, the parallel mode of the physical model, the algorithm implementation, and other technical details. This three-level strategy covers platform optimization, technological innovation, and model innovation, with targeted optimization based on these features. State-of-the-art CPU instructions, network communication and other modules, and innovative parallel modes of some applications have been optimized. After optimization, these applications are expected to outperform their non-optimized counterparts with a clear increase in performance."
Watch the video: https://wp.me/p3RLHQ-kwB
Learn more: http://en.inspur.com/en/2403285/2403287/2403295/index.html
and
https://pasc19.pasc-conference.org/program/keynote-presentations/
Sign up for our insideHPC Newsletter: http://insidehpc.com/newsletter
NVIDIA CEO Jensen Huang Presentation at Supercomputing 2019 (NVIDIA)
Broadening support for GPU-accelerated supercomputing to a fast-growing new platform, NVIDIA founder and CEO Jensen Huang introduced a reference design for building GPU-accelerated Arm servers, with wide industry backing.
Everything is changing, from healthcare to the automotive and financial markets to every type of engineering: products are no longer created by an individual or, at best, a team, but are developed and perfected using AI and hundreds of computers. And even AI is something we can no longer run on a single computer, no matter how powerful it is. What drives everything today is HPC, or High-Performance Computing, heavily linked to AI. In this session we will discuss AI, HPC, the IBM Power architecture, and how they can help us develop better healthcare, better automobiles, better financials, and better everything we run on them.
Jean Thomas Acquaviva from DDN presented this deck at the 2016 HPC Advisory Council Switzerland Conference.
"Thanks to the arrival of SSDs, the performance of storage systems can be boosted by orders of magnitude. While a considerable amount of software engineering has been invested in the past to circumvent the limitations of rotating media, there is a misbelief that a lightweight software approach may be sufficient for taking advantage of solid-state media. Taking data protection as an example, this talk will present some of the limitations of current storage software stacks. We will then discuss how this unfolds into a more radical re-design of the software architecture and ultimately makes the case for an I/O interception layer."
Learn more: http://ddn.com
Watch the video presentation: http://wp.me/p3RLHQ-f7J
Sign up for our insideHPC Newsletter: http://insidehpc.com/newsletter
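The "I/O interception layer" idea in the abstract can be illustrated with a toy shim that sits between the application and the underlying file object, observing every write before delegating it. The class and the accounting policy here are purely illustrative, not DDN's design:

```python
import io

class InterceptedFile:
    """Toy I/O interception layer: wraps a file-like object and
    records bytes written before delegating (illustrative only)."""

    def __init__(self, inner):
        self.inner = inner
        self.bytes_written = 0

    def write(self, data):
        # Interception point: accounting, checksumming, or routing
        # writes to flash vs. disk could happen here.
        self.bytes_written += len(data)
        return self.inner.write(data)

    def close(self):
        self.inner.close()

buf = io.BytesIO()
f = InterceptedFile(buf)
f.write(b"hello ")
f.write(b"world")
print(f.bytes_written)  # 11
```

A production interception layer works at a much lower level (e.g. hooking POSIX I/O calls), but the principle — one choke point that sees all I/O and can apply policy — is the same.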
Evolving Cyberinfrastructure, Democratizing Data, and Scaling AI to Catalyze ... (inside-BigData.com)
In this deck from the Stanford HPC Conference, Nick Nystrom and Paola Buitrago provide an update from the Pittsburgh Supercomputing Center.
Nick Nystrom is Chief Scientist at the Pittsburgh Supercomputing Center (PSC). Nick is architect and PI for Bridges, PSC's flagship system that successfully pioneered the convergence of HPC, AI, and Big Data. He is also PI for the NIH Human Biomolecular Atlas Program’s HIVE Infrastructure Component and co-PI for projects that bring emerging AI technologies to research (Open Compass), apply machine learning to biomedical data for breast and lung cancer (Big Data for Better Health), and identify causal relationships in biomedical big data (the Center for Causal Discovery, an NIH Big Data to Knowledge Center of Excellence). His current research interests include hardware and software architecture, applications of machine learning to multimodal data (particularly for the life sciences) and to enhance simulation, and graph analytics.
Watch the video: https://youtu.be/LWEU1L1o7yY
Learn more: https://www.psc.edu/
and
http://www.hpcadvisorycouncil.com/events/2020/stanford-workshop/
Sign up for our insideHPC Newsletter: http://insidehpc.com/newsletter
Innovating to Create a Brighter Future for AI, HPC, and Big Data (inside-BigData.com)
In this deck from the DDN User Group at ISC 2019, Alex Bouzari from DDN presents: Innovating to Create a Brighter Future for AI, HPC, and Big Data.
"In this rapidly changing landscape of HPC, DDN brings fresh innovation with the stability and support experience you need. Stay in front of your challenges with the most reliable long term partner in data at scale."
Watch the video: https://wp.me/p3RLHQ-kxm
Learn more: http://ddn.com
Sign up for our insideHPC Newsletter: http://insidehpc.com/newsletter
An interesting and very useful presentation on simulation in Proteus. In the simulation, the viewer will learn how to operate Proteus, light an LED, and compare the power consumption of an LED versus a bulb/CFL.
The Marketing Journey: Transforming For Success (SparkPost)
Jay Henderson from IBM Marketing Cloud presents: The role of the marketing department is evolving at a rapid pace, with increasing pressure on teams to drive more revenue and to redefine the customer experience. While many CMOs are struggling to keep up, others are completely transforming the way they are doing business and meeting these challenges head-on. Hear insights from IBM's CMO research; assess where your company stands; and learn where the market leaders are focusing their efforts.
Sharing is the New Buying: How to Win in the Collaborative Economy [INFOGRAPH... (Vision Critical)
In the Collaborative Economy, people can get what they need from each other—rather than buying from established brands. Businesses need to understand this emergent market in order to embrace the opportunities it offers.
In partnership with Jeremiah Owyang of Crowd Companies, Vision Critical asked 90,112 people in the US, Canada and the UK about their participation in the Collaborative Economy. These infographics show our report's key findings.
To read the full report: http://bit.ly/SharingNewBuyingSH
This lecture aims to give some food for thought on how current High Performance Computing systems (hardware and software) tend to merge with Big Data ones (machine learning, analytics, and enterprise workloads) in order to meet both workloads' demands while sharing the same clusters.
Give Your Organization Better, Faster Insights & Answers with High Performanc... (Dell World)
From modeling and simulating new products to analyzing ‘Big Data’ for insights into customer behaviors, achieving better results faster can be crucial for competitive advantages and success. High performance computing (HPC), long used for academic/government research, has gone mainstream, and is now used by companies and organizations in all fields—from finance to pharmaceuticals, from marketing to manufacturing, from e-commerce to engineering, from healthcare to homeland defense. Dell is a leader in HPC and can help you get better, faster insights and answers, no matter what your organization desires to achieve.
In this video from Moabcon 2013, Dick Bland and Jérôme Labat from HP present: The New Style of IT: HP Update for Moabcon 2013.
"Cloud, Mobility, Security, and Big Data are transforming what the business expects from IT resulting in a “New Style of IT.” The result of alternative thinking from a proven industry leader, HP Moonshot is the world’s first software defined server that will accelerate innovation while delivering breakthrough efficiency and scale."
While the first spin of Moonshot is not targeted at HPC, Bland said that HP will be able to spin up new modules for the platform that could include FPGAs and ARM-based nodes more suited to high performance computing.
Learn more at: http://www.adaptivecomputing.com/company/news-and-events/events/moabcon-2013/moabcon-2013-full-agenda/
You can watch the video of this talk at this URL: http://inside-cloud.com/2013/04/video-the-new-style-of-it-hp-moonshot-update-for-moabcon-2013/
Red Hat Summit 2015: Red Hat Storage Breakfast session (Red_Hat_Storage)
See the presentation shared during a special breakfast session during Red Hat Summit 2015. Learn about our mission, what areas and communities are seeing strong growth, and much more.
If you're like most of the world, you're in an aggressive race to implement machine learning applications and on a path to deep learning. If you can give better service at a lower cost, you will be the winner in 2030. But infrastructure is a key challenge to getting there. What does the technology infrastructure look like over the next decade as you move from petabytes to exabytes? How are you budgeting for colossal data growth over the next decade? How do your data scientists share data today, and will it scale for 5-10 years? Do you have the appropriate security, governance, backup, and archiving processes in place? This session will address these issues and discuss strategies for customers as they ramp up their AI journey with a long-term view.
IBM Special Announcement session Intel #IDF2013 September 10, 2013 (Cliff Kinard)
Nice IBM System x announcement overview presentation from Intel IDF2013 held on September 10, 2013.
IBM NeXtScale System is a new dense offering from IBM. It is based on our experience with IBM iDataPlex and IBM BladeCenter, along with a tight focus on emerging and future client requirements. Today we announced two components:
- IBM NeXtScale n1200 enclosure – a 6U enclosure that can hold up to 12 NeXtScale System servers
- IBM NeXtScale nx360 M4 server – a half-wide server with up to 2 processors, 8 DIMMs (256 GB), 2 PCIe 3.0 adapters, and 2 HDDs or 4 solid-state drives
Watch our North America webcast replay here:
http://event.on24.com/r.htm?e=670225&s=1&k=FC5CD17AB42385B40BCED29B8B61E2E8&partnerref=IBM09
Ceph on Intel: Intel Storage Components, Benchmarks, and Contributions (Red_Hat_Storage)
At Red Hat Storage Day Minneapolis on 4/12/16, Intel's Dan Ferber presented on Intel storage components, benchmarks, and contributions as they relate to Ceph.
Join us for an exciting and informative preview of the broadest range of next-generation systems optimized for tomorrow’s data center workloads, Powered by 4th Gen Intel® Xeon® Scalable Processors (formerly codenamed Sapphire Rapids).
Experts from Supermicro and Intel will discuss how the upcoming Supermicro X13 systems will enable new performance levels utilizing state-of-the-art technology, including DDR5, PCIe 5.0, Compute Express Link™ 1.1, and Intel® Advanced Matrix Extensions (Intel AMX).
Streamline End-to-End AI Pipelines with Intel, Databricks, and OmniSci (Intel® Software)
Preprocess, visualize, and build AI faster at scale on Intel Architecture. Develop end-to-end AI pipelines for inferencing, including data ingestion, preprocessing, and model inferencing with tabular, NLP, RecSys, video, and image data, using the Intel oneAPI AI Analytics Toolkit and other optimized libraries. Build performant pipelines at scale with Databricks and end-to-end Xeon optimizations. Learn how to visualize with the OmniSci Immerse Platform and experience a live demonstration of the Intel Distribution of Modin and OmniSci.
The Art of the Pitch: WordPress Relationships and Sales (Laura Byrne)
Clients don't know what they don't know. What web solutions are right for them? How does WordPress come into the picture? How do you make sure you understand scope and timeline? What do you do if something changes?
All these questions and more will be explored as we talk about matching clients' needs with what your agency offers, without pulling teeth or pulling your hair out. Practical tips and strategies for successful relationship-building that lead to closing the deal.
A tale of scale & speed: How the US Navy is enabling software delivery from l... (sonjaschweigert1)
Rapid and secure feature delivery is a goal across every application team and every branch of the DoD. The Navy’s DevSecOps platform, Party Barge, has achieved:
- Reduction in onboarding time from 5 weeks to 1 day
- Improved developer experience and productivity through actionable findings and reduction of false positives
- Maintenance of superior security standards and inherent policy enforcement with Authorization to Operate (ATO)
Development teams can ship efficiently and ensure applications are cyber ready for Navy Authorizing Officials (AOs). In this webinar, Sigma Defense and Anchore will give attendees a look behind the scenes and demo secure pipeline automation and security artifacts that speed up application ATO and time to production.
We will cover:
- How to remove silos in DevSecOps
- How to build efficient development pipeline roles and component templates
- How to deliver security artifacts that matter for ATOs (SBOMs, vulnerability reports, and policy evidence)
- How to streamline operations with automated policy checks on container images
GraphRAG is All You Need? LLM & Knowledge Graph (Guy Korland)
Guy Korland, CEO and Co-founder of FalkorDB, will review two articles on the integration of language models with knowledge graphs.
1. Unifying Large Language Models and Knowledge Graphs: A Roadmap.
https://arxiv.org/abs/2306.08302
2. Microsoft Research's GraphRAG paper and a review paper on various uses of knowledge graphs:
https://www.microsoft.com/en-us/research/blog/graphrag-unlocking-llm-discovery-on-narrative-private-data/
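The core idea behind GraphRAG — retrieve facts from a knowledge graph and inject them into an LLM prompt as grounding context — can be sketched with a toy in-memory graph. The graph contents, retrieval rule, and prompt format below are illustrative assumptions, not from either paper:

```python
# Toy knowledge graph: entity -> list of (relation, object) triples.
graph = {
    "FalkorDB": [("is_a", "graph database"), ("founded_by", "Guy Korland")],
    "GraphRAG": [("combines", "knowledge graphs"), ("combines", "LLMs")],
}

def retrieve_facts(question: str, graph: dict) -> list:
    """Pull triples for any entity mentioned in the question."""
    facts = []
    for entity, triples in graph.items():
        if entity.lower() in question.lower():
            facts += [f"{entity} {rel} {obj}" for rel, obj in triples]
    return facts

def build_prompt(question: str, graph: dict) -> str:
    """Augment the question with retrieved graph facts before it
    would be sent to an LLM."""
    context = "\n".join(retrieve_facts(question, graph))
    return f"Context:\n{context}\n\nQuestion: {question}"

print(build_prompt("What is FalkorDB?", graph))
```

Real systems replace the string-match retrieval with graph queries (e.g. Cypher) and entity linking, but the retrieve-then-augment shape is the same.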
SAP Sapphire 2024 - ASUG301 building better apps with SAP Fiori.pdf (Peter Spielvogel)
Building better applications for business users with SAP Fiori.
• What is SAP Fiori and why it matters to you
• How a better user experience drives measurable business benefits
• How to get started with SAP Fiori today
• How SAP Fiori elements accelerates application development
• How SAP Build Code includes SAP Fiori tools and other generative artificial intelligence capabilities
• How SAP Fiori paves the way for using AI in SAP apps
Dev Dives: Train smarter, not harder – active learning and UiPath LLMs for do... (UiPathCommunity)
💥 Speed, accuracy, and scaling – discover the superpowers of GenAI in action with UiPath Document Understanding and Communications Mining™:
- See how to accelerate model training and optimize model performance with active learning
- Learn about the latest enhancements to out-of-the-box document processing – with little to no training required
- Get an exclusive demo of the new family of UiPath LLMs – GenAI models specialized for processing different types of documents and messages
This is a hands-on session specifically designed for automation developers and AI enthusiasts seeking to enhance their knowledge in leveraging the latest intelligent document processing capabilities offered by UiPath.
Speakers:
👨🏫 Andras Palfi, Senior Product Manager, UiPath
👩🏫 Lenka Dulovicova, Product Program Manager, UiPath
State of ICS and IoT Cyber Threat Landscape Report 2024 preview (Prayukth K V)
The IoT and OT threat landscape report has been prepared by the Threat Research Team at Sectrio using data from Sectrio's cyber threat intelligence farming facilities spread across over 85 cities around the world. In addition, Sectrio also runs AI-based advanced threat and payload engagement facilities that serve as sinks to attract and engage sophisticated threat actors and newer malware, including new variants and latent threats at an earlier stage of development.
The latest edition of the OT/ICS and IoT security Threat Landscape Report 2024 also covers:
- State of global ICS asset and network exposure
- Sectoral targets and attacks, as well as the cost of ransom
- Global APT activity, AI usage, actor and tactic profiles, and implications
- Rise in volumes of AI-powered cyberattacks
- Major cyber events in 2024
- Malware and malicious payload trends
- Cyberattack types and targets
- Vulnerability exploit attempts on CVEs
- Attacks on counties – USA
- Expansion of bot farms – how, where, and why
- In-depth analysis of the cyber threat landscape across North America, South America, Europe, APAC, and the Middle East
- Why are attacks on smart factories rising?
- Cyber risk predictions
- Axis of attacks – Europe
- Systemic attacks in the Middle East
Download the full report from here:
https://sectrio.com/resources/ot-threat-landscape-reports/sectrio-releases-ot-ics-and-iot-security-threat-landscape-report-2024/
Removing Uninteresting Bytes in Software Fuzzing (Aftab Hussain)
Imagine a world where software fuzzing, the process of mutating bytes in test seeds to uncover hidden and erroneous program behaviors, becomes faster and more effective. A lot depends on the initial seeds, which can significantly dictate the trajectory of a fuzzing campaign, particularly in terms of how long it takes to uncover interesting behaviour in your code. We introduce DIAR, a technique designed to speedup fuzzing campaigns by pinpointing and eliminating those uninteresting bytes in the seeds. Picture this: instead of wasting valuable resources on meaningless mutations in large, bloated seeds, DIAR removes the unnecessary bytes, streamlining the entire process.
In this work, we equipped AFL, a popular fuzzer, with DIAR and examined two critical Linux tools -- Libxml's xmllint, a tool for parsing XML documents, and Binutils' readelf, an essential debugging and security analysis command-line tool used to display detailed information about ELF (Executable and Linkable Format) files. Our preliminary results show that AFL+DIAR not only discovers new paths more quickly but also achieves higher coverage overall. This work thus showcases how starting with lean and optimized seeds can lead to faster, more comprehensive fuzzing campaigns -- and DIAR helps you find such seeds.
- These are slides from the talk given at the IEEE International Conference on Software Testing, Verification and Validation Workshops (ICSTW), 2022.
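DIAR's core idea — identify seed bytes whose mutation never changes observed program behavior, so the fuzzer can skip them — can be sketched with a toy coverage oracle. The target "program" and its coverage signal below are stand-ins, not the paper's actual implementation:

```python
def coverage(data: bytes) -> frozenset:
    """Toy stand-in for instrumented coverage: only the 4-byte
    header and the byte right after it influence behavior."""
    edges = set()
    if data[:4] == b"MAGI":
        edges.add("header_ok")
        if len(data) > 4 and data[4] > 127:
            edges.add("big_flag")
    return frozenset(edges)

def interesting_bytes(seed: bytes) -> list:
    """Mark a byte interesting if flipping it changes coverage;
    the rest are candidates for DIAR-style removal."""
    base = coverage(seed)
    keep = []
    for i in range(len(seed)):
        mutated = seed[:i] + bytes([seed[i] ^ 0xFF]) + seed[i + 1:]
        if coverage(mutated) != base:
            keep.append(i)
    return keep

seed = b"MAGI\x00padding-bytes"
print(interesting_bytes(seed))  # -> [0, 1, 2, 3, 4]
```

Only the header and the flag byte survive; the padding bytes never affect coverage, so mutating them during a campaign would be wasted effort.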
Smart TV Buyer Insights Survey 2024 by 91mobiles.pdf (91mobiles)
91mobiles recently conducted a Smart TV Buyer Insights Survey in which we asked over 3,000 respondents about the TV they own, aspects they look at on a new TV, and their TV buying preferences.
Enhancing Performance with Globus and the Science DMZ (Globus)
ESnet has led the way in helping national facilities—and many other institutions in the research community—configure Science DMZs and troubleshoot network issues to maximize data transfer performance. In this talk we will present a summary of approaches and tips for getting the most out of your network infrastructure using Globus Connect Server.
Why You Should Replace Windows 11 with Nitrux Linux 3.5.0 for enhanced perfor... (SOFTTECHHUB)
The choice of an operating system plays a pivotal role in shaping our computing experience. For decades, Microsoft's Windows has dominated the market, offering a familiar and widely adopted platform for personal and professional use. However, as technological advancements continue to push the boundaries of innovation, alternative operating systems have emerged, challenging the status quo and offering users a fresh perspective on computing.
One such alternative that has garnered significant attention and acclaim is Nitrux Linux 3.5.0, a sleek, powerful, and user-friendly Linux distribution that promises to redefine the way we interact with our devices. With its focus on performance, security, and customization, Nitrux Linux presents a compelling case for those seeking to break free from the constraints of proprietary software and embrace the freedom and flexibility of open-source computing.
The Metaverse and AI: how can decision-makers harness the Metaverse for their... (Jen Stirrup)
The Metaverse is popularized in science fiction, and now it is becoming closer to being a part of our daily lives through the use of social media and shopping companies. How can businesses survive in a world where Artificial Intelligence is becoming the present as well as the future of technology, and how does the Metaverse fit into business strategy when futurist ideas are developing into reality at accelerated rates? How do we do this when our data isn't up to scratch? How can we move towards success with our data so we are set up for the Metaverse when it arrives?
How can you help your company evolve, adapt, and succeed using Artificial Intelligence and the Metaverse to stay ahead of the competition? What are the potential issues, complications, and benefits that these technologies could bring to us and our organizations? In this session, Jen Stirrup will explain how to start thinking about these technologies as an organisation.
Elevating Tactical DDD Patterns Through Object CalisthenicsDorra BARTAGUIZ
After immersing yourself in the blue book and its red counterpart, attending DDD-focused conferences, and applying tactical patterns, you're left with a crucial question: How do I ensure my design is effective? Tactical patterns within Domain-Driven Design (DDD) serve as guiding principles for creating clear and manageable domain models. However, achieving success with these patterns requires additional guidance. Interestingly, we've observed that a set of constraints initially designed for training purposes remarkably aligns with effective pattern implementation, offering a more ‘mechanical’ approach. Let's explore together how Object Calisthenics can elevate the design of your tactical DDD patterns, offering concrete help for those venturing into DDD for the first time!
Le nuove frontiere dell'AI nell'RPA con UiPath Autopilot™UiPathCommunity
In questo evento online gratuito, organizzato dalla Community Italiana di UiPath, potrai esplorare le nuove funzionalità di Autopilot, il tool che integra l'Intelligenza Artificiale nei processi di sviluppo e utilizzo delle Automazioni.
📕 Vedremo insieme alcuni esempi dell'utilizzo di Autopilot in diversi tool della Suite UiPath:
Autopilot per Studio Web
Autopilot per Studio
Autopilot per Apps
Clipboard AI
GenAI applicata alla Document Understanding
👨🏫👨💻 Speakers:
Stefano Negro, UiPath MVPx3, RPA Tech Lead @ BSP Consultant
Flavio Martinelli, UiPath MVP 2023, Technical Account Manager @UiPath
Andrei Tasca, RPA Solutions Team Lead @NTT Data
Le nuove frontiere dell'AI nell'RPA con UiPath Autopilot™
High-Performance Computing - a key factor in the competitiveness of the country, science, and industry.
1.
2. A key factor in the competitiveness of the country, science, and industry
Igor Freitas, Application Engineer, 05/11/2015
3. Agenda
What is High Performance Computing?
HPC & the competitiveness of industry, science, and the country
Intel HPC initiatives in Brazil
5. What is High Performance Computing?
"High-performance computing (HPC) is the use of parallel processing for running advanced application programs efficiently, reliably and quickly. The term applies especially to systems that function above a teraflop, or 10^12 floating-point operations per second."
Or, put more simply: how to solve the hardest problems in the world, touching every aspect of our lives, using powerful and efficient supercomputers.
6. Extending to New Dimensions
HPC can be used in different areas of science and industry.
HPC applications: medical image analysis; climate modeling & weather forecasting; financial markets; energy (seismic applications); digital content; molecular dynamics; fluid dynamics; manufacturing and CAD/CAM; DNA sequencing; electronics-industry automation; defense & security.
Enterprise applications: search engines; parallel databases; Business Intelligence / data mining.
7. What is High Performance Computing?
Democratizing the performance and operation of supercomputers.
IBM's "Automatic Sequence Controlled Calculator," or "Mark I."
Mission: "to develop a machine that could do fast scientific calculations in order to understand matters of the war, such as the trajectory of warheads."
"This involved translating mathematical problems into a numerical language that the computer could understand."
Grace Murray Hopper at the UNIVAC keyboard, c. 1960 - Source
8. The democratization of HPC clusters
The last 20 years ($/FLOP chart, 1994 to 2014): a >15,000x improvement* in cost per FLOP.
Advances in science; high ROI in the industrial innovation process; the Beowulf cluster.
*Source: Intel per-socket estimate comparing the Intel DX4™ processor (Beowulf) versus Intel® Xeon Phi™ (Knights Corner).
Other brands and names are the property of their respective owners.
9. What is High Performance Computing?
HPC vs Big Data

                    HPC                               Big Data
Programming model:  FORTRAN / C++ applications, MPI   Java* applications, Hadoop*
                    (high performance)                (simple to use)
Resource manager:   SLURM (supports large-scale       YARN* (more resilient to
                    startup)                          hardware failures)
File system:        Lustre* (remote storage)          HDFS*, Spark* (local storage)
Hardware:           compute & memory focused,         storage focused,
                    high-performance components       standard server components
Infrastructure:     server storage: SSDs;             server storage: HDDs;
                    switch: fabric                    switch: Ethernet

Daniel Reed and Jack Dongarra, "Exascale Computing and Big Data," Communications of the ACM, July 2015 (Vol. 58, No. 7), and Intel analysis.
Other brands and names are the property of their respective owners.
10. What is High Performance Computing?
Big Data + HPC: "heavy" processing in real time (along data and compute axes):
- Small data + small compute, e.g. data analysis
- Big data + small compute, e.g. search, streaming, data preconditioning
- Small data + big compute, e.g. mechanical design, multi-physics
11. Intel's vision for HPC
Balanced compute, storage, and interconnects based on the workload: compute, storage, networking, software.
12. A paradigm shift for massively parallel systems
Server processor + high-speed fabric + memory = Knights Landing (vs. the Knights Corner coprocessor):
- Memory bandwidth: ~500 GB/s STREAM; memory capacity: over 25x* KNC
- Resiliency: systems scalable to >100 PF
- Power efficiency: over 25% better than the coprocessor card¹
- I/O: up to 100 GB/s with integrated fabric
- Cost: less costly than discrete parts²
- Flexibility: limitless configurations
- Density: 3+ KNL with fabric in 1U³
*Comparison to the 1st Generation Intel® Xeon Phi™ 7120P Coprocessor (formerly codenamed Knights Corner)
¹Results based on internal Intel analysis using estimated power consumption and projected component pricing in the 2015 timeframe. This analysis is provided for informational purposes only. Any difference in system hardware or software design or configuration may affect actual performance.
²Comparison to a discrete Knights Landing processor and discrete fabric component.
³Theoretical density for an air-cooled system; other cooling solutions and configurations will enable lower or higher density.
13. A single architecture for HPC & Big Data
- Programming model: FORTRAN / C++ applications with MPI (high performance) alongside Java* applications with Hadoop* (simple to use)
- Resource manager: an HPC & Big Data-aware resource manager
- File system: Lustre* with a Hadoop* adapter (remote storage)
- Hardware: compute & Big Data capable, scalable performance components
- Infrastructure: server storage (SSDs and burst buffers), Intel® Omni-Path Architecture
*Other names and brands may be claimed as the property of others
14. Next steps for HPC & Big Data
An adaptable memory & storage hierarchy (higher bandwidth, lower latency and capacity toward the processor):
- Today: caches -> local memory -> SSD storage on the I/O node -> remote parallel file system (hard-drive storage)
- Future: caches -> in-package high-bandwidth memory* -> non-volatile memory on the compute node -> burst-buffer storage -> parallel file system (hard-drive storage)
What changes: local memory becomes faster and moves into the processor package; I/O-node storage moves to the compute node; some remote data moves onto the I/O node.
*cache, memory or hybrid mode
15. What is High Performance Computing? #HPC Matters
Video: HPC Transforms Parkinson's Disease - SC15
16. What is High Performance Computing? #HPC Matters
Video: SC15 - Climate Modeling
18. HPC enables a new scientific methodology
Innovation in industry: to compete, you must compute.
- Classic method (iterate): hypothesis -> physical prototyping -> analysis -> conclusion -> refinement
- New method (iterate): hypothesis -> prediction -> modeling & simulation -> experiment refinement -> physical prototyping -> analysis -> conclusion -> refinement
HPC accelerates the method.
1. Satava, Richard M. "The Scientific Method Is Dead-Long Live the (New) Scientific Method." Journal of Surgical Innovation (June 2005).
19. HPC & the competitiveness of industry, science, and the country
- An executive order from President Obama for a "national supercomputing program"
- HPC as a "top priority" to boost U.S. competitiveness
"In order to maximize the benefits of HPC for economic competitiveness and scientific discovery, the United States Government must create a coordinated Federal strategy in HPC research, development, and deployment"
Executive Order, Barack Obama
Source: The White House, Office of the Press Secretary
21. Audi workflow
Real-time, photo-realistic predictive rendering; virtually prototyped images.
Top-line innovation: more compelling, accurate visualization of car designs; avoiding a physical-prototyping spin by identifying body-part fit issues; reduced turnaround when identifying design changes.
Bottom-line costs: got the most from their Autodesk software investment with optimized performance on Intel platforms; an Intel® Xeon® processor E5-2600 product family based solution across workstations and clusters reduced deployment and maintenance costs.
Images courtesy of The Audi Group, used by permission.
22. DreamWorks Animation results
Intel® Xeon® processor E5-2600 product family enabled artist workstations; large, shared rendering clusters configured with the same processor family; DreamWorks Animation proprietary software.
Enables more iterations and improves the movie-production process.
"By combining Xeon E5-2600 class processors with a Xeon Phi coprocessor, we are now able to provide artists with extremely high-quality light transport simulation in large scenes at interactive speeds. This enables us to bring further technical innovation to bear on the ways breathtaking film imagery is created."
-- Evan Smyth, Staff Architect, DreamWorks Animation
23. Monsanto result
Getting seeds to farmers quicker with fewer resources.
Genomics search algorithm (BLAST), with the work done on a cluster and an Intel-based display device on the desktop; shared cluster capacity expanded with a 100+ node Intel® Xeon® processor E5-2600 product family cluster.
- Compute capacity expanded 61%
- Rack space increased by only 22%
- 28% faster BLAST workload performance
- The research team decreased time-to-results from 2 weeks to 6 days
Source: results courtesy of Monsanto Corporation, 2012
25. Intel HPC initiatives in Brazil
- Oil & Gas: the reservoir simulator at PETROBRAS - up to 10.5x performance gains in their reservoir-simulator software
- LNCC, the National Laboratory for Scientific Computing, home of the largest HPC cluster in Latin America - up to 30x performance gains in Oil & Gas applications
- NCC / UNESP, an Intel® Modern Code Partner - 5 HPC hands-on workshops; 340 developers trained; ongoing white papers together with other institutes
26. Intel HPC initiatives in Brazil
- Modernizing applications to increase parallelism and scalability
- Leveraging the cores, caches, threads, and vector capabilities of microprocessors and coprocessors
- Current centers in Brazil
27. ¹Author: Gilvan Vieira - gilvan.vieira.coppetec@petrobras.com.br - PETROBRAS/CENPES
Case study: PETROBRAS - reservoir simulation
Code optimization using Intel® VTune™ Amplifier and the Intel® Compiler.
Performance gains¹: up to 3.8x speedup in matrix-vector multiplications (using only 1 CPU core) for the C++ template version vs. the original Fortran code, using the Intel Compiler in a Linux environment.
- Original: Fortran code compiled to assembly using 3 scalar instructions
- Optimized: C++ templated code compiled to 1 vectorized and 2 scalar instructions
Part of the optimization: in this case VTune showed the vectorized code was inefficient, so #pragma novector was used.
28. ¹Author: Gilvan Vieira - gilvan.vieira.coppetec@petrobras.com.br - PETROBRAS/CENPES
Case study: PETROBRAS - reservoir simulation
Performance gains in a parallel environment using 16 CPU cores, obtained with Intel® Trace Analyzer & Collector¹: from 1.28x up to 10.5x in matrix-vector multiplication kernels.
- Intel Trace Analyzer and Collector made it easy to visualize the "serialized communication" effect caused by blocking MPI_Sendrecv calls, so non-blocking calls were used instead
- Event timeline: MPI communication using 16 ranks
29. ¹Authors: Frederico L. Cabral - fcabral@lncc.br, Marcio Murad - murad@lncc.br, Carla Osthoff - osthoff@lncc.br
Case study: LNCC - National Laboratory for Scientific Computing
Technical cooperation focused on Oil & Gas research projects.
1st project: "Fine-Tuning Xeon Architecture Vectorization and Parallelization of a Numerical Method for Convection-Diffusion Equations."
Awaiting publication in volume CCIS 565, Springer: "Second Latin American Conference, CARLA 2015, Petrópolis, Brazil, August 26-28, 2015, Proceedings/Revised Selected Papers."
Performance gain on a dual-socket Xeon® server using 56 threads: 30x vs. the original code.
30. ¹Authors: Frederico L. Cabral - fcabral@lncc.br, Marcio Murad - murad@lncc.br, Carla Osthoff - osthoff@lncc.br
Case study: LNCC - National Laboratory for Scientific Computing
1st step: "don't guess, measure!" Optimize the application for a single thread through vectorization: take an "X-ray" of your application with Intel® VTune™ Amplifier.
Profiling identified wasted CPU cycles: the CPU's division unit was overloaded, and latency problems were hindering vectorization.
31. ¹Authors: Frederico L. Cabral - fcabral@lncc.br, Marcio Murad - murad@lncc.br, Carla Osthoff - osthoff@lncc.br
Case study: LNCC - National Laboratory for Scientific Computing
3rd step: give the compiler some "hints" to exploit the parallelism inside each CPU core:

double alfa_aux = 1.0 - 2.0*alfa;
#pragma simd vectorlengthfor(double), private(alfa)
#pragma vector nontemporal(U_old) // improves cache usage
#pragma prefetch *64:128
for (i = head+1 ; i <= N-2 ; i+=2)
{
    U_old[i] = alfa*(U_new[i-1] + U_new[i+1]) + alfa_aux * U_new[i];
    //U_old[i] = alfa*(U_new[i-1] + U_new[i+1]) + (1.0 - 2.0*alfa)*U_new[i];
}
32. ¹Authors: Frederico L. Cabral - fcabral@lncc.br, Marcio Rentes Borges - marcio.rentes.borges@gmail.com, Carla Osthoff - osthoff@lncc.br
Case study: LNCC - National Laboratory for Scientific Computing
Technical cooperation focused on Oil & Gas research projects.
2nd project: "Fine-Tuning Optimization Applied in a Porous Media Flow Application Using Intel Tools" (to be published).
1st phase: improve single-thread performance on the Intel® Xeon® processor - up to 4.1x performance gain vs. the original code (partial results).
33. Case study: FATEC - Baixada Santista Rubens Lara
"Parallel Recommender System Based on the Intel® Xeon® and Xeon Phi™"
Performance prediction with Intel® Advisor before investing effort in optimizing the code:
- Xeon: 16 threads would be the best scenario
- Xeon Phi: 120 threads would be the best scenario
34. Case study: FATEC - Baixada Santista Rubens Lara
"Parallel Recommender System Based on the Intel® Xeon® and Xeon Phi™"
Intel Compiler report: understand which optimizations were performed... and how to extract the maximum performance.

LOOP BEGIN at regressao-xeon.c(116,18) inlined into regressao-xeon.c(55,6)
remark #15389: vectorization support: reference beta_756 has unaligned access [ regressao-xeon.c(118,11) ]
remark #15389: vectorization support: reference entrada_756 has unaligned access [ regressao-xeon.c(118,11) ]
remark #15381: vectorization support: unaligned access used inside loop body
remark #15427: loop was completely unrolled
remark #15399: vectorization support: unroll factor set to 6
remark #15301: SIMD LOOP WAS VECTORIZED
remark #15450: unmasked unaligned unit stride loads: 2
remark #15475: --- begin vector loop cost summary ---
remark #15476: scalar loop cost: 12
remark #15477: vector loop cost: 13.500
remark #15478: estimated potential speedup: 3.640
remark #15479: lightweight vector operations: 7
remark #15488: --- end vector loop cost summary ---
LOOP END

Hint: declare the data aligned to assist vectorization:
double *beta = (double*) _mm_malloc (TOTBETAS * sizeof(double), AVX_ALIGN);
35. Case study: FATEC - Baixada Santista Rubens Lara
"Parallel Recommender System Based on the Intel® Xeon® and Xeon Phi™"
Partial conclusions - first part:
- Intel Advisor's performance predictions were very precise
- Although "OpenMP + MKL offload to Xeon Phi" showed a 1.2x speedup, there is room for higher speedups!
- Possible path: investigate an MPI + OpenMP version to exploit the Xeon and Xeon Phi together

Speedup using only the host processors as the number of threads increases:
Threads: 1     4     8     16    24    32
Speedup: 1.00  2.28  3.03  4.58  4.71  4.85

Speedup achieved by enabling Automatic Offload in MKL:
OpenMP + MKL: 1.00; OpenMP + MKL offload: 1.23
Editor's Notes
Key Message: The markets and applications where Intel Xeon Phi can be applied will continue to grow as HPC is applied to other areas such as search, parallel data bases, mission critical apps, and large scale data mining for business applications. What is shown here are the traditional HPC applications and examples of use in the enterprise segment.
Traditional HPC applications:
Energy
Oil & gas exploration
Climate modeling & weather simulation
Medical imaging
Image processing
Molecular dynamics
Computational fluid dynamics
CAD/CAM/CAE
Digital content creation
Financial analysis (Monte Carlo/Black Scholes)
Gene sequencing
Crash simulations
Bio-chemistry
Emerging HPC applications in the enterprise market:
Parallel databases
Search
Business Intelligence & data mining
They use different systems… Today's HPC and Big Data ecosystems are very different, from the HW components through the SW stack, including the programming model.
The key areas of debate between the two HPC and Big Data camps are the choices of programming model, resource manager, file system, and hardware.
Attribution – LEGAL
New workflows are emerging….Big Data and traditional HPC workloads will continue, but user demand for real time analysis & decision making requires applying HPC to “really” Big Data as part of a workflow or combined in new workloads. This isn’t a convergence of existing workloads, but new usage demands driving converging system requirements.
Fast Data examples per Matsuoka's presentation (Blue Waters Symposium, Jun '15): convolutional neural nets, deep machine learning, genomics ("the new fast big kind… metagenome analysis"), uncertainty quantification. Some other examples per Matsuoka…
social network-related large graph processing, social simulation, genomics with advanced sequence matching, and weather problems that require real-time large data assimilation. …NOTICE the distinction between what people commonly call (and arguably over-position as) "big data" vs. the extremely big data that is being discussed here.
Metagenomics is the study of genetic material recovered directly from environmental samples. The broad field may also be referred to as environmental genomics, ecogenomics or community genomics. While traditional microbiology and microbial genome sequencing and genomics rely upon cultivated clonal cultures, early environmental gene sequencing cloned specific genes (often the 16S rRNA gene) to produce a profile of diversity in a natural sample. Such work revealed that the vast majority of microbial biodiversity had been missed by cultivation-based methods.[1] Recent studies use either "shotgun" or PCR-directed sequencing to get largely unbiased samples of all genes from all the members of the sampled communities.[2] Because of its ability to reveal the previously hidden diversity of microscopic life, metagenomics offers a powerful lens for viewing the microbial world that has the potential to revolutionize understanding of the entire living world.[3] As the price of DNA sequencing continues to fall, metagenomics now allows microbial ecology to be investigated at a much greater scale and detail than before. The point is that traditional genomic sequencing focuses on single clone cultures, while metagenomics involves sequencing much, much greater diversity.
What a converged arch might look like
Acknowledge that users have invested in different programming models which are arguably better suited for their specific needs. Thus the converged stack needs to accommodate those differences.
Resource manager looks at the incoming big data or hpc or fast data workload and adapts/configures the system for best processing of the workload.
File system is built with remote storage but has an adapter to accommodate Hadoop workloads that presume local storage.
Hardware is optimized for performance with use of fabric and SSDs/Burst Buffers to support HPC and HPC/Big Data (ie Fast Data)
Key enabler is a new software stack… a new memory/storage hierarchy to better support both Big Data and HPC.
Memory-storage capabilities move storage closer to the compute. By moving the data closer to compute we're also effectively changing the profile of the traditional pyramid shape to one that is more top-heavy: we are moving the "center of data" (analogous to the concept of a shape's center of mass) closer to compute.
Both HPC and Big Data use these capabilities, but their usage is weighted differently. For example, HPC emphasizes high-bandwidth configurable memory. Big Data uses in-package memory, but focuses on configurable memory and local application storage.
For HPC (by tier and main benefits in bold):
- In-package memory benefits: high bandwidth; configurable (cache, memory, flat); local app storage
- NVM benefits: local storage; temporal storage
- Burst buffer benefits: faster checkpointing; quicker recovery; better app performance
For Big Data:
- In-package memory benefits: configurable memory; local app storage; high bandwidth
- NVM benefits: local storage; temporal storage
- Burst buffer benefits: better app performance; quicker recovery; faster checkpointing
- Remote storage / other benefits: run Hadoop on HPC infrastructure**
Key Message: Technical Computing is a key enabler of the latest evolution of scientific methodology
A new methodology has been emerging from the scientific (nonmedical) community: the introduction of modeling and simulation as an integral part of the research and development process. This is possible because of technical computing and the ability to process massive amounts of detailed data in parallel – what we call heterogeneous computing.
Because of the complex computing capabilities of technical computing, modeling and simulation have become essential elements of research and development.
In the new model, after the hypothesis is proposed, modern scientists, researchers, and engineers perform numerous simulations and modeling of the hypothesis in order to design an effective experiment. This allows for an iterative optimization of the experiment design to be performed on the computer, which can take the form of virtual prototyping and virtual testing and evaluation. After this iterative step, when the best experiment design has been refined, the actual experiment is conducted in the laboratory. The value of this new approach is that early modeling and simulation saves time and money that can be better used for conducting the live experiment.
We’ll show you how companies ranging from life sciences, to manufacturing, to oil & gas exploration are partnering with Intel to use this methodology to get products out faster, more feature rich, and with better quality --- all at lower cost.
OK, I think everyone knows Dyson - they are the cool vacuum-cleaner company who also makes a fan-less fan. You know, I have one of these and it is amazingly powerful and amazingly quiet.
What Dyson did with simulation-based design is very cool.
They explored 200 design iterations in the same time they would have explored 10. Not bad!
But look what it did: they improved the airflow to 2.5x the original concept - they took a good idea and made it great.
Very cool, very fast, and amazingly innovative again.
So Dyson exemplified this idea - they broke the mold in several ways:
They got rid of the fan to reduce the noise.
They tested more ideas in less time and ended up with a very cool product.
You can do the same thing too.
With ANSYS, innovative companies like Dyson, manufacturer of the Dyson Air Multiplier™ fan as well as vacuums and hand dryers, are now able to employ an idea known as design of experiment (DOE) to create and test up to 10 geometric variations of things like the Dyson Air Multiplier's dimensions. In this case the team investigated 200 different design iterations using simulation, which was 10 times the number that would have been possible had physical prototyping been the primary design tool.
DreamWorks Animation notes:
DreamWorks Animation is developing their own proprietary animation and lighting software utilizing Intel Software Development tools
New animation and lighting software will enable more iterations of scenes to get the perfect character performances and shot depth
Enabling more iterations improves the movie production process by permitting artists to continue to be productive instead of waiting on scene renders before attempting new changes
This improvement is similar to enabling additional prototypes of a product to get the right innovation
28% faster BLAST workload performance compared to cluster configuration prior to upgrade
61% compute capacity increase compared to cluster configuration prior to upgrade
22% increase in rack space compared to cluster configuration prior to upgrade
PETROBRAS
Our engagement with their Research Center for Oil & Gas, focused on exploration and production (the core activity of PETROBRAS), has been producing substantial results. One example is the 10.5x performance gain in their reservoir-simulator software, optimized to run on Intel Xeon servers.
LNCC - National Laboratory for Scientific Computing
LNCC is home to the largest supercomputer in Latin America, with a capacity of 1 petaflop; the system has Intel® Xeon® E5 processors and Intel® Xeon Phi™ coprocessors.
In May 2015 Intel signed a technical cooperation agreement to anchor the research in "New Computing Models for Enhanced Oil Recovery" on Intel architecture.
Intel Modern Code with UNESP-NCC
The São Paulo State University - UNESP, part of the state of São Paulo public higher-education system, is one of the largest universities in Brazil, and its Center for Scientific Computing (CSC) operates two large Linux-based HPC clusters to support the university research community.
It's a pleasure to announce they have become our Intel Modern Code partner in Latin America, focused on code modernization and the dissemination of improvements and innovations in parallel processing to the broader HPC community.