This is a presentation I gave at the NVIDIA AI Conference in Korea. It is about building the largest GPU: the DGX-2, the most powerful supercomputer in one node.
Enabling Artificial Intelligence - Alison B. Lowndes, WithTheBest
An overview and update of our hardware and software offerings and of the support we provide to the machine and deep learning community around the world.
Alison B. Lowndes, AI DevRel, EMEA
Implementing AI: High Performance Architectures: A Universal Accelerated Comp... - KTN
The Implementing AI: High Performance Architectures webinar, hosted by KTN and eFutures, was the fourth event in the Implementing AI webinar series.
The focus of the webinar was the impact of processing AI data on data centres - particularly from the technology perspective. Timothy Lanfear, Director of Solution Architecture and Engineering EMEA, NVIDIA, presented on a Universal Accelerated Computing Platform.
NVIDIA compute GPUs and software toolkits are key drivers behind major advancements in machine learning. Of particular interest is a technique called "deep learning", which utilizes Convolutional Neural Networks (CNNs); these have had landslide success in computer vision and widespread adoption in fields such as autonomous vehicles, cyber security, and healthcare. This talk presents a high-level introduction to deep learning, discussing core concepts, success stories, and relevant use cases. Additionally, we will provide an overview of essential frameworks and workflows for deep learning. Finally, we explore emerging domains for GPU computing, such as large-scale graph analytics and in-memory databases.
https://tech.rakuten.co.jp/
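The CNN material from the talk is not reproduced here, but the convolution operation that CNNs are built on can be sketched in plain NumPy. This is a minimal illustration only, not the talk's code; the image and kernel values are made up for the example:

```python
import numpy as np

def conv2d(image, kernel):
    """Valid-mode 2D cross-correlation, the core operation of a CNN layer."""
    kh, kw = kernel.shape
    ih, iw = image.shape
    out = np.zeros((ih - kh + 1, iw - kw + 1))
    for r in range(out.shape[0]):
        for c in range(out.shape[1]):
            # Each output cell is the elementwise product of the kernel
            # with the image patch under it, summed.
            out[r, c] = np.sum(image[r:r + kh, c:c + kw] * kernel)
    return out

# A vertical-edge-detecting kernel applied to a tiny image with one edge:
image = np.array([
    [0, 0, 1, 1],
    [0, 0, 1, 1],
    [0, 0, 1, 1],
    [0, 0, 1, 1],
], dtype=float)
kernel = np.array([
    [-1, 0, 1],
    [-1, 0, 1],
    [-1, 0, 1],
], dtype=float)
print(conv2d(image, kernel))  # strong response where the 0->1 edge sits
```

Frameworks and GPUs accelerate exactly this operation (plus many stacked layers of it); the loop above is what libraries like cuDNN compute in highly optimized form.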
Dell and NVIDIA for Your AI Workloads in the Data Center - Renee Yao
Join us to learn how the Dell PowerEdge C4140 rack server, powered by four NVIDIA V100s, the world’s most powerful GPU, addresses training and inference for the most demanding HPC, data visualization, and AI workloads. This enables organizations to take advantage of the convergence of HPC and data analytics and to realize advancements in areas including fraud detection, image processing, financial investment analysis, and personalized medicine.
Nvidia Deep Learning Solutions - Alex Sabatier - Sri Ambati
Alex Sabatier from Nvidia talks about the future of deep learning from a chipmaker's perspective.
- Powered by the open source machine learning software H2O.ai. Contributors welcome at: https://github.com/h2oai
- To view videos on H2O open source machine learning software, go to: https://www.youtube.com/user/0xdata
Breaking New Frontiers in Robotics and Edge Computing with AI - Dustin Franklin
This NVIDIA webinar will cover the latest tools and techniques to deploy advanced AI at the edge, including Jetson TX2 and TensorRT. Get up to speed on recent developments in robotics and deep learning.
By participating you'll learn:
1. How to build high-performance, energy-efficient embedded systems
2. Workflows for training AI in the cloud and deploying at the edge
3. The latest upcoming JetPack release and its performance improvements
4. Real-time deep learning primitives for autonomous navigation
5. NVIDIA’s latest Isaac Initiative for robotics
HPE and NVIDIA are delivering a leading portfolio of optimized AI solutions that transform business and industry, yielding deeper insights and helping solve the world’s greatest challenges. Join this session to learn how the NVIDIA V100, the world’s most powerful GPU, powers HPE 6500 Systems, the HPE AI systems, to provide new business insights and outcomes.
Video and slides synchronized; mp3 and slide download available at https://bit.ly/2JrUYLl.
Alison Lowndes talks about the HW & SW that comprise NVIDIA's GPU computing platform for AI, across PC to data center, cloud to edge, training to inference. She details current state-of-the-art research & recent internal work combining robotics with virtual reality & reinforcement learning in an end-to-end simulator for training and testing robots. Filmed at qconlondon.com.
Alison Lowndes is responsible for NVIDIA's Artificial Intelligence Developer Relations in the EMEA region. She consults on a wide range of AI applications, including planetary defence with NASA & the SETI Institute and continues to manage the community of AI & Machine Learning researchers around the world.
Hire a Machine to Code - Michael Arthur Bucko & Aurélien Nicolas, WithTheBest
Bucko and Nicolas share their vision and products and explain what Deckard is, offering insights from the software development team. They believe coding can resolve the problems we face; specifically, source coding is the approach they teach and the one they hope will fix human errors.
Michael Arthur Bucko & Aurélien Nicolas
Supercomputing has swept rapidly from the far edges of science to the heart of our everyday lives. And propelling it forward – bringing it into the mobile phone already in your pocket and the car in your driveway – is GPU acceleration, NVIDIA CEO Jen-Hsun Huang told a packed house at a rollicking event kicking off this week’s SC15 annual supercomputing show in Austin. The event draws 10,000 researchers, national lab directors and others from around the world.
Kevin Shaw at AI Frontiers: AI on the Edge: Bringing Intelligence to Small De... - AI Frontiers
The edge is the domain of the Internet of Things, of personal medical devices, of cars that understand the world, of machines that self-regulate and more. These devices share a common constraint: they can't send full data to the cloud for processing. This talk will review the changing needs for AI at the edge, the demands of learning networks on small cores and changing hardware being provided to meet these demands.
Harnessing the virtual realm for successful real-world artificial intelligence - Alison B. Lowndes
Artificial intelligence is impacting all areas of society, from healthcare and transportation to smart cities and energy. This talk covers how NVIDIA invests both in internal pure research and in accelerated computation to enable its diverse customer base across gaming and extended reality, graphics, AI, robotics, simulation, high-performance scientific computing, healthcare, and more. You will be introduced to the GPU computing platform and shown successfully deployed real-world applications, as well as a glimpse into the current state of the art across academia, enterprise, and startups.
Deep Learning Applications on Embedded Systems (Gömülü Sistemlerde Derin Öğrenme Uygulamaları) - Ferhat Kurt
Embedded systems are widely used in drones, electro-optics, robotics, and autonomous systems, as they deliver high processing power at low power consumption.
This training covers embedded systems capable of running deep learning applications (FPGAs and GPUs), sample applications, and the application development process.
Backend.AI Technical Introduction (19.09 / 2019 Autumn) - Lablup Inc.
This slide introduces technical specs and details about Backend.AI 19.09.
* On-premise clustering / container orchestration / scaling on cloud
* Container-level fractional GPU technology that lets many containers share one physical GPU at the same time, each seeing it as its own GPU
* NVIDIA GPU Cloud integrations
* Enterprise features
Semiconductors are the driving force behind the AI evolution and enable its adoption across various application areas ranging from connected and automated driving to smart healthcare and wearables. Given that, electronics research, design and manufacturing communities around the world are increasingly investing in specialized AI chips providing less latency, greater processing power, higher bandwidth and faster performance. AI also attracts new technology players to invest in making their own specialized AI chips, changing the electronics manufacturing landscape and moving AI technology towards machine learning, deep learning and neural networks.
Introduction to Software Defined Visualization (SDVis) - Intel® Software
Software defined visualization (SDVis) is an open-source initiative from Intel and industry collaborators. It improves the visual fidelity, performance, and efficiency of prominent visualization solutions, while supporting rapidly growing big-data use through high-performance computing (HPC) on workstations and supercomputing clusters, without the memory limitations and cost of GPU-based solutions.
SDVis and In-Situ Visualization on TACC's Stampede - Intel® Software
Speaker: Paul Navrátil, Texas Advanced Computing Center (TACC)
The design emphasis for supercomputing systems has moved from raw performance to performance-per-watt, and as a result, supercomputing architectures are converging on processors with wide vector units and many processing cores per chip. Such processors are capable of performant image rendering purely in software. This improved capability is fortuitous, since the prevailing homogeneous system designs lack dedicated, hardware-accelerated rendering subsystems for use in data visualization. Reliance on this “software-defined” rendering capability will grow in importance since, due to growing data sizes, visualizations must be performed on the same machine where the data is produced. Further, as data sizes outgrow disk I/O capacity, visualization will be increasingly incorporated into the simulation code itself (in situ visualization).
This talk presents recent work in high-fidelity visualization using the OSPRay ray tracing framework on TACC’s local and remote visualization systems. We present work using OSPRay within ParaView Catalyst in situ framework from Kitware, including capitalizing on opportunities to reduce data costs migrating through VTK filters for visualization. We highlight the performance opportunities and advantages of Intel® Advanced Vector Extensions 512, the memory system improvements possible with Intel® Xeon Phi™ processor multi-channel DRAM (MCDRAM) and the Intel® Omni-Path Architecture interconnect.
In this deck from FOSDEM'19, Christoph Angerer from NVIDIA presents: Rapids - Data Science on GPUs.
"The next big step in data science will combine the ease of use of common Python APIs, but with the power and scalability of GPU compute. The RAPIDS project is the first step in giving data scientists the ability to use familiar APIs and abstractions while taking advantage of the same technology that enables dramatic increases in speed in deep learning. This session highlights the progress that has been made on RAPIDS, discusses how you can get up and running doing data science on the GPU, and provides some use cases involving graph analytics as motivation.
GPUs and GPU platforms have been responsible for the dramatic advancement of deep learning and other neural net methods in the past several years. At the same time, traditional machine learning workloads, which comprise the majority of business use cases, continue to be written in Python with heavy reliance on a combination of single-threaded tools (e.g., Pandas and Scikit-Learn) or large, multi-CPU distributed solutions (e.g., Spark and PySpark). RAPIDS, developed by a consortium of companies and available as open source code, allows for moving the vast majority of machine learning workloads from a CPU environment to GPUs. This allows for a substantial speed-up, particularly on large data sets, and affords rapid, interactive work that previously was cumbersome to code or very slow to execute.

Many data science problems can be approached using a graph/network view, and much like traditional machine learning workloads, this has been either local (e.g., Gephi, Cytoscape, NetworkX) or distributed on CPU platforms (e.g., GraphX). We will present GPU-accelerated graph capabilities that, with minimal conceptual code changes, allow both graph representations and graph-based analytics to achieve similar speed-ups on a GPU platform. By keeping all of these tasks on the GPU and minimizing redundant I/O, data scientists are enabled to model their data quickly and frequently, affording a higher degree of experimentation and more effective model generation. Further, keeping all of this in compatible formats allows quick movement from feature extraction, graph representation, graph analytics, and enrichment back to the original data, and visualization of results.

RAPIDS has a mission to build a platform that allows data scientists to explore data, train machine learning algorithms, and build applications while primarily staying on the GPU and GPU platforms."
Learn more: https://rapids.ai/
and
https://fosdem.org/2019/
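The RAPIDS point about familiar APIs can be sketched as follows. The example below deliberately uses pandas so it runs anywhere; with RAPIDS installed, the same code runs GPU-backed by importing cuDF in its place (cuDF mirrors the pandas API for common operations, though the drop-in swap should be treated as approximate rather than guaranteed for every method). The data is made up for illustration:

```python
import pandas as pd  # with RAPIDS installed: import cudf as pd (same API, GPU-backed)

# A typical single-threaded workload that RAPIDS moves to the GPU unchanged:
df = pd.DataFrame({
    "account": ["a", "b", "a", "c", "b", "a"],
    "amount":  [10.0, 250.0, 30.0, 5.0, 400.0, 60.0],
})

# Group, aggregate, and rank — the kind of interactive exploration the talk
# describes as cumbersome or slow at large scale on a single CPU core.
per_account = df.groupby("account")["amount"].sum().sort_values(ascending=False)
print(per_account.head(2))
```

The design point is that the speed-up comes from where the code executes, not from rewriting it; that is what keeps experimentation interactive on large data sets.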
A talk on reducing costs & increasing efficiencies by designing, testing & engineering in simulation first, plus examples of robotics & environmental capability.
NVIDIA CEO Jensen Huang's keynote address at the GPU Technology Conference 2019 (#GTC19) in Silicon Valley, where he introduced breakthroughs in pro graphics with NVIDIA Omniverse; in data science with NVIDIA-powered Data Science Workstations; in inference and enterprise computing with NVIDIA T4 GPU-powered servers; in autonomous machines with NVIDIA Jetson Nano and the NVIDIA Isaac SDK; in autonomous vehicles with NVIDIA Safety Force Field and DRIVE Constellation; and much more.
Similar to Possibilities of generative models
GDG Cloud Southlake #33: Boule & Rebala: Effective AppSec in SDLC using Deplo... - James Anderson
Effective Application Security in Software Delivery lifecycle using Deployment Firewall and DBOM
The modern software delivery process (or the CI/CD process) includes many tools, distributed teams, open-source code, and cloud platforms. Constant focus on speed to release software to market, along with the traditional slow and manual security checks has caused gaps in continuous security as an important piece in the software supply chain. Today organizations feel more susceptible to external and internal cyber threats due to the vast attack surface in their applications supply chain and the lack of end-to-end governance and risk management.
The software team must secure its software delivery process to avoid vulnerability and security breaches. This needs to be achieved with existing tool chains and without extensive rework of the delivery processes. This talk will present strategies and techniques for providing visibility into the true risk of the existing vulnerabilities, preventing the introduction of security issues in the software, resolving vulnerabilities in production environments quickly, and capturing the deployment bill of materials (DBOM).
Speakers:
Bob Boule
Robert Boule is a technology enthusiast with a passion for making things work and a knack for helping others understand how things work. He brings around 20 years of solution engineering experience in application security, software continuous delivery, and SaaS platforms, and is known for his dynamic presentations on CI/CD and application security integrated into the software delivery lifecycle.
Gopinath Rebala
Gopinath Rebala is the CTO of OpsMx, where he has overall responsibility for the machine learning and data processing architectures for Secure Software Delivery. Gopi also has a strong connection with our customers, leading design and architecture for strategic implementations. Gopi is a frequent speaker and well-known leader in continuous delivery and integrating security into software delivery.
GraphSummit Singapore | The Future of Agility: Supercharging Digital Transfor... - Neo4j
Leonard Jayamohan, Partner & Generative AI Lead, Deloitte
This keynote will reveal how Deloitte leverages Neo4j’s graph power for groundbreaking digital twin solutions, achieving a staggering 100x performance boost. Discover the essential role knowledge graphs play in successful generative AI implementations. Plus, get an exclusive look at an innovative Neo4j + Generative AI solution Deloitte is developing in-house.
Removing Uninteresting Bytes in Software Fuzzing - Aftab Hussain
Imagine a world where software fuzzing, the process of mutating bytes in test seeds to uncover hidden and erroneous program behaviors, becomes faster and more effective. A lot depends on the initial seeds, which can significantly dictate the trajectory of a fuzzing campaign, particularly in terms of how long it takes to uncover interesting behaviour in your code. We introduce DIAR, a technique designed to speedup fuzzing campaigns by pinpointing and eliminating those uninteresting bytes in the seeds. Picture this: instead of wasting valuable resources on meaningless mutations in large, bloated seeds, DIAR removes the unnecessary bytes, streamlining the entire process.
In this work, we equipped AFL, a popular fuzzer, with DIAR and examined two critical Linux libraries -- Libxml's xmllint, a tool for parsing xml documents, and Binutil's readelf, an essential debugging and security analysis command-line tool used to display detailed information about ELF (Executable and Linkable Format). Our preliminary results show that AFL+DIAR does not only discover new paths more quickly but also achieves higher coverage overall. This work thus showcases how starting with lean and optimized seeds can lead to faster, more comprehensive fuzzing campaigns -- and DIAR helps you find such seeds.
- These are slides of the talk given at IEEE International Conference on Software Testing Verification and Validation Workshop, ICSTW 2022.
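DIAR's actual analysis is not reproduced here, but the underlying idea, dropping seed bytes whose mutation never changes observed program behavior, can be sketched naively. The `coverage` function below is a hypothetical stand-in for real instrumentation such as AFL's edge coverage, contrived so that only even offsets matter:

```python
def coverage(data: bytes) -> frozenset:
    """Hypothetical stand-in for instrumented coverage: in this toy model,
    only bytes at even offsets influence which 'paths' are taken."""
    return frozenset((i, b % 4) for i, b in enumerate(data) if i % 2 == 0)

def interesting_offsets(seed: bytes, trials=(0x00, 0xFF, 0x41)) -> list:
    """Keep an offset only if overwriting its byte ever changes coverage.
    Offsets that never do are 'uninteresting' and safe to drop from the seed."""
    base = coverage(seed)
    kept = []
    for i in range(len(seed)):
        for t in trials:
            mutated = seed[:i] + bytes([t]) + seed[i + 1:]
            if mutated != seed and coverage(mutated) != base:
                kept.append(i)
                break
    return kept

seed = b"ABCDEF"
print(interesting_offsets(seed))  # only the even offsets drive coverage here
```

A real implementation would use actual coverage feedback and far more mutation trials, but the payoff is the same: the fuzzer stops spending its mutation budget on bytes that cannot change program behavior.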
Unlocking Productivity: Leveraging the Potential of Copilot in Microsoft 365, a presentation by Christoforos Vlachos, Senior Solutions Manager – Modern Workplace, Uni Systems
Climate Impact of Software Testing at Nordic Testing Days - Kari Kakkonen
My slides at Nordic Testing Days 6.6.2024
The talk discusses the climate impact and sustainability of software testing. ICT and testing must carry their part of the global responsibility to help with climate warming. We can minimize the carbon footprint, but we can also have a carbon handprint: a positive impact on the climate. Quality characteristics can be extended with sustainability and then measured continuously. Test environments can be used less, at a smaller scale, and on demand. Test techniques can be used to optimize or minimize the number of tests. Test automation can be used to speed up testing.
UiPath Test Automation using UiPath Test Suite series, part 5 - DianaGray10
Welcome to UiPath Test Automation using UiPath Test Suite series part 5. In this session, we will cover CI/CD with devops.
Topics covered:
CI/CD within UiPath
End-to-end overview of a CI/CD pipeline with Azure DevOps
Speaker:
Lyndsey Byblow, Test Suite Sales Engineer @ UiPath, Inc.
Why You Should Replace Windows 11 with Nitrux Linux 3.5.0 for enhanced perfor... - SOFTTECHHUB
The choice of an operating system plays a pivotal role in shaping our computing experience. For decades, Microsoft's Windows has dominated the market, offering a familiar and widely adopted platform for personal and professional use. However, as technological advancements continue to push the boundaries of innovation, alternative operating systems have emerged, challenging the status quo and offering users a fresh perspective on computing.
One such alternative that has garnered significant attention and acclaim is Nitrux Linux 3.5.0, a sleek, powerful, and user-friendly Linux distribution that promises to redefine the way we interact with our devices. With its focus on performance, security, and customization, Nitrux Linux presents a compelling case for those seeking to break free from the constraints of proprietary software and embrace the freedom and flexibility of open-source computing.
DevOps and Testing slides at DASA Connect - Kari Kakkonen
Slides by me and Rik Marselis at the DASA Connect conference on 30.5.2024. We discuss what testing is, then what agile testing is, and finally what testing in DevOps is. We closed with a lovely workshop in which the participants tried to find different ways to think about quality and testing in different parts of the DevOps infinity loop.
GraphRAG is All You Need? LLM & Knowledge Graph - Guy Korland
Guy Korland, CEO and Co-founder of FalkorDB, will review two articles on the integration of language models with knowledge graphs.
1. Unifying Large Language Models and Knowledge Graphs: A Roadmap.
https://arxiv.org/abs/2306.08302
2. Microsoft Research's GraphRAG paper and a review paper on various uses of knowledge graphs:
https://www.microsoft.com/en-us/research/blog/graphrag-unlocking-llm-discovery-on-narrative-private-data/
UiPath Test Automation using UiPath Test Suite series, part 6 - DianaGray10
Welcome to UiPath Test Automation using UiPath Test Suite series part 6. In this session, we will cover test automation with generative AI and OpenAI.
The UiPath Test Automation with generative AI and OpenAI webinar offers an in-depth exploration of leveraging cutting-edge technologies for test automation within the UiPath platform. Attendees will delve into the integration of generative AI, as a test automation solution, with OpenAI's advanced natural language processing capabilities.
Throughout the session, participants will discover how this synergy empowers testers to automate repetitive tasks, enhance testing accuracy, and expedite the software testing life cycle. Topics covered include the seamless integration process, practical use cases, and the benefits of harnessing AI-driven automation for UiPath testing initiatives. By attending this webinar, testers, and automation professionals can gain valuable insights into harnessing the power of AI to optimize their test automation workflows within the UiPath ecosystem, ultimately driving efficiency and quality in software development processes.
What will you get from this session?
1. Insights into integrating generative AI.
2. Understanding how this integration enhances test automation within the UiPath platform
3. Practical demonstrations
4. Exploration of real-world use cases illustrating the benefits of AI-driven test automation for UiPath
Topics covered:
What is generative AI
Test Automation with generative AI and OpenAI
UiPath integration with generative AI
Speaker:
Deepak Rai, Automation Practice Lead, Boundaryless Group and UiPath MVP
A tale of scale & speed: How the US Navy is enabling software delivery from l... - sonjaschweigert1
Rapid and secure feature delivery is a goal across every application team and every branch of the DoD. The Navy’s DevSecOps platform, Party Barge, has achieved:
- Reduction in onboarding time from 5 weeks to 1 day
- Improved developer experience and productivity through actionable findings and reduction of false positives
- Maintenance of superior security standards and inherent policy enforcement with Authorization to Operate (ATO)
Development teams can ship efficiently and ensure applications are cyber ready for Navy Authorizing Officials (AOs). In this webinar, Sigma Defense and Anchore will give attendees a look behind the scenes and demo secure pipeline automation and security artifacts that speed up application ATO and time to production.
We will cover:
- How to remove silos in DevSecOps
- How to build efficient development pipeline roles and component templates
- How to deliver security artifacts that matter for ATO’s (SBOMs, vulnerability reports, and policy evidence)
- How to streamline operations with automated policy checks on container images
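An automated policy check of the kind listed above can be sketched against an SBOM. The JSON shape below is a hypothetical simplification, not real SPDX or CycloneDX output, and the license deny-list is purely illustrative:

```python
import json

# Hypothetical, simplified SBOM fragment; a real pipeline would parse
# SPDX or CycloneDX produced by an SBOM generator in the build stage.
sbom_json = """
{
  "artifact": "registry.example/app:1.4.2",
  "packages": [
    {"name": "openssl", "version": "1.1.1k", "license": "Apache-2.0"},
    {"name": "leftpad", "version": "0.0.1", "license": "AGPL-3.0"}
  ]
}
"""

DENIED_LICENSES = {"AGPL-3.0"}  # illustrative policy, not a recommendation

def policy_violations(sbom: dict) -> list:
    """Return names of packages whose license violates the deny-list policy."""
    return [p["name"] for p in sbom["packages"] if p["license"] in DENIED_LICENSES]

violations = policy_violations(json.loads(sbom_json))
print(violations)  # a pipeline gate would fail the build if this is non-empty
```

The point is that the SBOM captured at build time becomes a machine-checkable artifact: the same document that serves as ATO evidence also drives the automated gate.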
Essentials of Automations: The Art of Triggers and Actions in FME - Safe Software
In this second installment of our Essentials of Automations webinar series, we’ll explore the landscape of triggers and actions, guiding you through the nuances of authoring and adapting workspaces for seamless automations. Gain an understanding of the full spectrum of triggers and actions available in FME, empowering you to enhance your workspaces for efficient automation.
We’ll kick things off by showcasing the most commonly used event-based triggers, introducing you to various automation workflows like manual triggers, schedules, directory watchers, and more. Plus, see how these elements play out in real scenarios.
Whether you’re tweaking your current setup or building from the ground up, this session will arm you with the tools and insights needed to transform your FME usage into a powerhouse of productivity. Join us to discover effective strategies that simplify complex processes, enhancing your productivity and transforming your data management practices with FME. Let’s turn complexity into clarity and make your workspaces work wonders!
In the rapidly evolving landscape of technologies, XML continues to play a vital role in structuring, storing, and transporting data across diverse systems. The recent advancements in artificial intelligence (AI) present new methodologies for enhancing XML development workflows, introducing efficiency, automation, and intelligent capabilities. This presentation will outline the scope and perspective of utilizing AI in XML development. The potential benefits and the possible pitfalls will be highlighted, providing a balanced view of the subject.
We will explore the capabilities of AI in understanding XML markup languages and autonomously creating structured XML content. Additionally, we will examine the capacity of AI to enrich plain text with appropriate XML markup. Practical examples and methodological guidelines will be provided to elucidate how AI can be effectively prompted to interpret and generate accurate XML markup.
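Whichever model produces the markup, a workflow like the one described still needs a deterministic well-formedness check before generated XML goes further. A sketch using the Python standard library; the generated string here is canned in place of a real model call:

```python
import xml.etree.ElementTree as ET

# Stand-in for a model response that enriched plain text with markup;
# a real pipeline would receive this from an LLM call.
generated = "<para>XML remains <emphasis>vital</emphasis> for structured data.</para>"

def check_well_formed(xml_text: str):
    """Parse candidate markup; return (True, root element) or (False, error)."""
    try:
        return True, ET.fromstring(xml_text)
    except ET.ParseError as err:
        return False, str(err)

ok, result = check_well_formed(generated)
print(ok, result.tag if ok else result)
```

Well-formedness is only the first gate; validation against the project's XSD or Schematron rules would follow, but catching unparseable output this cheaply keeps the AI step from polluting downstream tooling.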
Further emphasis will be placed on the role of AI in developing XSLT, or schemas such as XSD and Schematron. We will address the techniques and strategies adopted to create prompts for generating code, explaining code, or refactoring the code, and the results achieved.
The discussion will extend to how AI can be used to transform XML content. In particular, the focus will be on the use of AI XPath extension functions in XSLT, Schematron, Schematron Quick Fixes, or for XML content refactoring.
The presentation aims to deliver a comprehensive overview of AI usage in XML development, providing attendees with the necessary knowledge to make informed decisions. Whether you’re at the early stages of adopting AI or considering integrating it in advanced XML development, this presentation will cover all levels of expertise.
By highlighting the potential advantages and challenges of integrating AI with XML development tools and languages, the presentation seeks to inspire thoughtful conversation around the future of XML development. We’ll not only delve into the technical aspects of AI-powered XML development but also discuss practical implications and possible future directions.
6. 6
NVIDIA CUDA-X AI ECOSYSTEM
FRAMEWORKS | CLOUD DEPLOYMENT
[Diagram: the CUDA-X AI stack. Deployment targets: Workstation | Server | Cloud; workloads spanning data analytics, graph analytics, DL training, ML, and DL inference; cloud services including Amazon SageMaker, Amazon SageMaker Neo, Google Cloud ML, and Azure Machine Learning; all built on CUDA-X AI over CUDA.]
8. 8
NVIDIA TOPS MLPERF EDGE SOC BENCHMARKS
[Charts: per-accelerator MLPerf inference performance in the Single-Stream and Multi-Stream scenarios on MobileNet-v1, ResNet-50 v1.5, SSD MobileNet-v1, and SSD ResNet-34, comparing Qualcomm SDM855 and Intel i3-1005G1 against NVIDIA Xavier. "X" = no result submitted.]
Best Inference Performance Among Commercially Available Edge And Mobile SoCs
MLPerf v0.5 Inference Closed; retrieved from www.mlperf.org, 6 November 2019. Single-stream performance derived from reported MLPerf latencies. GNMT omitted due to no submissions among edge and mobile form-factor SoCs in v0.5. MLPerf name and logo are trademarks. See www.mlperf.org for more information.
https://github.com/mlperf/inference_results_v0.5/tree/master/open/NVIDIA
www.mlperf.org
17. 17
IMAGE BASED DL IS EASY
Object detection Semantic Segmentation
Figures copyright Shaoqing Ren, Kaiming He, Ross Girshick, Jian Sun, 2015. [Faster R-CNN]
Figures copyright Preferred Networks Inc., 2016.
18. 18
Numerous applications
3D DL IS EXCITING
Simulation | Medical imaging | Autonomous driving
Manipulation | Robotics | Augmented reality
* This slide is best viewed in "slide show" mode.
19. 19
KAOLIN
- A PyTorch library for 3D DL
- Supports a wide range of 3D data representations
- Convenient data loading/preprocessing/conversions
- Large collection of 3D neural nets to choose from
- Optimized implementations
- Omniverse Kit integration for easy rendering, interactive visualization, and much more
https://gitlab-master.nvidia.com/Toronto_DL_Lab/kaolin
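Converting between 3D data representations is a recurring chore in 3D DL that libraries like Kaolin provide optimized implementations for. As a toy illustration of the idea only (this is not Kaolin's API), a dependency-free point-cloud-to-voxel-occupancy conversion:

```python
def voxelize(points, resolution=4):
    """Convert a point cloud (iterable of (x, y, z) with coordinates in
    [0, 1)) into a binary occupancy grid of shape resolution^3,
    represented sparsely as a set of occupied voxel indices."""
    occupied = set()
    for x, y, z in points:
        # Clamp to the last voxel so points at exactly 1.0 stay in bounds.
        ix = min(int(x * resolution), resolution - 1)
        iy = min(int(y * resolution), resolution - 1)
        iz = min(int(z * resolution), resolution - 1)
        occupied.add((ix, iy, iz))
    return occupied

cloud = [(0.1, 0.1, 0.1), (0.12, 0.11, 0.09), (0.9, 0.9, 0.9)]
grid = voxelize(cloud, resolution=4)
print(sorted(grid))  # [(0, 0, 0), (3, 3, 3)] -- two occupied voxels
```

Real libraries do the same mapping as a batched tensor operation on the GPU, which is where the "optimized implementations" bullet above comes in.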
20. AI SEES IN TEXTURES, NOT SHAPES
https://arxiv.org/pdf/1811.12231.pdf
University of Tübingen & University of Edinburgh
21. 21
What’s really going on?
[Diagram: My Python Program sits on a DNN framework, which runs across CPUs and GPUs connected to system memory, the network, drives, and PCI Express.]
22. 22
It’s like tuning an orchestra
[Diagram: GPU/CPU pairs sharing system memory, SSD A, SSD B, the network, and PCI Express.]
23. 23
NVIDIA Nsight Systems
• Balance your workload across multiple CPUs and GPUs
• Locate idle CPU and GPU time
• Locate redundant synchronizations
• Locate optimization opportunities
• Improve your application’s performance
System Wide Profiling Tool
developer.nvidia.com/tools-overview
42. 42 NVIDIA CONFIDENTIAL. DO NOT DISTRIBUTE.
ISAAC
Isaac Robot Engine – modular robot framework | Isaac Sim – virtual robotics laboratory
Isaac Gym – reinforcement learning simulator | Isaac Robot Apps – Kaya, Carter and Link
Available at developer.nvidia.com/isaac-sdk
Robots: KAYA (Nano) | CARTER (Xavier) | LINK (Multi Xavier)
ISAAC OPEN TOOLBOX: Sensor and Actuator Drivers | Core Libraries | GEMS | Reference DNN Tools
Stack: Isaac Sim and Isaac Gym on the Isaac Robot Engine, built on CUDA-X
Platforms: JETSON NANO | JETSON TX2 | JETSON AGX XAVIER
46. 46
SAFE AV VALIDATION – THE CHALLENGES
Highly Complex System: large computers, DNNs, sensors
Real-Life Scenario Coverage: account for rare & unpredictable cases
Continuous Reaction Loop: vehicle & world are dependent
54. 54
CONDITIONAL GANS
Generates output consistent with the training set
[Diagram: the Generator (regressor) produces Generated output; the Discriminator (classifier) compares Generated against Target through the loss function.]
• If the output is under-constrained, it will look fuzzy
• It won’t capture the true peaks and valleys
• We can overcome this problem using a conditional GAN
• Generates output matching the training set
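The "fuzzy output" problem can be seen with plain arithmetic: when several distinct targets are all valid for the same input, a plain regressor trained with squared error converges to their mean, flattening the true peaks and valleys. A tiny numeric sketch with assumed toy data:

```python
def best_l2_prediction(targets):
    """The single prediction minimizing mean squared error over a set of
    equally valid targets is their mean -- a blurry compromise."""
    return sum(targets) / len(targets)

# Two sharp, equally likely modes (think: two plausible textures for one input).
targets = [-1.0, 1.0]
pred = best_l2_prediction(targets)
print(pred)  # 0.0 -- neither peak, just the "fuzzy" average
```

A conditional GAN's discriminator instead penalizes outputs that don't look like *any* single training example, pushing the generator toward one of the sharp modes rather than their average.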
55. 55
The High Altitude Water Cherenkov (HAWC) Observatory
A cosmogenic gamma-ray observatory, examining some of the most energetic light in the universe
Located on Pico de Orizaba, Mexico
A high-duty-cycle, high-statistics, high-energy physics experiment
Daniel Ho, Gefen Kohavi, Michael Gussert
56. 56
PixelCNN
Images sampled from a PixelCNN model trained on PMT charge data show realistic features:
- smooth Gaussian distribution for dense events
- good distribution of event sparsity
- varying angles/directions of hits
Generated Images
60. 60
A new vocoder for speech synthesis built on a flow based generative model
Fast, completely parallel inference procedure
150X real-time on one GPU
[Diagram: Tacotron turns text ("Hello, world!") into a mel-spectrogram; WaveGlow turns the mel-spectrogram into audio samples, i.e. WaveGlow-synthesized speech.]
http://nv-adlr.github.io/WaveGlow
61. SOTA NLP Techniques
Transformer: a confluence of three SOTA NLP techniques
61
● Encoder + Decoder structure
● Attention mechanism
● Self-Attention within each encoder & decoder
No more recurrent structure! Let’s witness the power of GPU parallelization!
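The self-attention at the heart of the Transformer reduces to a few lines. A dependency-free sketch of scaled dot-product attention over toy 2-D vectors (illustrative, not a tuned implementation):

```python
import math

def attention(queries, keys, values):
    """Scaled dot-product attention: softmax(Q K^T / sqrt(d)) V.
    Each output row is computable independently of the others -- which is
    exactly what makes the Transformer parallelize so well on GPUs."""
    d = len(keys[0])
    outputs = []
    for q in queries:
        scores = [sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(d)
                  for k in keys]
        m = max(scores)  # subtract the max for numerical stability
        exps = [math.exp(s - m) for s in scores]
        total = sum(exps)
        weights = [e / total for e in exps]
        outputs.append([sum(w * v[j] for w, v in zip(weights, values))
                        for j in range(len(values[0]))])
    return outputs

q = [[1.0, 0.0]]
k = [[1.0, 0.0], [0.0, 1.0]]
v = [[10.0, 0.0], [0.0, 10.0]]
print(attention(q, k, v))  # attends mostly to the first key/value pair
```

In a framework, the per-query loop becomes one batched matrix multiply, so all positions in a sequence are processed at once, with no recurrent dependency between them.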
67. National Academy of Sciences, Siyu He et al, Flatiron Institute https://arxiv.org/pdf/1811.06533.pdf
https://news.developer.nvidia.com/researchers-develop-the-first-deep-learning-based-3d-simulation-of-the-universe/
71. 71
GET STARTED WITH NGC
Deploy containers:
ngc.nvidia.com
Learn more about NGC offering:
nvidia.com/ngc
Technical information:
developer.nvidia.com
Explore the NGC Registry for DL, ML & HPC
74. RAPIDS
GPU-Accelerated End-to-End Data Science
RAPIDS is a set of open-source libraries for GPU-accelerating data preparation and machine learning.
OSS website: rapids.ai
[Diagram: Data Preparation → Model Training → Visualization, operating on shared GPU memory.]
cuDF: Data Preparation | cuML: Machine Learning | cuGraph: Graph Analytics
75. 75
NVIDIA DATA LOADING LIBRARY (DALI)
Fast Data Processing Library for Accelerating Deep Learning
DALI in the DL training workflow:
• Full input-pipeline acceleration, including data loading and augmentation
• Drop-in integration with direct plugins to DL frameworks and open-source bindings
• Portable workflows through multiple input formats and configurable graphs
• Flexible through configurable graphs and custom operators
Currently supports:
• ResNet-50 (image classification), SSD (object detection)
• Input formats: JPEG, LMDB, RecordIO, TFRecord, COCO, H.264, HEVC
• Python/C++ APIs to define, build & run an input pipeline
Over 1,000 GitHub stars | Top 50 ML projects (out of 22,000 in 2018)
76. 76
NVIDIA TensorRT
From Every Framework, Optimized For Each Target Platform
[Diagram: TensorRT takes trained models from every framework and optimizes them for each target platform: TESLA V100, TESLA T4, DRIVE AGX, JETSON Xavier NX, and NVIDIA DLA.]
77. TF-TRT = TF + TRT
https://docs.nvidia.com/deeplearning/dgx/tf-trt-user-guide/index.html
78. 78
NVIDIA TRANSFER LEARNING TOOLKIT
FEATURES
Faster Inference with Model Pruning: model pruning reduces the size of the model, resulting in faster inference
Efficient Pre-trained Models: GPU-accelerated, high-performance models trained on large-scale datasets
Training with Multiple GPUs: re-train models and add custom data for multi-GPU training with an easy-to-use tool
Containerization: packaged in a container easily accessible from the NVIDIA GPU Cloud website; all code dependencies are managed automatically
Abstraction: no deep framework knowledge required; a simple, intuitive interface to the features
Integration: models exported with TLT are easily consumable for inference with the DeepStream SDK
79. 79
AUTOMATIC MIXED PRECISION
Insert ~two lines of code to use Automatic Mixed-Precision and get up to a 3X speedup
Support for TensorFlow, PyTorch and MXNet
Easy to Use and Great Performance
Automatic mixed precision applies two techniques to maximize performance while preserving accuracy:
1) Optimizing per operation precision by casting to FP16 or FP32
2) Dynamic loss scaling to properly handle gradient accumulation
NEW
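The second technique, dynamic loss scaling, exists because FP16 cannot represent very small gradient values: scaling the loss up shifts gradients into FP16's representable range, and they are unscaled in FP32 before the weight update. A self-contained sketch that mimics an FP16 cast with Python's struct half-precision round-trip (the scale value is illustrative; AMP chooses it dynamically):

```python
import struct

def to_fp16(x):
    """Round-trip a float through IEEE half precision, as an FP16 cast would."""
    return struct.unpack('e', struct.pack('e', x))[0]

grad = 1e-8                      # a tiny gradient, common late in training
print(to_fp16(grad))             # 0.0 -- underflows in FP16, the update is lost

scale = 2.0 ** 15                # loss scale, applied before backpropagation
scaled = to_fp16(grad * scale)   # now large enough to survive the FP16 cast
print(scaled / scale)            # ~1e-8 -- unscale in FP32 before the update
```

This is why AMP keeps a master copy of weights and accumulations in FP32 while running the bulk of the math in FP16 on Tensor Cores.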
80. AUTOMATIC MIXED PRECISION
● Speed-up: 1.5x - 3x
● Memory footprint reduction: increase batch size up to 2x for more capacity
● No accuracy drop
Speedup Your Network Across Frameworks With Just Two Lines of Code
[Diagram: models (CNN, RNN, GAN, RL, NCF, …) run through DL frameworks with NVIDIA AMP on Tensor Cores.]
82. 82
ANNOUNCING MAGNUM IO
NVIDIA's Multi-GPU, Multi-Node Networking and Storage IO Optimization Stack
[Diagram: Magnum IO sits above CUDA and CUDA-X, spanning transport protocol, system interconnect, network topology, and storage; it serves simulation, ML/DL, data analytics, and visualization workloads from desktop development and data center solutions to the GPU-accelerated cloud and supercomputers.]
83. 83
NVIDIA DGX SUPERPOD
Mellanox EDR 100G InfiniBand Network
Mellanox Smart Director Switches
In-Network Computing Acceleration Engines
Fast and Efficient Storage Access with RDMA
Up to 130Tb/s Switching Capacity per Switch
Ultra-Low Latency of 300ns
Integrated Network Manager
Terabit-speed InfiniBand networking per node
[Diagram: 16 racks of 64 DGX-2 systems; a compute backplane switch at 800 Gb/s per node, a storage backplane switch at 200 Gb/s per node, and GPFS storage.]
White paper: https://nvidia.highspot.com/items/5d073ad681171721086b2788
84. 84
JETSON XAVIER NX
Up to 21 DL TOPS AI Performance
10W | 15W
384 CUDA Cores | 48 Tensor Cores
6 core CPU | 8 GB Memory
45x70mm
$399
Xavier Performance. Nano Size.
Get started today:
- Jetson AGX Xavier Developer Kit + software patch
- Documentation on Jetson Download Center
- SOM available Q1 2020
85. 85
THE JETSON FAMILY
for AI at the Edge and Autonomous System designs
Same software across the family, from AI at the edge to fully autonomous machines
JETSON NANO: 0.5 TFLOPS (FP16) | 5 - 10W | 45mm x 70mm
JETSON TX2 series: 1.3 TFLOPS (FP16) | 7.5 - 15W* | 50mm x 87mm
JETSON Xavier NX: 6 TFLOPS (FP16), 21 TOPS (INT8) | 10 - 15W | 45mm x 70mm
JETSON AGX XAVIER series: 11 TFLOPS (FP16), 32 TOPS (INT8) | 10 - 30W | 100mm x 87mm
Listed prices are for 1000u+ | Full specs at developer.nvidia.com/jetson | * TX2i: 10-20W
87. NVIDIA DEEP LEARNING INSTITUTE
Online self-paced labs and instructor-led workshops on deep learning and accelerated computing
Topics: Fundamentals | Accelerated Computing | Game Development & Digital Content | Finance | Intelligent Video Analytics | Healthcare | Robotics | Autonomous Vehicles | Virtual Reality
Take self-paced labs at www.nvidia.co.uk/dlilabs
View upcoming workshops and request an onsite workshop at www.nvidia.co.uk/dli
Educators can join the University Ambassador Program to teach DLI courses on campus and access resources. Learn more at www.nvidia.com/dli
88. 88
NVIDIA INCEPTION PROGRAM
Accelerates AI startups with a boost of GPU tools, tech and deep learning expertise
Startup Qualifications: driving advances in the field of AI | business plan | incorporated | web presence
Technology: DL startup kit* | Pascal Titan X | Deep Learning Institute (DLI) credit | connect with a DL tech expert | DGX-1 ISV discount* | software release notifications | live webinars and office hours (*by application)
Marketing: inclusion in NVIDIA marketing efforts | GPU Technology Conference (GTC) discount | Emerging Company Summit (ECS) participation+ | marketing kit (one-page story template, eBook template, Inception web badge and banners, social promotion request form, event opportunities list) | promotion at industry events | GPU Ventures+ (+by invitation)
www.nvidia.com/inception
90. 90
CONNECT: Connect with hundreds of experts from top industry, academic, startup, and government organizations
LEARN: Gain insight and valuable hands-on training through more than 500 sessions
DISCOVER: See how GPU technology is creating breakthroughs in deep learning, cybersecurity, data science, healthcare and more
INNOVATE: Explore disruptive innovations that can transform your work
JOIN US AT GTC 2020 | USE VIP CODE NVALOWNDES FOR 25% OFF
March 22—26, 2020 | Silicon Valley
Don’t miss the premier AI conference.
www.nvidia.com/gtc
91. 91
March 22 | Full-Day Workshops
March 23 - 26 | Conference & Training
Get the hands-on experience you need to transform the future of AI, high-performance computing and more with NVIDIA’s Deep Learning Institute (DLI).
Register for GTC 2020 to earn certification in full-day workshops, join instructor-led sessions, and start self-paced training.
www.nvidia.com/en-us/gtc/sessions/training/
THE LATEST DEEP LEARNING DEVELOPER TOOLS