This document provides an overview of AI and GPU technologies from NVIDIA. It discusses NVIDIA's GPU computing platforms, such as DGX, Jetson, and AGX, which are used for AI training and inference. It also summarizes NVIDIA's tools and frameworks, such as CUDA, TensorRT, and DeepStream, which help accelerate AI workflows. Finally, it points to NVIDIA's training resources, such as the Deep Learning Institute, to help developers get started with AI.
Enabling Artificial Intelligence - Alison B. Lowndes (WithTheBest)
An overview and update of our hardware and software offering and support provided to the Machine & Deep Learning Community around the world.
Alison B. Lowndes, AI DevRel, EMEA
NVIDIA Deep Learning Institute 2017 Keynote (NVIDIA Japan)
These slides were presented by Bill Dally, NVIDIA Chief Scientist and SVP of Research, in the keynote of "NVIDIA Deep Learning Institute 2017", held on Tuesday, January 17, 2017 at Bellesalle Takadanobaba.
Adapting to a Cambrian AI/SW/HW explosion with open co-design competitions an... (Grigori Fursin)
Slides from ARM's Research Summit'17 about "Community-Driven and Knowledge-Guided Optimization of AI Applications Across the Whole SW/HW Stack" (http://cKnowledge.org/repo , http://cKnowledge.org/ai , http://tinyurl.com/zlbxvmw , https://developer.arm.com/research/summit )
Co-designing the whole AI/SW/HW stack in terms of speed, accuracy, energy consumption, size, costs and other metrics has become extremely complex, long and costly. With no rigorous methodology for analyzing performance and accumulating optimisation knowledge, we are simply destined to drown in the ever growing number of design choices, system features and conflicting optimisation goals.
We present our novel community-driven approach to solve the above problems. Originating from natural sciences, this approach is embodied in Collective Knowledge (CK), our open-source cross-platform workflow framework and repository for automatic, collaborative and reproducible experimentation. CK helps organize, unify and share representative workloads, data sets, AI frameworks, libraries, compilers, scripts, models and other artifacts as customizable and reusable components with a common JSON API.
CK helps bring academia, industry and end-users together to gradually expose optimisation choices at all levels (e.g. from parameterized models and algorithmic skeletons to compiler flags and hardware configurations) and autotune them across diverse inputs and platforms. Optimization knowledge gets continuously aggregated in public or private repositories such as cKnowledge.org/repo in a reproducible way, and can then be mined and extrapolated to predict better AI algorithm choices, compiler transformations and hardware designs.
We also demonstrate how we use this approach in practice together with ARM and other companies to adapt to a Cambrian AI/SW/HW explosion by creating an open repository of reusable AI artifacts, and then collaboratively optimising and co-designing the whole deep learning stack (software, hardware and models).
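As a loose illustration of the component idea described above, a reusable artifact carries JSON metadata that a common API can query; the field names, tags, and layout below are invented for illustration, not CK's actual schema:

```python
import json

# Hypothetical CK-style component: an artifact described by JSON metadata
# so it can be found by tags and driven through a common API.
component_meta = json.loads("""
{
  "data_name": "image-classification-tf",
  "tags": ["workload", "ai", "tensorflow"],
  "deps": {"lib-tensorflow": {"version": ">=1.0"}},
  "run_cmd": "python classify.py --model $MODEL"
}
""")

# A workflow tool could now filter components by tag and resolve "deps"
# before invoking "run_cmd".
assert "workload" in component_meta["tags"]
```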
Accelerating open science and AI with automated, portable, customizable and r... (Grigori Fursin)
Validating experimental results from articles has finally become a norm at many systems and ML conferences. Nowadays, more than half of accepted papers pass artifact evaluation and share related code and data. Unfortunately, lack of a common experimental framework, common research methodology and common formats places an increasing burden on evaluators to validate a growing number of ad-hoc artifacts. Furthermore, having too many ad-hoc artifacts and Docker snapshots is almost as bad as not having any (!), since they cannot be easily reused, customized and built upon.
While reviewing more than 100 papers during artifact evaluation at PPoPP, CGO, PACT, Supercomputing and other conferences, we noticed that many of them use similar experimental setups, benchmarks, models, data sets, environments and platforms. This motivated us to develop Collective Knowledge (CK), an open workflow framework with a unified Python API to automate common researchers’ tasks: detecting software and hardware dependencies, installing missing packages, downloading data sets and models, compiling and running programs, performing autotuning and co-design, crowdsourcing time-consuming experiments across computing resources provided by volunteers (similar to SETI@home), applying statistical analysis and machine learning, validating results and plotting them on a common scoreboard for open and fair comparison, automatically generating interactive articles, and so on: http://cKnowledge.org.
In this presentation we will introduce CK concepts and present several real-world use cases from General Motors and Arm on collaborative benchmarking, autotuning and co-design of efficient software/hardware stacks for deep learning. We also present results and reusable CK components from the 1st ACM ReQuEST optimization tournament: http://cKnowledge.org/request. Finally, we introduce our latest initiative to create an open repository of reusable research components and workflows to reboot and accelerate open science, quantum computing and AI!
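The autotuning and co-design idea from both abstracts can be sketched in a few lines: enumerate the exposed choices (compiler flags, batch sizes, and so on), measure each configuration, and keep the best. This is a minimal illustration, not CK's implementation; the `measure` function is a stand-in for running a real workload.

```python
import itertools

def autotune(choices, measure):
    """Exhaustively search the cross-product of choices, minimizing measure()."""
    best = None
    for config in itertools.product(*choices.values()):
        point = dict(zip(choices.keys(), config))
        cost = measure(point)
        if best is None or cost < best[1]:
            best = (point, cost)
    return best

# Toy search space and a deterministic toy cost function (illustrative only).
choices = {"opt_flag": ["-O1", "-O2", "-O3"], "batch": [16, 32]}
best_cfg, best_cost = autotune(choices, lambda p: len(p["opt_flag"]) + p["batch"])
```

In CK the measured results would additionally be logged to a shared repository so later runs can mine them instead of re-searching from scratch.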
Video and slides synchronized; mp3 and slide download available at https://bit.ly/2JrUYLl.
Alison Lowndes talks about the HW & SW that comprise NVIDIA's GPU computing platform for AI, across PC to data center, cloud to edge, training to inference. She details current state-of-the-art research & recent internal work combining robotics with virtual reality & reinforcement learning in an end-to-end simulator for training and testing robots. Filmed at qconlondon.com.
Alison Lowndes is responsible for NVIDIA's Artificial Intelligence Developer Relations in the EMEA region. She consults on a wide range of AI applications, including planetary defence with NASA & the SETI Institute and continues to manage the community of AI & Machine Learning researchers around the world.
NVIDIA CEO Jensen Huang Presentation at Supercomputing 2019 (NVIDIA)
Broadening support for GPU-accelerated supercomputing to a fast-growing new platform, NVIDIA founder and CEO Jensen Huang introduced a reference design for building GPU-accelerated Arm servers, with wide industry backing.
Hire a Machine to Code - Michael Arthur Bucko & Aurélien Nicolas (WithTheBest)
Bucko and Nicolas share their vision and products and explain what Deckard is, offering insights from their software development team. They believe coding can solve the problems we face, and that the source-coding approach they teach holds promise for fixing human errors.
Michael Arthur Bucko & Aurélien Nicolas
NVIDIA CEO Jen-Hsun Huang introduces NVLink and shares the GPU roadmap. Primary topics also include the introduction of the GeForce GTX Titan Z, CUDA for machine learning, and Iray VCA.
Nvidia Deep Learning Solutions - Alex Sabatier (Sri Ambati)
Alex Sabatier from NVIDIA talks about the future of deep learning from a chipmaker's perspective.
- Powered by the open source machine learning software H2O.ai. Contributors welcome at: https://github.com/h2oai
- To view videos on H2O open source machine learning software, go to: https://www.youtube.com/user/0xdata
Kicking off the first in a series of global GPU Technology Conferences, NVIDIA co-founder and CEO Jen-Hsun Huang today at GTC China unveiled technology that will accelerate the deep learning revolution that is sweeping across industries. Huang spoke in front of a crowd of more than 2,500 scientists, engineers, entrepreneurs and press, gathered in Beijing for a day devoted to deep learning and AI. On stage he announced the Tesla P4 and P40 GPU accelerators for inferencing production workloads for AI services, and a small, energy-efficient AI supercomputer for highway driving, the NVIDIA DRIVE PX 2 for AutoCruise.
Talk presented by Pedro Mário Cruz e Silva, Solution Architect at NVIDIA, as part of the program of the VIII Geophysics Winter Week, on July 19, 2017.
Alison B Lowndes - Fueling the Artificial Intelligence Revolution with Gaming... (Codemotion)
Building upon the foundational understanding of deep learning, this talk will cover a wide variety of applications of artificial intelligence for problem-solving and how you can both get started and become proficient with NVIDIA’s hardware, open-source software & classes. We will also discuss the role of games engines both historically and current day in teaching today's AI systems.
As artificial intelligence sweeps across the technology landscape, NVIDIA unveiled today at its annual GPU Technology Conference a series of new products and technologies focused on deep learning, virtual reality and self-driving cars.
OpenACC and Open Hackathons Monthly Highlights: September 2022 (OpenACC)
Stay up-to-date on the latest news, research and resources. This month's edition covers the Princeton GPU Hackathon, OpenACC at SC22, updates from GNU Tools Cauldron, the upcoming UK DPU Hackathon, relevant research and more!
Semiconductors are the driving force behind the AI evolution and enable its adoption across various application areas ranging from connected and automated driving to smart healthcare and wearables. Given that, electronics research, design and manufacturing communities around the world are increasingly investing in specialized AI chips providing less latency, greater processing power, higher bandwidth and faster performance. AI also attracts new technology players to invest in making their own specialized AI chips, changing the electronics manufacturing landscape and moving the AI technology towards machine learning, deep learning and neural networks.
Harnessing the virtual realm for successful real world artificial intelligence (Alison B. Lowndes)
Artificial Intelligence is impacting all areas of society, from healthcare and transportation to smart cities and energy. This talk covers how NVIDIA invests in both internal pure research and accelerated computation to enable its diverse customer base across gaming & extended reality, graphics, AI, robotics, simulation, high-performance scientific computing, healthcare & more. You will be introduced to the GPU computing platform and shown real-world, successfully deployed applications, as well as a glimpse of the current state of the art across academia, enterprise and startups.
In this deck from FOSDEM'19, Christoph Angerer from NVIDIA presents: Rapids - Data Science on GPUs.
"The next big step in data science will combine the ease of use of common Python APIs, but with the power and scalability of GPU compute. The RAPIDS project is the first step in giving data scientists the ability to use familiar APIs and abstractions while taking advantage of the same technology that enables dramatic increases in speed in deep learning. This session highlights the progress that has been made on RAPIDS, discusses how you can get up and running doing data science on the GPU, and provides some use cases involving graph analytics as motivation.
GPUs and GPU platforms have been responsible for the dramatic advancement of deep learning and other neural net methods in the past several years. At the same time, traditional machine learning workloads, which comprise the majority of business use cases, continue to be written in Python with heavy reliance on a combination of single-threaded tools (e.g., Pandas and Scikit-Learn) or large, multi-CPU distributed solutions (e.g., Spark and PySpark). RAPIDS, developed by a consortium of companies and available as open source code, allows for moving the vast majority of machine learning workloads from a CPU environment to GPUs. This allows for a substantial speed up, particularly on large data sets, and affords rapid, interactive work that previously was cumbersome to code or very slow to execute.

Many data science problems can be approached using a graph/network view, and much like traditional machine learning workloads, this has been either local (e.g., Gephi, Cytoscape, NetworkX) or distributed on CPU platforms (e.g., GraphX). We will present GPU-accelerated graph capabilities that, with minimal conceptual code changes, allow both graph representations and graph-based analytics to achieve similar speed ups on a GPU platform. By keeping all of these tasks on the GPU and minimizing redundant I/O, data scientists are enabled to model their data quickly and frequently, affording a higher degree of experimentation and more effective model generation. Further, keeping all of this in compatible formats allows quick movement from feature extraction, graph representation, graph analytics, and enrichment back to the original data, and visualization of results.

RAPIDS has a mission to build a platform that allows data scientists to explore data, train machine learning algorithms, and build applications while primarily staying on the GPU and GPU platforms."
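The "familiar APIs" point above is concrete: RAPIDS cuDF mirrors much of the pandas API, so a CPU workload can often move to the GPU largely by swapping the import. A minimal sketch (it falls back to pandas so it runs without a GPU; the data is made up):

```python
# Same DataFrame code runs on GPU (cuDF) or CPU (pandas) thanks to the
# deliberately pandas-compatible cuDF API.
try:
    import cudf as xdf       # GPU DataFrames, if RAPIDS is installed
except ImportError:
    import pandas as xdf     # identical calls on CPU

df = xdf.DataFrame({"sensor": ["a", "a", "b"], "value": [1.0, 3.0, 5.0]})
means = df.groupby("sensor")["value"].mean()  # per-sensor mean
```

On large data sets the cuDF path executes the same groupby on the GPU, which is where the speedups described above come from.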
Learn more: https://rapids.ai/
and
https://fosdem.org/2019/
Sign up for our insideHPC Newsletter: http://insidehpc.com/newsletter
At the technology meeting of the Association of Independent Research Centers (http://airi.org): An overview of recent Scientific Computing activities at Fred Hutch, Seattle
Backend.AI Technical Introduction (19.09 / 2019 Autumn) - Lablup Inc.
This slide introduces technical specs and details about Backend.AI 19.09.
* On-premise clustering / container orchestration / scaling on cloud
* Container-level fractional GPU technology that lets a single GPU serve many containers at the same time, as if it were many GPUs.
* NVIDIA GPU Cloud integrations
* Enterprise features
Ultra Fast Deep Learning in Hybrid Cloud using Intel Analytics Zoo & Alluxio (Alluxio, Inc.)
Data Orchestration Summit 2020 organized by Alluxio
https://www.alluxio.io/data-orchestration-summit-2020/
Ultra Fast Deep Learning in Hybrid Cloud using Intel Analytics Zoo & Alluxio
Jennie Wang, Software Engineer (Intel)
Tsai Louie, Software Engineer (Intel)
About Alluxio: alluxio.io
Engage with the open source community on slack: alluxio.io/slack
NVIDIA CEO Jensen Huang's keynote address at the GPU Technology Conference 2019 (#GTC19) in Silicon Valley, where he introduced breakthroughs in pro graphics with NVIDIA Omniverse; in data science with NVIDIA-powered Data Science Workstations; in inference and enterprise computing with NVIDIA T4 GPU-powered servers; in autonomous machines with NVIDIA Jetson Nano and the NVIDIA Isaac SDK; in autonomous vehicles with NVIDIA Safety Force Field and DRIVE Constellation; and much more.
A talk on reducing costs & increasing efficiencies by designing, testing & engineering in simulation first, plus examples of robotics & environmental capability.
Harnessing AI for the Benefit of All.
1. ALISON B LOWNDES
AI DevRel | EMEA
@alisonblowndes
November 2019
Harnessing the power of AI for All Humankind
Session 2: The Benefits of Space for All
14. NVIDIA ROBOTICS RESEARCH LAB, SEATTLE
Drive breakthrough robotics research to enable the next generation of robots that safely work alongside humans, transforming industries such as manufacturing, logistics, healthcare, and more.
17.
THE RISE OF GPU COMPUTING
Big Data Needs Algorithms and Compute That Scale
[Chart: performance over time, 1980-2020, log scale from 10^3 to 10^7. GPU-computing performance grows at 1.5X per year, while single-threaded CPU performance has slowed to 1.1X per year, marking the end of Moore's Law. Original data up to 2010 collected and plotted by M. Horowitz, F. Labonte, O. Shacham, K. Olukotun, L. Hammond, and C. Batten; data for 2010-2015 collected by K. Rupp.]
18.
BUILDING AN AI MODEL
DATA → FEATURES → AI MODEL → DEPLOYMENT
Data analytics produces the features; machine learning produces the AI model; model validation gates deployment; new data feeds back into the loop.
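The DATA → FEATURES → AI MODEL → DEPLOYMENT loop above can be sketched end to end. This is a minimal toy illustration, not an NVIDIA workflow: the threshold "model" and every name below are hypothetical stand-ins for the analytics, training, and validation stages.

```python
def extract_features(record):
    # FEATURES: normalize a raw measurement into [0, 1]
    return record / 100.0

def train(features, labels):
    # AI MODEL: a toy threshold classifier standing in for machine learning
    positives = [f for f, y in zip(features, labels) if y == 1]
    negatives = [f for f, y in zip(features, labels) if y == 0]
    return (min(positives) + max(negatives)) / 2.0

def validate(threshold, features, labels):
    # MODEL VALIDATION: accuracy that gates deployment
    preds = [1 if f > threshold else 0 for f in features]
    return sum(p == y for p, y in zip(preds, labels)) / len(labels)

# DATA -> FEATURES -> AI MODEL -> VALIDATION -> DEPLOYMENT
raw_data = [10, 20, 30, 70, 80, 90]
labels = [0, 0, 0, 1, 1, 1]
features = [extract_features(r) for r in raw_data]
model = train(features, labels)
accuracy = validate(model, features, labels)  # deploy only if this is high enough
```

New data arriving after deployment would re-enter at the top of the loop, triggering retraining and revalidation.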
19.
BUILDING AN AI PRODUCT
SENSORS → DATA → DATA ANALYTICS → MACHINE LEARNING → AI MODEL → MODEL VALIDATION
The deployed AI MODEL then closes the loop on the robot: PERCEIVE → REASON → PLAN → ACTUATORS
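The perceive → reason/plan → actuate cycle of an AI product can be sketched as a control loop. This is a toy illustration with hypothetical stub functions standing in for real perception and planning stacks:

```python
def perceive(distance_m):
    # stub perception: turn a raw range reading into a world-state estimate
    return {"obstacle_ahead": distance_m < 1.0}

def plan(world):
    # stub planner: choose an actuator command from the perceived state
    return "turn" if world["obstacle_ahead"] else "forward"

def control_loop(sensor_readings):
    # SENSORS -> PERCEIVE -> REASON/PLAN -> ACTUATORS, one command per tick
    commands = []
    for reading in sensor_readings:
        commands.append(plan(perceive(reading)))  # "actuation" = emit the command
    return commands
```

In a real robot each stage is a trained model or optimizer running every few milliseconds; the shape of the loop is the same.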
20.
HARNESSING AI
Step I: Build a data fabric for your organization
Step II: Define your objective
Step III: Hire the right talent
Step IV: Identify key processes to augment with AI
Step V: Create a sandbox lab environment
Step VI: Operationalize successful pilots
Step VII: Scale up for enterprise-wide adoption
Step VIII: Drive cultural change
42.
ANNOUNCING ISAAC OPEN SDK
Isaac Robot Engine – Modular robot framework | Isaac Sim – Virtual robotics laboratory
Isaac Gym – Reinforcement learning simulator | Isaac Robot Apps – Kaya, Carter and Link
Available at developer.nvidia.com/isaac-sdk
Reference robots: KAYA (Jetson Nano) | CARTER (Jetson Xavier) | LINK (multi-Xavier)
ISAAC OPEN TOOLBOX: Sensor and Actuator Drivers | Core Libraries | GEMS | Reference DNN Tools
Isaac Robot Engine on CUDA-X, running on JETSON NANO | JETSON TX2 | JETSON AGX XAVIER
Isaac Sim | Isaac Gym
43.
NVIDIA AGX
Family of Systems for Embedded AI HPC
Self-driving cars
Robotics
Smart Cities
Healthcare
47.
COMPUTATIONAL SCALE REQUIRED
3 million labeled images
1 DGX-1 trains 300k labeled images on 1 DNN in 1 day
10 DNNs required for self-driving
10 parallel experiments at all times
100 DGX-1 per car
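The slide's sizing arithmetic, spelled out. The numbers are taken directly from the slide; the one-DGX-1-per-concurrent-training-job mapping is an assumption implied by the slide rather than stated:

```python
# Back-of-envelope compute sizing for self-driving training
images_total = 3_000_000        # labeled images in the training set
images_per_dgx_day = 300_000    # 1 DGX-1 trains 300k images on 1 DNN in 1 day
dnns = 10                       # networks required for self-driving
parallel_experiments = 10       # experiments running at all times

days_per_dnn = images_total / images_per_dgx_day   # days for one pass over the data
dgx_per_car = dnns * parallel_experiments          # 1 DGX-1 per concurrent job
```

Ten networks, each with ten experiments in flight, means one hundred concurrent training jobs, hence the "100 DGX-1 per car" figure.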
49.
DRIVE CONSTELLATION
Runs DRIVE Sim Simulator
Hardware in the Loop System Level Simulator
Simulate Rare and Difficult Conditions
Scalable Platform | Data Center Solution
Timing Accurate and Bit Accurate
Virtual Reality AV Simulator
57.
TWO FUNDAMENTAL NEEDS
Fast filtering, FFTs, correlations, convolutions, resampling, etc. to process increasingly larger bandwidths of signals at increasingly fast rates and do increasingly cool stuff we couldn't do before
Artificial intelligence techniques applied to spectrum sensing, signal identification, spectrum collaboration, and anomaly detection
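The correlation workloads called out above are typically moved into the frequency domain. A self-contained sketch of the idea, using a naive O(n^2) DFT for clarity (a real GPU pipeline would use cuFFT for O(n log n)), verifying that frequency-domain cross-correlation matches the direct sum:

```python
import cmath

def dft(x):
    n = len(x)
    return [sum(x[k] * cmath.exp(-2j * cmath.pi * m * k / n) for k in range(n))
            for m in range(n)]

def idft(X):
    n = len(X)
    return [sum(X[m] * cmath.exp(2j * cmath.pi * m * k / n) for m in range(n)) / n
            for k in range(n)]

def xcorr_direct(a, b):
    # circular cross-correlation: r[lag] = sum_k a[k] * b[(k + lag) mod n]
    n = len(a)
    return [sum(a[k] * b[(k + lag) % n] for k in range(n)) for lag in range(n)]

def xcorr_freq(a, b):
    # correlation theorem for real a: DFT(r) = conj(DFT(a)) * DFT(b),
    # so one forward/inverse transform pair replaces the O(n^2) lag sum
    A, B = dft(a), dft(b)
    return [v.real for v in idft([am.conjugate() * bm for am, bm in zip(A, B)])]
```

The payoff at radio-astronomy or spectrum-sensing bandwidths is that the transform-multiply-transform route scales as n log n with an FFT, which is what makes GPU-rate signal processing feasible.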
59.
Global Modeling and Assimilation Office (GMAO)
gmao.gsfc.nasa.gov
National Aeronautics and Space Administration
XGBoost for simulating atmospheric chemistry
christoph.a.keller@nasa.gov
61.
TRANSFER LEARNING
CLARA AI TOOLKIT
Build, Manage and Deploy AI Applications for Radiology
AI-Assisted Annotation – Hours to Minutes | 10x Less Training Data Needed | 13 Pre-Trained Models
Reference Training and Deployment Pipelines | Available at developer.nvidia.com/clara
PRE-TRAINED MODELS | AI-ASSISTED ANNOTATION | AI DEPLOYMENT
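Transfer learning, as used here, means reusing a pre-trained network and adapting it with far less data: freeze the backbone, fine-tune only a small head. A miniature illustration of that split; everything below is a hypothetical toy, not the Clara API:

```python
import math

def pretrained_features(x):
    # stand-in for a frozen pre-trained backbone: a fixed nonlinear feature map
    return [math.tanh(x), math.tanh(2 * x - 1), 1.0]

def train_head(samples, labels, lr=0.5, epochs=500):
    # fine-tune only a small linear head (logistic regression via SGD);
    # the backbone above is never updated, which is the point of transfer learning
    w = [0.0, 0.0, 0.0]
    for _ in range(epochs):
        for x, y in zip(samples, labels):
            f = pretrained_features(x)
            p = 1.0 / (1.0 + math.exp(-sum(wi * fi for wi, fi in zip(w, f))))
            w = [wi + lr * (y - p) * fi for wi, fi in zip(w, f)]
    return w

def predict(w, x):
    f = pretrained_features(x)
    return 1 if sum(wi * fi for wi, fi in zip(w, f)) > 0 else 0
```

Because only the head's few parameters are learned, far fewer labeled examples are needed than training the whole network from scratch, which is the "10x less training data" claim in miniature.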
62.
End-to-End NVIDIA Deep Learning Workflow
Pre-Trained models * Annotation Assistant * Training & adaptation * Applications ready to integrate with Clara Platform
63.
CONVERGENCE OF HPC AND AI
Integrating the Third and Fourth Pillars of Scientific Discovery
Dramatically Improves Accuracy and/or Time-to-Solution at Large Scale
HPC: 40+ years of algorithms based on first-principles theory
AI: new algorithms and models with potential to increase model size and accuracy
Target problems: commercially viable fusion energy | understanding cosmological dark energy and matter | clinically viable precision medicine | improving or validating the Standard Model of physics | climate/weather forecasts with ultra-high fidelity
64.
EXASCALE AI FOR CLIMATE PREDICTION
The ability to accurately predict the path of extreme weather systems can save lives and safeguard global economies. Researchers at Lawrence Berkeley National Laboratory used a climate dataset on the Summit supercomputer with NVIDIA Volta Tensor Core GPUs to train a deep neural network to identify extreme weather patterns from high-resolution climate simulations. They achieved a performance of 1.13 exaflops, the fastest deep learning algorithm reported.
Pictured: high-quality segmentation results produced by deep learning on climate datasets. Image credit: NERSC
65.
FIVE ROADS TO GPU COMPUTING

GPU Libraries: drop-in replacements for existing libraries – cuBLAS, CUDA Math, cuSPARSE, cuRAND, cuSOLVER, nvGRAPH, cuDNN, cuFFT, Thrust

OpenACC: comment-based directives in C / C++ / Fortran; single-source-code parallelization for multiple architectures

CUDA: parallel programming model for GPUs in C, C++, Fortran, Python, MATLAB; specialized kernels for general-purpose GPU computing

RAPIDS: GPU acceleration of traditional machine learning; accelerates scikit-learn-style ML algorithms

Deep Learning: GPU-accelerated deep learning frameworks (TensorFlow, PyTorch); build GPU-accelerated functions directly from data
66.
PURPOSE-BUILT AI SUPERCOMPUTERS
AI WORKSTATION | AI DATA CENTER
NGC DL SOFTWARE STACK: universal SW for deep learning | predictable execution across platforms | pervasive reach
DGX-1 – The Essential Instrument for AI Research
DGX Station – The Personal AI Supercomputer
DGX-2 – The World's Most Powerful AI System for the Most Complex AI Challenges
68.
GET STARTED WITH NGC
Deploy containers:
ngc.nvidia.com
Learn more about NGC offering:
nvidia.com/ngc
Technical information:
developer.nvidia.com
Explore the NGC Registry for DL, ML & HPC
69.
RAPIDS
GPU Accelerated End-to-End Data Science
RAPIDS is a set of open source libraries for GPU-accelerating data preparation and machine learning.
OSS website: rapids.ai
Shared GPU memory across the pipeline: data preparation → model training → visualization
cuDF – Data Preparation | cuML – Machine Learning | cuGraph – Graph Analytics
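cuML accelerates scikit-learn-style algorithms behind the same fit/predict-shaped interface. As a stand-in for that style (pure Python on 1-D points; this is not the cuML API), a tiny k-means with the usual assignment/update loop:

```python
import random

def kmeans(points, k, iters=20, seed=0):
    # scikit-learn-style clustering on 1-D points: returns (centroids, labels);
    # cuML's KMeans exposes the same pattern, executed on the GPU
    rng = random.Random(seed)
    centroids = rng.sample(points, k)
    labels = [0] * len(points)
    for _ in range(iters):
        # assignment step: nearest centroid for every point
        labels = [min(range(k), key=lambda c: (p - centroids[c]) ** 2)
                  for p in points]
        # update step: each centroid moves to the mean of its members
        for c in range(k):
            members = [p for p, lab in zip(points, labels) if lab == c]
            if members:
                centroids[c] = sum(members) / len(members)
    return centroids, labels
```

Keeping the familiar estimator interface is what makes RAPIDS a near drop-in speedup for existing data-science code.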
72.
JETSON NANO DEVKIT & XAVIER NX SOM
Up to 21 DL TOPS (15 W) | NX: 8 GB Memory | 45x70mm
CUDA-X acceleration stack | High-resolution sensor support | Runs all CUDA-X AI models
NX available from nvidia.com and distributors worldwide in March 2020
74.
JETSON - START NOW
DEEP LEARNING INSTITUTE: training labs, nanodegrees – nvidia.com/DLI
TWO DAYS TO A DEMO: create your first demo today – developer.nvidia.com/embedded/twodaystoademo
JETSON DEVELOPER KIT: AGX Xavier Developer Kit $699, Xavier NX software patch – developer.nvidia.com/buy-jetson
GTC: largest event for GPU developers – gputechconf.com
75.
NVIDIA DEEP LEARNING INSTITUTE
Online self-paced labs and instructor-led workshops on deep learning and accelerated computing
Take self-paced labs at www.nvidia.co.uk/dlilabs
View upcoming workshops and request a workshop onsite at www.nvidia.co.uk/dli
Educators can join the University Ambassador Program to teach DLI courses on campus and access resources. Learn more at www.nvidia.com/dli
Course tracks: Fundamentals | Accelerated Computing | Game Development & Digital Content | Finance | Intelligent Video Analytics | Healthcare | Robotics | Autonomous Vehicles | Virtual Reality
76.
Don't miss the premier AI conference.
March 22-26, 2020 | San Jose, CA
CONNECT: with experts from NVIDIA, GE Healthcare, NSF, Carnegie Mellon, Google, and other leading organizations
LEARN: gain insight and valuable hands-on training through over 100 sessions
DISCOVER: see how GPU technologies are creating amazing breakthroughs in important fields such as deep learning
INNOVATE: explore disruptive innovations that can transform your work
nvidia.com/en-us/gtc/
Join us | Use VIP code NVALOWNDES for 25% off