Jensen Huang, founder and CEO of NVIDIA, discusses the rise of GPU computing and artificial intelligence. He outlines how GPUs have enabled massive performance increases for deep learning workloads. NVIDIA is introducing new products like the Tesla V100 GPU and DGX-1 server to further accelerate AI research and commercial applications. These announcements position NVIDIA to power continued growth in AI and deep learning.
OpenACC April Monthly Highlights are full of the latest OpenACC news, events, resources and more. Learn about upcoming events, including ISC, and explore GTC recorded sessions covering a variety of OpenACC topics.
GPU-Accelerating A Deep Learning Anomaly Detection Platform - NVIDIA
Learn from Satish Dandu, Michael Balint, and Joshua Patterson how to accelerate anomaly detection and inferencing using deep learning and GPU data pipelines.
Check out the latest in OpenACC this month including the PGI 18.1 release, GTC 2018 activity, paper highlights, upcoming events and a call for paper submissions.
GPU Computing with Python and Anaconda: The Next Frontier - NVIDIA
Learn how Python is becoming the glue that binds data science, how rapid integration empowers data scientists to combine new technologies, and the two primary goals in store for Anaconda.
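The "glue" idea can be made concrete with a toy sketch (standard library only; the data and names are invented for illustration, and in practice this is where packages from the Anaconda distribution such as NumPy and pandas slot in):

```python
import csv
import io
import statistics

# A tiny CSV "dataset" standing in for real data (invented for illustration).
raw = io.StringIO("sensor,reading\na,1.0\na,3.0\nb,2.0\nb,4.0\n")

# Step 1: ingest with one tool (the csv module here; pandas in practice).
rows = list(csv.DictReader(raw))

# Step 2: reshape in plain Python -- the "glue" layer that binds tools together.
by_sensor = {}
for r in rows:
    by_sensor.setdefault(r["sensor"], []).append(float(r["reading"]))

# Step 3: hand the reshaped data to an analysis routine (statistics here;
# a GPU-accelerated kernel in the settings the talk describes).
means = {k: statistics.mean(v) for k, v in by_sensor.items()}
print(means)
```

The point is only that Python sits between ingestion, reshaping, and computation, so new technologies can be swapped into any step without rewriting the others.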
Stay up-to-date with the OpenACC Monthly Highlights. February's edition covers the updated OpenACC 3.2 specification, upcoming GPU Hackathons and Bootcamps, OpenACC's BOF at SC21, recent research, new resources and more!
Stay up-to-date with the OpenACC Monthly Highlights. July's edition covers the OpenACC Summit 2021, GCC, upcoming GPU Hackathons and Bootcamps, Sunita Chandrasekaran named as PI for SOLLVE Project, recent research and more!
In this deck from the University of Houston CACDS HPC Workshop, Jeff Larkin from Nvidia presents: The Past, Present, and Future of OpenACC.
"OpenACC is an open specification for programming accelerators with compiler directives. It aims to provide a simple path for accelerating existing applications for a wide range of devices in a performance portable way. This talk will discuss the history and goals of OpenACC, how it is being used today, and what challenges it will address in the future."
Watch the video presentation: http://wp.me/p3RLHQ-dTm
Stay up-to-date with the OpenACC Monthly Highlights. July's edition covers the OpenACC Summit 2021, upcoming GPU Hackathons and Bootcamps, a PEARC21 panel review, recent research, new resources and more!
From weather and climate to seismic imaging to aeronautics, OpenACC sessions featured at GTC20 are helping to facilitate discussions, educate attendees and encourage networking and collaboration.
Sessions cover a broad range of topics: the “Meet the Experts” session enables one-on-one deep dives into using OpenACC to solve specific challenges, posters highlight how OpenACC is being applied to current science applications, and the on-demand tutorial delivers hands-on skills building.
Learn about the accomplishments and activities of the OpenACC organization over the course of 2019. This OpenACC Highlights covers the newest additions to the OpenACC leadership, the updated specification, conference participation, GPU Hackathons and more.
Developing, experimenting, and deploying ML models at scale requires substantial tooling, scripting, tracking, versioning, and monitoring.
Watch full video here: https://cnvrg.io/webinars-and-workshops/scaling-mlops-on-nvidia-dgx-systems/
Data scientists want to do data science – and are slowed down by MLOps and DevOps tasks.
They lack the user-friendly tools needed to track experiments, attach resources, manage datasets and launch multiple ML pipelines.
In this presentation, cnvrg.io CEO Yochay Ettun hosts a special guest from NVIDIA, Michael Balint, Sr. Product Manager for NVIDIA DGX systems, to discuss how to optimize the use of any NVIDIA DGX and NVIDIA GPU asset, both on-prem and in the cloud, with the cnvrg.io machine learning platform.
We will show best practices to reach high utilization of NVIDIA DGX systems, while conducting meta-scheduling across multiple heterogeneous Kubernetes/OpenShift/Linux server clusters.
In addition, we will introduce the concept of production flows, which automate hundreds of models from the data hub to deployment. We will wrap up with a real-life demo of flows, exercising many experiments across DGX platforms.
What you will learn:
- Creating a data science flow: from data to deployment, while attaching different NVIDIA DGX Kubernetes clusters to each step of the flow
- The concept of a meta-scheduler: scheduling experiments across disparate resources or other schedulers, achieving high utilization at scale
- How the NVIDIA DGX ecosystem with cnvrg.io makes GPU assets easy to consume with one click, bypassing the complexity of MLOps
- How to leverage NGC containers in ML pipelines
You can watch the full presentation along with audio and video in the link here: https://cnvrg.io/webinars-and-workshops/scaling-mlops-on-nvidia-dgx-systems/
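The meta-scheduling idea in the bullets above can be sketched as a toy greedy placement policy in Python (the cluster names, capacities, and policy are all invented for illustration; this is not cnvrg.io's actual scheduler):

```python
# Free GPU counts per cluster (a hypothetical heterogeneous fleet).
clusters = {"dgx-k8s": 8, "openshift": 4, "linux-batch": 2}

def schedule(experiments):
    """Greedily place each experiment on the cluster with the most free GPUs."""
    placement = {}
    for name, gpus_needed in experiments:
        # Policy: pick the cluster with the most free capacity right now.
        target = max(clusters, key=clusters.get)
        if clusters[target] < gpus_needed:
            placement[name] = None  # nothing fits: queue until capacity frees up
            continue
        clusters[target] -= gpus_needed  # claim the GPUs
        placement[name] = target
    return placement

jobs = [("exp-a", 4), ("exp-b", 4), ("exp-c", 4), ("exp-d", 1)]
placement = schedule(jobs)
print(placement)
```

A production meta-scheduler would also handle queueing, priorities, and data locality; the point here is only that a single policy can span several heterogeneous clusters behind one queue.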
In this deck from FOSDEM'19, Thomas Schwinge presents: Speeding up Programs with OpenACC in GCC.
"Proven in production use for decades, GCC (the GNU Compiler Collection) offers C, C++, Fortran, and other compilers for a multitude of target systems. Over the last few years, we -- formerly known as "CodeSourcery", now a group in "Mentor, a Siemens Business" -- added support for the directive-based OpenACC programming model. Requiring only a few changes to your existing source code, OpenACC allows for easy parallelization and code offloading to accelerators such as GPUs. We will present a short introduction of GCC and OpenACC, implementation status, examples, and performance results.
OpenACC is a user-driven, directive-based, performance-portable parallel programming model designed for scientists and engineers interested in porting their codes to a wide variety of heterogeneous HPC hardware platforms and architectures with significantly less programming effort than required with a low-level model."
Watch the video: https://wp.me/p3RLHQ-jOR
Learn more: https://fosdem.org/2019/
and
https://www.openacc.org/
Sign up for our insideHPC Newsletter: http://insidehpc.com/newsletter
Stay up-to-date on the latest news, events and resources for the OpenACC community. This month’s highlights cover work on applications for the new Frontier supercomputer, using OpenACC for weather forecasting, upcoming GPU Hackathons and Bootcamps, and new resources!
NVIDIA GPUs Power HPC & AI Workloads in Cloud with Univa - inside-BigData.com
In this deck from the Univa Breakfast Briefing at ISC 2018, Duncan Poole from NVIDIA describes how the company is accelerating HPC in the Cloud.
Learn more: https://www.nvidia.com/en-us/data-center/dgx-systems/
and
http://univa.com
Sign up for our insideHPC Newsletter: http://insidehpc.com/newsletter
Today’s groundbreaking scientific discoveries are taking place in HPC data centers. Using containers, researchers and scientists gain the flexibility to run HPC application containers on NVIDIA Volta-powered systems including Quadro-powered workstations, NVIDIA DGX Systems, and HPC clusters.
Implementing AI: High Performance Architectures: A Universal Accelerated Comp... - KTN
The Implementing AI: High Performance Architectures webinar, hosted by KTN and eFutures, was the fourth event in the Implementing AI webinar series.
The focus of the webinar was the impact of processing AI data on data centres - particularly from the technology perspective. Timothy Lanfear, Director of Solution Architecture and Engineering EMEA, NVIDIA, presented on a Universal Accelerated Computing Platform.
In this deck from the UK HPC Conference, Gunter Roeth from NVIDIA presents: Hardware & Software Platforms for HPC, AI and ML.
"Data is driving the transformation of industries around the world and a new generation of AI applications are effectively becoming programs that write software, powered by data, vs by computer programmers. Today, NVIDIA’s tensor core GPU sits at the core of most AI, ML and HPC applications, and NVIDIA software surrounds every level of such a modern application, from CUDA and libraries like cuDNN and NCCL embedded in every deep learning framework and optimized and delivered via the NVIDIA GPU Cloud to reference architectures designed to streamline the deployment of large scale infrastructures."
Watch the video: https://wp.me/p3RLHQ-l2Y
Learn more: http://nvidia.com
and
http://hpcadvisorycouncil.com/events/2019/uk-conference/agenda.php
Sign up for our insideHPC Newsletter: http://insidehpc.com/newsletter
In this deck from the NVIDIA GPU Technology Conference, Axel Koehler presents: Inside the Volta GPU Architecture and CUDA 9.
"The presentation will give an overview of the new NVIDIA Volta GPU architecture and the latest CUDA 9 release. The NVIDIA Volta architecture powers the world's most advanced data center GPU for AI, HPC, and Graphics. Volta features a new Streaming Multiprocessor (SM) architecture and includes enhanced features like NVLINK2 and the Multi-Process Service (MPS) that deliver major improvements in performance, energy efficiency, and ease of programmability. New features like Independent Thread Scheduling and the Tensor Cores enable Volta to simultaneously deliver the fastest and most accessible performance. CUDA is NVIDIA's parallel computing platform and programming model. You'll learn about new programming model enhancements and performance improvements in the latest CUDA 9 release."
Watch the video: https://wp.me/p3RLHQ-iB7
Learn more: https://www.nvidia.com/en-us/gtc/
Sign up for our insideHPC Newsletter: http://insidehpc.com/newsletter
NVIDIA founder and CEO Jensen Huang took the stage in Munich — one of the hubs of the global auto industry — to introduce a powerful new AI computer for fully autonomous vehicles and a new VR application for those who design them.
In this deck from Switzerland HPC Conference, Gunter Roeth from NVIDIA presents: Deep Learning on the SaturnV Cluster.
"Machine Learning is among the most important developments in the history of computing. Deep learning is one of the fastest growing areas of machine learning and a hot topic in both academia and industry. It has dramatically improved the state-of-the-art in areas such as speech recognition, computer vision, predicting the activity of drug molecules, and many other machine learning tasks. The basic idea of deep learning is to automatically learn to represent data in multiple layers of increasing abstraction, thus helping to discover intricate structure in large datasets. NVIDIA has invested in SaturnV, a large GPU-accelerated cluster (#28 on the November 2016 Top500 list), to support internal machine learning projects. After an introduction to deep learning on GPUs, we will address a selection of open questions programmers and users may face when using deep learning for their work on these clusters."
Watch the video: http://wp.me/p3RLHQ-gDv
Learn more: http://www.nvidia.com/object/dgx-saturnv.html
and
http://hpcadvisorycouncil.com/events/2017/swiss-workshop/agenda.php
Sign up for our insideHPC Newsletter: http://insidehpc.com/newsletter
This talk was given at H2O World 2018 NYC and can be viewed here: https://youtu.be/rKoBJcnsFpM
Speaker's Bio:
Mateusz is a software developer who loves all things distributed and machine learning, and hates buzzwords. His favourite hobby is data juggling. He obtained his M.Sc. in Computer Science from AGH UST in Krakow, Poland, during which he did an exchange at L’ECE Paris in France and worked on distributed flight booking systems. After graduation he moved to Tokyo to work as a researcher at Fujitsu Laboratories on machine learning and NLP projects, where he is still currently based. In his spare time he tries to be part of the IT community by organizing, attending, and speaking at conferences and meetups.
NVIDIA CEO Jensen Huang Presentation at Supercomputing 2019 - NVIDIA
Broadening support for GPU-accelerated supercomputing to a fast-growing new platform, NVIDIA founder and CEO Jensen Huang introduced a reference design for building GPU-accelerated Arm servers, with wide industry backing.
H2O World 2017 Keynote - Jim McHugh, VP & GM of Data Center, NVIDIA - Sri Ambati
Presented at #H2OWorld 2017 in Mountain View, CA.
Enjoy the recording: https://youtu.be/NyaJ7uDroww.
Learn more about H2O.ai: https://www.h2o.ai/.
Follow @h2oai: https://www.twitter.com/h2oai.
Harnessing the virtual realm for successful real world artificial intelligence - Alison B. Lowndes
Artificial Intelligence is impacting all areas of society, from healthcare and transportation to smart cities and energy. Learn how NVIDIA invests both in internal pure research and in accelerated computation to enable its diverse customer base across gaming & extended reality, graphics, AI, robotics, simulation, high performance scientific computing, healthcare & more. You will be introduced to the GPU computing platform and shown real-world, successfully deployed applications, as well as a glimpse of the current state of the art across academia, enterprise and startups.
Similar to GTC 2017: Powering the AI Revolution
We pioneered accelerated computing to tackle challenges no one else can solve. Now, the AI moment has arrived. Discover how our work in AI and the metaverse is profoundly impacting society and transforming the world’s largest industries.
Promising to transform trillion-dollar industries and address the “grand challenges” of our time, NVIDIA founder and CEO Jensen Huang shared a vision of an era where intelligence is created on an industrial scale and woven into real and virtual worlds at GTC 2022.
Our passion is to inspire and enable the da Vincis and Einsteins of our time, so they can see and create the future. We pioneered graphics, accelerated computing, and AI to tackle challenges ordinary computers cannot solve. See how we're continuously inventing the future--from our early days as a chip maker to transformers of the Metaverse.
Outlining a sweeping vision for the “age of AI,” NVIDIA CEO Jensen Huang Monday kicked off the GPU Technology Conference.
Huang made major announcements in data centers, edge AI, collaboration tools and healthcare in a talk simultaneously released in nine episodes, each under 10 minutes.
“AI requires a whole reinvention of computing – full-stack rethinking – from chips, to systems, algorithms, tools, the ecosystem,” Huang said, standing in front of the stove of his Silicon Valley home.
Behind a series of announcements touching on everything from healthcare to robotics to videoconferencing, Huang’s underlying story was simple: AI is changing everything, which has put NVIDIA at the intersection of changes that touch every facet of modern life.
More and more of those changes can be seen first in Huang’s kitchen, with its playful bouquet of colorful spatulas, which has served as the increasingly familiar backdrop for announcements throughout the COVID-19 pandemic.
“NVIDIA is a full stack computing company – we love working on extremely hard computing problems that have great impact on the world – this is right in our wheelhouse,” Huang said. “We are all-in, to advance and democratize this new form of computing – for the age of AI.”
This GTC is one of the biggest yet. It features more than 1,000 sessions—400 more than the last GTC—in 40 topic areas. And it’s the first to run across the world’s time zones, with sessions in English, Chinese, Korean, Japanese, and Hebrew.
The Best of AI and HPC in Healthcare and Life Sciences - NVIDIA
Trends. Success stories. Training. Networking.
The GPU Technology Conference brings this all to one place. Meet the people pioneering the future of healthcare and life sciences and learn how to apply the latest AI and HPC tools to your research.
NVIDIA BioBERT, an optimized version of BioBERT, was created specifically for the biomedical and clinical domains, giving this community easy access to state-of-the-art NLP models.
Top 5 Deep Learning and AI Stories - August 30, 2019 - NVIDIA
Read the top five news stories in artificial intelligence and learn how innovations in AI are transforming business across industries like healthcare and finance, and how your business can derive tangible benefits by implementing AI the right way.
Seven Ways to Boost Artificial Intelligence Research - NVIDIA
Higher education institutions have long been the backbone of scientific breakthroughs, view this slideshare to learn seven easy ways to help elevate your research.
Learn about the benefits of joining the NVIDIA Developer Program and the resources available to you as a registered developer. This slideshare also covers the steps for getting started in the program, as well as an overview of the developer engagement platforms at your disposal. Learn more: developer.nvidia.com/join
If you were unable to attend GTC 2019 or couldn't make it to all of the sessions you had on your list, check out the top four DGX POD sessions from the conference on-demand.
In this special edition of "This week in Data Science," we focus on the top 5 sessions for data scientists from GTC 2019, with links to the free sessions available on demand.
This Week in Data Science - Top 5 News - April 26, 2019 - NVIDIA
What's new in data science? Flip through this week's Top 5 to read a report on the most coveted skills for data scientists, top universities building AI labs, data science workstations for AI deployment, and more.
NVIDIA CEO Jensen Huang's keynote address at the GPU Technology Conference 2019 (#GTC19) in Silicon Valley, where he introduced breakthroughs in pro graphics with NVIDIA Omniverse; in data science with NVIDIA-powered Data Science Workstations; in inference and enterprise computing with NVIDIA T4 GPU-powered servers; in autonomous machines with NVIDIA Jetson Nano and the NVIDIA Isaac SDK; in autonomous vehicles with NVIDIA Safety Force Field and DRIVE Constellation; and much more.
Check out these DLI training courses at GTC 2019 designed for developers, data scientists & researchers looking to solve the world’s most challenging problems with accelerated computing.
Transforming Healthcare at GTC Silicon Valley - NVIDIA
The GPU Technology Conference (GTC) brings together the leading minds in AI and healthcare that are driving advances in the industry - from top radiology departments and medical research institutions to the hottest startups from around the world. Don't miss the panels and trainings at GTC Silicon Valley.
Stay up-to-date on the latest news, events and resources for the OpenACC community. This month’s highlights cover the upcoming NVIDIA GTC 2019, the complete schedule of GPU hackathons and more!
Kubernetes & AI - Beauty and the Beast !?! @KCD Istanbul 2024 - Tobias Schneck
As AI technology pushes into IT, I found myself wondering, as an “infrastructure container Kubernetes guy”, how this fancy AI technology gets managed from an infrastructure operations point of view. Is it possible to apply our beloved cloud-native principles as well? What benefits could the two technologies bring to each other?
Let me take these questions and guide you on a short journey through existing deployment models and use cases for AI software. Using practical examples, we discuss what cloud or on-premise strategy we may need to apply them to our own infrastructure and make them work from an enterprise perspective. I will give an overview of infrastructure requirements and technologies, and of what could benefit or limit your AI use cases in an enterprise environment. An interactive demo will give you some insights into the approaches I have already gotten working for real.
Transcript: Selling digital books in 2024: Insights from industry leaders - T... - BookNet Canada
The publishing industry has been selling digital audiobooks and ebooks for over a decade and has found its groove. What’s changed? What has stayed the same? Where do we go from here? Join a group of leading sales peers from across the industry for a conversation about the lessons learned since the popularization of digital books, best practices, digital book supply chain management, and more.
Link to video recording: https://bnctechforum.ca/sessions/selling-digital-books-in-2024-insights-from-industry-leaders/
Presented by BookNet Canada on May 28, 2024, with support from the Department of Canadian Heritage.
Dev Dives: Train smarter, not harder – active learning and UiPath LLMs for do... - UiPathCommunity
💥 Speed, accuracy, and scaling – discover the superpowers of GenAI in action with UiPath Document Understanding and Communications Mining™:
See how to accelerate model training and optimize model performance with active learning
Learn about the latest enhancements to out-of-the-box document processing – with little to no training required
Get an exclusive demo of the new family of UiPath LLMs – GenAI models specialized for processing different types of documents and messages
This is a hands-on session specifically designed for automation developers and AI enthusiasts seeking to enhance their knowledge in leveraging the latest intelligent document processing capabilities offered by UiPath.
Speakers:
👨🏫 Andras Palfi, Senior Product Manager, UiPath
👩🏫 Lenka Dulovicova, Product Program Manager, UiPath
Neuro-symbolic is not enough, we need neuro-*semantic* - Frank van Harmelen
Neuro-symbolic (NeSy) AI is on the rise. However, simply machine learning on just any symbolic structure is not sufficient to really harvest the gains of NeSy. These will only be gained when the symbolic structures have an actual semantics. I give an operational definition of semantics as “predictable inference”.
All of this illustrated with link prediction over knowledge graphs, but the argument is general.
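As a minimal, invented illustration of semantics as "predictable inference": link prediction over a toy knowledge graph using an explicit transitivity rule, whose inferences can be stated in advance (unlike an opaque learned scorer):

```python
# Toy knowledge graph as (head, relation, tail) triples.
triples = {
    ("amsterdam", "located_in", "netherlands"),
    ("netherlands", "located_in", "europe"),
}

def predict_links(kg, relation):
    """Infer new edges from the transitivity of `relation`.

    Because the rule is explicit, every predicted link is predictable:
    we know ahead of time exactly which inferences the semantics licenses.
    """
    inferred = set(kg)
    changed = True
    while changed:  # iterate to a fixed point
        changed = False
        for (a, r1, b) in list(inferred):
            for (c, r2, d) in list(inferred):
                if r1 == r2 == relation and b == c:
                    new = (a, relation, d)
                    if new not in inferred:
                        inferred.add(new)
                        changed = True
    return inferred - kg  # only the newly predicted links

print(predict_links(triples, "located_in"))
```

Here the symbolic structure carries an actual semantics: the single predicted link (amsterdam, located_in, europe) follows necessarily from the rule, which is the kind of predictability the talk argues neuro-symbolic systems need.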
GDG Cloud Southlake #33: Boule & Rebala: Effective AppSec in SDLC using Deplo... - James Anderson
Effective Application Security in Software Delivery lifecycle using Deployment Firewall and DBOM
The modern software delivery process (or the CI/CD process) includes many tools, distributed teams, open-source code, and cloud platforms. A constant focus on speed to release software to market, combined with traditionally slow and manual security checks, has caused gaps in continuous security, an important piece of the software supply chain. Today, organizations feel more susceptible to external and internal cyber threats due to the vast attack surface of their applications' supply chain and the lack of end-to-end governance and risk management.
The software team must secure its software delivery process to avoid vulnerability and security breaches. This needs to be achieved with existing tool chains and without extensive rework of the delivery processes. This talk will present strategies and techniques for providing visibility into the true risk of the existing vulnerabilities, preventing the introduction of security issues in the software, resolving vulnerabilities in production environments quickly, and capturing the deployment bill of materials (DBOM).
Speakers:
Bob Boule
Robert Boule is a technology enthusiast with a passion for making things work, along with a knack for helping others understand how things work. He comes with around 20 years of solution engineering experience in application security, software continuous delivery, and SaaS platforms. He is known for his dynamic presentations on CI/CD and application security integrated into the software delivery lifecycle.
Gopinath Rebala
Gopinath Rebala is the CTO of OpsMx, where he has overall responsibility for the machine learning and data processing architectures for Secure Software Delivery. Gopi also has a strong connection with our customers, leading design and architecture for strategic implementations. Gopi is a frequent speaker and well-known leader in continuous delivery and integrating security into software delivery.
LF Energy Webinar: Electrical Grid Modelling and Simulation Through PowSyBl -... - DanBrown980551
Do you want to learn how to model and simulate an electrical network from scratch in under an hour?
Then welcome to this PowSyBl workshop, hosted by Rte, the French Transmission System Operator (TSO)!
During the webinar, you will discover the PowSyBl ecosystem as well as handle and study an electrical network through an interactive Python notebook.
PowSyBl is an open source project hosted by LF Energy, which offers a comprehensive set of features for electrical grid modelling and simulation. Among other advanced features, PowSyBl provides:
- A fully editable and extendable library for grid component modelling;
- Visualization tools to display your network;
- Grid simulation tools, such as power flows, security analyses (with or without remedial actions) and sensitivity analyses;
The framework is mostly written in Java, with a Python binding so that Python developers can access PowSyBl functionalities as well.
What you will learn during the webinar:
- For beginners: discover PowSyBl's functionalities through a quick general presentation and the notebook, without needing any expert coding skills;
- For advanced developers: master the skills to efficiently apply PowSyBl functionalities to your real-world scenarios.
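To give a flavor of the simplest of the simulation tools listed above, here is a toy DC power flow for a two-bus network in plain Python (the numbers are invented for illustration; PowSyBl's actual load-flow engines handle full AC and DC models of real networks):

```python
# Toy 2-bus DC power flow: bus 0 is the slack bus (angle fixed at 0 rad),
# bus 1 carries a 1.0 p.u. load, and a single line connects the two buses.
b = 10.0    # line susceptance, in p.u.
p1 = -1.0   # net active-power injection at bus 1 (a load, hence negative)

# DC approximation: p = B * theta. With one unknown angle, the linear
# system reduces to a single division.
theta1 = p1 / b

# Active power flowing over the line from the slack bus toward bus 1
# must equal the load it serves.
flow_0_to_1 = b * (0.0 - theta1)

print(theta1, flow_0_to_1)
```

The same calculation on a real grid is a large sparse linear system; tools like PowSyBl solve it (and the nonlinear AC variant) over full network models with thousands of buses.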
Builder.ai Founder Sachin Dev Duggal's Strategic Approach to Create an Innova... - Ramesh Iyer
In today's fast-changing business world, Companies that adapt and embrace new ideas often need help to keep up with the competition. However, fostering a culture of innovation takes much work. It takes vision, leadership and willingness to take risks in the right proportion. Sachin Dev Duggal, co-founder of Builder.ai, has perfected the art of this balance, creating a company culture where creativity and growth are nurtured at each stage.
Smart TV Buyer Insights Survey 2024 by 91mobiles.pdf91mobiles
91mobiles recently conducted a Smart TV Buyer Insights Survey in which we asked over 3,000 respondents about the TV they own, aspects they look at on a new TV, and their TV buying preferences.
UiPath Test Automation using UiPath Test Suite series, part 4DianaGray10
Welcome to UiPath Test Automation using UiPath Test Suite series part 4. In this session, we will cover Test Manager overview along with SAP heatmap.
The UiPath Test Manager overview with SAP heatmap webinar offers a concise yet comprehensive exploration of the role of a Test Manager within SAP environments, coupled with the utilization of heatmaps for effective testing strategies.
Participants will gain insights into the responsibilities, challenges, and best practices associated with test management in SAP projects. Additionally, the webinar delves into the significance of heatmaps as a visual aid for identifying testing priorities, areas of risk, and resource allocation within SAP landscapes. Through this session, attendees can expect to enhance their understanding of test management principles while learning practical approaches to optimize testing processes in SAP environments using heatmap visualization techniques
What will you get from this session?
1. Insights into SAP testing best practices
2. Heatmap utilization for testing
3. Optimization of testing processes
4. Demo
Topics covered:
Execution from the test manager
Orchestrator execution result
Defect reporting
SAP heatmap example with demo
Speaker:
Deepak Rai, Automation Practice Lead, Boundaryless Group and UiPath MVP
GraphRAG is All You need? LLM & Knowledge GraphGuy Korland
Guy Korland, CEO and Co-founder of FalkorDB, will review two articles on the integration of language models with knowledge graphs.
1. Unifying Large Language Models and Knowledge Graphs: A Roadmap.
https://arxiv.org/abs/2306.08302
2. Microsoft Research's GraphRAG paper and a review paper on various uses of knowledge graphs:
https://www.microsoft.com/en-us/research/blog/graphrag-unlocking-llm-discovery-on-narrative-private-data/
Generating a custom Ruby SDK for your web service or Rails API using Smithyg2nightmarescribd
Have you ever wanted a Ruby client API to communicate with your web service? Smithy is a protocol-agnostic language for defining services and SDKs. Smithy Ruby is an implementation of Smithy that generates a Ruby SDK using a Smithy model. In this talk, we will explore Smithy and Smithy Ruby to learn how to generate custom feature-rich SDKs that can communicate with any web service, such as a Rails JSON API.
2. LIFE AFTER MOORE'S LAW
[Chart: 40 Years of Microprocessor Trend Data, 1980–2020 (log scale, 10^2–10^7). Transistor counts (thousands) keep climbing, while single-threaded performance growth slows from 1.5X per year to 1.1X per year. Original data up to 2010 collected and plotted by M. Horowitz, F. Labonte, O. Shacham, K. Olukotun, L. Hammond, and C. Batten; new plot and data for 2010–2015 by K. Rupp.]
3. RISE OF GPU COMPUTING
[Chart: the same 1980–2020 trend data (log scale, 10^2–10^7), with GPU-computing performance growing 1.5X per year — on track for 1000X by 2025 — while single-threaded performance grows 1.1X per year. Original data up to 2010 by M. Horowitz, F. Labonte, O. Shacham, K. Olukotun, L. Hammond, and C. Batten; 2010–2015 data by K. Rupp.]
The GPU computing stack: APPLICATIONS | SYSTEMS | ALGORITHMS | CUDA | ARCHITECTURE
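The 1000X-by-2025 projection is just compound growth. A quick sanity check (a sketch; the 17-year window is an assumption about where the curve is measured from):

```python
def compound(rate, years):
    """Total speedup after `years` of `rate`X annual growth."""
    return rate ** years

# ~17 years of 1.5X/year GPU-computing growth compounds to roughly 1000X,
# while 1.1X/year single-threaded growth manages only about 5X.
gpu = compound(1.5, 17)
cpu = compound(1.1, 17)
print(f"GPU: {gpu:.0f}X  CPU: {cpu:.0f}X")  # GPU: 985X  CPU: 5X
```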
4. RISE OF GPU COMPUTING
GPU Developers: 11X in 5 years — 511,000 in 2017, up from 2012
GTC Attendees: 3X in 5 years — 7,000 in 2017, up from 2012
CUDA Downloads in 2016: 1M+
6. ERA OF MACHINE LEARNING
"A Quest for Intelligence" — Fei-Fei Li
"The Master Algorithm" — Pedro Domingos
7. BIG BANG OF MODERN AI
[Timeline of milestones: Auto Encoders | GAN | LSTM | IDSIA CNN on GPU | Stanford & NVIDIA large-scale DNN on GPU | U Toronto AlexNet on GPU | ImageNet | Captioning | NVIDIA BB8 | Style Transfer | BRETT | Google Photo | Arterys FDA approved | AlphaGo | Super Resolution | Baidu Deep Voice | Baidu DuLight | NMT | Superhuman ASR | Reinforcement Learning | Transfer Learning]
10. BIG BANG OF MODERN AI
[Charts: Udacity students in AI programs up 100X in 2 years; NIPS, ICML, CVPR, and ICLR attendance up 2X in 2 years, to roughly 20,000; AI startup investment up 9X in 4 years, reaching $5B in 2016 (from 2012).]
12. NVIDIA INCEPTION — 1,300 DEEP LEARNING STARTUPS
Healthcare | Business Intelligence & Visualization | Development Platforms | Retail, Etail | IoT & Manufacturing | Platforms & APIs | Data Management | AEC | Financial | Security, IVA | Cyber | Autonomous Machines
13. SAP: AI FOR THE ENTERPRISE
First commercial AI offerings from SAP
Brand Impact, Service Ticketing, and Invoice-to-Record applications
Powered by NVIDIA GPUs on DGX-1 and AWS
14. MODEL COMPLEXITY IS EXPLODING
2015 — Microsoft ResNet: 7 ExaFLOPS, 60 million parameters
2016 — Baidu Deep Speech 2: 20 ExaFLOPS, 300 million parameters
2017 — Google NMT: 105 ExaFLOPS, 8.7 billion parameters
15. ANNOUNCING TESLA V100 — GIANT LEAP FOR AI & HPC
VOLTA WITH NEW TENSOR CORE
21B transistors | TSMC 12nm FFN | 815 mm²
5,120 CUDA cores
7.5 FP64 TFLOPS | 15 FP32 TFLOPS
NEW 120 Tensor TFLOPS
20MB SM RF | 16MB Cache
16GB HBM2 @ 900 GB/s
300 GB/s NVLink
16. NEW TENSOR CORE
New CUDA TensorOp instructions & data formats
4x4 matrix processing array
D[FP32] = A[FP16] * B[FP16] + C[FP32]
Optimized for deep learning
[Diagram: activation inputs and weight inputs feed the array; output results accumulate.]
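A rough model of what a single Tensor Core operation computes — a NumPy sketch, not the hardware datapath: the 4x4 operand tiles are FP16, and accumulation runs in FP32.

```python
import numpy as np

# Model of one Tensor Core op on a 4x4 tile:
#   D[FP32] = A[FP16] * B[FP16] + C[FP32]
# Upcasting A and B before the matmul mimics the hardware, which forms
# full-precision products of FP16 operands and accumulates in FP32.
def tensor_core_mma(a_fp16, b_fp16, c_fp32):
    return a_fp16.astype(np.float32) @ b_fp16.astype(np.float32) + c_fp32

rng = np.random.default_rng(0)
a = rng.standard_normal((4, 4)).astype(np.float16)   # activations
b = rng.standard_normal((4, 4)).astype(np.float16)   # weights
c = np.zeros((4, 4), dtype=np.float32)               # running accumulator
d = tensor_core_mma(a, b, c)
print(d.dtype, d.shape)  # float32 (4, 4)
```

In CUDA proper, this corresponds to the WMMA/TensorOp instructions the slide mentions, which operate on larger fragments per warp.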
17. ANNOUNCING TESLA V100 — GIANT LEAP FOR AI & HPC
VOLTA WITH NEW TENSOR CORE
Compared to Pascal:
1.5X general-purpose FLOPS for HPC
12X Tensor FLOPS for DL training
6X Tensor FLOPS for DL inferencing
21. ANNOUNCING NEW FRAMEWORK RELEASES FOR VOLTA
[Charts of training time in hours: CNN training (ResNet-50) on K80, P100, and V100; multi-node ResNet-50 training with NCCL 2.0 on 8x P100, 8x V100, and 64x V100; LSTM training (neural machine translation) on 8x K80, 8x P100, and 8x V100 — each generation cutting hours sharply.]
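Multi-node scaling with NCCL rests on all-reducing gradients across GPUs, and NCCL's workhorse for this is the ring all-reduce. A toy simulation of the algorithm with in-process "workers" (pure Python for illustration — not NCCL's actual API):

```python
def ring_allreduce(grads):
    """Simulated ring all-reduce: each of n workers starts with its own
    gradient list and ends with the elementwise sum, moving only
    2*(n-1)/n of the data per worker instead of the whole vector."""
    n = len(grads)
    size = len(grads[0])
    assert size % n == 0, "vector length must divide evenly into n chunks"
    cs = size // n
    chunks = [list(g) for g in grads]

    def sl(c):
        return slice(c * cs, (c + 1) * cs)

    # Phase 1, reduce-scatter: after n-1 steps, worker w holds the
    # complete sum for chunk (w+1) % n.
    for step in range(n - 1):
        nxt = [c[:] for c in chunks]          # all sends happen "at once"
        for w in range(n):
            c = sl((w - step) % n)            # chunk w forwards this step
            dst = (w + 1) % n
            nxt[dst][c] = [a + b for a, b in zip(chunks[dst][c], chunks[w][c])]
        chunks = nxt

    # Phase 2, all-gather: circulate each completed chunk around the ring.
    for step in range(n - 1):
        nxt = [c[:] for c in chunks]
        for w in range(n):
            c = sl((w + 1 - step) % n)
            nxt[(w + 1) % n][c] = chunks[w][c]
        chunks = nxt
    return chunks

grads = [[1, 2, 3, 4], [10, 20, 30, 40],
         [100, 200, 300, 400], [1000, 2000, 3000, 4000]]
print(ring_allreduce(grads)[0])  # [1111, 2222, 3333, 4444]
```

The bandwidth-optimality of the ring (each worker transmits only 2*(n-1)/n of the gradient) is why data-parallel training scales to the 64-GPU configuration in the chart.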
22. ANNOUNCING NVIDIA DGX-1 WITH TESLA V100
ESSENTIAL INSTRUMENT OF AI RESEARCH
960 Tensor TFLOPS | 8x Tesla V100 | NVLink Hybrid Cube
From 8 days on TITAN X to 8 hours
400 servers in a box
23. $149,000 | Order today: nvidia.com/DGX-1
25. ANNOUNCING NVIDIA DGX STATION
PERSONAL DGX
480 Tensor TFLOPS | 4x Tesla V100 16GB
NVLink Fully Connected | 3x DisplayPort
1500W | Water Cooled
$69,000 | Order today: nvidia.com/DGX-Station
26. ANNOUNCING HGX-1 WITH TESLA V100
VERSATILE GPU CLOUD COMPUTING
8x Tesla V100 with NVLink Hybrid Cube
Configurable CPU:GPU ratios — 2C:8G | 2C:4G | 1C:2G
NVIDIA Deep Learning, GRID graphics, and CUDA HPC stacks
27. ANNOUNCING TENSORRT FOR TENSORFLOW
Compiler for deep learning inferencing
Graph optimizations for vertical and horizontal layer fusion | GPU-specific optimizations
Import models from Caffe and TensorFlow
[Diagrams, slides 27–31: an Inception-style subgraph — parallel 1x1, 3x3, and 5x5 convolution branches, each followed by bias and ReLU, plus a max-pool branch — is optimized step by step. Vertical fusion first collapses each conv + bias + ReLU chain into a single "CBR" node; horizontal fusion then merges the parallel 1x1 CBR nodes; finally the concat layer is eliminated, leaving a much smaller graph between the previous and next layers.]
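A toy illustration of the vertical-fusion idea for one branch. A 1x1 convolution over C channels is a per-pixel matrix multiply, so conv + bias + ReLU ("CBR") computes the same result whether it runs as three separate passes over memory or one fused pass — the fused form just avoids two kernel launches and intermediate writes. A NumPy sketch of the equivalence (not TensorRT's implementation):

```python
import numpy as np

def unfused(x, w, b):
    y = np.einsum("oc,chw->ohw", w, x)   # pass 1: 1x1 conv
    y = y + b[:, None, None]             # pass 2: bias
    return np.maximum(y, 0.0)            # pass 3: ReLU

def fused_cbr(x, w, b):
    # One expression -> one kernel, in an inference compiler's terms.
    return np.maximum(np.einsum("oc,chw->ohw", w, x) + b[:, None, None], 0.0)

rng = np.random.default_rng(1)
x = rng.standard_normal((3, 4, 4))       # input: C, H, W
w = rng.standard_normal((8, 3))          # 1x1 conv: out_ch x in_ch
b = rng.standard_normal(8)
assert np.allclose(unfused(x, w, b), fused_cbr(x, w, b))
```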
33. THE CASE FOR GPU-ACCELERATED DATACENTERS
A 300K inferences/sec datacenter at 300 inferences/sec per CPU needs over 1,000 CPUs — more than 500 nodes, which at $3K and 500W per node comes to $1.5M and 250KW.
Tesla V100 reduces 500 CPU server nodes to 33 GPU-accelerated nodes: a 15X reduction.
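The slide's datacenter arithmetic, written out (the dual-socket assumption — 2 CPUs per node — is inferred from "1K CPUs > 500 nodes"):

```python
# Sizing a 300K inferences/sec datacenter on CPUs, per the slide's figures.
target_inf_s = 300_000
cpu_inf_s = 300                          # throughput per CPU
cpus = target_inf_s // cpu_inf_s         # -> 1,000 CPUs
cpu_nodes = cpus // 2                    # dual-socket nodes (assumption)
cost = cpu_nodes * 3_000                 # at $3K per node -> $1.5M
power_kw = cpu_nodes * 500 / 1000        # at 500W per node -> 250 KW

gpu_nodes = cpu_nodes // 15              # claimed 15X node reduction -> 33
print(cpus, cpu_nodes, cost, power_kw, gpu_nodes)
```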
34. NVIDIA DEEP LEARNING STACK
Deep Learning Frameworks
Deep Learning Libraries — NVIDIA cuDNN, NCCL, cuBLAS, TensorRT
CUDA Driver
Operating System
GPU System
35. 35
Registry of
Containers, Datasets,
and Pre-trained models
NVIDIA
GPU CLOUD
CSPs
ANNOUNCING
NVIDIA GPU CLOUD
Containerized in NVDocker | Optimization across the full stack
Always up-to-date | Fully tested and maintained by NVIDIA | Beta in July
GPU-accelerated Cloud Platform Optimized for Deep Learning
36. POWER OF GPU COMPUTING
[Charts: AMBER performance (ns/day, 0–40) rising from K20 with AMBER 12/CUDA 4 (2013) through K40 (AMBER 14/CUDA 5, 2014) and K80 (AMBER 14/CUDA 6, 2015) to P100 with AMBER 16/CUDA 8 (2016); GoogleNet performance (images/sec, 0–12,000) rising from 8x K80 with cuDNN 2/CUDA 6 (2014) through 8x Maxwell (cuDNN 4/CUDA 7, 2015) and DGX-1 (cuDNN 6/CUDA 8/NCCL 1.6, 2016) to DGX-1V with cuDNN 7/CUDA 9/NCCL 2 (2017).]
41. XAVIER — AI PROCESSOR FOR AUTONOMOUS MACHINES
30 TOPS DL | 30W
Custom ARM64 CPU | 512-core Volta GPU | 10 TOPS DL Accelerator
[Chart: energy efficiency rising from general-purpose architectures (CPU, FPGA) to domain-specific accelerators — CUDA GPU (Pascal, Volta) and DLA.]
42. [The same chart, highlighting the combination CUDA GPU (Volta) + DLA.]
43. ANNOUNCING XAVIER DLA — NOW OPEN SOURCE
Early Access July | General Release September
[Block diagram: a command interface and tensor-execution micro-controller drive the pipeline. Input DMA pulls activations and weights through the memory interface into a unified 512KB input buffer; sparse weight decompression and a native Winograd input transform feed the MAC array (2048 Int8, 1024 Int16, or 1024 FP16); output accumulators pass results to an output postprocessor (activation function, pooling, etc.) and output DMA.]
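At the heart of the diagram is the MAC (multiply-accumulate) array. A minimal sketch of what one Int8 lane does — multiply 8-bit activations by 8-bit weights while accumulating in a wider integer so individual products never overflow (Python for illustration; hardware uses an Int32-class accumulator):

```python
def int8_mac(activations, weights):
    """Dot product of Int8 operands with a wide accumulator."""
    assert all(-128 <= v <= 127 for v in activations + weights)
    acc = 0                               # wide accumulator
    for a, w in zip(activations, weights):
        acc += a * w                      # each product fits in 16 bits
    return acc

print(int8_mac([127, -128, 64], [127, 127, 2]))  # 16129 - 16256 + 128 = 1
```

Quantizing weights and activations to Int8 is what lets the array pack 2048 such operations per cycle, versus 1024 at 16-bit precision.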
46. ANNOUNCING ISAAC ROBOT SIMULATOR
[Diagram: the ISAAC robot simulator runs on an NVIDIA GPU computer alongside OpenAI Gym, a robot & environment definition, and a virtual Jetson.]
47. [The same diagram, adding the agent loop: SEE, HEAR, TOUCH → LEARN & PLAN → ACT.]
48. NVIDIA POWERING THE AI REVOLUTION
Tesla V100 | Tensor Core | TensorRT | DGX-1 and DGX Station | Xavier DLA Open Source | NVIDIA GPU Cloud | NVIDIA GPU in Every Cloud | ISAAC