As artificial intelligence sweeps across the technology landscape, NVIDIA unveiled today at its annual GPU Technology Conference a series of new products and technologies focused on deep learning, virtual reality and self-driving cars.
At CES 2016, we made a series of announcements highlighting our work to advance the biggest trends in the industry — self-driving cars, artificial intelligence and virtual reality. The focus of our news was NVIDIA DRIVE, an end-to-end deep learning platform for self-driving cars.
At a press event kicking off CES 2016, we unveiled artificial intelligence technology that will let cars sense the world around them and pilot a safe route forward.
Dressed in his trademark black leather jacket, speaking to a crowd of some 400 automakers, media and analysts, NVIDIA CEO Jen-Hsun Huang revealed DRIVE PX 2, an automotive supercomputing platform that processes 24 trillion deep learning operations a second. That’s 10 times the performance of the first-generation DRIVE PX, now being used by more than 50 companies in the automotive world.
The new DRIVE PX 2 delivers 8 teraflops of processing power, the equivalent of 150 MacBook Pros. And it's the size of a lunchbox, in contrast to other autonomous-driving technology in use today, which takes up the entire trunk of a mid-sized sedan.
“Self-driving cars will revolutionize society,” Huang said at the beginning of his talk. “And NVIDIA’s vision is to enable them.”
Kicking off the first in a series of global GPU Technology Conferences, NVIDIA co-founder and CEO Jen-Hsun Huang today at GTC China unveiled technology that will accelerate the deep learning revolution sweeping across industries. Huang spoke in front of a crowd of more than 2,500 scientists, engineers, entrepreneurs and press, gathered in Beijing for a day devoted to deep learning and AI. On stage he announced the Tesla P4 and P40 GPU accelerators for inferencing production workloads for AI services, and a small, energy-efficient AI supercomputer for highway driving — the NVIDIA DRIVE PX 2 for AutoCruise.
NVIDIA Deep Learning Institute 2017 Keynote - NVIDIA Japan
These slides are from the keynote at "NVIDIA Deep Learning Institute 2017," held at Bellesalle Takadanobaba on Tuesday, January 17, 2017, delivered by Bill Dally, NVIDIA Chief Scientist and SVP of Research.
NVIDIA CEO Jen-Hsun Huang introduces NVLink and shares a roadmap of the GPU. Primary topics also include an introduction of the GeForce GTX Titan Z, CUDA for machine learning, and Iray VCA.
Nvidia Deep Learning Solutions - Alex Sabatier - Sri Ambati
Alex Sabatier from Nvidia talks about the future of deep learning from a chipmaker's perspective.
- Powered by the open source machine learning software H2O.ai. Contributors welcome at: https://github.com/h2oai
- To view videos on H2O open source machine learning software, go to: https://www.youtube.com/user/0xdata
Supercomputing has swept rapidly from the far edges of science to the heart of our everyday lives. And propelling it forward – bringing it into the mobile phone already in your pocket and the car in your driveway – is GPU acceleration, NVIDIA CEO Jen-Hsun Huang told a packed house at a rollicking event kicking off this week’s SC15 annual supercomputing show in Austin. The event draws 10,000 researchers, national lab directors and others from around the world.
Opening Keynote at GTC 2015: Leaps in Visual Computing - NVIDIA
NVIDIA CEO and co-founder Jen-Hsun Huang took the stage for the GPU Technology Conference in the San Jose Convention Center to present some major announcements on March 17, 2015. You'll find out how NVIDIA is innovating in the field of deep learning, what NVIDIA DRIVE PX can do for automakers, and where Pascal, the next-generation GPU architecture, fits in the new performance roadmap.
At the 2018 GPU Technology Conference in Silicon Valley, NVIDIA CEO Jensen Huang announced the new "double-sized" 32GB Volta GPU; unveiled the NVIDIA DGX-2, the power of 300 servers in a box; showed an expanded inference platform with TensorRT 4 and Kubernetes on NVIDIA GPU; and revealed the NVIDIA GPU Cloud registry with 30 GPU-optimized containers and made it available from more cloud service providers. GTC attendees also got a sneak peek of the latest NVIDIA DRIVE software stack and the next DRIVE AI car computer, "Orin," along with developments in the NVIDIA Isaac platform for robotics and Project Clara, NVIDIA's medical imaging supercomputer.
NVIDIA Volta Tensor Core GPU achieves new AI performance milestones in ResNet-50 for a single chip, single node, and single cloud instance. Explore the performance improvements.
Building upon a foundational understanding of deep learning, this talk will cover a variety of applications of artificial intelligence for problem-solving and how you can both get started and become proficient with NVIDIA’s hardware, open-source software & classes. We will also discuss the role of game engines, both historically and today, in teaching today's AI systems.
Building New Realities in AEC with NVIDIA Quadro VR Webinar - NVIDIA
Register to watch this on-demand webinar at http://info.nvidianews.com/proviz-webinar-series-provr.html
Discover the coming innovations in the use of virtual reality for building design, share new technologies and VR workflow integrations that are being used today, and get a look at what’s coming next in VR from NVIDIA.
Key takeaways:
- Learn about our VR technologies and solutions, pro apps for VR, and professional VR best practices.
- Hear how the AEC industry is integrating VR into their clients’ design experiences.
- Share your findings in the role of VR for immersive building design and ask questions during the live chat session.
Presented by Dave Weinstein, Andrew Rink, and Ron Swidler.
CES has been a bellwether of technology trends for five decades. This year, the world’s largest technology tradeshow showcased the latest advances of the greatest computing challenge of all time — artificial intelligence. NVIDIA Founder and CEO Jen-Hsun Huang kicked off the 50th anniversary event with his unique perspective on AI and a series of announcements across the gaming, smart home and automotive industries. This presentation is a summary of the keynote with a sampling of the resulting press coverage.
Crowning its award-winning lineup of Pascal™ architecture-based GPUs, NVIDIA unveiled its fastest gaming GPU ever -- the GeForce® GTX 1080 Ti. Packed with extreme gaming horsepower, the GeForce GTX 1080 Ti delivers up to 35 percent more performance than the GTX 1080 and comes with 11GB of next-generation GDDR5X memory, running at a staggering 11Gbps, for the ultimate in memory bandwidth and gaming prowess. GTX 1080 Ti graphics cards, including the NVIDIA Founders Edition, will be available worldwide from NVIDIA GeForce partners beginning March 10, starting at $699.
Inception Awards: The Top Six AI Startups Changing The World - NVIDIA
Discover how these winning AI startups are impacting the world through accurate biomagnetic imaging, cybersecurity enhancement, construction safety, and more.
NVIDIA’s invention of the GPU in 1999 sparked the growth of the PC gaming market, redefined modern computer graphics, and revolutionized parallel computing. More recently, GPU deep learning ignited modern AI — the next era of computing — with the GPU acting as the brain of computers, robots, and self-driving cars that can perceive and understand the world. Today, NVIDIA is increasingly known as “the AI computing company.”
Compare Streaming Media Players With NVIDIA SHIELD - NVIDIA
If you’re thinking about buying a next-gen smart TV console after hearing about the new Apple TV, we have good news: You’ve got options.
We introduced our own next-gen smart TV console — NVIDIA SHIELD Android TV — back in May. And it offers extraordinary capabilities.
SHIELD offers 3x more performance, plus more features and more ways to game. It’s still the only smart TV console that can stream 4K content. And — thanks to its support for Chromecast — it connects your mobile devices directly to your living room display.
NVIDIA Testimony at Senate Commerce, Science, and Transportation Committee He... - NVIDIA
Rob Csongor, VP and General Manager of NVIDIA's automotive business, provides his testimony on the important subject of self-driving vehicle technology.
Managing Container Images with Amazon ECR - AWS Online Tech Talks - Amazon Web Services
Learning Objectives:
- How Amazon ECR Lifecycle Policies work to lower costs and reduce image sprawl
- How to configure and test rules for automated image cleanup
- Best practices for getting started using Lifecycle Policies today
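The rules described in these objectives are expressed as a JSON lifecycle policy attached to a repository. A minimal sketch follows; the 30-day window and the "dev" tag prefix are illustrative assumptions, not values from the talk:

```json
{
  "rules": [
    {
      "rulePriority": 1,
      "description": "Expire untagged images older than 30 days",
      "selection": {
        "tagStatus": "untagged",
        "countType": "sinceImagePushed",
        "countUnit": "days",
        "countNumber": 30
      },
      "action": { "type": "expire" }
    },
    {
      "rulePriority": 2,
      "description": "Keep only the 10 most recent dev-tagged images",
      "selection": {
        "tagStatus": "tagged",
        "tagPrefixList": ["dev"],
        "countType": "imageCountMoreThan",
        "countNumber": 10
      },
      "action": { "type": "expire" }
    }
  ]
}
```

Lower `rulePriority` values are evaluated first, and the `expire` action is currently the only supported action type, which is why "cleanup" is phrased as expiring the images you do not want to keep.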
This PPT is about AI 100 Startups all over the world based on "The AI 100 -CB insights".
In this research paper, you can find each capital, scale, general info(ref: CB Insights), and features.
Working with Amazon Lex Chatbots in Amazon Connect - AWS Online Tech Talks - Amazon Web Services
Learning Objectives:
- Learn how to setup Amazon Connect
- Learn the value of Amazon Connect and virtual assistants
- Learn how to use Amazon Lex chatbots in Amazon Connect
Top 5 Deep Learning and AI Stories - November 3, 2017 - NVIDIA
Read this week's top 5 news updates in deep learning and AI: Pentagon official says that AI and machine learning will revolutionize the US intelligence community; how AI could spot lung cancer faster; AI researchers can now access optimized deep learning framework containers through NVIDIA GPU Cloud; AI4ALL improves student access to AI resources by partnering with NVIDIA Deep Learning Institute; the Deep Learning Institute expands its courses to address the growing demand for AI talent.
WKS401 Deploy a Deep Learning Framework on Amazon ECS and EC2 Spot Instances - Amazon Web Services
Deep learning is an implementation of machine learning that uses neural networks to solve difficult and complex problems, such as computer vision, natural language processing, and recommendations. Due to the availability of deep learning libraries and frameworks, developers have the ability to enhance the capabilities of their applications and projects.
In this workshop, you learn how to build and deploy a powerful deep learning framework called MXNet on containers. The portability and resource management benefit of containers means developers can focus less on infrastructure and more on building. The labs start by demonstrating the automation capabilities of AWS CloudFormation to stand up core infrastructure; as an added bonus, you use Spot Fleet to leverage the cost benefits of using Spot Instances, especially for developer environments. Then, you walk through creating an MXNet container in Docker and deploying it with Amazon ECS. Finally, you walk through an image classification demo of MXNet to validate that everything is working as expected.
Pre-reqs: Laptop and AWS account
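The deployment step in a workshop like this typically centers on an ECS task definition that tells the scheduler how to run the MXNet container. A rough sketch is below; the image URI, port, and resource sizes are illustrative assumptions, not values from the workshop materials:

```json
{
  "family": "mxnet-training",
  "containerDefinitions": [
    {
      "name": "mxnet",
      "image": "123456789012.dkr.ecr.us-east-1.amazonaws.com/mxnet:latest",
      "cpu": 1024,
      "memory": 4096,
      "essential": true,
      "portMappings": [
        { "containerPort": 8888, "hostPort": 8888, "protocol": "tcp" }
      ]
    }
  ]
}
```

Once registered, the task definition can be launched as a service or a one-off task on the Spot-backed container instances that CloudFormation stood up.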
Sentiment Analysis Using Apache MXNet and Gluon - AWS Online Tech Talks - Amazon Web Services
Learning Objectives:
- Learn how to easily get started with Deep Learning on AWS using the AWS Deep Learning AMI
- Learn how to use Apache MXNet and Gluon to start and scale deep learning projects
- Learn how to build an LSTM network for sentiment analysis
Building Serverless Websites with Lambda@Edge - AWS Online Tech Talks - Amazon Web Services
Learning Objectives:
- Understand how to take advantage of Lambda@Edge and Amazon CloudFront
- Learn how to generate responses with Lambda@Edge
- Learn how to optimize Lambda@Edge responses with CloudFront cache usage
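The response-generation pattern in the objectives above can be sketched as a viewer-request handler. This is a minimal illustration, not code from the talk; the "/hello" route, body, and cache lifetime are assumptions:

```python
# Minimal sketch of a Lambda@Edge viewer-request handler (Python runtime)
# that generates a response at the edge instead of forwarding the request
# to the origin.

def handler(event, context):
    request = event["Records"][0]["cf"]["request"]

    # Generate a small HTML response directly at the CloudFront edge location.
    if request["uri"] == "/hello":
        return {
            "status": "200",
            "statusDescription": "OK",
            "headers": {
                "content-type": [{"key": "Content-Type", "value": "text/html"}],
                # Let CloudFront cache the generated response for five minutes.
                "cache-control": [{"key": "Cache-Control", "value": "max-age=300"}],
            },
            "body": "<h1>Hello from the edge</h1>",
        }

    # Returning the request dict forwards it to the origin unchanged.
    return request
```

Serving the page from the edge avoids the round trip to the origin entirely, and because the returned object carries a `Cache-Control` header, CloudFront can cache it like any origin response.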
Talk presented by Pedro Mário Cruz e Silva, Solution Architect at NVIDIA, as part of the program of the VIII Winter Geophysics Week, on July 19, 2017.
Enabling Artificial Intelligence - Alison B. Lowndes - WithTheBest
An overview and update of our hardware and software offering and support provided to the Machine & Deep Learning Community around the world.
Alison B. Lowndes, AI DevRel, EMEA
Harnessing the virtual realm for successful real world artificial intelligence - Alison B. Lowndes
Artificial intelligence is impacting all areas of society, from healthcare and transportation to smart cities and energy. Learn how NVIDIA invests both in internal pure research and in accelerated computation to enable its diverse customer base across gaming & extended reality, graphics, AI, robotics, simulation, high performance scientific computing, healthcare & more. You will be introduced to the GPU computing platform and shown successfully deployed real-world applications, as well as a glimpse into the current state of the art across academia, enterprise and startups.
NVIDIA compute GPUs and software toolkits are key drivers behind major advancements in machine learning. Of particular interest is a technique called "deep learning," which utilizes Convolutional Neural Networks (CNNs); these have had landslide success in computer vision and widespread adoption in a variety of fields such as autonomous vehicles, cyber security, and healthcare. This talk presents a high-level introduction to deep learning, discussing core concepts, success stories, and relevant use cases. Additionally, we provide an overview of essential frameworks and workflows for deep learning. Finally, we explore emerging domains for GPU computing such as large-scale graph analytics and in-memory databases.
https://tech.rakuten.co.jp/
Axel Koehler from Nvidia presented this deck at the 2016 HPC Advisory Council Switzerland Conference.
“Accelerated computing is transforming the data center that delivers unprecedented throughput, enabling new discoveries and services for end users. This talk will give an overview about the NVIDIA Tesla accelerated computing platform including the latest developments in hardware and software. In addition it will be shown how deep learning on GPUs is changing how we use computers to understand data.”
In related news, the GPU Technology Conference takes place April 4-7 in Silicon Valley.
Watch the video presentation: http://insidehpc.com/2016/03/tesla-accelerated-computing/
See more talks in the Swiss Conference Video Gallery:
http://insidehpc.com/2016-swiss-hpc-conference/
Sign up for our insideHPC Newsletter:
http://insidehpc.com/newsletter
Alison B Lowndes - Fueling the Artificial Intelligence Revolution with Gaming... - Codemotion
Building upon a foundational understanding of deep learning, this talk will cover a wide variety of applications of artificial intelligence for problem-solving and how you can both get started and become proficient with NVIDIA’s hardware, open-source software & classes. We will also discuss the role of game engines, both historically and today, in teaching today's AI systems.
Dell and NVIDIA for Your AI Workloads in the Data Center - Renee Yao
Join us and learn more about how the Dell PowerEdge C4140 rack server, powered by four NVIDIA V100s, the world’s most powerful GPU, addresses training and inference for the most demanding HPC, data visualization and AI workloads. This enables organizations to take advantage of the convergence of HPC and data analytics and realize advancements in areas including fraud detection, image processing, financial investment analysis and personalized medicine.
NVIDIA CEO Jensen Huang's keynote address at the GPU Technology Conference 2019 (#GTC19) in Silicon Valley, where he introduced breakthroughs in pro graphics with NVIDIA Omniverse; in data science with NVIDIA-powered Data Science Workstations; in inference and enterprise computing with NVIDIA T4 GPU-powered servers; in autonomous machines with NVIDIA Jetson Nano and the NVIDIA Isaac SDK; in autonomous vehicles with NVIDIA Safety Force Field and DRIVE Constellation; and much more.
OpenACC and Open Hackathons Monthly Highlights: September 2022 - OpenACC
Stay up-to-date on the latest news, research and resources. This month's edition covers the Princeton GPU Hackathon, OpenACC at SC22, updates from GNU Tools Cauldron, the upcoming UK DPU Hackathon, relevant research and more!
H2O World 2017 Keynote - Jim McHugh, VP & GM of Data Center, NVIDIA - Sri Ambati
Presented at #H2OWorld 2017 in Mountain View, CA.
Enjoy the recording: https://youtu.be/NyaJ7uDroww.
Learn more about H2O.ai: https://www.h2o.ai/.
Follow @h2oai: https://www.twitter.com/h2oai.
Semiconductors are the driving force behind the AI evolution and enable its adoption across various application areas ranging from connected and automated driving to smart healthcare and wearables. Given that, electronics research, design and manufacturing communities around the world are increasingly investing in specialized AI chips providing less latency, greater processing power, higher bandwidth and faster performance. AI also attracts new technology players to invest in making their own specialized AI chips, changing the electronics manufacturing landscape and moving the AI technology towards machine learning, deep learning and neural networks.
Nvidia Corporation, more commonly referred to as Nvidia, is an American technology company incorporated in Delaware and based in Santa Clara, California. It designs graphics processing units for the gaming and professional markets, as well as system on a chip units for the mobile computing and automotive market.
We pioneered accelerated computing to tackle challenges no one else can solve. Now, the AI moment has arrived. Discover how our work in AI and the metaverse is profoundly impacting society and transforming the world’s largest industries.
Promising to transform trillion-dollar industries and address the “grand challenges” of our time, NVIDIA founder and CEO Jensen Huang shared a vision of an era where intelligence is created on an industrial scale and woven into real and virtual worlds at GTC 2022.
Our passion is to inspire and enable the da Vincis and Einsteins of our time, so they can see and create the future. We pioneered graphics, accelerated computing, and AI to tackle challenges ordinary computers cannot solve. See how we're continuously inventing the future--from our early days as a chip maker to transformers of the Metaverse.
Outlining a sweeping vision for the “age of AI,” NVIDIA CEO Jensen Huang Monday kicked off the GPU Technology Conference.
Huang made major announcements in data centers, edge AI, collaboration tools and healthcare in a talk simultaneously released in nine episodes, each under 10 minutes.
“AI requires a whole reinvention of computing – full-stack rethinking – from chips, to systems, algorithms, tools, the ecosystem,” Huang said, standing in front of the stove of his Silicon Valley home.
Behind a series of announcements touching on everything from healthcare to robotics to videoconferencing, Huang’s underlying story was simple: AI is changing everything, which has put NVIDIA at the intersection of changes that touch every facet of modern life.
More and more of those changes can be seen, first, in Huang’s kitchen, with its playful bouquet of colorful spatulas, which has served as the increasingly familiar backdrop for announcements throughout the COVID-19 pandemic.
“NVIDIA is a full stack computing company – we love working on extremely hard computing problems that have great impact on the world – this is right in our wheelhouse,” Huang said. “We are all-in, to advance and democratize this new form of computing – for the age of AI.”
This GTC is one of the biggest yet. It features more than 1,000 sessions—400 more than the last GTC—in 40 topic areas. And it’s the first to run across the world’s time zones, with sessions in English, Chinese, Korean, Japanese, and Hebrew.
The Best of AI and HPC in Healthcare and Life Sciences - NVIDIA
Trends. Success stories. Training. Networking.
The GPU Technology Conference brings this all to one place. Meet the people pioneering the future of healthcare and life sciences and learn how to apply the latest AI and HPC tools to your research.
NVIDIA CEO Jensen Huang Presentation at Supercomputing 2019 - NVIDIA
Broadening support for GPU-accelerated supercomputing to a fast-growing new platform, NVIDIA founder and CEO Jensen Huang introduced a reference design for building GPU-accelerated Arm servers, with wide industry backing.
NVIDIA BioBERT, an optimized version of BioBERT, was created specifically for the biomedical and clinical domains, providing this community easy access to state-of-the-art NLP models.
Top 5 Deep Learning and AI Stories - August 30, 2019 - NVIDIA
Read the top five news stories in artificial intelligence and learn how innovations in AI are transforming business across industries like healthcare and finance and how your business can derive tangible benefits by implementing AI the right way.
Seven Ways to Boost Artificial Intelligence Research - NVIDIA
Higher education institutions have long been the backbone of scientific breakthroughs. View this slideshare to learn seven easy ways to help elevate your research.
Learn about the benefits of joining the NVIDIA Developer Program and the resources available to you as a registered developer. This slideshare also provides the steps of getting started in the program as well as an overview of the developer engagement platforms at your disposal. developer.nvidia.com/join
If you were unable to attend GTC 2019 or couldn't make it to all of the sessions you had on your list, check out the top four DGX POD sessions from the conference on-demand.
In this special edition of "This week in Data Science," we focus on the top 5 sessions for data scientists from GTC 2019, with links to the free sessions available on demand.
This Week in Data Science - Top 5 News - April 26, 2019 - NVIDIA
What's new in data science? Flip through this week's Top 5 to read a report on the most coveted skills for data scientists, top universities building AI labs, data science workstations for AI deployment, and more.
Check out these DLI training courses at GTC 2019 designed for developers, data scientists & researchers looking to solve the world’s most challenging problems with accelerated computing.
Transforming Healthcare at GTC Silicon Valley - NVIDIA
The GPU Technology Conference (GTC) brings together the leading minds in AI and healthcare that are driving advances in the industry - from top radiology departments and medical research institutions to the hottest startups from around the world. Check out the can't-miss panels and trainings at GTC Silicon Valley.
Stay up-to-date on the latest news, events and resources for the OpenACC community. This month’s highlights covers the upcoming NVIDIA GTC 2019, complete schedule of GPU hackathons and more!
Epistemic Interaction - tuning interfaces to provide information for AI support - Alan Dix
Paper presented at SYNERGY workshop at AVI 2024, Genoa, Italy. 3rd June 2024
https://alandix.com/academic/papers/synergy2024-epistemic/
As machine learning integrates deeper into human-computer interactions, the concept of epistemic interaction emerges, aiming to refine these interactions to enhance system adaptability. This approach encourages minor, intentional adjustments in user behaviour to enrich the data available for system learning. This paper introduces epistemic interaction within the context of human-system communication, illustrating how deliberate interaction design can improve system understanding and adaptation. Through concrete examples, we demonstrate the potential of epistemic interaction to significantly advance human-computer interaction by leveraging intuitive human communication strategies to inform system design and functionality, offering a novel pathway for enriching user-system engagements.
Essentials of Automations: Optimizing FME Workflows with Parameters - Safe Software
Are you looking to streamline your workflows and boost your projects’ efficiency? Do you find yourself searching for ways to add flexibility and control over your FME workflows? If so, you’re in the right place.
Join us for an insightful dive into the world of FME parameters, a critical element in optimizing workflow efficiency. This webinar marks the beginning of our three-part “Essentials of Automation” series. This first webinar is designed to equip you with the knowledge and skills to utilize parameters effectively: enhancing the flexibility, maintainability, and user control of your FME projects.
Here’s what you’ll gain:
- Essentials of FME Parameters: Understand the pivotal role of parameters, including Reader/Writer, Transformer, User, and FME Flow categories. Discover how they are the key to unlocking automation and optimization within your workflows.
- Practical Applications in FME Form: Delve into key user parameter types including choice, connections, and file URLs. Allow users to control how a workflow runs, making your workflows more reusable. Learn to import values and deliver the best user experience for your workflows while enhancing accuracy.
- Optimization Strategies in FME Flow: Explore the creation and strategic deployment of parameters in FME Flow, including the use of deployment and geometry parameters, to maximize workflow efficiency.
- Pro Tips for Success: Gain insights on parameterizing connections and leveraging new features like Conditional Visibility for clarity and simplicity.
We’ll wrap up with a glimpse into future webinars, followed by a Q&A session to address your specific questions surrounding this topic.
Don’t miss this opportunity to elevate your FME expertise and drive your projects to new heights of efficiency.
Builder.ai Founder Sachin Dev Duggal's Strategic Approach to Create an Innova... - Ramesh Iyer
In today's fast-changing business world, Companies that adapt and embrace new ideas often need help to keep up with the competition. However, fostering a culture of innovation takes much work. It takes vision, leadership and willingness to take risks in the right proportion. Sachin Dev Duggal, co-founder of Builder.ai, has perfected the art of this balance, creating a company culture where creativity and growth are nurtured at each stage.
Key Trends Shaping the Future of Infrastructure.pdfCheryl Hung
Keynote at DIGIT West Expo, Glasgow on 29 May 2024.
Cheryl Hung, ochery.com
Sr Director, Infrastructure Ecosystem, Arm.
The key trends across hardware, cloud and open-source; exploring how these areas are likely to mature and develop over the short and long-term, and then considering how organisations can position themselves to adapt and thrive.
Accelerate your Kubernetes clusters with Varnish CachingThijs Feryn
A presentation about the usage and availability of Varnish on Kubernetes. This talk explores the capabilities of Varnish caching and shows how to use the Varnish Helm chart to deploy it to Kubernetes.
This presentation was delivered at K8SUG Singapore. See https://feryn.eu/presentations/accelerate-your-kubernetes-clusters-with-varnish-caching-k8sug-singapore-28-2024 for more details.
Kubernetes & AI - Beauty and the Beast !?! @KCD Istanbul 2024Tobias Schneck
As AI technology is pushing into IT I was wondering myself, as an “infrastructure container kubernetes guy”, how get this fancy AI technology get managed from an infrastructure operational view? Is it possible to apply our lovely cloud native principals as well? What benefit’s both technologies could bring to each other?
Let me take this questions and provide you a short journey through existing deployment models and use cases for AI software. On practical examples, we discuss what cloud/on-premise strategy we may need for applying it to our own infrastructure to get it to work from an enterprise perspective. I want to give an overview about infrastructure requirements and technologies, what could be beneficial or limiting your AI use cases in an enterprise environment. An interactive Demo will give you some insides, what approaches I got already working for real.
JMeter webinar - integration with InfluxDB and GrafanaRTTS
Watch this recorded webinar about real-time monitoring of application performance. See how to integrate Apache JMeter, the open-source leader in performance testing, with InfluxDB, the open-source time-series database, and Grafana, the open-source analytics and visualization application.
In this webinar, we will review the benefits of leveraging InfluxDB and Grafana when executing load tests and demonstrate how these tools are used to visualize performance metrics.
Length: 30 minutes
Session Overview
-------------------------------------------
During this webinar, we will cover the following topics while demonstrating the integrations of JMeter, InfluxDB and Grafana:
- What out-of-the-box solutions are available for real-time monitoring JMeter tests?
- What are the benefits of integrating InfluxDB and Grafana into the load testing stack?
- Which features are provided by Grafana?
- Demonstration of InfluxDB and Grafana using a practice web application
To view the webinar recording, go to:
https://www.rttsweb.com/jmeter-integration-webinar
GraphRAG is All You need? LLM & Knowledge GraphGuy Korland
Guy Korland, CEO and Co-founder of FalkorDB, will review two articles on the integration of language models with knowledge graphs.
1. Unifying Large Language Models and Knowledge Graphs: A Roadmap.
https://arxiv.org/abs/2306.08302
2. Microsoft Research's GraphRAG paper and a review paper on various uses of knowledge graphs:
https://www.microsoft.com/en-us/research/blog/graphrag-unlocking-llm-discovery-on-narrative-private-data/
Smart TV Buyer Insights Survey 2024 by 91mobiles.pdf91mobiles
91mobiles recently conducted a Smart TV Buyer Insights Survey in which we asked over 3,000 respondents about the TV they own, aspects they look at on a new TV, and their TV buying preferences.
2. LEAPS IN ADOPTION
2X accelerated systems, 96% of new systems on NVIDIA
2X GTC attendees: 2,350 (2012) to 5,500 (2016)
4X CUDA developers, to 300K (2012–2016); 10X in hyperscale + auto
Industries: academia, games, finance, manufacturing, internet, oil & gas, national labs, automotive, defense, M&E, gov't/labs, aerospace, IT/HW/SW, medical
[Charts: GTC attendance and CUDA developer growth, 2012 vs 2016; accelerated systems, Nov 2013–Nov 2015, #acceleratedsystems]
4. NVIDIA GAMEWORKS
Volumetric Lighting | Voxel Accelerated Ambient Occlusion | Hybrid Frustum Traced Shadows — Available Now
HairWorks | WaveWorks | FlameWorks | PhysX
and other technologies such as: Clothing, VXGI, Flex, Destruction
[SDK families: GAMEWORKS | VRWORKS | DESIGNWORKS | COMPUTEWORKS | DRIVEWORKS | JETPACK]
5. NVIDIA DESIGNWORKS
Adobe support of MDL | Siemens NX adopts Iray
MDL | OptiX | Path Rendering | Iray
and other technologies such as: GL Extensions, GRID, GPU Direct for Video, Mosaic, VXGI, Warp and Blend
6. NVIDIA VRWORKS
Oculus Rift and HTC Vive integration | Epic, Max Play and Unity game engines — Available Now
VR SLI | Context Priority | Warp and Blend | Multi-Res Shading
and other technologies such as: Direct Mode, GPUDirect for Video
7. NVIDIA COMPUTEWORKS
CUDA 8 — Available June | cuDNN 5 — Available April | nvGRAPH — Available June | IndeX plug-in for ParaView — Available May
CUDA | cuDNN | nvGRAPH | IndeX
and other technologies such as: AMGx, cuSOLVER, cuSPARSE, OpenACC, NSIGHT, THRUST
8. NVIDIA DRIVEWORKS
JPL — Available Now | EAP — Available Q2'16 | General Release — Available Q1'17
Detection | Localization | HD Maps | SensorFusion
and other technologies such as: Driving, Planning
9. NVIDIA JETPACK
Jetson TX1: 24 images/s/W | GIE (GPU Inference Engine) — Available May
DIGITS Workflow | VisionWorks | Jetson Media SDK | Deep Learning SDK
and other technologies such as: Linux4Tegra, NSIGHT EE, OpenCV4Tegra, OpenGL, System Trace, Visual Profiler, Vulkan
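The 24 images/s/W figure is an efficiency rating, not an absolute throughput. A quick sketch of how the two relate (the ~10 W module power used below is an assumption for illustration, not a number from the slide):

```python
# Energy-efficiency sketch: images/s/W is throughput divided by power draw,
# so an efficiency figure times a power budget gives absolute throughput.
def throughput(images_per_s_per_watt: float, watts: float) -> float:
    """Inference throughput implied by an efficiency rating and power budget."""
    return images_per_s_per_watt * watts

print(throughput(24, 10))  # 240 images/s at an assumed 10 W budget
```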
10. VR: A START OF A NEW PLATFORM
New York Times ships Cardboard to subscribers
Microsoft demonstrates Holoportation
Google announces Jump VR camera platform
Samsung, Oculus, HTC release headsets
VR startups raise $1.5B in funding
13. IRAY VR
Breakthrough Photoreal VR — Available Starting in June
1. Pre-render light probes surrounding the region of interest
2. Rasterize depth buffer at headset eye positions
3. Reconstruct image for new viewpoint from depth and multiple probes
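At a very high level, the reconstruction step blends pre-rendered probes according to how close each probe is to the current eye position. The sketch below is purely illustrative, not the Iray VR algorithm: the probe layout, the inverse-distance weighting, and the `reconstruct` helper are all assumptions.

```python
import numpy as np

# Illustrative sketch: reconstruct a novel view by blending pre-rendered
# light-probe images, weighted by the probe's proximity to the eye position.
def reconstruct(eye_pos, probe_positions, probe_images, eps=1e-6):
    """Inverse-distance weighted blend of probe images for a new viewpoint."""
    d = np.linalg.norm(probe_positions - eye_pos, axis=1)  # eye-to-probe distances
    w = 1.0 / (d + eps)                                    # closer probes weigh more
    w /= w.sum()                                           # normalize weights
    # Weighted sum over probes: (n_probes,) x (n_probes, H, W) -> (H, W)
    return np.tensordot(w, probe_images, axes=1)

# Two probes: one dark (at the origin), one bright (at x = 1).
probes = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0]])
images = np.stack([np.zeros((2, 2)), np.ones((2, 2))])
# An eye near the bright probe reconstructs a mostly bright view.
view = reconstruct(np.array([0.9, 0.0, 0.0]), probes, images)
```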
15. IRAY VR LITE
Available in June
1. Design in 3ds Max
2. Download Iray for 3ds Max plug-in
3. Download Android viewer
4. Get VR HMD
16. AN AMAZING YEAR IN AI
AlphaGo rivals a world champion
Microsoft & Google: "superhuman" image recognition
Microsoft: "super deep network"
Berkeley's Brett: one network, everything robotics
Deep Speech 2: one network, 2 languages
A new computing model hits pop culture
17. A NEW COMPUTING MODEL
Traditional computer vision: experts + time
Deep learning object detection: DNN + data + HPC
Deep learning achieves "superhuman" results
[Chart: ImageNet accuracy, 2009–2016 — deep learning overtakes traditional CV]
19. 19
Ad Service
Technology
Investment
Media
Oil & Gas
Mfg
Retail
Other
$500B OPPORTUNITY OVER 10 YRS
Deep Learning Software Revenue
by Industry
Deep Learning Total Revenue
by Segment
IBM: “Cognitive business represents
a $2T opportunity”
SOURCE: “Deep Learning for Enterprise Applications,” 4Q 2015, Tractica
20. NVIDIA GPU FOR HYPERSCALE
10X speed-up | 20 images/s/W | Cloud services powered by AI
TESLA M40 + TESLA M4
21. 21
Soumith Chintala
AI Research Engineer, Facebook
“ Unsupervised Representation
Learning with Deep
Convolutional Generative
Adversarial Networks.”
— Soumith Chintala, Facebook AI Research
Alec Radford & Luke Metz indico Research
26. "This is a new era of computing. New approaches to the underlying technologies will be required for AI and cognitive. The combination of NVIDIA Pascal GPUs and IBM POWER accelerates Watson's learning of new skills. Together, IBM and NVIDIA will advance the artificial intelligence industry."
— Dr. John Kelly III, SVP, Cognitive Solutions & IBM Research

"NVIDIA GPU is accelerating progress in AI. As neural nets become larger and larger, we not only need faster GPUs with larger and faster memory, but also much faster GPU-to-GPU communication, as well as hardware that can take advantage of reduced-precision arithmetic. This is precisely what Pascal delivers."
— Yann LeCun, Director of AI Research, Facebook

"Microsoft is developing super deep neural networks that are more than 1000 layers. NVIDIA Tesla P100's impressive horsepower will enable Microsoft's CNTK to accelerate AI breakthroughs."
— Xuedong Huang, Chief Speech Scientist, Microsoft Research

"AI computers are like space rockets: The bigger the better. Pascal's throughput and interconnect will make the biggest rocket we've seen yet."
— Andrew Ng, Chief Scientist, Baidu
28. GPU-ACCELERATED DL FOR EVERY MARKET
Deep learning in the cloud | Deep learning for enterprise
IBM: "Cognitive business represents a $2T opportunity"
[Chart: deep learning revenue by industry — ad service, technology, investment, media, oil & gas, mfg, retail, other]
SOURCE: "Deep Learning for Enterprise Applications," 4Q 2015, Tractica
29. 29
Engineered for deep learning | 170TF FP16 | 8x Tesla P100
NVLink hybrid cube mesh | Accelerates major AI frameworks
NVIDIA DGX-1
WORLD’S FIRST DEEP LEARNING SUPERCOMPUTER
31. "250 SERVERS IN-A-BOX"

                        DUAL XEON     DGX-1
  FLOPS (CPU + GPU)     3 TF          170 TF
  AGGREGATE NODE BW     76 GB/s       768 GB/s
  ALEXNET TRAIN TIME    150 HOURS     2 HOURS
  TRAIN IN 2 HOURS      >250 NODES*   1 NODE

*Caffe Training on Multi-node Distributed-memory Systems Based on Intel® Xeon® Processor E5 Family (extrapolated), submitted by Gennady Fedorov (Intel), Vadim P. (Intel), October 29, 2015
https://software.intel.com/en-us/articles/caffe-training-on-multi-node-distributed-memory-systems-based-on-intel-xeon-processor-e5
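A quick arithmetic check of the table, using only the slide's own numbers. Note that the per-node AlexNet speedup (75x) is smaller than the ">250 nodes" figure because multi-node CPU training scales imperfectly:

```python
# Ratios implied by the DGX-1 vs dual-Xeon comparison table.
specs = {
    "flops_tf":      (3, 170),    # peak CPU+GPU FLOPS, in TF
    "node_bw_gbs":   (76, 768),   # aggregate node bandwidth, GB/s
    "alexnet_hours": (150, 2),    # AlexNet training time, hours
}
flops_ratio   = specs["flops_tf"][1] / specs["flops_tf"][0]            # ~57x peak compute
bw_ratio      = specs["node_bw_gbs"][1] / specs["node_bw_gbs"][0]      # ~10x bandwidth
train_speedup = specs["alexnet_hours"][0] / specs["alexnet_hours"][1]  # 75x training time
print(flops_ratio, bw_ratio, train_speedup)
```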
32. 12X SPEED-UP IN ONE YEAR
GTC 2015: 4 Maxwell GPUs — 25 hours
GTC 2016: 8 Pascal GPUs — 2 hours
1.33 billion images/day
33. 33
Bryan Catanzaro
Senior Researcher, Baidu
Time series input
“Time series output”
GPU0
GPU1
Model
Parallel
Data
Parallel
Recurrent Neural Nets Model + Data Parallelism
34. 34
Add Model Parallelism over NVLINK Compose with Data Parallelism
Persistent RNNs:
Peak FLOPs at batch of 8
weights
keep in
registers
repeat ~300 times repeat ~300 times
GPU0
GPU1
GPU2
GPU3
Data
Parallel
Strong scale to 32X more processors
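The data-parallel half of this recipe can be sketched in a few lines. This is an illustrative NumPy toy, not Baidu's implementation: each "GPU" computes a gradient on its shard of the batch, and averaging the shard gradients (an all-reduce in practice) recovers the full-batch gradient exactly.

```python
import numpy as np

# Gradient of mean squared error 0.5*(x @ w - y)^2 for a linear model.
def grad(w, x, y):
    return x.T @ (x @ w - y) / len(y)

rng = np.random.default_rng(0)
x, w = rng.normal(size=(8, 3)), rng.normal(size=3)
y = rng.normal(size=8)

full = grad(w, x, y)  # single-device reference gradient on the full batch

# Split the batch of 8 across 4 "devices" (2 examples each), then average
# the shard gradients -- the all-reduce step of data parallelism.
shards = [(x[i:i + 2], y[i:i + 2]) for i in range(0, 8, 2)]
avg = np.mean([grad(w, xs, ys) for xs, ys in shards], axis=0)

assert np.allclose(full, avg)  # averaging shard gradients == full-batch gradient
```

Because the average of per-shard gradients equals the full-batch gradient, each device can hold a replica of the model and the replicas stay in sync after every update.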
36. 36
170TF | “250 servers in-a-box” | nvidia.com/dgx1
$129,000
NVIDIA DGX-1
WORLD’S FIRST DEEP LEARNING SUPERCOMPUTER
37. PIONEERS IN AI RESEARCH
Frameworks for multi-GPU Pascal
Large-scale deep learning
Reinforcement learning
Unsupervised and transfer learning
Natural language understanding
Autonomous driving
Medical applications
38. DEEP LEARNING FOR MEDICINE
NVIDIA founding technology partner of MGH Center of Clinical Data Science
10B medical images on DGX-1 to advance radiology, pathology, genomics
40. 40
Uber Enters the Race
Toyota Invests $1B
in AI Lab
Volvo Drive Me on
Public Roads in 2017
NHTSA: Computer
Counts as Driver
Tesla Model 3:
300K pre-orders
AN AMAZING YEAR FOR SELF-DRIVING CARS
Audi, BMW, Daimler
Buy HERE
Tesla Model S Auto-pilot
Baidu Enters the Race
Honda, Nissan, Toyota
Team Up
GM Buys Cruise
42. 42
World’s first DL-powered car
computing platform
One scalable architecture — from DNN
training to cluster, infotainment, ADAS,
autonomous driving, and mapping
Open platform
NVIDIA DRIVE PX
AI CAR COMPUTER
Training on
DGX-1
Driving with
DriveWorks
KALDI
LOCALIZATION
MAPPING
DRIVENET
DAVENET
NVIDIA DGX-1 NVIDIA DRIVE PX
43. 43
NVIDIA DRIVE PX
PERCEPTION
Training on
DGX-1
Driving with
DriveWorks
KALDI
LOCALIZATION
MAPPING
DRIVENET
DAVENET
NVIDIA DGX-1 NVIDIA DRIVE PX
NVIDIA DRIVENET
#1 accuracy score for KITTI car detection
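The KITTI car-detection score behind this claim is based on box overlap: a predicted box counts as a correct detection when its intersection-over-union (IoU) with a ground-truth box clears a threshold (0.7 for cars, as an assumption here). A minimal sketch of the metric:

```python
# Minimal sketch of the overlap metric used by detection benchmarks like KITTI.
def iou(a, b):
    """IoU of two axis-aligned boxes given as (x1, y1, x2, y2)."""
    ix = max(0.0, min(a[2], b[2]) - max(a[0], b[0]))  # intersection width
    iy = max(0.0, min(a[3], b[3]) - max(a[1], b[1]))  # intersection height
    inter = ix * iy
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)          # intersection / union

print(iou((0, 0, 2, 2), (1, 0, 3, 2)))  # 1/3 for two half-overlapping boxes
```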