The CVPR annual conference showcases the most important advances in computer vision, pattern recognition, machine learning and artificial intelligence. Catch up on the top 5 announcements that came out of CVPR 2018.
In 2015, the annual GPU Technology Conference focused on the promising field of deep learning, and we made four major announcements to fuel its advancement: Titan X, the world's fastest GPU; DIGITS DevBox, a GPU deep learning platform; the Pascal GPU architecture; and NVIDIA DRIVE PX, a deep learning platform for self-driving cars. The press responded with quotes, featured in this presentation, from Mashable, Forbes, re/code, and The Wall Street Journal. The week-long event reached an enormous audience through blog posts and streamed keynotes.
This presentation covers how deep learning is transforming industries; our role in key markets such as VR, robotics, and self-driving cars; and our culture of craftsmanship, giving, and learning. This also includes highlights on how we are driving the transformations in gaming through GeForce GTX GPUs and the GeForce Experience, and how we’re helping accelerate scientific discovery through GPU computing and our long-term commitment to CUDA architecture.
A Year of Innovation Using the DGX-1 AI Supercomputer
Featured among TechCrunch's top AI stories, the NVIDIA DGX-1 has pioneered advancements in healthcare, data analytics, and robotics with leading researchers and enterprises around the world.
The Best of AI and HPC in Healthcare and Life Sciences
Trends. Success stories. Training. Networking.
The GPU Technology Conference brings this all to one place. Meet the people pioneering the future of healthcare and life sciences and learn how to apply the latest AI and HPC tools to your research.
NVIDIA’s invention of the GPU in 1999 sparked the growth of the PC gaming market, redefined modern computer graphics, and revolutionized parallel computing. More recently, GPU deep learning ignited modern AI — the next era of computing — with the GPU acting as the brain of computers, robots, and self-driving cars that can perceive and understand the world. Today, NVIDIA is increasingly known as “the AI computing company.”
See the superhuman breakthroughs in modern artificial intelligence powered by GPUs and the NVIDIA DGX-1, the world's first deep learning computer in a box. Deep learning is delivering revolutionary results across industries, and the number of organizations engaged with NVIDIA to apply this technology has grown 35x.
NVIDIA's invention of the GPU in 1999 sparked the growth of the PC gaming market, redefined modern computer graphics, and revolutionized parallel computing. More recently, GPU deep learning ignited modern AI — the next era of computing — with the GPU acting as the brain of computers, robots, and self-driving cars that can perceive and understand the world.
The AI Opportunity in Federal - Key Highlights from GTC DC 2018
Every industry will be empowered by AI, from autonomous vehicles and robotics to healthcare and agriculture. The computational power that AI provides will streamline workflows, maximize efficiencies, and open doors to new discoveries.
Read updates highlighting what’s hot in high performance computing, with this week's edition focusing on news of NVIDIA's announcements at Supercomputing 2016.
Building New Realities in AEC with NVIDIA Quadro VR Webinar
Register to watch this on-demand webinar at http://info.nvidianews.com/proviz-webinar-series-provr.html
Discover the coming innovations in the use of virtual reality for building design, share new technologies and VR workflow integrations that are being used today, and get a look at what’s coming next in VR from NVIDIA.
Key takeaways:
- Learn about our VR technologies and solutions, pro apps for VR, and professional VR best practices.
- Hear how the AEC industry is integrating VR into their clients’ design experiences.
- Share your findings in the role of VR for immersive building design and ask questions during the live chat session.
Presented by Dave Weinstein, Andrew Rink, and Ron Swidler.
NVIDIA Volta Tensor Core GPU achieves new AI performance milestones in ResNet-50 for a single chip, single node, and single cloud instance. Explore the performance improvements.
In this special edition of "This week in Data Science," we focus on the top 5 sessions for data scientists from GTC 2019, with links to the free sessions available on demand.
As the AI revolution gains momentum, NVIDIA founder and CEO Jensen Huang took the stage in Beijing to show the latest technology for accelerating its mass adoption.
His talk — to more than 3,500 scientists, engineers and press gathered for the three-day event — kicked off a GTC world tour where, in the months ahead, we'll bring our story to an expected live audience of some 22,000 in Munich, Tel Aviv, Taipei, Washington and Tokyo.
NVIDIA Testimony at Senate Commerce, Science, and Transportation Committee He...
Rob Csongor, VP and General Manager of NVIDIA's automotive business, provides his testimony on the important subject of self-driving vehicle technology.
Building a Stronger Future for Radiology: Takeaways from RSNA 2017
At RSNA 2017, NVIDIA announced partnerships, showcased the latest technologies revolutionizing medical imaging, offered NVIDIA Deep Learning Institute (DLI) workshops and more.
At the 2018 GPU Technology Conference in Silicon Valley, NVIDIA CEO Jensen Huang announced the new "double-sized" 32GB Volta GPU; unveiled the NVIDIA DGX-2, the power of 300 servers in a box; showed an expanded inference platform with TensorRT 4 and Kubernetes on NVIDIA GPUs; and revealed the NVIDIA GPU Cloud registry with 30 GPU-optimized containers, now available from more cloud service providers. GTC attendees also got a sneak peek of the latest NVIDIA DRIVE software stack and the next DRIVE AI car computer, "Orin," along with developments in the NVIDIA Isaac platform for robotics and Project Clara, NVIDIA's medical imaging supercomputer.
At CES 2016, we made a series of announcements highlighting our work to advance the biggest trends in the industry — self-driving cars, artificial intelligence and virtual reality. The focus of our news was NVIDIA DRIVE, an end-to-end deep learning platform for self-driving cars.
Seven Ways to Boost Artificial Intelligence Research
Higher education institutions have long been the backbone of scientific breakthroughs. View this SlideShare to learn seven easy ways to help elevate your research.
Nvidia Corporation, more commonly referred to as Nvidia, is an American technology company incorporated in Delaware and based in Santa Clara, California. It designs graphics processing units for the gaming and professional markets, as well as system on a chip units for the mobile computing and automotive market.
NVIDIA is the world leader in visual computing. The GPU, our invention, serves as the visual cortex of modern computers and is at the heart of our products and services. Our work opens up new universes to explore, enables amazing creativity and discovery, and powers what were once science fiction inventions like self-learning machines and self-driving cars.
NVIDIA vGPU - Introduction to NVIDIA Virtual GPU
Lee Bushen, Senior Solutions Architect at NVIDIA, covers the basics of NVIDIA Virtual GPU:
- Why vGPU?
- How does it work?
- What are the main considerations for VDI?
- Which GPU is right for me?
- Which License do I need?
A talk on reducing costs & increasing efficiencies by designing, testing & engineering in simulation first, plus examples of robotics & environmental capability.
We pioneered accelerated computing to tackle challenges no one else can solve. Now, the AI moment has arrived. Discover how our work in AI and the metaverse is profoundly impacting society and transforming the world’s largest industries.
Promising to transform trillion-dollar industries and address the “grand challenges” of our time, NVIDIA founder and CEO Jensen Huang shared a vision of an era where intelligence is created on an industrial scale and woven into real and virtual worlds at GTC 2022.
Our passion is to inspire and enable the da Vincis and Einsteins of our time, so they can see and create the future. We pioneered graphics, accelerated computing, and AI to tackle challenges ordinary computers cannot solve. See how we're continuously inventing the future, from our early days as a chip maker to transformers of the Metaverse.
Outlining a sweeping vision for the “age of AI,” NVIDIA CEO Jensen Huang Monday kicked off the GPU Technology Conference.
Huang made major announcements in data centers, edge AI, collaboration tools and healthcare in a talk simultaneously released in nine episodes, each under 10 minutes.
“AI requires a whole reinvention of computing – full-stack rethinking – from chips, to systems, algorithms, tools, the ecosystem,” Huang said, standing in front of the stove of his Silicon Valley home.
Behind a series of announcements touching on everything from healthcare to robotics to videoconferencing, Huang’s underlying story was simple: AI is changing everything, which has put NVIDIA at the intersection of changes that touch every facet of modern life.
More and more of those changes can be seen, first, in Huang’s kitchen, with its playful bouquet of colorful spatulas, that has served as the increasingly familiar backdrop for announcements throughout the COVID-19 pandemic.
“NVIDIA is a full stack computing company – we love working on extremely hard computing problems that have great impact on the world – this is right in our wheelhouse,” Huang said. “We are all-in, to advance and democratize this new form of computing – for the age of AI.”
This GTC is one of the biggest yet. It features more than 1,000 sessions—400 more than the last GTC—in 40 topic areas. And it’s the first to run across the world’s time zones, with sessions in English, Chinese, Korean, Japanese, and Hebrew.
NVIDIA CEO Jensen Huang Presentation at Supercomputing 2019
Broadening support for GPU-accelerated supercomputing to a fast-growing new platform, NVIDIA founder and CEO Jensen Huang introduced a reference design for building GPU-accelerated Arm servers, with wide industry backing.
NVIDIA BioBERT, an optimized version of BioBERT, was created specifically for biomedical and clinical domains, giving this community easy access to state-of-the-art NLP models.
Top 5 Deep Learning and AI Stories - August 30, 2019
Read the top five news stories in artificial intelligence and learn how innovations in AI are transforming business across industries like healthcare and finance and how your business can derive tangible benefits by implementing AI the right way.
Learn about the benefits of joining the NVIDIA Developer Program and the resources available to you as a registered developer. This SlideShare also walks through the steps for getting started in the program and gives an overview of the developer engagement platforms at your disposal: developer.nvidia.com/join
If you were unable to attend GTC 2019 or couldn't make it to all of the sessions you had on your list, check out the top four DGX POD sessions from the conference on-demand.
This Week in Data Science - Top 5 News - April 26, 2019
What's new in data science? Flip through this week's Top 5 to read a report on the most coveted skills for data scientists, top universities building AI labs, data science workstations for AI deployment, and more.
NVIDIA CEO Jensen Huang's keynote address at the GPU Technology Conference 2019 (#GTC19) in Silicon Valley, where he introduced breakthroughs in pro graphics with NVIDIA Omniverse; in data science with NVIDIA-powered Data Science Workstations; in inference and enterprise computing with NVIDIA T4 GPU-powered servers; in autonomous machines with NVIDIA Jetson Nano and the NVIDIA Isaac SDK; in autonomous vehicles with NVIDIA Safety Force Field and DRIVE Constellation; and much more.
Check out these DLI training courses at GTC 2019 designed for developers, data scientists & researchers looking to solve the world’s most challenging problems with accelerated computing.
Transforming Healthcare at GTC Silicon Valley
The GPU Technology Conference (GTC) brings together the leading minds in AI and healthcare that are driving advances in the industry - from top radiology departments and medical research institutions to the hottest startups from around the world. Don't miss the panels and trainings at GTC Silicon Valley.
Stay up-to-date on the latest news, events and resources for the OpenACC community. This month's highlights cover the upcoming NVIDIA GTC 2019, the complete schedule of GPU hackathons, and more!
The promise of AI to provide better patient care through accelerated workflows and increased diagnostic capabilities was on full display at RSNA. Catch up with all the news and highlights from the event.
Top 5 Deep Learning and AI Stories - November 30, 2018
Read this week's top 5 news updates in deep learning and AI: 75 healthcare companies partner with NVIDIA to power the future of radiology, NeurIPS conference showcases the latest in AI research, NVIDIA's new research lab pushes machine learning boundaries, Israeli AI startup restores speech abilities to stroke victims and others with impaired language, and radiologists can detect anomalies in medical images with deep learning.
Top 5 AI and Deep Learning Stories - November 9, 2018
Read this week's top 5 news updates in deep learning and AI: DGX-2 supercomputers arrive fueling scientific discovery; AI pioneer talks about the future of AI; radiology poised for transformation with AI; the rise of AI developers in India; discover AI in federal government.
2. THE CVPR ANNUAL CONFERENCE SHOWCASES THE MOST IMPORTANT ADVANCES IN COMPUTER VISION, PATTERN RECOGNITION, MACHINE LEARNING AND ARTIFICIAL INTELLIGENCE.
3. THE CONFERENCE HAS GROWN SIGNIFICANTLY, WITH 6,000 ATTENDEES THIS YEAR, INCLUDING THE WORLD'S LEADING RESEARCHERS.
4. NEW TOOLS TO REV THE ENGINE OF MODERN AI
At CVPR 2018, NVIDIA unveiled a series of tools and deep learning research, including NVIDIA Apex, an open-source extension that helps developers accelerate their AI research by tapping the multi-precision capabilities of NVIDIA's Volta Tensor Core GPUs.
NVIDIA also launched DALI, an open-source library designed as a plug-in that works with all major frameworks. It lets users define configurable data processing graphs that offload the image decode and augmentation steps typical in AI training to the GPU.
Source: https://blogs.nvidia.com/blog/2018/06/21/cvpr-nvidia-brings-new-tensor-core-gpu-ai-tools-super-slomo-cutting-edge-research/
READ BLOG
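The central mixed-precision idea behind tools like Apex, keeping a full-precision "master" copy of the weights while the arithmetic runs in half precision, can be illustrated with a standard-library-only sketch (the `to_fp16` helper and the toy update value are illustrative, not Apex's actual API):

```python
import struct

def to_fp16(x: float) -> float:
    # Round-trip a float through IEEE-754 half precision,
    # the storage format Tensor Cores multiply in.
    return struct.unpack('e', struct.pack('e', x))[0]

# Accumulating tiny updates purely in FP16 silently loses them:
# near 1.0 the FP16 spacing is ~0.001, so adding 1e-4 rounds away.
w16 = to_fp16(1.0)
for _ in range(1000):
    w16 = to_fp16(w16 + to_fp16(1e-4))

# Mixed precision keeps an FP32 master weight and only casts for the math:
w32 = 1.0
for _ in range(1000):
    w32 += to_fp16(1e-4)

print(w16)            # 1.0 -- the updates vanished in FP16
print(round(w32, 4))  # 1.1 -- the updates survived in the FP32 master
```

Apex automates this bookkeeping (plus loss scaling) across a whole model, which is why FP16 Tensor Core throughput can be used without degrading convergence.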
5. RESEARCH AT THE CUTTING EDGE
CVPR is where researchers present cutting-edge computer vision research. NVIDIA Research presented 14 papers that answer questions at the intersection of art, science and practical challenges.
Among them was a newly developed deep learning system that converts standard videos into slow motion, turning a video from 30 frames per second (fps) into 240 fps.
"NVIDIA wants to help you turn any old video shot on your phone into a blur-free, slow-motion masterpiece, and it's using artificial intelligence to do it."
Source: https://www.cnet.com/news/nvidia-ai-turn-blurry-phone-videos-into-slow-mo-masterpieces/
READ ARTICLE
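Going from 30 fps to 240 fps means synthesizing seven new frames between every original pair. NVIDIA's system predicts those in-between frames with a deep network; the naive baseline it improves on is a plain cross-fade, sketched here on toy two-pixel "frames" (all names are illustrative):

```python
def interpolate_frames(frames, factor=8):
    """Upsample a frame sequence by linearly blending neighbours.

    factor=8 turns 30 fps footage into 240 fps. A learned system
    such as NVIDIA's predicts motion instead of blending, which
    avoids the ghosting a cross-fade produces on moving objects.
    """
    out = []
    for a, b in zip(frames, frames[1:]):
        for i in range(factor):
            t = i / factor
            # Blend pixel-wise between consecutive frames a and b.
            out.append([(1 - t) * pa + t * pb for pa, pb in zip(a, b)])
    out.append(frames[-1])
    return out

clip = [[0.0, 0.0], [8.0, 8.0]]   # two 2-pixel "frames"
slow = interpolate_frames(clip)
print(len(slow))   # 9 -- one original interval became 8 playback steps
print(slow[4])     # [4.0, 4.0] -- the halfway blend
```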
6. ELITE AI RESEARCHERS GET A POWERFUL BOOST
NVIDIA CEO Jensen Huang awarded 20 lucky guests a specially designed 32GB CEO edition of the TITAN V at a private CVPR reception.
"There's all kinds of research being done here… as someone who benefits from your work, as a person who is going to enjoy the incredible research you guys do -- solving some of the world's grand challenges -- and to be able to witness artificial intelligence happen in my lifetime, I want to thank all of you guys for that. You guys bring me so much joy."
Source: https://www.zdnet.com/article/nvidia-reveals-special-32gb-titan-v-ceo-edition-gpu-and-then-gives-away-a-bunch/
READ MORE
7. CLOSING PERSPECTIVES ON CVPR 2018
NVIDIA's research leaders Bryan Catanzaro and Jan Kautz discuss noteworthy research at CVPR 2018 and how the field of deep learning continues to grow.
"We are in an era where lots of data is accessible to lots of researchers to make progress."
Source: https://www.youtube.com/watch?v=w1ZYi8pd7NQ&list=PLZHnYvH1qtOaI5QZVi5OwPAqq15AMEoMl&index=9
WATCH HERE