Join us and learn more about how the Dell PowerEdge C4140 Rack Server, powered by four NVIDIA V100s, the world’s most powerful GPU, addresses training and inference for the most demanding HPC, data visualization and AI workloads. This enables organizations to take advantage of the convergence of HPC and data analytics and realize advancements in areas including fraud detection, image processing, financial investment analysis and personalized medicine.
HPE and NVIDIA are delivering a leading portfolio of optimized AI solutions that transform business and industry, helping organizations gain deeper insights and solve the world’s greatest challenges. Join this session to learn how the NVIDIA V100, the world’s most powerful GPU, powers the HPE 6500 systems, HPE’s AI systems, to deliver new business insights and outcomes.
This is a presentation I gave at the NVIDIA AI Conference in Korea. It's about building the largest GPU, the DGX-2, the most powerful supercomputer in a single node.
Orchestrate Your AI Workload with Cisco Hyperflex, Powered by NVIDIA GPUs - Renee Yao
Deep learning, a collection of statistical machine learning techniques, is transforming every digital business. As data grows, businesses need to find new ways of capitalizing on the volume of information to drive their competitive advantage. GPUs are becoming mainstream in the datacenter for accelerating containerized AI workloads. Kubernetes is a popular management framework for orchestrating containers at scale. However, managing GPUs in Kubernetes is still nascent, and setting up a Kubernetes cluster with GPUs can be challenging for customers. Join this session to learn more about how to use Kubernetes to orchestrate your AI workloads on Cisco Hyperflex, powered by the NVIDIA V100, the world’s most powerful GPU.
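As an illustrative sketch of the setup the session describes, a Kubernetes pod that requests one GPU might look like the fragment below. The pod name and container image are assumptions, and it presumes the NVIDIA device plugin for Kubernetes is installed so that `nvidia.com/gpu` is a schedulable resource.

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: v100-training-job          # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: trainer
    image: nvcr.io/nvidia/tensorflow:18.07-py3   # example NGC container image
    command: ["nvidia-smi"]                      # replace with your training command
    resources:
      limits:
        nvidia.com/gpu: 1          # one GPU allocated by the NVIDIA device plugin
```

Applied with `kubectl apply -f`, the scheduler then places the pod only on a node with a free GPU.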
Simplifying AI Infrastructure: Lessons in Scaling on DGX Systems - Renee Yao
Simplifying AI Infrastructure: Lessons in Scaling on DGX Systems, the world's most powerful AI systems. This is a presentation I gave at GTC Israel in 2018.
Enabling Artificial Intelligence - Alison B. Lowndes - WithTheBest
An overview and update of our hardware and software offerings, and the support provided to the machine and deep learning community around the world.
Alison B. Lowndes, AI DevRel, EMEA
Accelerate AI w/ Synthetic Data using GANs - Renee Yao
Presentation at the Strata Data Conference, September 2018.
Description:
Synthetic data will drive the next wave of deployment and application of deep learning in the real world across a variety of problems involving speech recognition, image classification, object recognition and language. All industries and companies will benefit: synthetic data can create conditions through simulation instead of authentic situations (virtual worlds let you avoid the cost of damage, spare human injuries, and sidestep other real-world factors), and it offers an unparalleled ability to test products, and interactions with them, in any environment.
Join us for this introductory session to learn more about how generative adversarial networks (GANs) are successfully used to improve data generation. We will cover specific real-world examples where customers have deployed GANs to solve challenges in the healthcare, space, transportation, and retail industries.
Renee Yao explains how generative adversarial networks (GAN) are successfully used to improve data generation and explores specific real-world examples where customers have deployed GANs to solve challenges in healthcare, space, transportation, and retail industries.
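As a toy illustration of the adversarial setup behind GANs (not the production systems described in these talks), the sketch below pits a one-parameter linear generator against a logistic discriminator on 1-D Gaussian data. All hyperparameters are illustrative assumptions; real GANs use deep networks and a framework's autograd rather than hand-written gradients.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(u):
    return 1.0 / (1.0 + np.exp(-u))

real_mean = 3.0          # the distribution the generator must imitate
lr, steps, batch = 0.05, 2000, 64
a, b = 1.0, 0.0          # generator: g(z) = a*z + b
w, c = 0.1, 0.0          # discriminator: D(x) = sigmoid(w*x + c)

for _ in range(steps):
    x = rng.normal(real_mean, 1.0, batch)   # real samples
    z = rng.normal(size=batch)              # latent noise
    g = a * z + b                           # fake samples

    # Discriminator ascends log D(x) + log(1 - D(g)).
    d_real, d_fake = sigmoid(w * x + c), sigmoid(w * g + c)
    w += lr * np.mean((1 - d_real) * x - d_fake * g)
    c += lr * np.mean((1 - d_real) - d_fake)

    # Generator ascends the non-saturating objective log D(g).
    d_fake = sigmoid(w * g + c)
    a += lr * np.mean((1 - d_fake) * w * z)
    b += lr * np.mean((1 - d_fake) * w)

# "Synthetic data" drawn from the trained generator.
synthetic = a * rng.normal(size=1000) + b
```

The same alternating two-player update, scaled to deep networks and image data, is what produces the synthetic datasets the talk describes.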
Nvidia Deep Learning Solutions - Alex Sabatier - Sri Ambati
Alex Sabatier from Nvidia talks about the future of deep learning from a chipmaker's perspective.
- Powered by the open source machine learning software H2O.ai. Contributors welcome at: https://github.com/h2oai
- To view videos on H2O open source machine learning software, go to: https://www.youtube.com/user/0xdata
Implementing AI: High Performance Architectures: A Universal Accelerated Comp... - KTN
The Implementing AI: High Performance Architectures webinar, hosted by KTN and eFutures, was the fourth event in the Implementing AI webinar series.
The focus of the webinar was the impact of processing AI data on data centres - particularly from the technology perspective. Timothy Lanfear, Director of Solution Architecture and Engineering EMEA, NVIDIA, presented on a Universal Accelerated Computing Platform.
Supercomputing has swept rapidly from the far edges of science to the heart of our everyday lives. And propelling it forward – bringing it into the mobile phone already in your pocket and the car in your driveway – is GPU acceleration, NVIDIA CEO Jen-Hsun Huang told a packed house at a rollicking event kicking off this week’s SC15 annual supercomputing show in Austin. The event draws 10,000 researchers, national lab directors and others from around the world.
Axel Koehler from Nvidia presented this deck at the 2016 HPC Advisory Council Switzerland Conference.
“Accelerated computing is transforming the data center that delivers unprecedented throughput, enabling new discoveries and services for end users. This talk will give an overview about the NVIDIA Tesla accelerated computing platform including the latest developments in hardware and software. In addition it will be shown how deep learning on GPUs is changing how we use computers to understand data.”
In related news, the GPU Technology Conference takes place April 4-7 in Silicon Valley.
Watch the video presentation: http://insidehpc.com/2016/03/tesla-accelerated-computing/
See more talks in the Swiss Conference Video Gallery:
http://insidehpc.com/2016-swiss-hpc-conference/
Sign up for our insideHPC Newsletter:
http://insidehpc.com/newsletter
Hire a Machine to Code - Michael Arthur Bucko & Aurélien Nicolas - WithTheBest
Bucko and Nicolas share their vision and products, and explain what Deckard is, offering insights from the software development team. They believe coding can solve the problems we face; specifically, they teach source coding as the solution and hope it can help fix human errors.
Michael Arthur Bucko & Aurélien Nicolas
NVIDIA CEO Jensen Huang Presentation at Supercomputing 2019 - NVIDIA
Broadening support for GPU-accelerated supercomputing to a fast-growing new platform, NVIDIA founder and CEO Jensen Huang introduced a reference design for building GPU-accelerated Arm servers, with wide industry backing.
Data Science Week 2016. NVIDIA. "Platforms and Tools for Implementing Syste... - Newprolab
Anton Dzhoraev, Senior Enterprise Business Development Manager, NVIDIA. If you would like access to the video of the talk, write to us at datascienceweek2016@gmail.com.
NVIDIA DEEP LEARNING INFERENCE PLATFORM PERFORMANCE STUDY | TECHNICAL OVERVIEW
Introduction

Artificial intelligence (AI), the dream of computer scientists for over half a century, is no longer science fiction—it is already transforming every industry. AI is the use of computers to simulate human intelligence. AI amplifies our cognitive abilities—letting us solve problems where the complexity is too great, the information is incomplete, or the details are too subtle and require expert training.

While the machine learning field has been active for decades, deep learning (DL) has boomed over the last five years. In 2012, Alex Krizhevsky of the University of Toronto won the ImageNet image recognition competition using a deep neural network trained on NVIDIA GPUs—beating all the human expert algorithms that had been honed for decades. That same year, recognizing that larger networks can learn more, Stanford’s Andrew Ng and NVIDIA Research teamed up to develop a method for training networks using large-scale GPU computing systems. These seminal papers sparked the “big bang” of modern AI, setting off a string of “superhuman” achievements. In 2015, Google and Microsoft both beat the best human score in the ImageNet challenge. In 2016, DeepMind’s AlphaGo recorded its historic win over Go champion Lee Sedol and Microsoft achieved human parity in speech recognition.

GPUs have proven to be incredibly effective at solving some of the most complex problems in deep learning, and while the NVIDIA deep learning platform is the standard industry solution for training, its inferencing capability is not as widely understood. Some of the world’s leading enterprises from the data center to the edge have built their inferencing solution on NVIDIA GPUs. Some examples include:
Harnessing the virtual realm for successful real world artificial intelligence - Alison B. Lowndes
Artificial intelligence is impacting all areas of society, from healthcare and transportation to smart cities and energy. This talk covers how NVIDIA invests both in internal pure research and in accelerated computation to enable its diverse customer base across gaming and extended reality, graphics, AI, robotics, simulation, high performance scientific computing, healthcare and more. You will be introduced to the GPU computing platform and shown successfully deployed real-world applications, as well as a glimpse of the current state of the art across academia, enterprise and startups.
Semiconductors are the driving force behind the AI evolution and enable its adoption across various application areas ranging from connected and automated driving to smart healthcare and wearables. Given that, electronics research, design and manufacturing communities around the world are increasingly investing in specialized AI chips providing less latency, greater processing power, higher bandwidth and faster performance. AI also attracts new technology players to invest in making their own specialized AI chips, changing the electronics manufacturing landscape and moving the AI technology towards machine learning, deep learning and neural networks.
Fórum E-Commerce Brasil | NVIDIA technologies applied to e-commerce. Far beyond... - E-Commerce Brasil
NVIDIA technologies applied to e-commerce. Far beyond the hardware.
Jomar Silva
Developer relations manager for Latin America - NVIDIA
https://eventos.ecommercebrasil.com.br/forum/
NVIDIA compute GPUs and software toolkits are key drivers behind major advancements in machine learning. Of particular interest is a technique called "deep learning", which utilizes Convolutional Neural Networks (CNNs); these have had landslide success in computer vision and widespread adoption in a variety of fields such as autonomous vehicles, cyber security, and healthcare. This talk presents a high-level introduction to deep learning, discussing core concepts, success stories, and relevant use cases. Additionally, we provide an overview of essential frameworks and workflows for deep learning. Finally, we explore emerging domains for GPU computing such as large-scale graph analytics and in-memory databases.
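To make the core concept concrete, here is a minimal NumPy sketch of the convolution operation at the heart of a CNN layer; the tiny image and edge-detector kernel are illustrative, and real frameworks run this same operation on the GPU across many channels at once.

```python
import numpy as np

def conv2d(image, kernel):
    """Valid-mode 2D cross-correlation: the core operation of a CNN layer."""
    kh, kw = kernel.shape
    oh = image.shape[0] - kh + 1
    ow = image.shape[1] - kw + 1
    out = np.empty((oh, ow))
    for i in range(oh):
        for j in range(ow):
            # Each output cell is the kernel applied to one image patch.
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

# A vertical-edge detector applied to a toy image whose right half is bright.
image = np.array([[0, 0, 1, 1],
                  [0, 0, 1, 1],
                  [0, 0, 1, 1]], dtype=float)
kernel = np.array([[-1, 1],
                   [-1, 1]], dtype=float)
feature_map = conv2d(image, kernel)   # peaks where the dark/bright edge sits
```

During training, a CNN learns kernels like this one automatically instead of having them hand-designed.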
https://tech.rakuten.co.jp/
As artificial intelligence sweeps across the technology landscape, NVIDIA unveiled today at its annual GPU Technology Conference a series of new products and technologies focused on deep learning, virtual reality and self-driving cars.
Introduction to Software Defined Visualization (SDVis) - Intel® Software
Software defined visualization (SDVis) is an open-source initiative from Intel and industry collaborators. It improves the visual fidelity, performance, and efficiency of prominent visualization solutions, supporting the rapidly growing big-data use on workstations and through high-performance computing (HPC) on supercomputing clusters, without the memory limitations and cost of GPU-based solutions.
Forwarding Plane Opportunities: How to Accelerate Deployment - Charo Sanchez
Intel® Select Solution for NFVI Forwarding Platform (NFVI FP) is an enhanced NFVI solution for 4G or 5G core User Plane Functions (UPF), broadband use cases, such as virtual Broadband Network Gateway (vBNG), network services such as virtual Evolved Packet Core (vEPC), IPsec Gateways (vSecGW), and cable use cases such as virtual Cable Modem Termination System (vCMTS) that demand high performance and packet processing throughput. The Advantech SKY-8101D server is a verified Intel Select Solution for NFVI FP plus, base and controller node with Red Hat Enterprise Linux and Red Hat OpenStack tuned to meet a performance threshold capable of serving large numbers of subscribers thanks to a more efficient use of the infrastructure for lower TCO.
H2O World 2017 Keynote - Jim McHugh, VP & GM of Data Center, NVIDIA - Sri Ambati
Presented at #H2OWorld 2017 in Mountain View, CA.
Enjoy the recording: https://youtu.be/NyaJ7uDroww.
Learn more about H2O.ai: https://www.h2o.ai/.
Follow @h2oai: https://www.twitter.com/h2oai.
Backend.AI Technical Introduction (19.09 / 2019 Autumn) - Lablup Inc.
This slide introduces technical specs and details about Backend.AI 19.09.
* On-premise clustering / container orchestration / scaling on cloud
* Container-level fractional GPU technology that lets one physical GPU be shared as multiple virtual GPUs across many containers at the same time.
* NVIDIA GPU Cloud integrations
* Enterprise features
Similar to Dell and NVIDIA for Your AI workloads in the Data Center (20)
Medical imaging refers to several different technologies that are used to view the human body in order to diagnose, monitor, or treat medical conditions. Today, GPUs are found in almost all imaging modalities, including CT, MRI, X-ray, and ultrasound, bringing compute capabilities to edge devices. With the boom of deep learning research in medical imaging, more efficient and improved approaches are being developed to enable AI-assisted workflows.
Women L.E.A.D. Toastmasters Appreciation Event - Renee Yao
This slideshare is used to facilitate the Women L.E.A.D. toastmasters public speaking appreciation event: https://womenleadtm.com/meetings/happy-hour-in-person-optional/
This slide deck is put together to support Women L.E.A.D. Toastmasters workshop, How to be An Effective Mentor. YouTube: https://www.youtube.com/watch?v=RHH6-cE2zKM. Meeting: https://womenleadtm.com/meetings/workshop-how-to-be-an-effective-mentor/
Why Toastmasters and How it Helps Your Daily Job - Renee Yao
This slide deck is created for Women L.E.A.D. Toastmasters workshop on May 7th 2021. Recording: https://www.youtube.com/watch?v=3vZqVKWmrCw
Meeting Notes:
https://womenleadtm.com/meetings/workshop-why-toastmasters/
AI in Healthcare | Future of Smart Hospitals - Renee Yao
In this talk, I discuss how NVIDIA healthcare AI software and hardware were used to support healthcare AI startups' innovation. Three startups are featured: Caption Health, Artisight, and Hyperfine. Audience: healthcare systems CXOs.
This deck helps public speakers give good and effective evaluations to others, provides a step-by-step guide on how to win an evaluation contest in a Toastmasters competition, and explains why evaluation matters in our daily life.
Startups Step Up - how healthcare ai startups are taking action during covid-... - Renee Yao
All around the world, people are facing unprecedented challenges and uncertainties as a result of COVID-19. At the NVIDIA Inception program, a virtual startup incubation program that hosts 5,000+ AI startups, we see an army of healthcare AI startups that have mobilized to address this global health crisis. This webinar shares real-world examples of how each offering plays a critical role during this pandemic.
Live event: https://www.meetup.com/Women-in-Big-Data-Meetup/events/270191555/?action=rsvp&response=3.
YouTube Link: https://www.youtube.com/watch?v=QWkKINi8u4o&feature=youtu.be
This deck summarizes NetApp Insights 2018 joint ONTAP AI activities with NVIDIA and NetApp. List of activities includes Women In Tech Panel, Fireside chat, Spotlight sessions, the Cube live interview, and Partner Success video.
This is a supporting deck for my personal blog, "A Toast to My Public Speaking Journey". Link can be found here: https://wordpress.com/post/reneeyao.wordpress.com/27
GridMate - End to end testing is a critical piece to ensure quality and avoid... - ThomasParaiso2
End to end testing is a critical piece to ensure quality and avoid regressions. In this session, we share our journey building an E2E testing pipeline for GridMate components (LWC and Aura) using Cypress, JSForce, FakerJS…
Epistemic Interaction - tuning interfaces to provide information for AI support - Alan Dix
Paper presented at SYNERGY workshop at AVI 2024, Genoa, Italy. 3rd June 2024
https://alandix.com/academic/papers/synergy2024-epistemic/
As machine learning integrates deeper into human-computer interactions, the concept of epistemic interaction emerges, aiming to refine these interactions to enhance system adaptability. This approach encourages minor, intentional adjustments in user behaviour to enrich the data available for system learning. This paper introduces epistemic interaction within the context of human-system communication, illustrating how deliberate interaction design can improve system understanding and adaptation. Through concrete examples, we demonstrate the potential of epistemic interaction to significantly advance human-computer interaction by leveraging intuitive human communication strategies to inform system design and functionality, offering a novel pathway for enriching user-system engagements.
DevOps and Testing slides at DASA Connect - Kari Kakkonen
Slides by me and Rik Marselis at the DASA Connect conference, 30.5.2024. We discuss what testing is, then what agile testing is, and finally what testing in DevOps looks like. We closed with a lovely workshop in which participants explored different ways to think about quality and testing in different parts of the DevOps infinity loop.
Pushing the limits of ePRTC: 100ns holdover for 100 days - Adtran
At WSTS 2024, Alon Stern explored the topic of parametric holdover and explained how recent research findings can be implemented in real-world PNT networks to achieve 100 nanoseconds of accuracy for up to 100 days.
Removing Uninteresting Bytes in Software Fuzzing - Aftab Hussain
Imagine a world where software fuzzing, the process of mutating bytes in test seeds to uncover hidden and erroneous program behaviors, becomes faster and more effective. A lot depends on the initial seeds, which can significantly dictate the trajectory of a fuzzing campaign, particularly in terms of how long it takes to uncover interesting behaviour in your code. We introduce DIAR, a technique designed to speedup fuzzing campaigns by pinpointing and eliminating those uninteresting bytes in the seeds. Picture this: instead of wasting valuable resources on meaningless mutations in large, bloated seeds, DIAR removes the unnecessary bytes, streamlining the entire process.
In this work, we equipped AFL, a popular fuzzer, with DIAR and examined two critical Linux libraries -- Libxml's xmllint, a tool for parsing XML documents, and Binutils' readelf, an essential debugging and security analysis command-line tool used to display detailed information about ELF (Executable and Linkable Format) files. Our preliminary results show that AFL+DIAR not only discovers new paths more quickly but also achieves higher coverage overall. This work thus showcases how starting with lean and optimized seeds can lead to faster, more comprehensive fuzzing campaigns -- and DIAR helps you find such seeds.
- These are slides of the talk given at IEEE International Conference on Software Testing Verification and Validation Workshop, ICSTW 2022.
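The seed-trimming idea can be sketched in a few lines. This is not the DIAR implementation itself, just a greedy illustration of the principle it describes: drop any byte whose removal leaves observed behavior unchanged. The `coverage` function here is a toy stand-in (the marker list is an assumption) for real execution coverage.

```python
def coverage(data: bytes) -> frozenset:
    # Toy stand-in for execution coverage: which "interesting" markers
    # the program under test would react to in this input.
    markers = [b"ELF", b"<xml", b"\x7f"]
    return frozenset(m for m in markers if m in data)

def trim_seed(seed: bytes) -> bytes:
    """Greedily drop every byte whose removal leaves coverage unchanged."""
    base = coverage(seed)
    i = 0
    while i < len(seed):
        candidate = seed[:i] + seed[i + 1:]
        if coverage(candidate) == base:
            seed = candidate      # byte was uninteresting; drop it
        else:
            i += 1                # byte matters; keep it and move on
    return seed
```

Starting from a bloated seed, the loop converges on a lean seed that still exercises the same behavior, so the fuzzer wastes no mutations on dead weight.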
Why You Should Replace Windows 11 with Nitrux Linux 3.5.0 for enhanced perfor... - SOFTTECHHUB
The choice of an operating system plays a pivotal role in shaping our computing experience. For decades, Microsoft's Windows has dominated the market, offering a familiar and widely adopted platform for personal and professional use. However, as technological advancements continue to push the boundaries of innovation, alternative operating systems have emerged, challenging the status quo and offering users a fresh perspective on computing.
One such alternative that has garnered significant attention and acclaim is Nitrux Linux 3.5.0, a sleek, powerful, and user-friendly Linux distribution that promises to redefine the way we interact with our devices. With its focus on performance, security, and customization, Nitrux Linux presents a compelling case for those seeking to break free from the constraints of proprietary software and embrace the freedom and flexibility of open-source computing.
State of ICS and IoT Cyber Threat Landscape Report 2024 preview - Prayukth K V
The IoT and OT threat landscape report was prepared by the Threat Research Team at Sectrio using data from Sectrio's cyber threat intelligence farming facilities spread across over 85 cities around the world. In addition, Sectrio runs AI-based advanced threat and payload engagement facilities that serve as sinks to attract and engage sophisticated threat actors and newer malware, including new variants and latent threats at an earlier stage of development.
The latest edition of the OT/ICS and IoT security Threat Landscape Report 2024 also covers:
State of global ICS asset and network exposure
Sectoral targets and attacks as well as the cost of ransom
Global APT activity, AI usage, actor and tactic profiles, and implications
Rise in volumes of AI-powered cyberattacks
Major cyber events in 2024
Malware and malicious payload trends
Cyberattack types and targets
Vulnerability exploit attempts on CVEs
Attacks on counties – USA
Expansion of bot farms – how, where, and why
In-depth analysis of the cyber threat landscape across North America, South America, Europe, APAC, and the Middle East
Why are attacks on smart factories rising?
Cyber risk predictions
Axis of attacks – Europe
Systemic attacks in the Middle East
Download the full report from here:
https://sectrio.com/resources/ot-threat-landscape-reports/sectrio-releases-ot-ics-and-iot-security-threat-landscape-report-2024/
Generative AI Deep Dive: Advancing from Proof of Concept to Production - Aggregage
Join Maher Hanafi, VP of Engineering at Betterworks, in this new session where he'll share a practical framework to transform Gen AI prototypes into impactful products! He'll delve into the complexities of data collection and management, model selection and optimization, and ensuring security, scalability, and responsible use.
Climate Impact of Software Testing at Nordic Testing Days - Kari Kakkonen
My slides at Nordic Testing Days 6.6.2024
The talk discusses the climate impact and sustainability of software testing. ICT and testing must carry their part of the global responsibility to help with climate warming. We can minimize the carbon footprint, but we can also have a carbon handprint, a positive impact on the climate. Sustainability can be added to the quality characteristics and then measured continuously. Test environments can be used less, at smaller scale, and on demand. Test techniques can be used to optimize or minimize the number of tests. Test automation can be used to speed up testing.
GraphSummit Singapore | The Art of the Possible with Graph - Q2 2024 - Neo4j
Neha Bajwa, Vice President of Product Marketing, Neo4j
Join us as we explore breakthrough innovations enabled by interconnected data and AI. Discover firsthand how organizations use relationships in data to uncover contextual insights and solve our most pressing challenges – from optimizing supply chains, detecting fraud, and improving customer experiences to accelerating drug discoveries.
A tale of scale & speed: How the US Navy is enabling software delivery from l... - sonjaschweigert1
Rapid and secure feature delivery is a goal across every application team and every branch of the DoD. The Navy’s DevSecOps platform, Party Barge, has achieved:
- Reduction in onboarding time from 5 weeks to 1 day
- Improved developer experience and productivity through actionable findings and reduction of false positives
- Maintenance of superior security standards and inherent policy enforcement with Authorization to Operate (ATO)
Development teams can ship efficiently and ensure applications are cyber ready for Navy Authorizing Officials (AOs). In this webinar, Sigma Defense and Anchore will give attendees a look behind the scenes and demo secure pipeline automation and security artifacts that speed up application ATO and time to production.
We will cover:
- How to remove silos in DevSecOps
- How to build efficient development pipeline roles and component templates
- How to deliver security artifacts that matter for ATO’s (SBOMs, vulnerability reports, and policy evidence)
- How to streamline operations with automated policy checks on container images
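One common way to wire in such automated checks is sketched below. This is a generic CI job, not the Party Barge implementation; the job name and image reference are assumptions, and it uses Anchore's open-source tools, syft for SBOM generation and grype for vulnerability gating.

```yaml
# Generic CI job sketch (GitLab-CI-style syntax; adapt to your runner).
sbom-and-scan:
  stage: security
  script:
    # Generate the SBOM security artifact for the built image.
    - syft registry.example.com/app:latest -o spdx-json > sbom.spdx.json
    # Fail the pipeline if vulnerabilities at or above "high" severity are found.
    - grype registry.example.com/app:latest --fail-on high
  artifacts:
    paths:
      - sbom.spdx.json
```

The SBOM and scan report become the security artifacts an Authorizing Official can review, while the severity gate enforces policy on every build.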
GDG Cloud Southlake #33: Boule & Rebala: Effective AppSec in SDLC using Deplo... - James Anderson
Effective Application Security in Software Delivery lifecycle using Deployment Firewall and DBOM
The modern software delivery process (or the CI/CD process) includes many tools, distributed teams, open-source code, and cloud platforms. Constant focus on speed to release software to market, along with the traditional slow and manual security checks has caused gaps in continuous security as an important piece in the software supply chain. Today organizations feel more susceptible to external and internal cyber threats due to the vast attack surface in their applications supply chain and the lack of end-to-end governance and risk management.
The software team must secure its software delivery process to avoid vulnerability and security breaches. This needs to be achieved with existing tool chains and without extensive rework of the delivery processes. This talk will present strategies and techniques for providing visibility into the true risk of the existing vulnerabilities, preventing the introduction of security issues in the software, resolving vulnerabilities in production environments quickly, and capturing the deployment bill of materials (DBOM).
Speakers:
Bob Boule
Robert Boule is a technology enthusiast with PASSION for technology and making things work along with a knack for helping others understand how things work. He comes with around 20 years of solution engineering experience in application security, software continuous delivery, and SaaS platforms. He is known for his dynamic presentations in CI/CD and application security integrated in software delivery lifecycle.
Gopinath Rebala
Gopinath Rebala is the CTO of OpsMx, where he has overall responsibility for the machine learning and data processing architectures for Secure Software Delivery. Gopi also has a strong connection with our customers, leading design and architecture for strategic implementations. Gopi is a frequent speaker and well-known leader in continuous delivery and integrating security into software delivery.
The Art of the Pitch: WordPress Relationships and SalesLaura Byrne
Clients don’t know what they don’t know. What web solutions are right for them? How does WordPress come into the picture? How do you make sure you understand scope and timeline? What do you do if sometime changes?
All these questions and more will be explored as we talk about matching clients’ needs with what your agency offers without pulling teeth or pulling your hair out. Practical tips, and strategies for successful relationship building that leads to closing the deal.
Smart TV Buyer Insights Survey 2024 by 91mobiles.pdf91mobiles
91mobiles recently conducted a Smart TV Buyer Insights Survey in which we asked over 3,000 respondents about the TV they own, aspects they look at on a new TV, and their TV buying preferences.
FIDO Alliance Osaka Seminar: Passkeys at Amazon.pdf
Dell and NVIDIA for Your AI workloads in the Data Center
1. Helge Gose, NVIDIA Solution Architect, June 7, 2018
DELL AND NVIDIA FOR YOUR AI
WORKLOADS IN THE DATA CENTER
2. AGENDA
What is Deep Learning?
Volta and NVLINK
Inference to Training – Dell solutions
3. THE TIME HAS COME FOR GPU COMPUTING
[Chart, 1980-2020: single-threaded CPU performance growth has slowed from 1.5X per year to 1.1X per year, while GPU-accelerated computing continues to scale]
4. DEEP LEARNING IS SWEEPING ACROSS INDUSTRIES
- Internet Services: image/video classification, speech recognition, natural language processing
- Medicine: cancer cell detection, diabetic grading, drug discovery
- Media & Entertainment: video captioning, content-based search, real-time translation
- Security & Defense: face recognition, video surveillance, cyber security
- Autonomous Machines: pedestrian detection, lane tracking, traffic sign recognition
6. A NEW COMPUTING MODEL
Algorithms that learn from examples
TRADITIONAL APPROACH (MACHINE LEARNING)
- Requires domain experts
- Time-consuming experimentation
- Custom algorithms
- Not scalable to new problems
DEEP NEURAL NETWORKS (DEEP LEARNING)
- Learn from data
- Easy to extend
- Accelerated with GPUs
[Diagram: both approaches classify an input image as Car / Vehicle / Coupe]
7. WHAT PROBLEM ARE YOU SOLVING?
Defining the AI/DL Task

BUSINESS QUESTION | AI/DL TASK | HEALTHCARE | RETAIL | FINANCE
Is "it" present or not? | Detection | Cancer Detection | Targeted Ads | Cybersecurity
What type of thing is "it"? | Classification | Image Classification | Basket Analysis | Credit Scoring
To what extent is "it" present? | Segmentation | Tumor Size/Shape Analysis | Build 360º Customer View | Credit Risk Analysis
What is the likely outcome? | Prediction | Survivability Prediction | Sentiment & Behavior Recognition | Fraud Detection
What will likely satisfy the objective? | Recommendations | Therapy Recommendation | Recommendation Engine | Algorithmic Trading

INPUTS: Text, Data, Images, Audio, Video
9. TESLA V100
WORLD’S MOST ADVANCED DATA CENTER GPU
5,120 CUDA cores
640 NEW Tensor cores
7.8 FP64 TFLOPS | 15.7 FP32 TFLOPS | 125 Tensor TFLOPS
20MB SM RF | 16MB Cache
16GB/ 32GB HBM2 @ 900GB/s | 300GB/s NVLink
10. REVOLUTIONARY AI PERFORMANCE
3X Faster DL Training Performance (3X Reduction in Time to Train over P100)
[Chart: relative time to train, neural machine translation (LSTM) for 13 epochs, German->English, WMT15 subset: 2x Xeon E5-2699 v4 CPU = 15 days, P100 = 18 hours, V100 = 6 hours]
Over 80X DL Training Performance in 3 Years
[Chart: GoogleNet training speedup vs 1x K80 with cuDNN2: 1x K80 + cuDNN2 (Q1 2015), 4x M40 + cuDNN3 (Q3 2015), 8x P100 + cuDNN6 (Q2 2016), 8x V100 + cuDNN7 (Q2 2017), reaching over 80x]
11. END-TO-END PRODUCT FAMILY
TRAINING
- Data Center: Tesla V100, Dell PowerEdge C4140
- Desktop: TITAN V, DGX Station
INFERENCE
- Data Center: Tesla V100, Tesla P4
- Embedded: Jetson (JetPack SDK)
- Automotive: Drive PX (DriveWorks SDK)
12. POWERING THE DEEP LEARNING ECOSYSTEM
NVIDIA SDK Accelerates Every Major Framework
COMPUTER VISION
OBJECT DETECTION IMAGE CLASSIFICATION
SPEECH & AUDIO
VOICE RECOGNITION LANGUAGE TRANSLATION
NATURAL LANGUAGE PROCESSING
RECOMMENDATION ENGINES SENTIMENT ANALYSIS
DEEP LEARNING FRAMEWORKS
Mocha.jl
NVIDIA DEEP LEARNING SDK
developer.nvidia.com/deep-learning-software
14. PowerEdge C4140 Server
Faster time to insights with an ultra-dense, accelerator-optimized server platform
THE BEDROCK OF THE MODERN DATACENTER
TARGETED WORKLOADS
- Machine Learning and Deep Learning
- Technical Computing (Research / Life Sciences)
- Low-latency, high-performance applications (FSI)
Key Capabilities
- Unthrottled performance and superior thermal efficiency with patent-pending interleaved GPU system design*
- No-compromise (CPU + GPU) acceleration technology up to 500 TFLOPS/U+ using the NVIDIA® Tesla™ V100 with NVLink™
- 2.4KW PSUs help future-proof for next-generation GPUs
- Simplified deployment with pre-configured Ready Bundles
Xeon Scalable Processors + Tesla GPUs
* Based on Dell internal analyses and Principled Technologies Report, Jan 2015.
+ Based on V100 NVLink Tensor Core performance
15. C4140: Now with NVIDIA® Volta™ and NVLink™
Faster time to insights with an ultra-dense, accelerator-optimized server platform
- NVIDIA® Volta GPU has over 21 billion transistors and 640 Tensor Cores to deliver 100+ TFLOPS
- NVIDIA® NVLink™ is a high-bandwidth interconnect enabling ultra-fast communication between CPU and GPU and between GPUs
- Volta V100 delivers a 2.6X average speedup over Pascal P100 for DL workloads*
- Delivers 44X more throughput than CPU nodes, with lower latency
- NVLink is 5X-10X faster than the traditional PCIe Gen3 interconnect
- Volta-optimized software for important HPC applications
* Source: NVIDIA® Volta benchmarks for multiple applications, 2017
16. C4140 and NVLink™
NVLink Topology vs. PCIe Topology
- NVLink signals at 25 Gbps per lane versus PCIe Gen3 at 8 GT/s per lane
- Higher clock speed: roughly 7% performance increase
- Peer-to-peer GPU communication: a further 7%+ performance increase
18. INDUSTRY'S #1 Server Portfolio*
PowerEdge: Towers | Racks | Modular | Extreme Scale Infrastructure
OpenManage Enterprise: intelligent, automated systems management
Now introducing the C4140
* Based on units sold (tie). IDC Worldwide Quarterly Server Tracker, Q1-Q3, 2016.
19. ACCELERATE YOUR BUSINESS ON PowerEdge
- ADAPT AND SCALE to your dynamic business needs by leveraging Scalable Business Architecture
- FREE UP SKILLED RESOURCES and focus on core business with Intelligent Automation
- PROTECT YOUR CUSTOMERS and your business robustly with Integrated Security
First, let’s start with some definitions…
AI is a broad field of study focused on using computers to do things that require human-level intelligence. It's been around since the 1950s, playing games like tic-tac-toe and checkers, and inspiring scary sci-fi movies. But it was limited in practical applications…
ML is an approach to AI that uses statistical techniques to construct a model from observed data. It generally relies on human-defined classifiers or "feature extractors" that can be as simple as a linear regression, or the slightly more complicated "Bag of Words" analysis technique that made email SPAM filters possible.
This was really handy in the late 1980’s when lots of email started showing up in your inbox
But then we invented smartphones, webcams, social media services, and all kinds of sensors that generate huge mountains of data and the new challenge of understanding and extracting insights from all this “big data”.
DL is a ML technique that automates the creation of feature extractors using large amounts of data to train complex “deep neural networks”
DNNs are capable of achieving human-level accuracy for many tasks, but require tremendous computational power to train
Several years ago, researchers started applying DNNs in a variety of areas and reporting amazing results…
==============
Ref. https://en.wikipedia.org/wiki/Naive_Bayes_spam_filtering
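The bag-of-words idea behind those early spam filters can be illustrated with a tiny naive Bayes classifier. This is a toy sketch with made-up training messages, not production filter code:

```python
from collections import Counter
import math

# Toy training data: (message, is_spam)
train = [
    ("win money now", True),
    ("free money offer", True),
    ("claim your free prize", True),
    ("meeting at noon", False),
    ("project status update", False),
    ("lunch with the team", False),
]

def fit(samples):
    """Build per-class bag-of-words counts and class totals."""
    counts = {True: Counter(), False: Counter()}
    totals = {True: 0, False: 0}
    for text, label in samples:
        counts[label].update(text.split())
        totals[label] += 1
    return counts, totals

def predict(text, counts, totals):
    """Naive Bayes with Laplace (+1) smoothing over the shared vocabulary."""
    vocab = set(counts[True]) | set(counts[False])
    scores = {}
    for label in (True, False):
        score = math.log(totals[label] / sum(totals.values()))  # log prior
        n = sum(counts[label].values())
        for w in text.split():
            score += math.log((counts[label][w] + 1) / (n + len(vocab)))
        scores[label] = score
    return scores[True] > scores[False]

counts, totals = fit(train)
print(predict("free money", counts, totals))    # True (spam-like words)
print(predict("team meeting", counts, totals))  # False (ham-like words)
```

The model never sees a hand-written rule like "messages containing 'free' are spam"; it derives the word statistics from labelled examples, which is exactly the feature-extractor approach described above.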
Now that we’ve seen a few examples of applications that benefit from Deep Learning, and have a basic understanding of why we are seeing such rapid adoption across a wide range of use cases…
Let’s explore how deep learning works by comparing it with earlier approaches to machine learning.
Consider the traditional approach to performing computer vision tasks such as image classification or object detection.
A domain expert trained in computer vision comes up with a set of rules to extract features from the image – such as edges, corners, color information, … maybe even counting the number of wheels, headlights, etc. The expert must figure out which features in the data are important, implement these rules by hand-writing custom software routines, and figure out how all the rules should be connected in relation to each other to perform the task. As you can imagine, this can be tedious and require lots of trial and error. And if the data changes to incorporate different types of objects or environments, then it’s back to the drawing board. All of this results in tons of source code to write, debug and maintain.
[next]
In contrast, when you use the deep learning approach, the neural network model learns the rules for performing the task directly from the data. No hand-written custom feature extractors are required. You simply feed the deep neural network thousands of examples, which serve as the “experience” from which it learns how to perform the task automatically.
The advantages of using deep learning include the ability to extend and adapt to new data simply by retraining the network, the immense performance improvements from using NVIDIA GPU accelerators, and opening up the opportunity for more people to develop AI applications
As a result, the deep learning approach can be more accurate, with significantly less human effort.
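The contrast between hand-written rules and learning from examples can be shown with the simplest possible trainable unit. The snippet below is an illustrative toy (a single perceptron learning the AND function), not how real deep networks are trained, but the principle is the same: adjust weights from labelled examples instead of hand-coding the rule.

```python
# Instead of hand-writing a rule for AND, let one artificial
# neuron learn it from labelled examples.
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]

w = [0.0, 0.0]  # weights
b = 0.0         # bias
lr = 0.1        # learning rate

def predict(x):
    """Fire (output 1) if the weighted sum of inputs exceeds zero."""
    return 1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0

# Perceptron learning rule: nudge weights on every mistake.
for epoch in range(20):
    for x, target in data:
        error = target - predict(x)
        w[0] += lr * error * x[0]
        w[1] += lr * error * x[1]
        b += lr * error

print([predict(x) for x, _ in data])  # learned the AND rule: [0, 0, 0, 1]
```

Deep networks stack millions of such units and learn the intermediate features too, which is why they need the GPU compute discussed in this deck.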
It’s worth noting that some people who are comfortable using previous approaches to machine learning can find it challenging to apply deep learning, since many of the instincts and assumptions from their own hard-won experience need to change in order to effectively develop deep learning applications.
And deep learning works not only for computer vision tasks such as image classification, object detection, and image segmentation… But also for non-visual tasks such as fraud detection, speech recognition, behavior prediction, and product recommendations.
It can be helpful to think about deep learning as a way to map samples from an input domain to an output domain.
The input domain can be text data such as log files or financial data, images of pretty much anything, audio or other signals, or video streams (which are really just a sequence of images with synchronized audio). It can even be three-dimensional images or datasets collected from medical imaging devices, geophysical analyses, cosmological models, or molecular dynamics simulations.
The output domain is determined by the question you want to ask of the input data, and the question itself indicates type of deep learning task you need to perform in order to map the input domain to the output domain.
This table shows a sampling of use cases where deep learning can be applied in healthcare.
For example:
If the question requires a Yes/No answer telling you whether something is present, the task is Detection.
If the question requires an answer describing what types of things are in each input, the task is Classification.
If the question requires a shape or volume as an answer, the task is Segmentation.
And so on…
Depending on the application, you may need to use a combination of tasks to achieve more sophisticated outputs.
For example, to automatically label all the faces in your family photos you’d need to first detect whether and where there are faces in the picture, and then apply facial recognition (which is a form of classification) to determine the name of the person associated with each face.
And for automatic language translation you could use speech-to-text (classification) followed by translation of text in one language to text in another language (prediction) and then speech synthesis (prediction).
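A pipeline like the face-labelling example can be sketched as composed stages. Everything below is a hypothetical stand-in (the functions return canned values); in practice each stage would be a trained detection or classification model:

```python
# Hypothetical two-stage pipeline: detection followed by classification.

def detect_faces(photo):
    """Detection: where are the faces? Returns bounding boxes."""
    # Stand-in: pretend every photo contains one face region.
    return [{"x": 10, "y": 20, "w": 64, "h": 64}]

def recognize_face(photo, box):
    """Classification: whose face is in this box?"""
    return "Grandma"  # stand-in for a classifier's output

def label_faces(photo):
    labels = []
    for box in detect_faces(photo):                           # task 1: detection
        labels.append((box, recognize_face(photo, box)))      # task 2: classification
    return labels

print(label_faces("family_photo.jpg"))
# [({'x': 10, 'y': 20, 'w': 64, 'h': 64}, 'Grandma')]
```

The translation example composes in the same way: speech-to-text, then text-to-text translation, then speech synthesis.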
Check out the next module in this series to learn how these tasks are actually built using input data and deep learning frameworks to train deep neural networks.
======== vet this later ========
Another example you may have experienced yourself is the feature in Google Maps where the images captured in Street View are used to describe the type of business (classification), find the little sign near the door (detection), and then automatically read the sign (classification+prediction) so listing days and times the business is open can be published online.
===========================
There are a wide range of GPU-accelerated platforms you can use to accelerate deep learning training and inference application workloads.
If you want a fully-integrated solution, we recommend the DGX-1 supercomputer in a box which delivers the performance equivalent of 100s of CPU-only servers using 8 world-class Tesla GPUs, or its little brother the DGX Station, which is powered by 4 Tesla GPUs and runs whisper-quiet next to your desk.
If you just want to get started on a prototype using your existing workstation, the Titan V includes new Tensor Cores designed specifically for deep learning that deliver up to 12X higher peak TFLOPS for training.
In the datacenter, the Tesla P100 and V100 with NVLink Technology deliver strong scaling support for mixed workloads across both HPC applications and Deep Learning training & inference applications.
And for scale-out inference workloads the Tesla P4 (GP104) supports high efficiency (perf/watt) low-latency performance.
And, of course, if you need to deploy deep learning applications in automotive or embedded applications, NVIDIA offers the DrivePX and Jetson platforms.
What happens after development? NGC is tuned for all of these platforms, and once you are ready to productionize your hard work, there is a seamless deployment path: take your awesome AI solution to cloud-based microservices in the form of the NGC TensorRT container, or take it to embedded devices. If you want to productize your research or your solution, this is the path: bring these models into JetPack and DriveWorks for robotics, drones, autonomous vehicles, etc.
Key Points –
The C4140 is made possible by superior system engineering from Dell EMC and best-of-breed technologies from our strategic partners: Intel providing the latest Xeon Scalable Processors and NVIDIA providing the latest Tesla GPUs.
While the C4140 is designed for complex technical and cognitive computing workloads, it is targeted at the following markets:
- AI / DL / ML / HPC
- Life Sciences
- Financial Services
- Oil and Gas Exploration
Some of the capabilities include:
- A no-compromise system design that provides superior speeds of up to 500 TFLOPS/U
- A patent-pending interleaved GPU design that enables ultra-density with unthrottled performance
- Future-proofing the server platform for next-gen GPUs by supporting 2.4 KW PSUs
- A critical component of Ready Bundles for ML/DL
- Systems management: iDRAC9, Connection View, System Lockdown, OpenManage Power Center
- Other 14G features and benefits: systems management, security, and Intel performance boost
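For reference, the "up to 500 TFLOPS/U" figure is consistent with simple arithmetic over the V100 spec quoted earlier in the deck (125 Tensor TFLOPS per GPU, four GPUs, 1U chassis); the sketch below just makes that calculation explicit:

```python
# Where "up to 500 TFLOPS/U" comes from, using figures quoted
# earlier in this deck (peak Tensor Core throughput, not sustained).
tensor_tflops_per_v100 = 125   # mixed-precision Tensor Core peak per V100
gpus_per_c4140 = 4             # the C4140 hosts four V100s
chassis_height_u = 1           # the C4140 is a 1U server

tflops_per_u = tensor_tflops_per_v100 * gpus_per_c4140 / chassis_height_u
print(tflops_per_u)  # 500.0
```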
Key Points –
- Highlight how the C4140 is better with the latest technology from NVIDIA
- The new Volta V100 outperforms Pascal P100 across HPC applications, with speedups ranging from 1.5X to 5X
- The ecosystem is important to drive application adoption and support
Key Points –
- The C4140 supports two key topologies: PCIe and NVLink (more at http://www.nvidia.com/object/nvlink.html)
- The patent-pending interleaved design applies only to the PCIe topology
- NVLink is proprietary NVIDIA technology that allows direct GPU-to-GPU, peer-to-peer communication; with PCIe, GPU-to-GPU traffic travels only through the PCIe switch
- Direct GPU-to-GPU communication over NVLink results in increased performance, as does NVLink's higher clock speed
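The NVLink-versus-PCIe claims above can be sanity-checked with back-of-the-envelope arithmetic using public NVLink 2.0 and PCIe Gen3 figures; these are theoretical peaks, not measurements:

```python
# Back-of-the-envelope bandwidth comparison (theoretical peaks).

# PCIe Gen3: 8 GT/s per lane, 128b/130b encoding, 16 lanes, per direction
pcie_gen3_x16 = 8 * (128 / 130) * 16 / 8   # ~15.75 GB/s per direction

# NVLink 2.0: 25 GB/s per direction per link; a V100 has 6 links
nvlink_per_gpu = 25 * 6                    # 150 GB/s per direction (300 GB/s total)

print(f"PCIe Gen3 x16: {pcie_gen3_x16:.2f} GB/s per direction")
print(f"NVLink (6 links): {nvlink_per_gpu} GB/s per direction")
print(f"Ratio: {nvlink_per_gpu / pcie_gen3_x16:.1f}x")
```

The resulting ratio of roughly 9.5x lines up with the deck's "5X-10X faster than PCIe Gen3" claim, and 150 GB/s per direction matches the 300 GB/s bidirectional NVLink figure on the V100 spec slide.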
Imperial College of London
“By choosing Dell PowerEdge C4130 servers, we gained the same amount of processing performance in 4U of rack space as our existing HPC solution, which runs across two full height racks.” - Dr. Peter Vincent, Department of Aeronautics
Texas Advanced Computing Center (TACC): Dell and EMC are the two strategic partners providing the technology that make up the core of Wrangler. Wrangler uses EMC's DSSD rack-scale flash technology to ensure speed and performance, enabling real-time analytics at scale. Source.
Result: Significantly accelerates data-centered science
Translational Genomics Research Institute (TGen) develops early diagnostics, prognostics, and therapies for cancer, neurological disorders, diabetes, and other complex diseases.
Dell: Servers, storage, networking and infrastructure consulting. Source.
EMC: EMC Isilon scale-out cluster and Ocarina Networks compression and dedupe software, Source.
Result: Researchers can create more targeted treatments at least one week faster
James B. Hunt Jr. Library at North Carolina State University uses an HPC cluster to support large-scale visualization and builds collaborative learning spaces to inspire students. Dell: Servers, storage, networking, workstations. Source.
EMC: Storage technologies, including its Isilon scale-out NAS solutions, to power Hunt Library’s private cloud, virtual desktop, and virtual server infrastructure. Source.
Result: Supports virtualized infrastructure for remote access, easier desktop management, and cost savings
University of Aberdeen centralises its high-performance computing infrastructure, giving scientists the tools they need to drive innovative healthcare research.
Dell: New cluster based on Dell PowerEdge servers, QLogic and Dell Networking switches. Source.
EMC: EMC Celerra Unified Storage, EMC Centera Gen 4x2, EMC File Management Appliance, EMC Celerra Replicator, EMC Data Protection Advisor, EMC PowerPath, EMC RecoverPoint, EMC SnapView, EMC Virtual Provisioning, EMC Unisphere Management Console. Source.
Result: Scientists have resources to drive groundbreaking healthcare research
Max Planck Institute of Molecular Cell Biology and Genetics helps researchers better understand cell division by rendering 3D microtubule data. Dell: Servers, virtualization, storage. Source.
A joint venture between the Chinese Academy of Sciences and Max Planck Gesellschaft, the Partner Institute for Computational Biology (PICB) was established in 2005 and works on the interface between biological theory and modeling. EMC: EMC Isilon NL400, EMC Isilon X200, EMC Isilon SmartPools, EMC Isilon SmartQuotas, EMC Isilon SnapshotIQ. Source.
Result: Researchers gain 3D representations in minutes rather than hours, and data processing speeds increased by 10x
Key Takeaway – Dell EMC PowerEdge is the market leader across all types of form factors and has industry leading performance across multiple verticals.