This document summarizes key information from the 2013 GPU Technology Conference (GTC). Some of the main points covered include:
- GPU-accelerated computing is enabling breakthroughs across many fields by dramatically increasing performance. Several organizations showcased advances enabled by GPUs.
- NVIDIA announced new GPU architectures and roadmaps to continue dramatically improving performance and efficiency. Developments in the automotive, mobile, and supercomputing sectors were highlighted.
- Over 3,000 attendees from 50 countries participated in the conference, which featured 425 sessions and 150 research posters on GPU-accelerated applications in science, technology, and industry.
Vertex AI: Pipelines for your MLOps workflows - Márton Kodok
In recent years, one of the biggest trends in application development has been the rise of machine learning solutions, tools, and managed platforms. Vertex AI is a managed, unified ML platform for all your AI workloads. On the MLOps side, Vertex AI Pipelines lets you adopt experiment pipelining beyond the classic build, train, eval, and deploy cycle. It is engineered for data scientists and data engineers, and it is a tremendous help for teams that don't have DevOps or sysadmin engineers, as infrastructure management overhead has been almost completely eliminated.
Based on practical examples, we will demonstrate how Vertex AI Pipelines scores high in terms of developer experience, how it fits custom ML needs, and how to analyze results. It is a toolset for a fully fledged machine learning workflow: a sequence of steps in the model development and deployment cycle, such as data preparation/validation, model training, hyperparameter tuning, model validation, and model deployment. Vertex AI comes with all the standard resources plus an ML metadata store, a fully managed feature store, and a fully managed pipelines runner.
Vertex AI Pipelines is a managed serverless toolkit, which means you don't have to fiddle with infrastructure or back-end resources to run workflows.
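As a rough illustration of the workflow described above (not the Vertex AI Pipelines API itself), the sequence of steps can be sketched in plain Python; all function and variable names here are made up for the example.

```python
# Minimal pure-Python sketch of the pipeline stages the abstract lists
# (data preparation/validation, training, evaluation). Illustrative only;
# a real Vertex AI pipeline would define these steps as pipeline components.

def prepare_data(raw):
    """Validate and clean the raw records (drop rows with missing values)."""
    return [row for row in raw if None not in row]

def train(rows):
    """'Train' a trivial model: the mean of the target column."""
    targets = [row[-1] for row in rows]
    return {"mean": sum(targets) / len(targets)}

def evaluate(model, rows):
    """Mean absolute error of the constant-mean model."""
    return sum(abs(row[-1] - model["mean"]) for row in rows) / len(rows)

def run_pipeline(raw):
    """Chain the steps; each step's output feeds the next."""
    rows = prepare_data(raw)
    model = train(rows)
    mae = evaluate(model, rows)
    return model, mae

model, mae = run_pipeline([(1.0, 2.0), (2.0, None), (3.0, 4.0)])
```

The point of the sketch is the shape of the workflow: each stage is a small, testable function, and the runner chains them so that one stage's output is the next stage's input.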
This presentation was made on June 11, 2020.
Recording from the presentation can be viewed here: https://youtu.be/02Gb062U_M4
The manufacturing industry is adopting artificial intelligence (AI) at a fast rate. This century-old industry is complex but has seen constant transformation across all of its facets.
Led by big data analytics, the miniaturization of sensors enabling the Internet of Things (IoT), and now AI and machine learning (ML), manufacturers everywhere have embarked on an AI transformation that is opening up potential new revenue streams as well as taking costs and time out of existing processes.
This talk will walk through a use case for enterprise AI solutions within the manufacturing sector. We will discuss the challenges, motivation, and tool selection process, then cover the solution development in detail.
Speaker Bio:
eRic is armed with technical know-how in data science, machine learning, and big data analytics. He is equipped with the skill sets to add value to businesses exploring artificial intelligence (AI) through an AI consultation approach, translating BDA, machine learning, and AI into business value.
eRic CHOO has spent the last 8 years in the IT industry, moving from the integration of infrastructure (storage and backup) solutions to advanced analytics software, specializing in BDA, machine learning, and AI. Before joining the IT industry, he gained extensive experience in the semiconductor industry, and thus a deep understanding of advanced manufacturing processes.
SIONG Jong Hang works as a Solutions Engineer/Data Scientist at H2O.ai, based in Singapore, where he helps businesses, government, academia, and non-profit organizations in their AI transformation. Prior to H2O.ai, he worked as a data scientist at the Quant Group at Bank of America Merrill Lynch in Hong Kong and at Teradata in Singapore. He has completed data science projects for various verticals in Europe and Asia. After hours, he's an avid learner and has attained 100 MOOC certificates in fields such as AI, science, engineering, and maths. He has also authored articles to instill interest in science, technology, and AI.
Introduction to Automatic Machine Learning - Sri Ambati
How can you bring machine learning to the masses? Machine learning projects struggle with finding talent, the time it takes to build and deploy models, and trusting the models that get built.
How can multiple teams in your organization build accurate ML models without being experts in data science or machine learning?
Wondering about the different flavors of AutoML?
H2O Driverless AI packages the techniques of expert data scientists into an easy-to-use application that helps scale your data science efforts. Driverless AI lets data scientists work on projects faster, using automation and the state-of-the-art compute power of GPUs to accomplish in minutes tasks that used to take months.
With H2O Driverless AI, everyone, including expert and junior data scientists, domain scientists, and data engineers, can develop trusted machine learning models. This next-generation machine learning platform delivers unique, advanced functionality for data visualization, feature engineering, model interpretability, and low-latency deployment.
H2O Driverless AI provides:
* Automatic data visualization
* Automatic feature engineering at Grandmaster level
* Automatic model selection
* Automatic model tuning and training
* Automatic parallelization across multiple CPUs or GPUs
* Automatic model ensembling
* Automatic machine learning interpretability (MLI)
* Automatic scoring-code generation
Want to try it yourself? You can get a free trial here: H2O Driverless AI trial.
Come to this session and discover how to get started with Automatic Machine Learning using H2O Driverless AI, and build powerful models with just a few clicks.
See you soon!
About H2O.ai
H2O.ai is a visionary Silicon Valley open source software company that created and reimagined what is possible. We are a company of makers that brought new platforms and technologies to market to drive the artificial intelligence movement. We are the makers of H2O, the leading open source data science and machine learning platform, used by nearly half of the Fortune 500 and trusted by more than 14,000 organizations and hundreds of thousands of data scientists around the world.
For the full video of this presentation, please visit: https://www.edge-ai-vision.com/2021/09/productizing-complex-visual-ai-systems-for-autonomous-flight-a-presentation-from-airbus/
Carlo Dal Mutto, Director of Engineering at Airbus, presents the “Productizing Complex Visual AI Systems for Autonomous Flight” tutorial at the May 2021 Embedded Vision Summit.
The development of visual AI systems for real-world applications is a complex undertaking characterized by a variety of diverse challenges. While the media spotlight is often focused on academic AI models that improve performance based on well-defined datasets, in many instances insufficient attention is dedicated to the engineering complexity of productizing real-world applications.
Dal Mutto begins this presentation with an overview of key topics that must be addressed in productizing complex visual AI systems, including the definition of system requirements; hardware and software design; data acquisition, labelling and management; and AI model development, deployment, validation and maintenance. Next, he delves into software design and AI model deployment in greater detail. He illustrates key challenges and promising techniques via practical examples and results from his company’s work delivering visual AI systems for autonomous flight as part of Project Wayfinder at Acubed, Airbus’ innovation center in Silicon Valley.
Grokking Techtalk #40: AWS’s philosophy on designing MLOps platform - Grokking VN
Machine learning is becoming one of the biggest trends in modern systems development, with the potential to deliver strategic insights, predictions, and deep understanding for businesses. However, building and integrating a machine learning system is not always easy, especially for large and distributed systems, where the discipline of machine learning development has not yet matured to the level of software engineering.
In this discussion, we will explore how Amazon Web Services (AWS) designed and built one of the most widely adopted MLOps platforms in the world: Amazon SageMaker.
- About the speaker: My Nguyễn is a Solutions Architect at AWS Vietnam, specializing in supporting solutions for building machine learning systems.
From Rapid Prototypes to an end-to-end Model Deployment: an AI Hedge Fund Use... - Sri Ambati
Numerai is an open, crowd-sourced hedge fund powered by predictions from data scientists around the world. In return, participants are rewarded with weekly payouts in crypto.
In this talk, Joe will give an overview of the Numerai tournament based on his own experience. He will then explain how he automates the time-consuming tasks such as testing different modelling strategies, scoring new datasets, submitting predictions to Numerai as well as monitoring model performance with H2O Driverless AI and R.
CD4ML and the challenges of testing and quality in ML systems - Seldon
Speaker: Danilo Sato, principal consultant at ThoughtWorks.
Bio: Danilo Sato (@dtsato) is a principal consultant at ThoughtWorks with experience in many areas of architecture and engineering: software, data, infrastructure, and machine learning. He is the author of "DevOps in Practice: Reliable and Automated Software Delivery", a member of ThoughtWorks Technology Advisory Board, and ThoughtWorks Office of the CTO.
Title: CD4ML and the challenges of testing and quality in ML systems
Abstract: Continuous Delivery for Machine Learning (CD4ML) deals with the challenges of applying Continuous Delivery principles to ML systems to make the end-to-end process of developing and deploying them more repeatable and reliable. These systems are generally more complex than traditional software applications, and ML models are non-deterministic and hard to explain. In this talk we will discuss the challenges of testing and quality in ML systems, and share some practices for applying different types of tests to help overcome those issues.
www.devopsinpractice.com
www.devopsnapratica.com.br
ML Model Deployment and Scoring on the Edge with Automatic ML & DF - Sri Ambati
Machine Learning Model Deployment and Scoring on the Edge with Automatic Machine Learning and Data Flow
YouTube Video URL: https://youtu.be/gB0bTH-L6DE
Deploying machine learning models to the edge can present significant ML/IoT challenges centered on the need for low-latency, accurate scoring in minimal-resource environments. H2O.ai's Driverless AI AutoML and Cloudera Data Flow work nicely together to solve this challenge. Driverless AI automates the building of accurate machine learning models, which are deployed as light-footprint, low-latency Java or C++ artifacts known as MOJOs (Model Object, Optimized). Cloudera Data Flow leverages Apache NiFi, an innovative data flow framework that can host MOJOs to make predictions on data moving at the edge.
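The pattern the paragraph describes (load a model artifact once at startup, then score each arriving event with minimal latency) can be sketched in pure Python. Everything below is illustrative: the linear model stands in for a real MOJO, which would be loaded through H2O's own Java/C++ MOJO runtime rather than code like this.

```python
# Illustrative edge-scoring pattern: one-time model load, then per-event
# scoring on the data flow. The linear model is a stand-in for a MOJO.

import time

class EdgeScorer:
    def __init__(self, weights, bias):
        # In the real pattern this step deserializes the MOJO artifact once.
        self.weights = weights
        self.bias = bias

    def score(self, features):
        """One low-latency prediction per incoming event."""
        return sum(w * x for w, x in zip(self.weights, features)) + self.bias

scorer = EdgeScorer(weights=[0.5, -0.25], bias=1.0)
events = [[2.0, 2.0], [4.0, 0.0]]  # records arriving on the flow

start = time.perf_counter()
scores = [scorer.score(e) for e in events]
elapsed = time.perf_counter() - start  # per-event cost stays tiny and flat
```

The design point is that the expensive work (loading and initializing the model) happens once, so the per-event path is a small, constant-time computation suited to constrained edge hardware.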
Overcoming Regulatory & Compliance Hurdles with Hybrid Cloud EKS and Weave Gi... - Weaveworks
In this webinar we will be discussing how Dream 11, the world’s largest fantasy sports platform, and its large-scale distributed cloud can meet regulatory requirements while still taking advantage of the benefits that cloud native technologies like EKS and Weave GitOps present.
Topics we are covering include:
How you can utilize EKSD (AWS’ open source EKS distribution) and EKS (managed Kubernetes in the cloud) to establish common operational workflows that minimize operational overhead
How to lower operational costs with the use of ephemeral cloud environments for development, testing and even production
How to maintain compliance by enabling clear operational controls and auditability
TechWiseTV Workshop: Improving Performance and Agility with Cisco HyperFlex - Robb Boyd
Find out how organizations like yours are deriving business value from the HyperFlex HCI solution. Join us for a deep dive and Q&A at the TechWiseTV workshop.
TechWiseTV Hyperflex 4.0 Episode: http://cs.co/9009EW2Td
We build AI and HPC solutions. Expertise: highly optimized AI Engines and HPC Apps.
• HPC: accelerating time to results and adapting complex algorithms to GPU, FPGA, many-CPU architectures.
Leverage byteLAKE expertise in complex algorithms adaptation and optimization for NVIDIA GPUs, Xilinx Alveo FPGAs, Intel, AMD and ARM solutions. From single nodes to clusters.
More: www.byteLAKE.com/en/Alveo
AI for Manufacturing (Machine Vision, Edge AI, Federated Learning) - byteLAKE
This is the extended presentation about byteLAKE's and Lenovo's Artificial Intelligence solutions for Manufacturing.
Topics covered: AI strategy for manufacturing, Edge AI, Federated Learning and Machine Vision.
It's the first publication in the upcoming series: AI for Manufacturing. Highlights: AI-assisted quality monitoring automation, AI-assisted production line monitoring and issues detection, AI-assisted measurements, Intelligent Cameras and many more. Reach out to us to learn more: welcome@byteLAKE.com.
Presented during the world's first Federated Learning conference (Jun'20). Recording: https://youtu.be/IMqRIi45dDA
Related articles:
- Revolution in factories: Industry 4.0.
https://medium.com/@marcrojek/revolution-in-factories-industry-4-0-conference-made-in-wroclaw-2020-translation-ae96e5e14d55
- Cognitive Automation helps where RPAs fall short.
https://medium.com/@marcrojek/cognitive-automation-helps-where-rpas-fall-short-a1c5a01a66f8
- Machine Vision, how AI brings value to industries.
https://medium.com/@marcrojek/machine-vision-how-ai-brings-value-to-industries-e6a4f8e56f42
Learn more:
- https://www.bytelake.com/en/cognitive-services/
- https://www.lenovo.com/ai
- https://federatedlearningconference.com/
INT Inc | Benefits of a Microservices Architecture - Thelma Gros
Developers have begun to transition from monolithic architecture to microservices. This presentation discusses how this move can be better for clients and better for your business.
Near realtime AI deployment with huge data and super low latency - Levi Brack... - Sri Ambati
Published on Nov 2, 2018
This talk was recorded in London on October 30th, 2018 and can be viewed here: https://youtu.be/erHt-1yBuUw
Session: Travelport is a leading travel commerce platform that has truly huge data and many complex needs in terms of processing, performance, and latency. This talk will demonstrate how we were able to harness big data technologies, H2O, and cloud integration to deploy AI at scale and at low latency. The talk covers practical advice drawn from our AI journey; you will learn the successful strategies and the pitfalls of near real-time retraining of ML models with streaming data, using all open source technologies.
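The near-real-time retraining pattern mentioned above can be sketched as a bounded window over streaming records with a refit on every batch. The running-mean "model" below is a placeholder for whatever estimator is actually retrained, and all names are illustrative rather than taken from the talk.

```python
# Sliding-window retraining sketch: keep only the most recent records and
# refit a simple model whenever a new batch arrives from the stream.

from collections import deque

class StreamingRetrainer:
    def __init__(self, window_size):
        self.window = deque(maxlen=window_size)  # oldest records fall off
        self.model = None

    def ingest(self, batch):
        """Append a new batch of observations and retrain on the window."""
        self.window.extend(batch)
        self.model = sum(self.window) / len(self.window)  # toy 'refit'
        return self.model

r = StreamingRetrainer(window_size=4)
r.ingest([1.0, 2.0, 3.0])      # window: [1, 2, 3]
latest = r.ingest([4.0, 5.0])  # window: [2, 3, 4, 5] after eviction
```

The bounded window is what keeps retraining cheap and the model biased toward recent data, which is the usual trade-off in near-real-time pipelines; one of the pitfalls the talk alludes to is picking a window that forgets too quickly or too slowly.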
Bio: As principal data scientist at Travelport, Levi Brackman leads a team of data scientists who are putting ML models into production. Prior to Travelport, Levi spent most of his career in the start-up world. He founded and led an organization that created innovative educational software applications and solutions used by high schools and youth organizations in the USA and Australia. Levi earned a PhD in the quantitative social sciences under the supervision of one of the world's leading educational psychologists. He earned a master's degree from University College London and is the author of a business book published in eight languages that was a bestseller in multiple countries. A native of North London (UK), Levi is married, has five children, and now lives in Broomfield, Colorado.
Prithvi Prabhu + Shivam Bansal, H2O.ai - Building Blocks for AI Applications ... - Sri Ambati
This session was recorded in NYC on October 22nd, 2019 and can be viewed here: https://www.youtube.com/watch?v=xAhQAYV5_PY&list=PLNtMya54qvOE3AvWRCNF2tybxNobUbAYp&index=3&t=2s
Bio: Prithvi is Chief of Technology, Applications at H2O.ai. Prithvi leads the design and development of “Q”, H2O.ai’s high scale exploratory data analysis and analytical application development platform.
Prithvi has been with H2O.ai since its early days and has been responsible for several products including Driverless AI (our flagship automatic machine learning platform), Steam (distributed cluster management, model management and deployment for H2O), H2O.js (Javascript transpiler for H2O’s distributed runtime), Play (on-demand cloud provisioning system for H2O), Flow (a hybrid GUI/REPL/Notebook for H2O) and Lightning (statistical graphics for H2O).
Bio: Shivam Bansal is a Data Scientist at H2O.ai and Kaggle Grandmaster in Kernels Section. He is the three times winner of Kaggle’s Data Science for Good Competition and winner of multiple other offline AI and Data Science competitions.
Shivam has extensive cross-industry and hands-on experience in building data science products. He has helped clients in the Insurance, Healthcare, Banking, and Retail domains to solve unstructured data science problems by building end to end pipelines and solutions.
This presentation was made on June 9th, 2020.
Video recording of the session can be viewed here: https://youtu.be/OCB9sTUnUug
In this meetup with Sanyam Bhutani, Machine Learning Engineer at H2O.ai, he gives a recap of the eighth annual ICLR (International Conference on Learning Representations) 2020, a niche deep learning conference whose focus is studying how to learn representations of data, which is essentially what deep learning does.
Sanyam goes through a few of his favorite papers from this year's ICLR; note that this session may not capture the richness of every paper or allow for detailed discussion.
You will be able to find Sanyam in our community Slack (https://www.h2o.ai/slack-community/); please feel free to start a discussion with him there, and even a quick emoji greeting will get a response.
Following are the papers we will look into:
U-GAT-IT: Unsupervised Generative Attentional Networks with Adaptive Layer-Instance Normalization for Image-to-Image Translation
AugMix: A Simple Data Processing Method to Improve Robustness and Uncertainty
Your classifier is secretly an energy based model and you should treat it like one
ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators
ALBERT: A Lite BERT for Self-supervised Learning of Language Representations
Reformer: The Efficient Transformer
Generative Models for Effective ML on Private, Decentralized Datasets
Once for All: Train One Network and Specialize it for Efficient Deployment
Thieves on Sesame Street! Model Extraction of BERT-based APIs
Plug and Play Language Models: A Simple Approach to Controlled Text Generation
BatchEnsemble: An Alternative Approach to Efficient Ensemble and Lifelong Learning
Real or Not Real, that is the Question
Martin Stein, G5 - Driving Marketing Performance with H2O Driverless AI - H2O... - Sri Ambati
This session was recorded in San Francisco on February 5th, 2019 and can be viewed here: https://youtu.be/f4b2Yoe9JEs
Combining H2O Driverless AI, H2O-3, and AWS for developing and deploying AI solutions at scale.
Martin Stein is a seasoned product and marketing executive with a successful track record delivering large-scale advanced analytics and marketing analytics services and products. Martin has served as a board member, C-level executive, and subject matter expert in a variety of industries (marketing, finance, real estate, and media). Currently, Martin serves as Chief Analytics Officer for G5, a leader in real estate marketing optimization. G5 is a predictive marketing SaaS company that uses AI and other emerging technologies to help marketers amplify their impact.
Patrick Hall, H2O.ai - Human Friendly Machine Learning - H2O World San Francisco - Sri Ambati
This session was recorded in San Francisco on February 5th, 2019 and can be viewed here: https://youtu.be/diMSemHRNDw
This presentation illustrates how to combine innovations from several sub-disciplines of machine learning research to train understandable, fair, trustable, and accurate predictive modeling systems. Techniques from research into fair models, directly interpretable Bayesian or constrained machine learning models, and post-hoc explanations can be used to train transparent, fair, and accurate models and make nearly every aspect of their behavior understandable and accountable to human users. Additional techniques from fairness research can be used to check for sociological bias in model predictions and to preprocess data and post-process predictions to ensure the fairness of predictive models. Finally, applying new testing and debugging techniques, often inspired by best practices in software engineering, can increase the trustworthiness of model predictions on unseen data. Together these techniques create a new and truly human-friendly type of machine learning suitable for use in business- and life-critical decision support.
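As one small, concrete instance of the fairness checks mentioned above, the sketch below compares positive-prediction rates across two groups (the demographic parity / disparate impact check). The data and the four-fifths threshold are illustrative conventions, not material taken from the talk.

```python
# Disparate impact check: compare the rate of positive predictions across
# two groups; a ratio near 1.0 indicates parity. Toy data, for illustration.

def positive_rate(predictions):
    return sum(predictions) / len(predictions)

def disparate_impact(preds_group_a, preds_group_b):
    """Ratio of the smaller positive rate to the larger one."""
    rate_a = positive_rate(preds_group_a)
    rate_b = positive_rate(preds_group_b)
    return min(rate_a, rate_b) / max(rate_a, rate_b)

# Group A: 3 of 4 positive predictions; group B: 2 of 4.
ratio = disparate_impact([1, 1, 1, 0], [1, 0, 1, 0])
flagged = ratio < 0.8  # the common 'four-fifths' rule of thumb
```

Checks like this are typically run on model predictions during validation, flagging potential sociological bias before post-processing or retraining is considered.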
Patrick Hall is senior director for data science products at H2O.ai where he focuses mainly on model interpretability. Patrick is also currently an adjunct professor in the Department of Decision Sciences at George Washington University, where he teaches graduate classes in data mining and machine learning. Prior to joining H2O.ai, Patrick held global customer facing roles and research and development roles at SAS Institute.
This in-depth training on H2O Driverless AI was given by Wen Phan on June 28th, 2018. He elaborated on the automatic feature engineering, machine learning interpretability, and automatic visualization components of this groundbreaking product.
"Industrial Internet IoT bootcamp" meetup, 11-5-2015 hosted by GE Digital at HackerDojo. Discussing topics ranging from IoT architecture to connectivity and protocols, cyber security, data science and industrial UX design.
Keynote by Mike Gualtieri, Forrester Research - Making AI Happen Without Gett... (Sri Ambati)
This session was recorded in San Francisco on February 5th, 2019 and can be viewed here: https://youtu.be/4a_Y0L7suBc
AI is real. Enterprises use it to automate decisions, hyper-personalize customer experiences, streamline operational processes, and much more. However, for most enterprise technology leaders, AI technologies and use cases are still far too mysterious. The field is moving fast. Enterprise leaders must forge a coherent, pragmatic AI strategy that is tied to business outcomes. In this session, guest speaker Forrester Research Vice President & Principal Analyst Mike Gualtieri will demystify enterprise AI, identify use cases most likely to succeed, and, most importantly, provide key advice to enterprise leaders that are charged with moving AI forward in their organization.
Bio: Mike's research focuses on software technologies, platforms, and practices that enable technology professionals to deliver digital transformations that lead to prescient digital experiences and breakthrough operational efficiency. His key technology coverage areas are AI, machine learning, deep learning, AI chips and systems, digital decisions, streaming analytics, prescriptive analytics, big data analytical platforms and tools (Hadoop/Spark/Flink; translytical databases), optimization, and emerging technologies that make software faster and smarter. Mike is also a leading expert on the intersection of business strategy, artificial intelligence, and innovation. Mike provides technology vendors with actionable, fine-tuned advisory sessions on strategy, messaging, competitive analysis, buyer-persona analysis, market trends, and product road maps for the areas he directly covers and adjacent areas that wish to launch into new markets or use new technologies. Mike is a recipient of the Forrester Courage Award for making bold calls that inspire leaders and guide great business and technology decisions.
VMworld 2013
Geoff Murase, VMware
Will Wade, NVIDIA
Learn more about VMworld and register at http://www.vmworld.com/index.jspa?src=socmed-vmworld-slideshare
ML Model Deployment and Scoring on the Edge with Automatic ML & DF (Sri Ambati)
Machine Learning Model Deployment and Scoring on the Edge with Automatic Machine Learning and Data Flow
YouTube Video URL: https://youtu.be/gB0bTH-L6DE
Deploying Machine Learning models to the edge can present significant ML/IoT challenges centered around the need for low-latency, accurate scoring in minimal-resource environments. H2O.ai's Driverless AI AutoML and Cloudera Data Flow work nicely together to solve this challenge. Driverless AI automates the building of accurate Machine Learning models, which are deployed as light-footprint, low-latency Java or C++ artifacts known as MOJOs (Model Object, Optimized). Cloudera Data Flow leverages Apache NiFi, which offers an innovative data flow framework that can host MOJOs to make predictions on data moving at the edge.
Overcoming Regulatory & Compliance Hurdles with Hybrid Cloud EKS and Weave Gi... (Weaveworks)
In this webinar we will be discussing how Dream 11, the world’s largest fantasy sports platform, and its large-scale distributed cloud can meet regulatory requirements while still taking advantage of the benefits that cloud native technologies like EKS and Weave GitOps present.
Topics we are covering include:
How you can utilize EKS-D (AWS's open-source EKS distribution) and EKS (managed Kubernetes in the cloud) to establish common operational workflows that minimize operational overhead
How to lower operational costs with the use of ephemeral cloud environments for development, testing and even production
How to maintain compliance by enabling clear operational controls and auditability
TechWiseTV Workshop: Improving Performance and Agility with Cisco HyperFlex (Robb Boyd)
Find out how organizations like yours are deriving business value from the HyperFlex HCI solution. Join us for a deep dive and Q&A at the TechWiseTV workshop.
TechWiseTV Hyperflex 4.0 Episode: http://cs.co/9009EW2Td
We build AI and HPC solutions. Expertise: highly optimized AI Engines and HPC Apps.
• HPC: accelerating time to results and adapting complex algorithms to GPU, FPGA, many-CPU architectures.
Leverage byteLAKE's expertise in adapting and optimizing complex algorithms for NVIDIA GPUs, Xilinx Alveo FPGAs, and Intel, AMD, and ARM solutions. From single nodes to clusters.
More: www.byteLAKE.com/en/Alveo
AI for Manufacturing (Machine Vision, Edge AI, Federated Learning) (byteLAKE)
This is the extended presentation about byteLAKE's and Lenovo's Artificial Intelligence solutions for Manufacturing.
Topics covered: AI strategy for manufacturing, Edge AI, Federated Learning and Machine Vision.
It's the first publication in the upcoming series: AI for Manufacturing. Highlights: AI-assisted quality monitoring automation, AI-assisted production line monitoring and issues detection, AI-assisted measurements, Intelligent Cameras and many more. Reach out to us to learn more: welcome@byteLAKE.com.
Presented during the world's first Federated Learning conference (Jun'20). Recording: https://youtu.be/IMqRIi45dDA
Related articles:
- Revolution in factories: Industry 4.0.
https://medium.com/@marcrojek/revolution-in-factories-industry-4-0-conference-made-in-wroclaw-2020-translation-ae96e5e14d55
- Cognitive Automation helps where RPAs fall short.
https://medium.com/@marcrojek/cognitive-automation-helps-where-rpas-fall-short-a1c5a01a66f8
- Machine Vision, how AI brings value to industries.
https://medium.com/@marcrojek/machine-vision-how-ai-brings-value-to-industries-e6a4f8e56f42
Learn more:
- https://www.bytelake.com/en/cognitive-services/
- https://www.lenovo.com/ai
- https://federatedlearningconference.com/
INT Inc | Benefits of a Microservices Architecture (Thelma Gros)
Developers have begun to transition from monolithic architecture to microservices. This presentation discusses how this move can be better for clients and better for your business.
Near realtime AI deployment with huge data and super low latency - Levi Brack... (Sri Ambati)
Published on Nov 2, 2018
This talk was recorded in London on October 30th, 2018 and can be viewed here: https://youtu.be/erHt-1yBuUw
Session: Travelport is a leading travel commerce platform that has truly huge data and many complex needs in terms of processing, performance, and latency. This talk will demonstrate how we were able to harness big data technologies, H2O, and cloud integration to deploy AI at scale and at low latency. The talk covers practical advice taken from our AI journey; you will learn the successful strategies and the pitfalls of retraining ML models in near real time with streaming data, using all open-source technologies.
Bio: As principal data scientist at Travelport, Levi Brackman leads a team of data scientists who are putting ML models into production. Prior to Travelport, Levi spent most of his career in the start-up world. He founded and led an organization that created innovative educational software applications and solutions used by high schools and youth organizations in the USA and Australia. Levi earned a PhD in the quantitative social sciences under the supervision of one of the world's leading educational psychologists. He earned a master's degree from University College London and is the author of a business book published in eight languages that was a bestseller in multiple countries. A native of North London (UK), Levi is married, has five children, and now lives in Broomfield, Colorado.
Prithvi Prabhu + Shivam Bansal, H2O.ai - Building Blocks for AI Applications ... (Sri Ambati)
This session was recorded in NYC on October 22nd, 2019 and can be viewed here: https://www.youtube.com/watch?v=xAhQAYV5_PY&list=PLNtMya54qvOE3AvWRCNF2tybxNobUbAYp&index=3&t=2s
Bio: Prithvi is Chief of Technology, Applications at H2O.ai. Prithvi leads the design and development of “Q”, H2O.ai’s high scale exploratory data analysis and analytical application development platform.
Prithvi has been with H2O.ai since its early days and has been responsible for several products including Driverless AI (our flagship automatic machine learning platform), Steam (distributed cluster management, model management and deployment for H2O), H2O.js (Javascript transpiler for H2O’s distributed runtime), Play (on-demand cloud provisioning system for H2O), Flow (a hybrid GUI/REPL/Notebook for H2O) and Lightning (statistical graphics for H2O).
Bio: Shivam Bansal is a Data Scientist at H2O.ai and Kaggle Grandmaster in Kernels Section. He is the three times winner of Kaggle’s Data Science for Good Competition and winner of multiple other offline AI and Data Science competitions.
Shivam has extensive cross-industry and hands-on experience in building data science products. He has helped clients in the Insurance, Healthcare, Banking, and Retail domains to solve unstructured data science problems by building end to end pipelines and solutions.
This presentation was made on June 9th, 2020.
Video recording of the session can be viewed here: https://youtu.be/OCB9sTUnUug
In this meetup, Sanyam Bhutani, Machine Learning Engineer at H2O.ai, gives a recap of the eighth annual ICLR (International Conference on Learning Representations) 2020, a niche deep learning conference focused on studying how to learn representations of data, which is essentially what deep learning does.
Sanyam goes through a few of his favorite papers from this year's ICLR; note that this session may not capture the richness of every paper or allow a detailed discussion.
You can find Sanyam in our community Slack (https://www.h2o.ai/slack-community/); please feel free to start a discussion with him there, even if it's just an emoji greeting.
Following are the papers we will look into:
U-GAT-IT: Unsupervised Generative Attentional Networks with Adaptive Layer-Instance Normalization for Image-to-Image Translation
AugMix: A Simple Data Processing Method to Improve Robustness and Uncertainty
Your classifier is secretly an energy based model and you should treat it like one
ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators
ALBERT: A Lite BERT for Self-supervised Learning of Language Representations
Reformer: The Efficient Transformer
Generative Models for Effective ML on Private, Decentralized Datasets
Once for All: Train One Network and Specialize it for Efficient Deployment
Thieves on Sesame Street! Model Extraction of BERT-based APIs
Plug and Play Language Models: A Simple Approach to Controlled Text Generation
BatchEnsemble: An Alternative Approach to Efficient Ensemble and Lifelong Learning
Real or Not Real, that is the Question
Martin Stein, G5 - Driving Marketing Performance with H2O Driverless AI - H2O... (Sri Ambati)
This session was recorded in San Francisco on February 5th, 2019 and can be viewed here: https://youtu.be/f4b2Yoe9JEs
Combining H2O Driverless AI, H2O-3, and AWS for developing and deploying AI solutions on scale.
Talk presented by Pedro Mário Cruz e Silva, Solution Architect at NVIDIA, as part of the program of the VIII Semana de Inverno de Geofísica (Geophysics Winter Week), on 19/07/2017.
At the 2018 GPU Technology Conference in Silicon Valley, NVIDIA CEO Jensen Huang announced the new "double-sized" 32GB Volta GPU; unveiled the NVIDIA DGX-2, the power of 300 servers in a box; showed an expanded inference platform with TensorRT 4 and Kubernetes on NVIDIA GPU; and revealed the NVIDIA GPU Cloud registry with 30 GPU-optimized containers and made it available from more cloud service providers. GTC attendees also got a sneak peek of the latest NVIDIA DRIVE software stack and the next DRIVE AI car computer, "Orin," along with developments in the NVIDIA Isaac platform for robotics and Project Clara, NVIDIA's medical imaging supercomputer.
NVIDIA Is Revolutionizing Computing - June 2017 NVIDIA
Here's our latest story as well as recent major announcements, featuring the epicenter of GPU computing, the era of AI, the world's largest gaming platform, and more.
NVIDIA is the world leader in visual computing. The GPU, our invention, serves as the visual cortex of modern computers and is at the heart of our products and services. Our work opens up new universes to explore, enables amazing creativity and discovery, and powers what were once science fiction inventions like self-learning machines and self-driving cars.
Highlighted notes of:
CUDA by Example
An Introduction to General Purpose GPU Computing
Authors:
Jason Sanders
Edward Kandrot
“This book is required reading for anyone working with accelerator-based computing systems.”
–From the Foreword by Jack Dongarra, University of Tennessee and Oak Ridge National Laboratory
CUDA is a computing architecture designed to facilitate the development of parallel programs. In conjunction with a comprehensive software platform, the CUDA Architecture enables programmers to draw on the immense power of graphics processing units (GPUs) when building high-performance applications. GPUs, of course, have long been available for demanding graphics and game applications. CUDA now brings this valuable resource to programmers working on applications in other domains, including science, engineering, and finance. No knowledge of graphics programming is required–just the ability to program in a modestly extended version of C.
CUDA by Example, written by two senior members of the CUDA software platform team, shows programmers how to employ this new technology. The authors introduce each area of CUDA development through working examples. After a concise introduction to the CUDA platform and architecture, as well as a quick-start guide to CUDA C, the book details the techniques and trade-offs associated with each key CUDA feature. You’ll discover when to use each CUDA C extension and how to write CUDA software that delivers truly outstanding performance.
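To give a flavor of that "modestly extended version of C", here is a minimal vector-addition sketch in the spirit of the book's early chapters. It is an illustrative rewrite, not one of the book's actual listings, and it uses unified memory (`cudaMallocManaged`), which postdates the book, to keep the host code short:

```cuda
#include <cstdio>
#include <cuda_runtime.h>

// __global__ marks a kernel: a C function that runs on the GPU,
// once per thread.
__global__ void add(int n, const float *a, const float *b, float *c) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;  // global thread index
    if (i < n) c[i] = a[i] + b[i];                  // one element per thread
}

int main() {
    const int n = 1 << 20;
    size_t bytes = n * sizeof(float);
    float *a, *b, *c;
    cudaMallocManaged(&a, bytes);  // unified memory: visible to CPU and GPU
    cudaMallocManaged(&b, bytes);
    cudaMallocManaged(&c, bytes);
    for (int i = 0; i < n; ++i) { a[i] = 1.0f; b[i] = 2.0f; }

    int threads = 256;
    int blocks = (n + threads - 1) / threads;      // enough blocks to cover n
    add<<<blocks, threads>>>(n, a, b, c);          // <<<>>> launch syntax is
    cudaDeviceSynchronize();                       // the main C extension

    printf("c[0] = %f\n", c[0]);                   // expect 3.0
    cudaFree(a); cudaFree(b); cudaFree(c);
    return 0;
}
```

Everything here besides the `__global__` qualifier, the built-in thread indices, and the `<<<blocks, threads>>>` launch is ordinary C, which is the book's central point.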
Table of Contents
Why CUDA? Why Now?
Getting Started
Introduction to CUDA C
Parallel Programming in CUDA C
Thread Cooperation
Constant Memory and Events
Texture Memory
Graphics Interoperability
Atomics
Streams
CUDA C on Multiple GPUs
The Final Countdown
All the CUDA software tools you’ll need are freely available for download from NVIDIA.
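As a taste of the later chapters, the central pattern of the Atomics chapter (many threads updating shared counters without lost updates) can be sketched as follows. This is an illustrative rewrite under the same unified-memory simplification as above, not the book's listing:

```cuda
#include <cstdio>
#include <cuda_runtime.h>

#define BINS 16

// Each thread walks the input with a grid-sized stride and bumps its bin
// with atomicAdd, so concurrent increments to the same bin are not lost.
__global__ void histogram(const unsigned char *data, int n, unsigned int *hist) {
    for (int i = blockIdx.x * blockDim.x + threadIdx.x; i < n;
         i += gridDim.x * blockDim.x) {
        atomicAdd(&hist[data[i] % BINS], 1u);
    }
}

int main() {
    const int n = 1 << 20;
    unsigned char *data;
    unsigned int *hist;
    cudaMallocManaged(&data, n);
    cudaMallocManaged(&hist, BINS * sizeof(unsigned int));
    for (int i = 0; i < n; ++i) data[i] = (unsigned char)(i & 0xFF);
    for (int i = 0; i < BINS; ++i) hist[i] = 0;

    histogram<<<64, 256>>>(data, n, hist);
    cudaDeviceSynchronize();

    unsigned int total = 0;                        // sanity check: every input
    for (int i = 0; i < BINS; ++i) total += hist[i];  // byte was counted once
    printf("total counted: %u (expect %d)\n", total, n);
    cudaFree(data); cudaFree(hist);
    return 0;
}
```

With a plain `hist[bin]++` instead of `atomicAdd`, concurrent threads would race and the final total would come up short; the atomic read-modify-write is what makes the parallel version correct.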
Jason Sanders is a senior software engineer in NVIDIA’s CUDA Platform Group. He helped develop early releases of CUDA system software and contributed to the OpenCL 1.0 Specification, an industry standard for heterogeneous computing. He has held positions at ATI Technologies, Apple, and Novell.
Edward Kandrot is a senior software engineer on NVIDIA’s CUDA Algorithms team. He has more than twenty years of industry experience optimizing code performance for firms including Adobe, Microsoft, Google, and Autodesk.
CUDA by Example : Why CUDA? Why Now? : Notes (Subhajit Sahu)
Highlighted notes of:
Chapter 1: Why CUDA? Why Now?
Book:
CUDA by Example
An Introduction to General Purpose GPU Computing
Authors:
Jason Sanders
Edward Kandrot
NVIDIA’s invention of the GPU in 1999 sparked the growth of the PC gaming market, redefined modern computer graphics, and revolutionized parallel computing. More recently, GPU deep learning ignited modern AI — the next era of computing — with the GPU acting as the brain of computers, robots, and self-driving cars that can perceive and understand the world. Today, NVIDIA is increasingly known as “the AI computing company.”
Enabling Artificial Intelligence - Alison B. Lowndes (WithTheBest)
An overview and update of our hardware and software offering and support provided to the Machine & Deep Learning Community around the world.
Alison B. Lowndes, AI DevRel, EMEA
Adventures in versioning everything, from software to chip designs, at NVIDIA, where more than 90% of the company uses Perforce as a single source of truth. An overview of the real-world advantages of the "monorepo" across development and operations teams, including lessons learned along the way.
ENTER NVIDIA GRID
Delivering accelerated virtual desktops and applications.
This is where NVIDIA, the leader in graphics acceleration, stepped in to help. NVIDIA GRID technology allows IT to virtualize the physical GPU sitting in a server and share it with multiple VDI instances. This means that IT can deliver a true PC experience to any remote device from the datacenter. By providing a way to bring graphics acceleration to virtualization, NVIDIA GRID allows you to unlock all of the promises of productivity, mobility, security and flexibility for every one of your users.
With NVIDIA GRID you can safely house all of your desktops and applications in the datacenter so that they can be delivered out to any device, be it a thin client, Chromebook, iPad, or BYO device. From an end-user perspective, this means people can be more productive, working with the devices and in the locations that best suit them. For IT, everything can be managed centrally in the datacenter, which vastly simplifies their lives.
by Mr. Tom Riley,
Director Global Business Development - Enterprise VR
TiECon Florida keynote - New opportunities for entrepreneurs using GPU & CUDA (Shanker Trivedi)
This is a presentation I gave at TiEcon Florida on 20 Sept 2013. I spoke about the new opportunities emerging for entrepreneurs, driven by the disruptive innovation potential of GPU, CUDA, and parallel computing technologies.
Silicom Ventures Talk Aug 2013 - GPUs and Parallel Programming create new opp... (Shanker Trivedi)
GPUs are delivering exponential improvements in computing performance and scalability, and new parallel programming architectures such as CUDA are allowing smart technologists to harness the power of GPUs to address hitherto insoluble problems. This talk will illustrate the emerging opportunities and solutions that GPUs and parallel programming can offer in medical instruments and imaging, defense and surveillance, autonomous vehicles, the internet of things and sensory computing, manufacturing design and simulation, and seismic geology. The talk will be relevant to entrepreneurs who are thinking about the "next big thing" and to investors who may be thinking about future mega trends.
Presentation by Jonathan Cohen & Mark Berger at Bioinformatics conference July 2013. It covers
- GPU Programming in 10 slides
- GPUs in Bioinformatics
- Porting SeqAn to CUDA
- Resources for developers and bioinformatics professionals
Simple guide to understanding customers' needs and positioning the best NVIDIA solution. This is an easy-to-use sales guide that we provide to our partners.
Tesla 2009-2013 and beyond. Check out the amazing progress we've made in the past 4 years. This is a presentation made by my colleague Sumit Gupta at the NVIDIA Investor Day on 11 April 2013.
We have made significant progress over the past couple of years working with scientists around the world, helping them to accelerate scientific discovery using NVIDIA Tesla GPUs and CUDA computing.
Securing your Kubernetes cluster: a step-by-step guide to success! (KatiaHIMEUR1)
Today, after several years of existence, with an extremely active community and an ultra-dynamic ecosystem, Kubernetes has established itself as the de facto standard in container orchestration. Thanks to a wide range of managed services, it has never been so easy to set up a ready-to-use Kubernetes cluster.
However, this ease of use means that the subject of security in Kubernetes is often left for later, or even neglected. This exposes companies to significant risks.
In this talk, I'll show you step-by-step how to secure your Kubernetes cluster for greater peace of mind and reliability.
Kubernetes & AI - Beauty and the Beast!?! @KCD Istanbul 2024 (Tobias Schneck)
As AI technology pushes into IT, I was wondering, as an "infrastructure container Kubernetes guy", how this fancy AI technology gets managed from an infrastructure operations point of view. Is it possible to apply our lovely cloud native principles as well? What benefits could the two technologies bring to each other?
Let me take these questions and guide you on a short journey through existing deployment models and use cases for AI software. Using practical examples, we discuss what cloud/on-premise strategy we may need in order to apply AI to our own infrastructure and make it work from an enterprise perspective. I want to give an overview of infrastructure requirements and technologies, and of what could benefit or limit your AI use cases in an enterprise environment. An interactive demo will give you some insights into approaches I already have working for real.
The Art of the Pitch: WordPress Relationships and Sales (Laura Byrne)
Clients don’t know what they don’t know. What web solutions are right for them? How does WordPress come into the picture? How do you make sure you understand scope and timeline? What do you do if something changes?
All these questions and more will be explored as we talk about matching clients’ needs with what your agency offers without pulling teeth or pulling your hair out. Practical tips and strategies for successful relationship building that leads to closing the deal.
Elevating Tactical DDD Patterns Through Object Calisthenics (Dorra BARTAGUIZ)
After immersing yourself in the blue book and its red counterpart, attending DDD-focused conferences, and applying tactical patterns, you're left with a crucial question: How do I ensure my design is effective? Tactical patterns within Domain-Driven Design (DDD) serve as guiding principles for creating clear and manageable domain models. However, achieving success with these patterns requires additional guidance. Interestingly, we've observed that a set of constraints initially designed for training purposes remarkably aligns with effective pattern implementation, offering a more ‘mechanical’ approach. Let's explore together how Object Calisthenics can elevate the design of your tactical DDD patterns, offering concrete help for those venturing into DDD for the first time!
Dev Dives: Train smarter, not harder – active learning and UiPath LLMs for do... (UiPathCommunity)
💥 Speed, accuracy, and scaling – discover the superpowers of GenAI in action with UiPath Document Understanding and Communications Mining™:
See how to accelerate model training and optimize model performance with active learning
Learn about the latest enhancements to out-of-the-box document processing – with little to no training required
Get an exclusive demo of the new family of UiPath LLMs – GenAI models specialized for processing different types of documents and messages
This is a hands-on session specifically designed for automation developers and AI enthusiasts seeking to enhance their knowledge in leveraging the latest intelligent document processing capabilities offered by UiPath.
Speakers:
👨🏫 Andras Palfi, Senior Product Manager, UiPath
👩🏫 Lenka Dulovicova, Product Program Manager, UiPath
Generating a custom Ruby SDK for your web service or Rails API using Smithy (g2nightmarescribd)
Have you ever wanted a Ruby client API to communicate with your web service? Smithy is a protocol-agnostic language for defining services and SDKs. Smithy Ruby is an implementation of Smithy that generates a Ruby SDK using a Smithy model. In this talk, we will explore Smithy and Smithy Ruby to learn how to generate custom feature-rich SDKs that can communicate with any web service, such as a Rails JSON API.
Transcript: Selling digital books in 2024: Insights from industry leaders - T... (BookNet Canada)
The publishing industry has been selling digital audiobooks and ebooks for over a decade and has found its groove. What’s changed? What has stayed the same? Where do we go from here? Join a group of leading sales peers from across the industry for a conversation about the lessons learned since the popularization of digital books, best practices, digital book supply chain management, and more.
Link to video recording: https://bnctechforum.ca/sessions/selling-digital-books-in-2024-insights-from-industry-leaders/
Presented by BookNet Canada on May 28, 2024, with support from the Department of Canadian Heritage.
Software Delivery At the Speed of AI: Inflectra Invests In AI-Powered Quality (Inflectra)
In this insightful webinar, Inflectra explores how artificial intelligence (AI) is transforming software development and testing. Discover how AI-powered tools are revolutionizing every stage of the software development lifecycle (SDLC), from design and prototyping to testing, deployment, and monitoring.
Learn about:
• The Future of Testing: How AI is shifting testing towards verification, analysis, and higher-level skills, while reducing repetitive tasks.
• Test Automation: How AI-powered test case generation, optimization, and self-healing tests are making testing more efficient and effective.
• Visual Testing: Explore the emerging capabilities of AI in visual testing and how it's set to revolutionize UI verification.
• Inflectra's AI Solutions: See demonstrations of Inflectra's cutting-edge AI tools like the ChatGPT plugin and Azure Open AI platform, designed to streamline your testing process.
Whether you're a developer, tester, or QA professional, this webinar will give you valuable insights into how AI is shaping the future of software delivery.
UiPath Test Automation using UiPath Test Suite series, part 3DianaGray10
Welcome to UiPath Test Automation using UiPath Test Suite series part 3. In this session, we will cover desktop automation along with UI automation.
Topics covered:
UI automation Introduction,
UI automation Sample
Desktop automation flow
Pradeep Chinnala, Senior Consultant Automation Developer @WonderBotz and UiPath MVP
Deepak Rai, Automation Practice Lead, Boundaryless Group and UiPath MVP
DevOps and Testing slides at DASA ConnectKari Kakkonen
My and Rik Marselis slides at 30.5.2024 DASA Connect conference. We discuss about what is testing, then what is agile testing and finally what is Testing in DevOps. Finally we had lovely workshop with the participants trying to find out different ways to think about quality and testing in different parts of the DevOps infinity loop.
GDG Cloud Southlake #33: Boule & Rebala: Effective AppSec in SDLC using Deplo...James Anderson
Effective Application Security in Software Delivery lifecycle using Deployment Firewall and DBOM
The modern software delivery process (or the CI/CD process) includes many tools, distributed teams, open-source code, and cloud platforms. Constant focus on speed to release software to market, along with the traditional slow and manual security checks has caused gaps in continuous security as an important piece in the software supply chain. Today organizations feel more susceptible to external and internal cyber threats due to the vast attack surface in their applications supply chain and the lack of end-to-end governance and risk management.
The software team must secure its software delivery process to avoid vulnerability and security breaches. This needs to be achieved with existing tool chains and without extensive rework of the delivery processes. This talk will present strategies and techniques for providing visibility into the true risk of the existing vulnerabilities, preventing the introduction of security issues in the software, resolving vulnerabilities in production environments quickly, and capturing the deployment bill of materials (DBOM).
Speakers:
Bob Boule
Robert Boule is a technology enthusiast with PASSION for technology and making things work along with a knack for helping others understand how things work. He comes with around 20 years of solution engineering experience in application security, software continuous delivery, and SaaS platforms. He is known for his dynamic presentations in CI/CD and application security integrated in software delivery lifecycle.
Gopinath Rebala
Gopinath Rebala is the CTO of OpsMx, where he has overall responsibility for the machine learning and data processing architectures for Secure Software Delivery. Gopi also has a strong connection with our customers, leading design and architecture for strategic implementations. Gopi is a frequent speaker and well-known leader in continuous delivery and integrating security into software delivery.
2. [Chart: accelerator performance on the Top500 list, 2006-2012 — NVIDIA acceleration: 31 petaFLOPS; total acceleration: 37 petaFLOPS]
GPU-accelerated computing has become an important catalyst in the advancement of science and technology, enabling tremendous breakthroughs by letting us do more, faster.

The need to solve complex computational problems is increasingly commonplace, and GPU-powered accelerated computing is meeting that need by delivering an order of magnitude more performance, more efficiently.
Accelerator Performance in Supercomputing: the share of supercomputer performance delivered by GPUs has increased from 0 to 19% in five years and is growing exponentially. (Source: Top500.org)
3. Developing with GPUs is now pervasive in computing:

430,000,000+ CUDA GPUs have been shipped (that's more than two shipped every second!)
35,000+ papers have been published on CUDA
8,000+ institutions have registered CUDA developers
580+ universities are teaching GPU programming
200+ major applications are now GPU accelerated
50+ of the world's fastest supercomputers are powered by CUDA GPUs

The benefits of GPU-accelerated computing are more widely recognized every day, leading more industry leaders and members of the research community to adopt both NVIDIA GPUs and CUDA®, the world's most pervasive parallel-computing platform and programming model.
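The headline shipment rate checks out arithmetically. A quick sketch; the roughly 6.3-year window from the first CUDA-capable GPUs in late 2006 to GTC 2013 is our assumption, not a figure from the slides:

```python
# Back-of-envelope check of the "more than two shipped every second" claim.
# The 6.3-year shipping window (late 2006 through early 2013) is assumed.
gpus_shipped = 430_000_000
years_shipping = 6.3
seconds = years_shipping * 365.25 * 24 * 3600
rate = gpus_shipped / seconds
print(f"about {rate:.1f} CUDA GPUs shipped per second")  # about 2.2
```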
4. Driven by the broad use of NVIDIA visual computing technology, the GPU Technology Conference (GTC) has become the world's most important event for GPU developers.

GTC is where art meets science meets engineering meets business.
5. Academic and Research: Air Force Research Lab, Barcelona Supercomputing Center, BGI, Carnegie Mellon, Chinese Academy of Sciences, CERN, ETRI Korea, Expressions College for Digital Arts, Fermi National Accelerator Lab, Georgia Tech, Harvard, IIT Italy, JPL, Johns Hopkins, Jülich Supercomputing Center, KAUST, Los Alamos National Lab, MIT Lincoln Lab, NASA, NATO, Naval Air Warfare Center, NOAA, Oak Ridge National Lab, Russian Academy of Sciences, Sandia National Lab, Stanford, Swiss Supercomputing Center, Tokyo Tech

Industry: Audi, Amazon, Apple, Bank of America, BMW, Boeing, Carl Zeiss, Chevron, Chrysler, Deutsche Telekom, ESPN, E*Trade, ExxonMobil, Fiat, Ford, Google, Goldman Sachs, Gulfstream Aerospace, Harley-Davidson, Honda Research, iRobot, LEGO, NASDAQ, Netflix, Nike, Pixar, SpaceX, Tesla Motors, Walt Disney
Every year, GTC provides an important venue for exploring the game-changing impact GPUs deliver to science, technology, and industry. And the 2013 event was one of the most impressive ever.

The four-day conference featured talks by industry and NVIDIA experts on GPU-accelerated computing, covering problem solving in everything from medicine to product design and big data, as well as hands-on sessions on how to take advantage of this disruptive technology.
More than 3,000 attendees from over 50 countries
425 conference sessions
150 research posters

Here's a sampling of organizations that attended GTC 2013:
6. Accelerating the Future

GPUs are already finding their way into systems and applications that were undreamed of a decade ago. Soon, mobile, desktop, and supercomputer technologies will intersect in powerful and surprising ways.
7. GTC 2013 also featured the unveiling of several technologies that will impact the future of computing. This included the introduction of the NVIDIA GRID™ VCA, the industry's first Visual Computing Appliance. The GRID VCA enables businesses to deliver ultra-fast GPU performance to any Windows, Linux, or Mac client device on their network while providing the same rich graphics experience as a dedicated desktop PC or workstation.

GRID VCA provides a workstation-quality experience on any PC, Mac, or Linux device and runs applications for power users, such as those from Adobe, Autodesk, and Dassault Systèmes. It is a turnkey appliance that is easy to install and manage.
8. At GTC 2013, NVIDIA unveiled its GPU architecture roadmap, demonstrating that NVIDIA continues to focus on dramatically improving performance while also improving energy efficiency. Today, its Kepler™ architecture is at the heart of the fastest, most energy-efficient accelerators, including those that power the world's fastest supercomputer for open science. The next-generation Maxwell architecture will deliver unified virtual memory, increasing performance and shortening development time for application developers. The Volta architecture will follow; it is designed to solve one of the biggest challenges in computing today: memory bandwidth.
[Chart: double-precision GigaFLOPS per watt (log scale, 0.5-32), 2008-2014 — Tesla: CUDA; Fermi: full double precision; Kepler: dynamic parallelism; Maxwell: unified virtual memory; Volta: stacked DRAM]
The next-generation Maxwell architecture will deliver unified virtual memory, followed by the Volta architecture, which is designed to achieve 1 TB/sec of memory bandwidth, equivalent to transferring the entire contents of a Blu-ray disc in 1/50th of a second.
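The bandwidth comparison works out roughly as stated. Here is the arithmetic; the 25 GB capacity of a single-layer Blu-ray disc is our assumption:

```python
# Time to move one Blu-ray disc's worth of data at Volta's 1 TB/sec target.
bandwidth_bytes_per_sec = 1e12   # 1 TB/sec
disc_bytes = 25e9                # 25 GB single-layer Blu-ray (assumed)
seconds = disc_bytes / bandwidth_bytes_per_sec
print(f"{seconds * 1000:.0f} ms")  # 25 ms, on the order of 1/50th of a second
```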
9. [Chart: relative Tegra performance (log scale), 2011-2015 — Tegra 2: first dual-core A9; Tegra 3: first quad-core A9, first power-saver core; Tegra 4: first LTE SDR modem, computational camera; Logan: Kepler-based GPU, CUDA, OpenGL 4.3; Parker: Denver-based CPU, Maxwell-based GPU, FinFET]
The NVIDIA Tegra® mobile processor roadmap was also unveiled at the conference. It demonstrated that NVIDIA's investment in computing is everywhere: not just in PCs and datacenters, but also in cars, phones, tablets, gaming portables, and anything with a display. For example, Tegra 4 leverages the CPU, GPU, and ISP to deliver advanced computational photography features like real-time HDR and intelligent object tracking. Tegra 4 will be followed by Project Logan, which pairs ARM®-based mobile processor cores with our Kepler GPUs. This will be followed by Project Parker, which will join new 64-bit, ARM-compatible CPU cores with our next-generation Maxwell GPU architecture.

Tegra 4 will be followed by Logan, bringing technologies currently found in high-performance PCs and workstations to mobile devices. With Parker, a server-class CPU will be combined with our Maxwell GPU's unified virtual memory and advanced performance per watt.
10. To address the industry's requirement for developing applications for low-power architectures, NVIDIA also introduced the Kayla platform, an ARM development platform for mobile computing and HPC applications. It's designed to deliver the highest performance and efficiency for the widest range of next-generation ARM-based OpenGL and CUDA applications by combining a Tegra quad-core ARM processor with a Kepler-based GPU. This gives developers a great way to take advantage of the next-gen Tegra SoCs based on the Logan architecture.

Shown on the Kayla platform: real-time ray tracing, FFT-based ocean simulation, and smoke-particle simulation.
11. Accelerating Industry

Taking center stage at the conference were breakthroughs in various fields, from science to a wide range of industries, spurred on by accelerated computing. GTC 2013 showcased a number of industry-changing discoveries made possible by GPUs. These included:

>> safer and smarter automobiles
>> medical applications, including 4D heart ultrasound that can potentially save lives
>> cinema and special effects
>> geospatial intelligence made possible by video and image processing
>> digital product design across various industries
>> big data analytics applied to everyday uses
12. Bringing Fans Closer to the Game (ESPN)

Imaging capabilities in broadcast media have progressed by leaps and bounds over the last few decades, especially in sports broadcasting. This gives today's sports fans a far more intimate and entertaining experience of the games they love.

The ESPN Emerging Technology Group is behind many of these technology breakthroughs. Based on an idea from a prior GTC, ESPN developed a software architecture for sports broadcasts using NVIDIA GPUs. Today ESPN is using GPUs to convert 4K video input into 720p broadcasts with spectacular "lossless" zoom. Virtual cameras, real-time overlay effects, and other features deliver a closer, enhanced experience of sports broadcasts to viewers.

Melding Art and Simulation in Industrial Design (Harley-Davidson Motor Company)

Harley-Davidson has been designing and manufacturing motorcycles for more than a century. Today, they still produce vehicles that are coveted by one of the most loyal customer bases in the world.

In recent years, the design process has evolved to include digital visualization tools during the conceptual phase, between styling and engineering. GPU-accelerated industrial design integrates art with modeling and simulation, ultimately reducing time for product development, improving styling intent, allowing greater conceptual exploration, and delivering higher-quality designs earlier.
13. Better Safety and Infotainment in Cars

GTC 2013 featured presenters from Audi, BMW, Chrysler, Fiat, Honda, and Peugeot Citroën sharing how GPUs are increasingly playing a central role in auto safety and infotainment systems, as well as breakthroughs in design. Companies including Audi and Lamborghini have already adopted NVIDIA technology, and it will soon power models from BMW, Tesla Motors, Mini, and Rolls-Royce, among others. Honda Research is also working on future technologies, such as merging digital instrument clusters and head-up displays. And researchers at Carnegie Mellon are using GPUs to enable gesture recognition and natural language processing, enabling a new generation of human-machine interfaces to be developed for safer in-vehicle use.

Navigating Chaotic Roadways More Safely (Audi)

Researchers from Audi revealed how GPUs process big data in real time as part of an initiative to make driving safer in urban areas, eliminate traffic bottlenecks, and make parking more efficient. Highly intelligent, power-efficient systems in cars will soon suggest when to leave for your daily commute, whether or not to stop for coffee, and even where to find the best parking choices, all based on real-time information. It's all about communicating with the driver in an intuitive, non-distracting way.

With increasing demand for advanced, power-efficient computing, NVIDIA unveiled the Jetson automotive development platform at GTC. With this car-stereo-sized system, developers can easily create and test automotive, image processing, and computer-vision applications.
14. Identifying Audio Patterns (Shazam)

Shazam is a commercial mobile phone-based music identification service that connects more than 300 million people in more than 200 countries and 33 languages. It uses GPUs to instantly search and identify songs from its 27 million track database more than 10 million times a day. This is accomplished by assigning an acoustic fingerprint to each song sample, matching that sample against its track library, and returning the answer in just a few seconds. Today, every search is done using GPUs. Because of the performance and power efficiency of the GPU, Shazam is able to scale its operations at less than half the cost.
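The matching idea described above can be sketched in miniature. This is a toy illustration, not Shazam's actual algorithm: real systems fingerprint spectrogram peak pairs extracted from audio, whereas the "peaks" and track names here are made up.

```python
# Toy fingerprint matching: hash (time-delta, freq, freq) triples from
# consecutive peaks, so a clip matches no matter where in the track it starts.

def fingerprint(peaks):
    """peaks: list of (time, frequency) pairs; returns a set of hashes."""
    return {hash((b[0] - a[0], a[1], b[1])) for a, b in zip(peaks, peaks[1:])}

# Hypothetical track library of precomputed fingerprints.
library = {
    "track_a": fingerprint([(0, 440), (1, 660), (2, 880), (3, 440)]),
    "track_b": fingerprint([(0, 200), (1, 300), (2, 250), (3, 500)]),
}

def identify(sample_peaks):
    """Return the library track whose fingerprint overlaps the sample most."""
    sample = fingerprint(sample_peaks)
    return max(library, key=lambda name: len(library[name] & sample))

# A clip taken from the middle of track_a: absolute times differ,
# but the time deltas (and so the hashes) still match.
print(identify([(10, 440), (11, 660), (12, 880)]))  # → track_a
```

On a GPU, the overlap counting would run for millions of tracks in parallel, which is what makes the 10-million-searches-a-day scale described above tractable.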
Big Data Trend

This year's conference highlighted a growing trend of top enterprise and mobile application companies like IBM and Groupon using GPUs to accelerate consumer and commercial big data applications. Industry leaders such as Shazam, as well as pioneering startup Cortexica, also use GPUs to accelerate large-scale audio search, real-time Twitter analysis, and image matching. In each use case, GPUs dramatically accelerate the processing of massive datasets with complex algorithms and make it possible for these big data companies to scale their infrastructure cost-effectively to meet growing demand.
15. Accelerating Science

GTC has also been the venue for the latest breakthroughs in science and research on a variety of topics. These included:

>> intelligent object recognition by robots and cars
>> image processing for geospatial intelligence
>> 3D visualization and better weather prediction for disaster prevention
>> affordable whole-genome sequencing to predict genetic defects and diseases
16. Better Human Genome Mapping and Disease Prediction (Baylor College of Medicine, Rice University)

Stretch a strand of human DNA out to its full length and it's two meters long. Yet all that material, and the information it carries, gets balled up inside the nucleus of a single cell. By unraveling the human genome, we can unlock the mysteries around genetic causes of disease and the environmental factors that impact genetic behavior.

Using GPUs, Harvard Fellow Erez Lieberman-Aiden discovered that DNA comes together in fractal globules (the same shape as uncooked ramen noodles) and that its folds determine whether healthy or malignant cells will be produced. The technique relies on looking at the billions of snapshots generated by modern DNA sequencing techniques and comparing their 3D relationships. GPUs are essential in analyzing the enormous amount of data at the heart of the process, enabling researchers to map out a person's genome and predict diseases.

Faster, Affordable Gene Sequencing

GPU-accelerated gene sequencing is driving down the cost of genomic research significantly. The cost to sequence an entire human genome could soon be reduced to $1,000.

"By democratizing genome sequencing, we expect to see an unprecedented wave of innovation in life sciences."
— Alan Williams, Life Technologies
17. Accurate Weather Modeling and Prediction (NOAA Earth System Research Laboratory)

Imagine the impact on public safety if we could pinpoint a significant natural disaster, such as the landfall of Hurricane Sandy, five days in advance. Our ability to do that may be closer than you realize.

The National Oceanic and Atmospheric Administration (NOAA) presented the latest research in high-resolution weather models. Such computationally intense models were able to pinpoint the landfall of major storms and hurricanes such as Sandy. GPU computing will be essential to the daily use of highly accurate models for operational weather modeling, delivering better power and cost efficiency in data centers.
18. Real-Time Mine Hunting with Unmanned Submarines (NATO STO Centre for Maritime Research and Experimentation; Istituto Italiano di Tecnologia, Italy)

The floors of the Adriatic and Mediterranean seas are littered with tens of thousands of mines, bombs, and other munitions that were lost or abandoned after World Wars I and II.

To locate and identify the dangerous materials, NATO is using autonomous underwater vehicles equipped with synthetic aperture sonar (SAS) running on GPUs. The SAS application runs up to 100X faster with GPUs, enabling real-time object recognition and intelligent decision-making capabilities within the vehicle's six-hour operational window. With GPU acceleration, mine hunting is faster, more affordable, more reliable, and safer.

Robot Cognition: Thinking Like Humans (Plymouth University)

Researchers have long believed that human cognition is developed through interacting with the environment and other humans using limbs and senses, and that human-like manipulation plays a vital role in the development of cognition.

At Plymouth University, researchers are contributing to the emergence of humanoid robots by modeling biological neural networks to better understand both human cognition and artificial intelligence. These networks consist of thousands of neurons connected to each other through millions of synapses. The systems integrate visual processing, linguistics, and other inputs such as touch, temperature, and position. This is made possible with GPUs that perform the millions of calculations needed to activate the neural network every 50-100 milliseconds, allowing researchers to teach the robot to think like a human.
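The update described above (a weighted sum across every synapse, repeated every 50-100 milliseconds) can be sketched for a tiny network. This is an illustrative toy with made-up weights, not Plymouth's model; at realistic sizes, with thousands of neurons and millions of synapses, the same loop becomes the massively parallel workload that GPUs handle.

```python
# One activation step of a tiny rate-based neural network: each neuron
# sums its weighted synaptic inputs, squashes the sum through a sigmoid,
# and fires (1.0) if the result crosses a threshold. Weights are made up.
import math

def step(activations, weights, threshold=0.5):
    new_state = []
    for row in weights:                          # one row of synapses per neuron
        total = sum(w * a for w, a in zip(row, activations))
        new_state.append(1.0 if 1 / (1 + math.exp(-total)) > threshold else 0.0)
    return new_state

weights = [[0.9, -0.2, 0.3],
           [0.1, 0.8, -0.5],
           [-0.4, 0.2, 0.7]]

state = [0.0, 1.0, 1.0]
state = step(state, weights)
print(state)  # → [1.0, 1.0, 1.0]
```

Each neuron's update is independent of the others within a step, which is exactly the structure a GPU exploits: one thread per neuron, all firing decisions computed at once.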
19. GPU Supercomputing on the Rise

At GTC, the Swiss National Supercomputing Center announced it will deploy NVIDIA GPUs to build Europe's fastest GPU supercomputer. Piz Daint will be used for scientific discovery in weather modeling, astrophysics, materials science, and life science, the latest evidence that GPU computing has passed the tipping point.

GPU accelerators have evolved into general-purpose processors ideally suited to tackle massively parallel computing problems. Today, more than 50 systems on the Top500 list of supercomputers are powered by GPUs.

Powering the Biggest Breakthroughs (Oak Ridge National Laboratory)

At GTC, representatives from Oak Ridge National Laboratory presented early science results from the Titan supercomputer, the world's fastest for open science, as well as how researchers and scientists can gain access to this powerful computing resource. Titan delivers peak performance of 27 petaflops, with 18,688 GPUs providing 90% of the computing power, and is open to academia, government labs, and industry from across the globe.
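The Titan figures quoted above are internally consistent, as a quick check using only numbers from the text shows:

```python
# Per-GPU contribution implied by Titan's stated specs:
# 27 petaflops peak, with 18,688 GPUs supplying 90% of it.
peak_flops = 27e15
gpu_share = 0.90
num_gpus = 18_688
per_gpu_tflops = peak_flops * gpu_share / num_gpus / 1e12
print(f"about {per_gpu_tflops:.1f} TFLOPS per GPU")  # about 1.3
```

Roughly 1.3 TFLOPS per accelerator, consistent with a Kepler-class GPU of that era.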
20. GTC is where researchers, developers, and technologists from around the globe meet to learn how others are solving the toughest computational problems.

"It is the best conference for meeting people—it even beats Supercomputing."
— Guido Juckeland, Sr. Systems Engineer, TU Dresden

"Unbelievable, to see in the same place financial engineers, physicians, astrophysicists, game creators… It is the only event in the world where you can see all those talented people!"
— Jonathan Lellouche, Quantitative Analyst, MUREX

"GTC is action-packed and stimulating like no other conference. NVIDIA has placed scientific content top and center of GTC, while at the same time organizing an exciting program with educational sessions, stunning demos, and great networking opportunities. There will be so many people there that I want to meet again or for the first time. GTC is simply too good to pass [up]."
— Lorena Barba, Assistant Professor, Boston University

"I was really impressed with the breadth of the subjects and the focus on performance. Really impressive."
— Mikael Sorboen, Head of Risk Systems, BNP Paribas

"Every year I've made a lot of important contacts for new directions for my research."
— Peter Lu, Post-Doctoral Research Fellow, Harvard University

"Though it was my first GTC, I was blown away. I felt like an ant in New York City, but in a good way. Interacting with strangers across the spectrum and knowing that there was so much to learn from them was mind-blowing, but an amazing feeling."
— David Norman, Engineer/Tool Developer, The Boeing Company