Presentation at the first "Yomikai" event of 2020, hosted by Ridge-i, Inc., in Tokyo. The theme this time was NeurIPS 2019 papers. I chose to review this "Bayesian Deep Learning" workshop paper, since it has important implications both for the Bayesian DL community and for DL as a whole.
1. Review of Filos et al. (2019): "A Systematic Comparison of Bayesian Deep Learning Robustness in Diabetic Retinopathy"
NeurIPS Yomikai (paper-reading group) @ Ridge-i
January 31st 2020
Aaron C. Bell, Engineer
3. • Paper PDF: http://bayesiandeeplearning.org/2019/papers/12.pdf
5. The context: a major problem with DL, and a major bottleneck in BDL
● UCI "toy" datasets limit research in BDL… are they too easy?
● Yachts, Wine, Concrete, Energy…
9. What is Bayesian deep learning?
● Extension of Bayesian methods to deep learning
○ Taking account of prior information
○ Getting robust uncertainties on predictions
○ Allows DL to be applied in real-world applications where uncertainties are critical
● Allows us to ask:
○ How powerful are your results… really?
○ Is higher accuracy really a significant result?
■ Opens the door to using DL for scientific hypothesis testing
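The paper benchmarks several ways of obtaining these uncertainties (MC dropout, deep ensembles, mean-field variational inference, and combinations thereof). As a minimal sketch of the idea behind MC dropout, the snippet below keeps dropout active at test time and treats the spread over repeated stochastic forward passes as predictive uncertainty. The toy "network" and all names here are my own illustration, not the paper's code.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "network": fixed weights for a 2-class classifier on 2-D inputs.
W = rng.normal(size=(2, 2))

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def stochastic_forward(x, p_drop=0.5):
    """One forward pass with dropout left ON at test time (MC dropout)."""
    mask = rng.random(x.shape[0]) >= p_drop   # randomly drop input units
    h = (x * mask) / (1.0 - p_drop)           # inverted-dropout rescaling
    logits = h @ W
    return sigmoid(logits[1] - logits[0])     # P(class = 1)

def mc_dropout_predict(x, T=200):
    """Average T stochastic passes: predictive mean, with std as uncertainty."""
    probs = np.array([stochastic_forward(x) for _ in range(T)])
    return probs.mean(), probs.std()

mean, std = mc_dropout_predict(np.array([1.0, -1.0]))
print(f"p(class=1) = {mean:.2f} +/- {std:.2f}")
```

A large std flags an input the model is unsure about, which is exactly what the referral-based evaluation below exploits.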
18. The paper’s objectives:
1) Widen the bottleneck in BDL: provide a better benchmark dataset (than UCI)
2) Show off the strong points of BDL: argue a specific, challenging, real-world example where BDL is needed: medical diagnosis.
21. A better benchmark dataset for BDL
● Step 1: Choose an existing dataset that’s suited for BDL’s strengths:
○ 1) High-dimensional
○ 2) Large number of examples
○ 3) Requiring more complex models
● Step 2: Enhance suitability for BDL benchmarking
○ 1) Pre-process the dataset.
○ 2) Develop API for benchmarking.
26. A better benchmark dataset for BDL
● Step 1: Choose an existing high-dimensional, large dataset: diabetic retinopathy (DR) “fundus” images (Kaggle dataset)
28. A better benchmark dataset for BDL
● Step 2: Pre-process the dataset:
○ Redefine the problem: 5 classes of diabetic retinopathy (DR) to binary
0: No DR → 0: Sight not in danger
1: Mild DR → 0: Sight not in danger
2: Moderate DR → 1: Sight in danger
3: Severe DR → 1: Sight in danger
4: Proliferative DR → 1: Sight in danger
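This relabeling is simple enough to sketch in a few lines (a hedged sketch: the cut-off at grade 2, moderate or worse, follows the binary task described here; the function and variable names are my own, not the paper's code):

```python
# Collapse the 5-class Kaggle DR grades into the binary
# "sight in danger" task. Grades 0 (no DR) and 1 (mild) map
# to 0; grades 2-4 (moderate or worse) map to 1.

def to_binary_label(dr_grade: int) -> int:
    """Map a 5-class DR grade (0-4) to the binary referral label."""
    if dr_grade not in range(5):
        raise ValueError(f"unknown DR grade: {dr_grade}")
    return int(dr_grade >= 2)

# Example: relabel a small batch of grades.
grades = [0, 1, 2, 3, 4]
binary = [to_binary_label(g) for g in grades]  # [0, 0, 1, 1, 1]
```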
29. A better benchmark dataset for BDL
● Step 2: Pre-process the dataset:
○ Augment data: Make it challenging enough for BDL.
30. Objective 2) Show an example where BDL is needed
● Giving predictions with uncertainties
● Informing medical diagnosis
● Streamlining patient referrals
[Figure (slides 31-32): triage pipeline, where confident model predictions yield an automatic final diagnosis and uncertain cases trigger a referral to a “real” doctor]
55. Comparison of Various Approaches: Data retention
[Figure: test performance vs. fraction of data retained, in-domain (Kaggle DR) and out-of-domain (India blindness detection dataset)]
56. Comparison of Various Approaches: Data retention
● Panels: in-domain (Kaggle DR) and out-of-domain (India blindness detection dataset)
● All models converge on the full dataset (within standard error bars), so the uncertainty comparison is fair.
57. Comparison of Various Approaches: Data retention
● Ensemble MC Dropout always performs best at 50% data retention
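The data-retention evaluation itself is easy to sketch: rank test cases by predictive uncertainty, keep only the most confident fraction, and measure accuracy on what is retained. A minimal sketch with simulated predictions (all names and data are illustrative, not the paper's code):

```python
import numpy as np

# Simulate binary labels and predicted probabilities that are
# mostly right, with noise, then score retention by uncertainty.
rng = np.random.default_rng(0)
n = 1000
labels = rng.integers(0, 2, size=n)
probs = np.clip(labels + rng.normal(0, 0.35, size=n), 0.0, 1.0)
preds = (probs >= 0.5).astype(int)

# Binary predictive entropy as the uncertainty score.
eps = 1e-12
entropy = -(probs * np.log(probs + eps) + (1 - probs) * np.log(1 - probs + eps))

def accuracy_at_retention(fraction: float) -> float:
    """Accuracy on the `fraction` of cases with lowest uncertainty."""
    k = int(fraction * n)
    keep = np.argsort(entropy)[:k]  # most confident cases first
    return float((preds[keep] == labels[keep]).mean())

full = accuracy_at_retention(1.0)   # accuracy on everything
half = accuracy_at_retention(0.5)   # accuracy on the confident half
```

If the uncertainty estimates are any good, accuracy on the retained half should be at least as high as on the full set; that is exactly the shape of the curves the slides compare.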
61. Major conclusions...
● Overuse of UCI may have misled the BDL community.
● Harder benchmarks give a better picture of BDL method performance.
● BDL methods are suited for cases where uncertainty is critical for the downstream decision task (e.g., medical diagnosis, re-evaluation).
You’re not just learning the weights; you’re learning a distribution over each weight. You assume a prior and a posterior distribution, conventionally Gaussian (unless you have prior information to say otherwise) to reduce computational complexity.
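The core idea, a Gaussian distribution per weight sampled at prediction time, can be sketched in a few lines of numpy (illustrative only: the variational parameters here are made up, not learned, and all names are my own):

```python
import numpy as np

# Instead of a point estimate per weight, keep a Gaussian
# (mean, std) per weight and sample whole weight matrices.
rng = np.random.default_rng(42)
in_dim, out_dim, n_samples = 3, 1, 200

# "Learned" variational parameters for one linear layer (made up).
w_mu = rng.normal(0, 1, size=(in_dim, out_dim))
w_sigma = np.full((in_dim, out_dim), 0.1)

x = rng.normal(0, 1, size=(1, in_dim))  # one test input

# Sample weights from the posterior and collect predictions.
preds = []
for _ in range(n_samples):
    w = w_mu + w_sigma * rng.normal(size=w_mu.shape)  # reparameterization
    preds.append((x @ w).item())
preds = np.array(preds)

mean_pred = preds.mean()  # predictive mean
std_pred = preds.std()    # predictive uncertainty
```

The spread of `preds` is the robust uncertainty the slides keep referring to: it comes for free once weights are distributions rather than numbers.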
You can also think of building uncertainties in terms of the test output, something akin to “bootstrapping”. But how can we bootstrap neural-net inference? It has become popular in the BDL community to do this by applying dropout at test time and running a Monte Carlo simulation of the test output. This builds a distribution of potential outcomes.
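MC dropout reduces to a small loop: keep dropout active at inference and run the same input through the network many times. A minimal numpy sketch (the tiny fixed network and all names are illustrative, not the paper's code):

```python
import numpy as np

# MC dropout at test time: repeated stochastic forward passes
# through one fixed network, with dropout left on.
rng = np.random.default_rng(7)
hidden, n_passes, p_drop = 32, 100, 0.5

# A tiny one-hidden-layer net with made-up weights.
W1 = rng.normal(0, 0.5, size=(4, hidden))
W2 = rng.normal(0, 0.5, size=(hidden, 1))
x = rng.normal(0, 1, size=(1, 4))

def forward_with_dropout(x):
    h = np.maximum(x @ W1, 0.0)           # ReLU
    mask = rng.random(h.shape) >= p_drop  # fresh dropout mask each pass
    h = h * mask / (1.0 - p_drop)         # inverted dropout scaling
    return (h @ W2).item()

outputs = np.array([forward_with_dropout(x) for _ in range(n_passes)])
mc_mean, mc_std = outputs.mean(), outputs.std()  # prediction + uncertainty
```

Note that in a framework like PyTorch this corresponds to leaving the model in training mode for dropout layers at inference time; the mean of the passes is the prediction and the spread is the uncertainty.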
Extremely simple compared to the previous techniques: just train a lot (hence “ensemble”) of deterministic (traditional) models in parallel, with varying random seeds. This gives a sense of the range of possible training outcomes.
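The ensemble recipe is short enough to sketch end to end (a toy 1-D linear "model" stands in for a network; all names and the training setup are illustrative):

```python
import numpy as np

# Deep-ensemble sketch: train the same model several times with
# different random seeds, then use the spread of predictions.

def train_member(seed, x, y, steps=200, lr=0.1):
    """Fit y ~ w*x by gradient descent from a seed-dependent init."""
    rng = np.random.default_rng(seed)
    w = rng.normal()  # random initialization, varies with the seed
    for _ in range(steps):
        grad = 2 * np.mean((w * x - y) * x)
        w -= lr * grad
    return w

rng = np.random.default_rng(0)
x = rng.normal(size=50)
y = 3.0 * x + rng.normal(0, 0.1, size=50)  # true slope 3.0 plus noise

ensemble = [train_member(seed, x, y) for seed in range(5)]
preds_at_2 = np.array([w * 2.0 for w in ensemble])  # predict at x = 2
mean_pred, spread = preds_at_2.mean(), preds_at_2.std()
```

On this convex toy problem all members converge to nearly the same solution, so the spread is tiny; with non-convex neural networks, different seeds land in different minima, and that disagreement is exactly what makes the ensemble spread an informative uncertainty.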