The document discusses potential applications of deep learning in healthcare. It begins by explaining that deep learning models can improve the accuracy of diagnosis, prognosis, and risk prediction by analyzing large datasets. It then discusses how deep learning can optimize hospital processes such as resource allocation and patient flow through early and accurate prediction of disease. Finally, it mentions that deep learning can help identify patient subgroups for personalized and precision medicine approaches.
Short presentation for a special lecture in the Medicine graduation course at Hospital de Clínicas (https://www.hcpa.edu.br/), as part of a one-day special discipline on Machine Learning and Healthcare. The goal was to introduce the importance of Deep Learning for Healthcare and to show some of its recent impact.
Intro to Deep Learning for Medical Image Analysis, with Dan Lee from Dentuit AI (Seth Grimes)
Dan Lee from Dentuit AI presented an Intro to Deep Learning for Medical Image Analysis at the Maryland AI meetup (https://www.meetup.com/Maryland-AI), May 27, 2020. Visit https://www.youtube.com/watch?v=xl8i7CGDQi0 for video.
An overview of Deep Learning with Neural Networks: use cases of deep learning and its development, and a basic introduction to the layers of Neural Networks.
Talk at the ACM SF Bay Area Chapter on Deep Learning for the medical imaging space.
The talk covers use cases, special challenges and solutions for Deep Learning for Medical Image Analysis using Tensorflow+Keras. You will learn about:
- Use cases for Deep Learning in Medical Image Analysis
- Different DNN architectures used for Medical Image Analysis
- Special purpose compute / accelerators for Deep Learning (in the Cloud / On-prem)
- How to parallelize your models for faster training and for serving inference
- Optimization techniques to get the best performance from your cluster (e.g., Kubernetes, Apache Mesos, Spark)
- How to build an efficient Data Pipeline for Medical Image Analysis using Deep Learning
- Resources to jump start your journey - like public data sets, common models used in Medical Image Analysis
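The data-pipeline bullet above is easiest to see in code. Below is a minimal, illustrative sketch in plain Python/NumPy rather than the talk's TensorFlow + Keras stack; the file names and image sizes are invented, and load_scan is a stand-in for a real DICOM/PNG reader:

```python
import numpy as np

def load_scan(path):
    # Stand-in for a real DICOM/PNG reader; returns a deterministic fake image.
    rng = np.random.default_rng(abs(hash(path)) % (2**32))
    return rng.random((64, 64), dtype=np.float32)

def pipeline(paths, batch_size=4):
    """Yield normalized, batched image arrays one batch at a time."""
    batch = []
    for p in paths:
        img = load_scan(p)
        img = (img - img.mean()) / (img.std() + 1e-8)  # per-image normalization
        batch.append(img)
        if len(batch) == batch_size:
            yield np.stack(batch)
            batch = []
    if batch:  # emit the final partial batch
        yield np.stack(batch)

paths = [f"scan_{i}.dcm" for i in range(10)]
shapes = [batch.shape for batch in pipeline(paths)]
print(shapes)  # [(4, 64, 64), (4, 64, 64), (2, 64, 64)]
```

A generator like this streams batches lazily, so the whole dataset never needs to fit in memory; tf.data builds the same idea into an optimized API.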
What Is Deep Learning? | Introduction to Deep Learning | Deep Learning Tutori... (Simplilearn)
This Deep Learning presentation will help you understand what deep learning is, why we need it, and its applications, along with a detailed explanation of Neural Networks and how they work. Deep learning is inspired by the workings of the human brain, as modeled by artificial neural networks. These networks, which represent the brain's decision-making process, use complex algorithms that process data in a non-linear way, learning in an unsupervised manner to make choices based on the input. This Deep Learning tutorial is ideal for professionals with beginner to intermediate levels of experience. Now, let us dive into this topic and understand what deep learning actually is.
Below topics are explained in this Deep Learning Presentation:
1. What is Deep Learning?
2. Why do we need Deep Learning?
3. Applications of Deep Learning
4. What is Neural Network?
5. Activation Functions
6. Working of Neural Network
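To make topics 5 and 6 concrete, here is a minimal NumPy sketch of a forward pass through a one-hidden-layer network with ReLU and sigmoid activations (the weights are random and purely illustrative):

```python
import numpy as np

def relu(x):
    return np.maximum(0.0, x)          # activation: zero out negatives

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))    # activation: squash into (0, 1)

def forward(x, w1, b1, w2, b2):
    """One hidden layer: linear -> ReLU -> linear -> sigmoid."""
    hidden = relu(x @ w1 + b1)
    return sigmoid(hidden @ w2 + b2)

rng = np.random.default_rng(0)
x = rng.random((3, 4))                     # 3 samples, 4 features
w1, b1 = rng.random((4, 5)), np.zeros(5)   # input -> 5 hidden units
w2, b2 = rng.random((5, 1)), np.zeros(1)   # hidden -> 1 output
y = forward(x, w1, b1, w2, b2)
print(y.shape)  # (3, 1); every value lies strictly between 0 and 1
```

Training then consists of nudging w1, b1, w2, b2 to reduce a cost on the outputs.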
Simplilearn’s Deep Learning course will transform you into an expert in deep learning techniques using TensorFlow, the open-source software library designed for machine learning and deep neural network research. With our deep learning course, you’ll master deep learning and TensorFlow concepts, learn to implement algorithms, build artificial neural networks, and traverse layers of data abstraction to understand the power of data, preparing you for your new role as a deep learning scientist.
Why Deep Learning?
TensorFlow is one of the most popular software platforms used for deep learning and contains powerful tools to help you build and implement artificial neural networks.
Advancements in deep learning are being seen in smartphone applications, creating efficiencies in the power grid, driving advancements in healthcare, improving agricultural yields, and helping us find solutions to climate change. With this Tensorflow course, you’ll build expertise in deep learning models, learn to operate TensorFlow to manage neural networks and interpret the results.
You can gain in-depth knowledge of Deep Learning by taking our Deep Learning certification training course. With Simplilearn’s Deep Learning course, you will prepare for a career as a Deep Learning engineer as you master concepts and techniques including supervised and unsupervised learning, mathematical and heuristic aspects, and hands-on modeling to develop algorithms.
There is booming demand for skilled deep learning engineers across a wide range of industries, making this deep learning course with TensorFlow training well-suited for professionals at the intermediate to advanced level of experience. We recommend this deep learning online course particularly for the following professionals:
1. Software engineers
2. Data scientists
3. Data analysts
4. Statisticians with an interest in deep learning
This talk will cover various medical applications of deep learning, including tumor segmentation in histology slides, MRI, CT, and X-ray data, as well as more complicated tasks such as cell counting, where the challenge is to count how many objects appear in an image. It will also cover generative adversarial networks and how they can be used for medical applications. This presentation is accessible to non-doctors and non-computer scientists.
A fast-paced introduction to Deep Learning concepts, such as activation functions, cost functions, back propagation, and then a quick dive into CNNs. Basic knowledge of vectors, matrices, and derivatives is helpful in order to derive the maximum benefit from this session.
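As a taste of the back-propagation material, the chain rule for a single linear neuron with a squared-error cost fits in a few lines (the data point and learning rate are invented for illustration):

```python
# One linear neuron y_hat = w * x + b trained on a single data point
# with cost = 0.5 * (y_hat - y) ** 2, updated by plain gradient descent.
x, y = 2.0, 9.0        # invented training point
w, b = 0.0, 0.0        # initial parameters
lr = 0.1               # learning rate

for _ in range(200):
    y_hat = w * x + b
    err = y_hat - y    # dCost/dy_hat
    w -= lr * err * x  # chain rule: dCost/dw = err * x
    b -= lr * err      # chain rule: dCost/db = err
print(round(w * x + b, 4))  # 9.0 -- the neuron fits the point
```

Back propagation in a deep network is this same chain rule applied layer by layer, from the cost back to every weight.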
Large Language Models, No-Code, and Responsible AI - Trends in Applied NLP in... (David Talby)
An April 2023 presentation to the AMIA working group on natural language processing. The talk focuses on three current trends in NLP and how they apply in healthcare: Large language models, No-code, and Responsible AI.
http://imatge-upc.github.io/telecombcn-2016-dlcv/
Deep learning technologies are at the core of the current revolution in artificial intelligence for multimedia data analysis. The convergence of big annotated data and affordable GPU hardware has allowed the training of neural networks for data analysis tasks which had been addressed until now with hand-crafted features. Architectures such as convolutional neural networks, recurrent neural networks and Q-nets for reinforcement learning have shaped a brand new scenario in signal processing. This course will cover the basic principles and applications of deep learning to computer vision problems, such as image classification, object detection or text captioning.
Half-day session on Machine Learning and its applications. It introduces Artificial Intelligence, moves on to Machine Learning, its applications, algorithms, and types, covers using the Cloud for ML and Deep Learning, and points to some resources to start with.
The presentation briefly answers the questions:
1. What is Machine Learning?
2. Ideas behind Neural Networks?
3. What is Deep Learning? How is it different from NNs?
4. Practical examples of applications.

For more information:
https://www.quora.com/How-does-deep-learning-work-and-how-is-it-different-from-normal-neural-networks-and-or-SVM
http://stats.stackexchange.com/questions/114385/what-is-the-difference-between-convolutional-neural-networks-restricted-boltzma
https://www.youtube.com/watch?v=n1ViNeWhC24 - presentation by Ng
http://techtalks.tv/talks/deep-learning/58122/ - deep learning tutorial and slides - http://www.cs.nyu.edu/~yann/talks/lecun-ranzato-icml2013.pdf
Deep learning for NLP - http://www.socher.org/index.php/DeepLearningTutorial/DeepLearningTutorial
papers: http://www.cs.toronto.edu/~hinton/science.pdf
http://machinelearning.wustl.edu/mlpapers/paper_files/AISTATS2010_ErhanCBV10.pdf
http://arxiv.org/pdf/1206.5538v3.pdf
http://arxiv.org/pdf/1404.7828v4.pdf
More recommendations - https://www.quora.com/What-are-the-best-resources-to-learn-about-deep-learning
What is Deep Learning and how it helps to Healthcare Sector? (Cogito Tech LLC)
To learn what Deep Learning is and how it helps the healthcare sector, check this presentation, which shows the top use cases of deep-learning-backed systems, applications, and machines in the healthcare industry. The presentation covers the definition of deep learning and how it is changing the healthcare industry. This PPT was prepared by Cogito to show the role of deep learning in healthcare; Cogito provides highly accurate training data sets for deep learning and machine learning.
Visit: http://bit.ly/2QRrSc2
Machine Learning and Real-World Applications (MachinePulse)
This presentation was created by Ajay, Machine Learning Scientist at MachinePulse, to present at a Meetup on Jan. 30, 2015. These slides provide an overview of widely used machine learning algorithms. The slides conclude with examples of real world applications.
Ajay Ramaseshan is a Machine Learning Scientist at MachinePulse. He holds a Bachelor's degree in Computer Science from NITK Surathkal and a Master's in Machine Learning and Data Mining from Aalto University School of Science, Finland. He has extensive experience in the machine learning domain and has dealt with various real-world problems.
Synthetic data generation for machine learning (QuantUniversity)
As machine learning becomes more pervasive in industry, data scientists and quants are realizing the challenges and limitations of machine learning models. One of the primary reasons machine learning applications fail is the lack of the rich, diverse, and clean datasets needed to build models. Datasets may have missing values, may not incorporate enough samples for all use cases (for example, the availability of fraudulent transaction records to train a model), and may not be easily sharable due to privacy concerns. While there are many data cleansing techniques to fix data-related issues, and we can always try to get new and rich datasets, the cost is at times prohibitive and at times impractical, leading many institutions to abandon machine learning and go back to rule-based methods.
Synthetic data sets and simulations are used to enrich and augment existing datasets to provide comprehensive samples when training machine learning models. In addition, synthetic datasets can be used for comprehensive scenario analysis, missing-value filling, and privacy protection when building models. The advent of novel techniques like Deep Learning has rekindled interest in using GANs and encoder-decoder architectures for financial synthetic data generation.
In this workshop, we will discuss the state of the art in Synthetic data generation and will illustrate the various techniques and methods that can be used in practice. Through examples using QuSynthesize & QuSandbox, we will demonstrate how these techniques can be realized in practice.
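As a rough illustration of the core idea of fitting a generator to real data, and only as a much simpler stand-in for the GAN and encoder-decoder methods the workshop covers, a moment-matching Gaussian sampler can be sketched as follows (the "real" data here is itself simulated):

```python
import numpy as np

def synthesize(real, n_samples, seed=0):
    """Draw synthetic rows from a multivariate Gaussian fitted to the
    real data's mean and covariance -- a crude stand-in for a trained
    generator, useful only when the data is roughly Gaussian."""
    mean = real.mean(axis=0)
    cov = np.cov(real, rowvar=False)
    rng = np.random.default_rng(seed)
    return rng.multivariate_normal(mean, cov, size=n_samples)

rng = np.random.default_rng(1)
real = rng.normal(loc=[10.0, -2.0], scale=[1.0, 0.5], size=(500, 2))
fake = synthesize(real, 1000)
print(fake.shape)          # (1000, 2)
print(fake.mean(axis=0))   # close to the real means [10.0, -2.0]
```

GANs and encoder-decoders replace the fixed Gaussian with a learned, far more flexible model of the data distribution, which is what makes them attractive for realistic financial records.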
It is roughly 30 years since AI was not only a topic for science-fiction writers but also a major research field surrounded by huge hopes and investments. But the over-inflated expectations ended in a crash, followed by a period of absent funding and interest – the so-called AI winter. However, the last 3 years changed everything – again. Deep learning, a machine learning technique inspired by the human brain, successfully crushed one benchmark after another, and tech companies like Google, Facebook, and Microsoft started to invest billions in AI research. “The pace of progress in artificial general intelligence is incredibly fast” (Elon Musk, CEO of Tesla & SpaceX), leading to an AI that “would be either the best or the worst thing ever to happen to humanity” (Stephen Hawking, physicist).
What sparked this new Hype? How is Deep Learning different from previous approaches? Are the advancing AI technologies really a threat for humanity? Let’s look behind the curtain and unravel the reality. This talk will explore why Sundar Pichai (CEO Google) recently announced that “machine learning is a core transformative way by which Google is rethinking everything they are doing” and explain why "Deep Learning is probably one of the most exciting things that is happening in the computer industry” (Jen-Hsun Huang – CEO NVIDIA).
Either a new AI “winter is coming” (Ned Stark, House Stark), or this new wave of innovation might turn out to be the “last invention humans ever need to make” (Nick Bostrom, AI philosopher). Or maybe it’s just another great technology helping humans achieve more.
Handwritten Recognition using Deep Learning with R (Poo Kuan Hoong)
R User Group Malaysia Meet Up - Handwritten Recognition using Deep Learning with R
Source code available at: https://github.com/kuanhoong/myRUG_DeepLearning
End-to-end deep auto-encoder for segmenting a moving object with limited tra... (IJECEIAES)
Deep learning-based approaches have been widely used in various applications, including segmentation and classification. However, a large amount of data is required to train such techniques. Indeed, in the surveillance video domain, little accessible data exists due to acquisition and experimental complexity. In this paper, we propose an end-to-end deep auto-encoder system for segmenting objects from surveillance videos. Our main purpose is to enhance the process of distinguishing the foreground object when only limited data are available. To this end, we propose two approaches, based on transfer learning and multi-depth auto-encoders, that avoid over-fitting by combining classical data augmentation and principal component analysis (PCA) techniques to improve the quality of training data. Our approach achieves good results, outperforming other popular models trained under the same limited-data regime. In addition, a detailed explanation of these techniques and some recommendations are provided. Our methodology constitutes a useful strategy for increasing samples in the deep learning domain and can be applied to improve segmentation accuracy. We believe that our strategy is of considerable interest for various applications, such as the medical and biological fields, especially in the early stages of experiments where there are few samples.
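The PCA-style augmentation idea mentioned in this abstract can be sketched in a few lines of NumPy. This is an illustrative variant, not the paper's implementation; the jitter scale and number of copies are invented parameters:

```python
import numpy as np

def pca_augment(data, n_copies=2, scale=0.1, seed=0):
    """Grow a small dataset by jittering each sample along the dataset's
    own principal components; scale and n_copies are illustrative."""
    rng = np.random.default_rng(seed)
    centered = data - data.mean(axis=0)
    # Principal directions (rows of vt) and per-component spreads via SVD.
    _, s, vt = np.linalg.svd(centered, full_matrices=False)
    stds = s / np.sqrt(len(data))
    out = [data]
    for _ in range(n_copies):
        coeffs = rng.normal(0.0, scale, size=(len(data), len(stds)))
        out.append(data + (coeffs * stds) @ vt)  # noise along components
    return np.concatenate(out)

rng = np.random.default_rng(2)
small = rng.random((20, 5))   # 20 samples, 5 features
big = pca_augment(small)
print(big.shape)              # (60, 5): the originals plus two jittered copies
```

Jittering along principal components adds variation in the directions the data already varies, which tends to produce more plausible samples than isotropic noise.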
Deep Learning Applications and Image Processing (ijtsrd)
With the rapid development of digital technologies, the analysis and processing of data has become an important problem. In particular, the classification, clustering, and processing of complex and multi-structured data required the development of new algorithms. In this process, Deep Learning solutions for solving Big Data problems are emerging. Deep Learning can be described as an advanced variant of artificial neural networks. Deep Learning algorithms are commonly used in healthcare, facial and voice recognition, defense, security, and autonomous vehicles. Image processing is one of the most common applications of Deep Learning; Deep Learning software is commonly used to capture and process images while removing errors. Image processing methods are used in many fields such as medicine, radiology, the military industry, face recognition, security systems, transportation, astronomy, and photography. In this study, current Deep Learning algorithms are investigated and their relationship with software commonly used in the field of image processing is determined. Ahmet Özcan | Mahmut Ünver | Atilla Ergüzen, "Deep Learning Applications and Image Processing", published in International Journal of Trend in Scientific Research and Development (ijtsrd), ISSN: 2456-6470, Volume-6, Issue-2, February 2022. URL: https://www.ijtsrd.com/papers/ijtsrd49142.pdf Paper URL: https://www.ijtsrd.com/computer-science/artificial-intelligence/49142/deep-learning-applications-and-image-processing/ahmet-özcan
Novi Sad AI is the first AI community in Serbia, with the goal of democratizing knowledge of AI. At our first event we talked about belief networks, deep learning, and much more.
For the full video of this presentation, please visit:
https://www.embedded-vision.com/platinum-members/luxoft/embedded-vision-training/videos/pages/may-2017-embedded-vision-summit
For more information about embedded vision, please visit:
http://www.embedded-vision.com
Alexey Rybakov, Senior Director for Embedded Systems at Luxoft, presents the "Deep Learning Beyond Cats and Cars: Developing a Real-life DNN-based Embedded Vision Product for Agriculture, Construction, Medical, or Retail" tutorial at the May 2017 Embedded Vision Summit.
By now we know very well how to design and train a neural network to recognize cats, dogs, and cars. But what about real projects — for example, in agriculture, construction, medicine, and retail? This how-to talk provides an overview of what it takes to design, train, and fine-tune a real-life DNN-based embedded vision solution. Rybakov explores the algorithmic, data set, training, and optimization decisions that take you from proofs of concept to solid, reliable, and highly optimized systems. This material is based on Luxoft's own successes, failures, and lessons learned while implementing embedded vision solutions.
The field of Artificial Intelligence (AI) has been revitalized in this decade, primarily due to the large-scale application of Deep Learning (DL) and other Machine Learning (ML) algorithms. This has been most evident in applications like computer vision, natural language processing, and game bots. However, extraordinary successes within a short period of time have also had the unintended consequence of causing a sharp difference of opinion in research and industrial communities regarding the capabilities and limitations of deep learning. A few questions you might have heard being asked (or asked yourself) include:
a. We don’t know how Deep Neural Networks make decisions, so can we trust them?
b. Can Deep Learning deal with highly non-linear continuous systems with millions of variables?
c. Can Deep Learning solve the Artificial General Intelligence problem?
The goal of this seminar is to provide a 1,000-foot view of Deep Learning and hopefully answer the questions above. The seminar will touch upon the evolution, current state of the art, and peculiarities of Deep Learning, and share thoughts on using Deep Learning as a tool for developing power system solutions.
Deep Learning on nVidia GPUs for QSAR, QSPR and QNAR predictions (Valery Tkachenko)
While we have seen tremendous growth in machine learning methods over the last two decades, there is still no one-size-fits-all solution. The next era of cheminformatics, and of pharmaceutical research in general, is focused on mining heterogeneous big data, which is accumulating at an ever-growing pace, and this will likely use more sophisticated algorithms such as Deep Learning (DL). There has been increasing use of DL recently, which has shown powerful advantages in learning from images and language as well as in many other areas. However, the accessibility of this technique for cheminformatics is hindered, as it is not readily available to non-experts. It was therefore our goal to develop a DL framework embedded into a general research data management platform (Open Science Data Repository) which can be used as an API, as a standalone tool, or integrated into new software as an autonomous module. In this poster we will present results comparing the performance of classic machine learning methods (Naïve Bayes, logistic regression, Support Vector Machines, etc.) with Deep Learning, and will discuss challenges associated with Deep Learning Neural Networks (DNNs). DNN models of different complexity (up to 6 hidden layers) were built and tuned (different numbers of hidden units per layer, multiple activation functions, optimizers, dropout fraction, regularization parameters, and learning rate) using Keras (https://keras.io/) and TensorFlow (www.tensorflow.org) and applied to various use cases connected to the prediction of physicochemical properties, ADME, toxicity, and material properties. It was also shown that using nVidia GPUs significantly accelerates calculations, although memory consumption puts some limits on the performance and applicability of standard toolkits 'as is'.
Forklift Classes Overview by Intella PartsIntella Parts
Discover the different forklift classes and their specific applications. Learn how to choose the right forklift for your needs to ensure safety, efficiency, and compliance in your operations.
For more technical information, visit our website https://intellaparts.com
The Internet of Things (IoT) is a revolutionary concept that connects everyday objects and devices to the internet, enabling them to communicate, collect, and exchange data. Imagine a world where your refrigerator notifies you when you’re running low on groceries, or streetlights adjust their brightness based on traffic patterns – that’s the power of IoT. In essence, IoT transforms ordinary objects into smart, interconnected devices, creating a network of endless possibilities.
Here is a blog on the role of electrical and electronics engineers in IOT. Let's dig in!!!!
For more such content visit: https://nttftrg.com/
Student information management system project report ii.pdfKamal Acharya
Our project explains about the student management. This project mainly explains the various actions related to student details. This project shows some ease in adding, editing and deleting the student details. It also provides a less time consuming process for viewing, adding, editing and deleting the marks of the students.
Final project report on grocery store management system..pdfKamal Acharya
In today’s fast-changing business environment, it’s extremely important to be able to respond to client needs in the most effective and timely manner. If your customers wish to see your business online and have instant access to your products or services.
Online Grocery Store is an e-commerce website, which retails various grocery products. This project allows viewing various products available enables registered users to purchase desired products instantly using Paytm, UPI payment processor (Instant Pay) and also can place order by using Cash on Delivery (Pay Later) option. This project provides an easy access to Administrators and Managers to view orders placed using Pay Later and Instant Pay options.
In order to develop an e-commerce website, a number of Technologies must be studied and understood. These include multi-tiered architecture, server and client-side scripting techniques, implementation technologies, programming language (such as PHP, HTML, CSS, JavaScript) and MySQL relational databases. This is a project with the objective to develop a basic website where a consumer is provided with a shopping cart website and also to know about the technologies used to develop such a website.
This document will discuss each of the underlying technologies to create and implement an e- commerce website.
CW RADAR, FMCW RADAR, FMCW ALTIMETER, AND THEIR PARAMETERSveerababupersonal22
It consists of cw radar and fmcw radar ,range measurement,if amplifier and fmcw altimeterThe CW radar operates using continuous wave transmission, while the FMCW radar employs frequency-modulated continuous wave technology. Range measurement is a crucial aspect of radar systems, providing information about the distance to a target. The IF amplifier plays a key role in signal processing, amplifying intermediate frequency signals for further analysis. The FMCW altimeter utilizes frequency-modulated continuous wave technology to accurately measure altitude above a reference point.
6th International Conference on Machine Learning & Applications (CMLA 2024)ClaraZara1
6th International Conference on Machine Learning & Applications (CMLA 2024) will provide an excellent international forum for sharing knowledge and results in theory, methodology and applications of on Machine Learning & Applications.
Immunizing Image Classifiers Against Localized Adversary Attacksgerogepatton
This paper addresses the vulnerability of deep learning models, particularly convolutional neural networks
(CNN)s, to adversarial attacks and presents a proactive training technique designed to counter them. We
introduce a novel volumization algorithm, which transforms 2D images into 3D volumetric representations.
When combined with 3D convolution and deep curriculum learning optimization (CLO), itsignificantly improves
the immunity of models against localized universal attacks by up to 40%. We evaluate our proposed approach
using contemporary CNN architectures and the modified Canadian Institute for Advanced Research (CIFAR-10
and CIFAR-100) and ImageNet Large Scale Visual Recognition Challenge (ILSVRC12) datasets, showcasing
accuracy improvements over previous techniques. The results indicate that the combination of the volumetric
input and curriculum learning holds significant promise for mitigating adversarial attacks without necessitating
adversary training.
NUMERICAL SIMULATIONS OF HEAT AND MASS TRANSFER IN CONDENSING HEAT EXCHANGERS...ssuser7dcef0
Power plants release a large amount of water vapor into the
atmosphere through the stack. The flue gas can be a potential
source for obtaining much needed cooling water for a power
plant. If a power plant could recover and reuse a portion of this
moisture, it could reduce its total cooling water intake
requirement. One of the most practical way to recover water
from flue gas is to use a condensing heat exchanger. The power
plant could also recover latent heat due to condensation as well
as sensible heat due to lowering the flue gas exit temperature.
Additionally, harmful acids released from the stack can be
reduced in a condensing heat exchanger by acid condensation. reduced in a condensing heat exchanger by acid condensation.
Condensation of vapors in flue gas is a complicated
phenomenon since heat and mass transfer of water vapor and
various acids simultaneously occur in the presence of noncondensable
gases such as nitrogen and oxygen. Design of a
condenser depends on the knowledge and understanding of the
heat and mass transfer processes. A computer program for
numerical simulations of water (H2O) and sulfuric acid (H2SO4)
condensation in a flue gas condensing heat exchanger was
developed using MATLAB. Governing equations based on
mass and energy balances for the system were derived to
predict variables such as flue gas exit temperature, cooling
water outlet temperature, mole fraction and condensation rates
of water and sulfuric acid vapors. The equations were solved
using an iterative solution technique with calculations of heat
and mass transfer coefficients and physical properties.
1. Deep Learning for Healthcare
DR. MEENAKSHI SOOD
ASSOCIATE PROFESSOR
NITTTR, CHANDIGARH, INDIA
MEENAKSHI@NITTTRCHD.AC.IN
2. Artificial Intelligence
The recent progress in machine learning and artificial intelligence can be
attributed to:
• The explosion in the amount of available data
• Cheap computation due to the development of CPUs and GPUs
• Improvements in learning algorithms
Much of the current excitement concerns a subfield called “Deep Learning”.
4/6/2022 DR MEENAKSHI S NITTTR CHD 2
6. Why deeper?
• Deeper networks are able to use far fewer units per layer and far
fewer parameters, and frequently generalize better to the test set.
• But they are harder to optimize!
• Choosing a deep model encodes a very general belief that the
function we want to learn involves the composition of several simpler
functions.
Hidden layers are cascading tiers of processing: “deep” networks
(3+ layers) versus “shallow” networks (1–2 layers).
7. Curse of dimensionality
• The core idea in deep learning is the assumption that the data was generated by
the composition of factors or features, potentially at multiple levels in a hierarchy.
• This assumption allows an exponential gain in the relationship between the
number of examples and the number of regions that can be distinguished.
• The exponential advantages conferred by the use of deep, distributed
representations counter the exponential challenges posed by the curse of
dimensionality.
8. Deep Neural Networks (DNN)
A Deep Neural Network is a deep and wide neural network:
• Deep – a larger number of hidden layers
• Wide – many input/hidden nodes per layer
9. Continued….
• Utilizes learning algorithms that derive meaningful representations from
data using a hierarchy of multiple layers that mimics the neural networks
of the human brain.
• If we provide the system with tons of information, it begins to understand
it and respond in useful ways.
• Can learn increasingly complex features and train complex networks.
• Learned features are both more specific and more general-purpose than
hand-engineered features.
10. Universality Theorem
Any continuous function f : R^N → R^M can be realized by a network with one
hidden layer (given enough hidden neurons).
Reference for the reason: http://neuralnetworksanddeeplearning.com/chap4.html
Why a “deep” neural network and not a “fat” neural network? Is deeper better?
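As a tiny concrete instance of the theorem (an added illustration, not from the slides): the function f(x) = |x| is realized exactly by a one-hidden-layer network with two ReLU units, since |x| = ReLU(x) + ReLU(−x). A minimal sketch in Python:

```python
def relu(z):
    # rectified linear unit, a common hidden-layer activation
    return max(0.0, z)

def abs_net(x):
    # one hidden layer with two units:
    #   hidden weights +1 and -1 (biases 0), output weights both +1
    h1 = relu(+1.0 * x)
    h2 = relu(-1.0 * x)
    return 1.0 * h1 + 1.0 * h2   # equals |x| for every real x
```

For general continuous functions the theorem only guarantees approximation, and the number of hidden neurons required can grow very quickly, which is one motivation for going deep instead of fat.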
11. Fat + Short vs Thin + Tall
[Diagram: a “shallow” network (one fat hidden layer) versus a “deep” network
(several thin hidden layers), both taking inputs x1, x2, …, xN and using the
same number of parameters.]
Which one is better?
12. Fat + Short vs. Thin + Tall
Seide, Frank, Gang Li, and Dong Yu. "Conversational Speech Transcription Using
Context-Dependent Deep Neural Networks." Interspeech. 2011.

Layers X Size | Word Error Rate (%)     Layers X Size | Word Error Rate (%)
1 X 2k        | 24.2
2 X 2k        | 20.4
3 X 2k        | 18.4
4 X 2k        | 17.8
5 X 2k        | 17.2                    1 X 3772      | 22.5
7 X 2k        | 17.1                    1 X 4634      | 22.6
                                        1 X 16k       | 22.1
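The “same number of parameters” comparison behind this table can be made concrete by counting the weights and biases of fully connected networks. The input and output dimensions below (440 and 6000) are illustrative assumptions, not figures taken from the paper:

```python
def count_params(sizes):
    # sizes = [input_dim, hidden_dims..., output_dim]
    # each fully connected layer has fan_in * fan_out weights + fan_out biases
    return sum(a * b + b for a, b in zip(sizes, sizes[1:]))

deep = count_params([440] + [2000] * 5 + [6000])   # 5 x 2k   -> ~28.9M params
shallow = count_params([440, 3772, 6000])          # 1 x 3772 -> ~24.3M params
fat = count_params([440, 16000, 6000])             # 1 x 16k  -> ~103M params
```

Under these assumptions, the 1 x 3772 network is roughly parameter-matched to the 5 x 2k network, while the 1 x 16k network has far more parameters yet still shows a higher word error rate in the table: depth, not raw parameter count, is doing the work.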
13. Why Deep?
Deep → Modularization
Example: classifying images into four classes – girls with long hair
(Classifier 1), girls with short hair (Classifier 2), boys with long hair
(Classifier 3), boys with short hair (Classifier 4). Classifier 3 has little
training data, so training it directly works poorly.
Instead, first train basic classifiers – “long or short hair?” and “boy or
girl?” – each of which has plenty of data and trains fine. These basic
classifiers are then shared as modules by the four following classifiers, so
each of them can be trained with little data.
14. Why Deep?
Deep → Modularization
[Diagram: a network on inputs x1, x2, …, xN. The first layer learns the most
basic classifiers; the second layer uses the first layer as modules to build
more complex classifiers; the third layer uses the second layer as modules,
and so on.]
The modularization is automatically learned from data.
→ Less training data?
15. Hand-crafted kernel function vs. learnable kernel
SVM: a hand-crafted kernel function followed by a simple classifier.
Deep Learning: a learnable kernel – the hidden layers map inputs x1, x2, …, xN
through successive layers to outputs y1, y2, …, yM – followed by a simple
classifier.
Source of image: http://www.gipsa-lab.grenoble-inp.fr/transfert/seminaire/455_Kadri2013Gipsa-lab.pdf
16. Why is DL useful?
o Manually designed features are often over-specified and incomplete,
and take a long time to design and validate.
o Learned features are easy to adapt and fast to learn.
o Deep learning provides a very flexible, (almost?) universal,
learnable framework for representing world, visual and linguistic
information.
o Can learn in both unsupervised and supervised settings.
In ~2010 DL started outperforming other ML techniques,
first in speech and vision, then NLP.
17. [Plot: performance vs. size of data – deep learning keeps improving as
data grows, while traditional ML algorithms plateau.]
“Deep Learning doesn’t do different things,
it does things differently.”
18. Technology
Deep learning is a fast-growing field, with new architectures and variants
appearing frequently.
1. Convolutional Neural Network (CNN)
CNNs exploit spatially-local correlation by enforcing a local connectivity
pattern between neurons of adjacent layers.
19. Architecture
CNNs are multilayered neural networks that include input and output layers as
well as a number of hidden layers:
Convolution layers – Filter the input image and extract specific features
such as edges, curves, and colors.
Pooling layers – Downsample feature maps, making detection robust to small
shifts in object position.
Normalization layers – Improve network performance by normalizing the
activations of the previous layer.
Fully connected layers – Neurons have full connections to all activations in
the previous layer (as in regular neural networks).
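To make the convolution and pooling roles concrete, here is a minimal pure-Python sketch (an illustration, not production code): a single convolution filter acting as a vertical-edge detector, followed by 2x2 max pooling:

```python
def conv2d(img, kernel):
    # slide the kernel over the image; each output is a weighted sum of a
    # local patch -> the spatially-local connectivity CNNs exploit
    kh, kw = len(kernel), len(kernel[0])
    return [[sum(img[i + a][j + b] * kernel[a][b]
                 for a in range(kh) for b in range(kw))
             for j in range(len(img[0]) - kw + 1)]
            for i in range(len(img) - kh + 1)]

def max_pool2(fmap):
    # 2x2 max pooling with stride 2: downsamples the feature map while
    # keeping the strongest response in each neighbourhood
    return [[max(fmap[2*i][2*j], fmap[2*i][2*j+1],
                 fmap[2*i+1][2*j], fmap[2*i+1][2*j+1])
             for j in range(len(fmap[0]) // 2)]
            for i in range(len(fmap) // 2)]

# a 4x4 image that is dark on the left and bright on the right
img = [[0, 0, 1, 1] for _ in range(4)]
edge_kernel = [[-1, 1], [-1, 1]]   # responds only to vertical edges
fmap = conv2d(img, edge_kernel)    # each row is [0, 2, 0]: edge located
```

Real CNN layers learn the kernel values from data rather than using hand-picked ones like `edge_kernel` here.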
21. Cont..
2. Recurrent Neural Network (RNN)
RNNs are called recurrent because they perform the same task for every
element of a sequence, with the output depending on the previous
computations.
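The recurrence can be sketched in a few lines of plain Python (scalar weights, chosen purely for illustration): the same update h_t = tanh(w_x·x_t + w_h·h_{t-1}) is applied at every step, so each state carries information about earlier inputs:

```python
import math

def rnn_states(xs, w_x=1.0, w_h=1.0):
    # apply the SAME update to every element of the sequence;
    # the hidden state h threads the previous computation forward
    h, states = 0.0, []
    for x in xs:
        h = math.tanh(w_x * x + w_h * h)
        states.append(h)
    return states

states = rnn_states([1.0, 0.0, 0.0])
# the later states stay nonzero even though the later inputs are zero:
# the network "remembers" the first input
```

With zero input throughout, the state stays at zero; the nonzero tail above exists only because of the recurrent connection.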
22. Cont…
3. Long Short-Term Memory (LSTM)
LSTMs can learn "Very Deep Learning" tasks that require memories of events
that happened thousands or even millions of discrete time steps earlier.
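A bare-bones LSTM cell (scalar version, with weights hand-picked purely for illustration) shows where the long memory comes from: when the forget gate saturates near 1 and the input gate near 0, the cell state c passes through thousands of steps almost unchanged:

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def lstm_cell(x, h, c, p):
    f = sigmoid(p["wf"] * x + p["uf"] * h + p["bf"])    # forget gate
    i = sigmoid(p["wi"] * x + p["ui"] * h + p["bi"])    # input gate
    o = sigmoid(p["wo"] * x + p["uo"] * h + p["bo"])    # output gate
    g = math.tanh(p["wg"] * x + p["ug"] * h + p["bg"])  # candidate value
    c = f * c + i * g             # cell state: additive, gated memory
    h = o * math.tanh(c)          # hidden state exposed to the next layer
    return h, c

# gate biases chosen so that forget ~= 1 and input ~= 0: the cell keeps its value
p = dict(wf=0, uf=0, bf=10, wi=0, ui=0, bi=-10,
         wo=0, uo=0, bo=0, wg=0, ug=0, bg=0)
h, c = 0.0, 1.0
for _ in range(1000):
    h, c = lstm_cell(0.0, h, c, p)
# c is still above 0.9 after 1000 steps
```

A plain tanh RNN state started at 1.0 decays toward 0 over the same horizon; the additive, gated cell state is what lets LSTMs bridge such long gaps. In a trained LSTM the gate weights are learned, not fixed as here.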
23. Popular CNNs
• LeNet, 1998
• AlexNet, 2012
• VGGNet, 2014
• ResNet, 2015
26. Research and Applications
Physiological Research
Neurological Research
Medical Research
BioInformatics Research
Educational Research and Application
Therapeutic Application
4/6/2022 JAYPEE UNIVERSITY OF INFORMATION TECHNOLOGY 26
28. Artificial intelligence in medicine: the virtual branch
The virtual component is represented by Machine Learning (of which Deep
Learning is a subfield) – mathematical algorithms that improve through
experience.
Three types of machine learning algorithms:
1. Unsupervised (ability to find patterns)
2. Supervised (classification and prediction algorithms based on previous
examples)
3. Reinforcement learning (use of sequences of rewards and punishments to
form a strategy for operation in a specific problem space)
29. Benefits of Artificial intelligence
AI can assist physicians:
◦ Clinical decision making – better clinical decisions
◦ Replace human judgement in certain functional areas of healthcare (e.g., radiology)
◦ Up-to-date medical information from journals, textbooks and clinical practice
◦ Narrows the gap between experienced and newly trained clinicians
◦ 24x7 availability of expertise
Early diagnosis
Prediction of the outcome of the disease as well as of treatment
Feedback on treatment
Reinforce non-pharmacological management
Reduce diagnostic and therapeutic errors
Increased patient safety and huge cost savings associated with the use of AI
An AI system extracts useful information from a large patient population
Assists in making real-time inferences for health risk alerts and health
outcome prediction
Learning and self-correcting abilities to improve its accuracy based on
feedback
30. What makes healthcare different?
• Often very little labeled data (e.g., for clinical NLP)
– Motivates semi-supervised learning algorithms
• Sometimes small numbers of samples (e.g., a rare disease)
– Learn as much as possible from other data (e.g., healthy patients)
– Model the problem carefully
• Lots of missing data, varying time intervals, censored labels
31. What makes healthcare different?
• Difficulty of de-identifying data
– Need for data sharing agreements and sensitivity
• Difficulty of deploying ML
– Commercial electronic health record software is difficult to modify
– Data is often in silos; everyone recognizes the need for
interoperability, but progress is slow
– Careful testing and iteration is needed
33. The Precision Medicine Initiative
President Obama’s initiative to create a 1 million person research cohort.
[Precision Medicine Initiative (PMI) Working Group Report, Sept. 17, 2015]
Large datasets – core data set:
• Baseline health exam
• Clinical data derived from electronic health records (EHRs)
• Healthcare claims
• Laboratory data
34. Diversity of digital health data
genomics, imaging, lab tests, phone, vital signs, proteomics, devices,
social media
37. Data in the Emergency Department (ED)
[Timeline figure, T=0 through 30 min and 2 hrs to disposition: triage
information; repeated vital signs (continuous values, measured every 30 s);
MD comments and specialist consults (free text); lab results (continuous
valued); physician documentation.]
Electronic records for over 300,000 ED visits.
Collaboration with Steven Horng, MD.
39. What can Deep Learning do for healthcare?
• Improve accuracy of diagnosis, prognosis, and risk prediction.
• Optimize hospital processes such as resource allocation and patient flow.
• Identify patient subgroups for personalized and precision medicine.
• Improve quality of care and population health outcomes, while reducing healthcare costs.
• Reduce medication errors and adverse events.
• Discover new medical knowledge (clinical guidelines, best practices).
• Model and prevent spread of hospital-acquired infections.
• Automate detection of relevant findings in pathology, radiology, etc.
41. Improve accuracy of diagnosis, prognosis, and risk prediction.
New methods for chronic disease risk prediction and visualization give
clinicians a comprehensive view of their patient population, risk levels, and
risk factors, along with the estimated effects of potential interventions.
[Figure: increased risk of heart attack]
Prashar, N., Sood, M., Jain, S., “Design and implementation of a robust noise
removal system in ECG signals using dual-tree complex wavelet transform”,
Biomedical Signal Processing and Control, Jan 2021, 63, 102212.
DOI: 10.1016/j.bspc.2020.102212 (SCI indexed, IF: 3.137)
44. What can Deep Learning do for healthcare?
• Improve accuracy of diagnosis, prognosis, and risk prediction.
• Optimize hospital processes such as resource allocation and patient flow.
• Identify patient subgroups for personalized and precision medicine.
• Improve quality of care and population health outcomes, while reducing healthcare costs.
• Reduce medication errors and adverse events.
• Discover new medical knowledge (clinical guidelines, best practices).
• Model and prevent spread of hospital-acquired infections.
• Automate detection of relevant findings in pathology, radiology, etc.
45. Optimize hospital processes such as resource allocation and patient flow.
Early and accurate prediction of disease makes it possible to predict demand
and allocate scarce hospital resources such as beds and operating rooms.
46. What can Deep Learning do for healthcare?
• Improve accuracy of diagnosis, prognosis, and risk prediction.
• Optimize hospital processes such as resource allocation and patient flow.
• Identify patient subgroups for personalized and precision medicine.
• Improve quality of care and population health outcomes, while reducing healthcare costs.
• Reduce medication errors and adverse events.
• Discover new medical knowledge (clinical guidelines, best practices).
• Model and prevent spread of hospital-acquired infections.
• Automate detection of relevant findings in pathology, radiology, etc.
47. Automate detection of relevant findings in pathology, radiology, etc.
Key advance 1: Very efficient, accurate search over subareas of an image.
Key advance 2: Use hierarchy to search at multiple resolutions (coarse to fine).
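The coarse-to-fine idea can be sketched with a toy example (hypothetical helper names, plain Python, not the actual system): locate the strongest response on a downsampled image, then refine the search only inside the matching full-resolution window:

```python
def downsample2(img):
    # 2x2 average pooling: a coarse, low-resolution view of the image
    return [[(img[2*i][2*j] + img[2*i][2*j+1]
              + img[2*i+1][2*j] + img[2*i+1][2*j+1]) / 4.0
             for j in range(len(img[0]) // 2)]
            for i in range(len(img) // 2)]

def argmax2d(img):
    # (row, col) of the largest value
    return max(((v, i, j) for i, row in enumerate(img)
                for j, v in enumerate(row)))[1:]

def coarse_to_fine_max(img):
    ci, cj = argmax2d(downsample2(img))             # search at coarse scale
    window = [row[2*cj:2*cj + 2] for row in img[2*ci:2*ci + 2]]
    wi, wj = argmax2d(window)                       # refine at full resolution
    return 2*ci + wi, 2*cj + wj

img = [[0.0] * 8 for _ in range(8)]
img[5][6] = 1.0                                     # one bright "finding"
# coarse_to_fine_max(img) recovers (5, 6) after examining only the
# 4x4 coarse image plus a single 2x2 full-resolution window
```

On gigapixel pathology slides the same principle, applied over several pyramid levels, is what makes exhaustive search over subareas tractable.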
48. Classification of Non-Proliferative Diabetic Retinopathy from Retinal
Fundus Images Employing a Hierarchical Severity Level Grading System
Bhardwaj, C., Jain, S. & Sood, M., “Deep Learning–Based Diabetic Retinopathy
Severity Grading System Employing Quadrant Ensemble Model”, J Digit Imaging,
April 2021. https://doi.org/10.1007/s10278-021-00418-5 (SCI indexed, IF: 4.056)
50. Key advance 1: Very efficient, accurate search over subareas of an image.
Key advance 2: Use hierarchy to search at multiple resolutions (coarse to fine).
Detection is also valuable when key patterns of interest are discovered by
integrating information across many patients, and might not be visible from a
single patient’s data.
51. What can machine learning do for the healthcare industry?
• Improve accuracy of diagnosis, prognosis, and risk prediction.
• Optimize hospital processes such as resource allocation and patient flow.
• Identify patient subgroups for personalized and precision medicine.
• Improve quality of care and population health outcomes, while reducing healthcare costs.
• Reduce medication errors and adverse events.
• Discover new medical knowledge (clinical guidelines, best practices).
• Model and prevent spread of hospital-acquired infections.
• Automate detection of relevant findings in pathology, radiology, etc.
52. Discover new medical knowledge (clinical guidelines, best practices).
Claims data: ~125K patients with diseases of the circulatory system (APC-Scan).
Most significant detected pattern: glucocorticoids are associated with
dramatically increased hospitalizations and length of stay in the
subpopulation of ~2K overweight, hypertensive males with endocrine secondary
diagnoses.
Regression on a separate, held-out patient dataset: 51% increase in
hospitalizations for this subpopulation, vs. 11% for the entire patient
population.

                                  Glucocorticoids: Yes | No
Number of Patients                264                  | 1713
Mean Number of Hospitalizations   0.606 (0.069)        | 0.280 (0.016)
53. What can machine learning do for the healthcare industry?
• Improve accuracy of diagnosis, prognosis, and risk prediction.
• Optimize hospital processes such as resource allocation and patient flow.
• Identify patient subgroups for personalized and precision medicine.
• Improve quality of care and population health outcomes, while reducing healthcare costs.
• Reduce medication errors and adverse events.
• Discover new medical knowledge (clinical guidelines, best practices).
• Model and prevent spread of hospital-acquired infections.
• Automate detection of relevant findings in pathology, radiology, etc.
54. Identify patient subgroups for personalized and precision medicine.
Triage – Diagnostic Decision Support
At the very beginning of the image analysis workflow, machine learning will
be used to triage incoming studies based on the initial AI findings, routing
each study to the appropriate specialist.
Once the radiologist opens a new case, machine learning will search for any
clinically relevant prior studies, automatically register studies, select the
hanging protocol, and provide the radiologist with contextually relevant
tools and supporting information, for example from the patient’s electronic
medical record.
In parallel, machine learning will automatically detect, segment, visualize
and quantify any abnormalities. If an abnormality is detected, machine
learning will then provide diagnostic decision support, such as a probability
score for malignancy or a differential diagnosis.
55. AI-Enabled Connected Health Informatics
On one hand, software frameworks for deep learning are becoming increasingly
capable of training advanced neural-network models; on the other hand,
heterogeneous hardware components such as GPUs, FPGAs and ASICs dedicated to
deep learning are beginning to challenge the computational limits of Moore’s
law.
Together, these trends have influenced connected-health informatic systems,
which comprise various processes for sensing, data transfer, storage and
analytics to improve overall health and wellbeing.
56. Identify a variety of cancers such as breast cancer, prostate cancer, and
lung lesions (iCAD)
• Automation
• Accuracy
• Consistency
http://signifyresearch.net/analyst-insights/
57. Identify a variety of cancers such as breast cancer, prostate cancer, and
lung lesions (iCAD)
Automatic detection and measurement of imaging features (biomarkers) to
assist with diagnosis, such as lung density, breast density, and analysis of
coronary and peripheral vessels (4D Flow from Arterys)
• Automation
• Accuracy
• Consistency
http://signifyresearch.net/analyst-insights/
58. Identify a variety of cancers such as breast cancer, prostate cancer, and
lung lesions (iCAD)
Detection and quantification, alongside supporting information extracted from
an EHR, pathology reports and other patient records, to assist with diagnosis
(IBM Watson Health)
Automatic detection and measurement of imaging features (biomarkers) to
assist with diagnosis, such as lung density, breast density, and analysis of
coronary and peripheral vessels (4D Flow from Arterys)
• Integration
• X-collaboration
http://signifyresearch.net/analyst-insights/
60. Use Case: Tumor Tissue Characterization using Ultrasound
Problem Definition:
[Figure: prostate under ultrasound vs. prostate under the microscope; cancer
map labeled with Gleason scores – Benign, GS 3+3, GS 3+4, GS 4+3, GS 4+4, …]
https://www.cancer.gov/types/prostate/patient/prostate-treatment-pdq
Courtesy of Dr. Abolmaesumi
61. Medical Imaging: non-invasive visualization of internal organs, tissue, etc.
[Examples: CT images, US images, MRI images, microscopic images of blood,
DNA sequence signals – benign vs. malignant.]
Jyotsna Dogra, Shruti Jain, Meenakshi Sood, “Glioma Extraction from MR Images
Employing Gradient Based Kernel Selection Graph Cut Technique”, The Visual
Computer, vol. 36(5), pp. 875–891. DOI: 10.1007/s00371-019-01698-3.
ISSN: 0178-2789 (Print), 1432-2315 (Online), Springer (SCI IF: 1.415)
63. Use Case: Tumor Tissue Characterization using Ultrasound
Steps:
1. Understand the problem
2. Define input(s) and output(s)
3. Investigate limitations and boundary conditions
4. Collect representative data [labels]
5. [Calculate/engineer features]
6. Define the ML framework
7. Define the metric for evaluation
...
64. Steps (cont.):
1. Understand the problem
2. Define input(s) and output(s)
3. Investigate limitations and boundary conditions
4. Collect representative data [labels]
5. [Calculate/engineer features]
6. Define the ML framework
7. Define the metric for evaluation
...
Courtesy of Dr. Abolmaesumi
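For step 7 (define the metric for evaluation), plain accuracy is rarely enough in medical problems with imbalanced classes; sensitivity and specificity are the usual starting point. A minimal sketch with hypothetical labels (benign = 0, malignant = 1):

```python
def sensitivity_specificity(y_true, y_pred):
    # sensitivity = true positive rate (malignant cases caught)
    # specificity = true negative rate (benign cases correctly cleared)
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    return tp / (tp + fn), tn / (tn + fp)

# a classifier that predicts "benign" for everyone is 90% accurate on this
# 9:1 imbalanced set, yet its sensitivity is 0 -- it misses every tumor
y_true = [0] * 9 + [1]
y_pred = [0] * 10
sens, spec = sensitivity_specificity(y_true, y_pred)
```

This is why the metric must be chosen against the clinical cost of each error type, before any model is trained.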
65. Use of robots to deliver treatment – robotic surgery
Use of robots to monitor effectiveness of treatment
66. Successes!
▶ Mammographic mass classification
▶ Brain lesions
▶ Airway leakages
▶ Diabetic retinopathy
▶ Prostate segmentation
▶ Breast cancer metastasis
▶ Skin lesion classification
▶ Bone suppression in chest X-rays
Source: arXiv:1702.05747
67. Medical Imaging Open Datasets
▶ http://www.cancerimagingarchive.net/
▶ Lung cancer, skin cancer, breast cancer…
▶ Kaggle open datasets
▶ Diabetic retinopathy, lung cancer
▶ Kaggle Data Science Bowl 2018
▶ https://www.kaggle.com/c/data-science-bowl-2018
▶ ISIC Skin Cancer Dataset
▶ https://challenge.kitware.com/#challenge/583f126bcad3a51cc66c8d9a
▶ Grand Challenges in Medical Image Analysis
▶ https://grand-challenges.grand-challenge.org/all_challenges/
▶ And more…
▶ https://github.com/sfikas/medical-imaging-datasets
68. Resources
▶ CBInsights AI in Healthcare Map: https://www.cbinsights.com/research/artificial-intelligence-startups-healthcare/
▶ DL in Medical Imaging Survey: https://arxiv.org/pdf/1702.05747.pdf
▶ U-Net: https://arxiv.org/pdf/1505.04597.pdf
▶ Learning to diagnose from scratch by exploiting dependencies in labels: https://arxiv.org/pdf/1710.10501.pdf
▶ TieNet Chest X-Ray auto-reporting: https://arxiv.org/pdf/1801.04334.pdf
▶ Dermatologist-level classification of skin cancer using DL: https://www.nature.com/articles/nature21056
▶ Tensorflow Intel CPU optimized: https://software.intel.com/en-us/articles/tensorflow-optimizations-on-modern-intel-architecture
▶ Tensorflow quantization: https://www.tensorflow.org/performance/quantization
▶ Deep Compression paper: https://arxiv.org/abs/1510.00149
▶ Microsoft’s Project Brainwave: https://www.microsoft.com/en-us/research/blog/microsoft-unveils-project-brainwave/
▶ Can FPGAs Beat GPUs?: http://jaewoong.org/pubs/fpga17-next-generation-dnns.pdf
▶ ESE on FPGA: https://arxiv.org/abs/1612.00694
▶ Intel Spark BigDL: https://software.intel.com/en-us/articles/bigdl-distributed-deep-learning-on-apache-spark
▶ Baidu’s PaddlePaddle on Kubernetes: http://blog.kubernetes.io/2017/02/run-deep-learning-with-paddlepaddle-on-kubernetes.html
▶ Uber’s Horovod distributed training framework for Tensorflow: https://github.com/uber/horovod
▶ TFX: Tensorflow-based production-scale ML platform: https://dl.acm.org/citation.cfm?id=3098021
▶ Explainable AI: https://www.cc.gatech.edu/~alanwags/DLAI2016/(Gunning)%20IJCAI-16%20DLAI%20WS.pdf