The presentation covers convolutional neural network (CNN) design. First, the main building blocks of CNNs are introduced. Then we systematically investigate the impact of a range of recent advances in CNN architectures and learning methods on the object categorization (ILSVRC) problem. In the evaluation, the influence of the following architectural choices is tested: non-linearity (ReLU, ELU, maxout, compatibility with batch normalization), pooling variants (stochastic, max, average, mixed), network width, classifier design (convolution, fully-connected, SPP), and image pre-processing, as well as learning parameters: learning rate, batch size, cleanliness of the data, etc.
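For readers unfamiliar with the building blocks being compared, here is a minimal NumPy sketch of two of the non-linearities and the max/average pooling variants from the abstract (illustrative only; a real CNN would use a framework implementation):

```python
import numpy as np

def relu(x):
    # ReLU: elementwise max(0, x)
    return np.maximum(0.0, x)

def elu(x, alpha=1.0):
    # ELU: x for x > 0, alpha * (exp(x) - 1) otherwise
    return np.where(x > 0, x, alpha * (np.exp(x) - 1.0))

def pool2d(x, size=2, mode="max"):
    # Non-overlapping 2D pooling over an (H, W) array; H and W are
    # assumed divisible by `size` for simplicity.
    h, w = x.shape
    blocks = x.reshape(h // size, size, w // size, size)
    if mode == "max":
        return blocks.max(axis=(1, 3))
    return blocks.mean(axis=(1, 3))
```

Stochastic and mixed pooling interpolate between these two extremes, which is what the evaluation in the talk measures.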
Variational formulation of unsupervised deep learning for ultrasound image ar...
Shujaat Khan
Recently, deep learning approaches have been used successfully for ultrasound (US) image artifact removal. However, paired high-quality images for supervised training are difficult to obtain in many practical situations. Inspired by the recent theory of unsupervised learning using optimal-transport-driven CycleGAN (OT-CycleGAN), we investigate the applicability of unsupervised deep learning to US artifact removal problems without matched reference data. Two types of OT-CycleGAN approaches are employed: one with partial knowledge of the image degradation physics and one without such knowledge. Various US artifact removal problems are then addressed using the two types of OT-CycleGAN. Experimental results for various unsupervised US artifact removal tasks confirm that our unsupervised learning method delivers results comparable to supervised learning in many practical applications.
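The cycle-consistency constraint at the heart of CycleGAN-style training can be sketched in a few lines (a toy NumPy illustration with made-up, exactly invertible generators, not the OT-CycleGAN models themselves):

```python
import numpy as np

def cycle_consistency_loss(G, F, x):
    """L1 cycle-consistency term ||F(G(x)) - x||_1: the constraint that
    CycleGAN-style methods add on top of the adversarial losses so that
    unpaired training still preserves image content."""
    return np.abs(F(G(x)) - x).mean()

# Toy stand-ins for the two generators: G degrades (here, scales) a clean
# image and F tries to undo it.  When the degradation physics is partially
# known, one learned generator can be replaced by that physical model.
G = lambda x: 0.5 * x
F = lambda x: 2.0 * x

x = np.random.rand(8, 8)
loss = cycle_consistency_loss(G, F, x)  # exactly invertible pair, so ~0
```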
TMS workshop on machine learning in materials science: Intro to deep learning...
Brian DeCost
This presentation is intended as a high-level introduction to deep learning and its applications in materials science. The intended audience is materials scientists and engineers.
Disclaimers: the second half of this presentation is a broad overview of deep learning applications in materials science; due to time limitations it is not comprehensive. As a review of the field, it necessarily includes work that is not my own. If my own name is not included explicitly in the reference at the bottom of a slide, I was not involved in that work.
Any mention of commercial products in this presentation is for information only; it does not imply recommendation or endorsement by NIST.
Talk by Dr. Nikita Morikiakov on inverse problems in medical imaging with deep learning.
An inverse problem is a type of problem in the natural sciences in which one has to infer from a set of observations the causal factors that produced them. In medical imaging, important examples of inverse problems are reconstruction in CT and MRI, where the volumetric representation of an object is computed from projection and Fourier-space data, respectively. In the classical approach, one relies on domain-specific knowledge contained in physical-analytical models to develop a reconstruction algorithm, which is often given by a certain iterative refinement procedure. Recent research in inverse problems seeks to develop a mathematically coherent foundation for combining data-driven models, based on deep learning, with the analytical knowledge contained in the classical reconstruction procedures. In this talk we give a brief overview of these developments and then focus on particular applications in Digital Breast Tomosynthesis and MRI reconstruction.
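A classical iterative refinement procedure of the kind such talks build on is Landweber iteration for a linear forward model; a toy NumPy sketch (the operator and sizes are illustrative, not a real CT/MRI geometry):

```python
import numpy as np

def landweber(A, y, step, n_iter=500):
    """Iterative refinement x_{k+1} = x_k + step * A^T (y - A x_k) for the
    linear inverse problem y = A x.  Learned reconstruction methods often
    unroll a fixed number of such iterations and replace parts of the
    update with trained networks; this sketch shows only the classical
    baseline."""
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        x = x + step * A.T @ (y - A @ x)
    return x

rng = np.random.default_rng(0)
A = rng.normal(size=(20, 5))        # toy forward operator (e.g. projections)
x_true = rng.normal(size=5)
y = A @ x_true                      # noiseless observations
# Convergence requires step < 2 / sigma_max(A)^2
x_rec = landweber(A, y, step=1.0 / np.linalg.norm(A, 2) ** 2)
```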
Unsupervised Deconvolution Neural Network for High Quality Ultrasound Imaging
Shujaat Khan
High-quality US imaging demands a large number of measurements, which can increase cost, size, and power requirements. Low-powered, portable, and 3D ultrasound imaging systems therefore require reconstruction algorithms that can produce high-quality images from fewer receive measurements. A number of model-specific methods have been proposed, but they do not work well under perturbation. For instance, compressive deconvolution ultrasound provides reasonable quality with limited measurements; however, it has downsides such as high computational cost and the need for accurate estimation of the point spread function (PSF). Another major limitation of conventional methods is that they require the RF or base-band signal, which is difficult to obtain from portable US systems. To deal with these issues, in this study we designed a novel deep deconvolution model for image-domain deconvolution. The proposed deep deconvolution (DeepDeconv) model can be trained in an unsupervised fashion, alleviating the need for paired high- and low-quality images. The model was evaluated on both phantom and in-vivo scans for various sampling configurations. The proposed DeepDeconv significantly enhances the details of anatomical structures and, using unsupervised learning, on average achieved gains of 2.14 dB, 4.96 dB, and 0.01 in CR, PSNR, and SSIM respectively, comparable to the supervised method.
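For context, the classical PSF-based baseline that the abstract contrasts with learned deconvolution can be sketched as a Wiener filter (a toy 1-D NumPy illustration with an assumed PSF; this is background, not the DeepDeconv model):

```python
import numpy as np

def wiener_deconvolve(y, psf, snr=1e3):
    """Frequency-domain Wiener deconvolution of a 1-D signal blurred by a
    known point spread function (circular convolution assumed).  The need
    for an accurate `psf` here is exactly the downside that PSF-free,
    learned deconvolution tries to remove."""
    H = np.fft.fft(psf, n=len(y))
    G = np.conj(H) / (np.abs(H) ** 2 + 1.0 / snr)   # Wiener filter
    return np.real(np.fft.ifft(np.fft.fft(y) * G))

rng = np.random.default_rng(1)
x = rng.normal(size=128)                  # toy "tissue reflectivity"
psf = np.array([0.2, 0.6, 0.2])           # assumed axial PSF
y = np.real(np.fft.ifft(np.fft.fft(x) * np.fft.fft(psf, n=128)))
x_hat = wiener_deconvolve(y, psf)
```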
Presentation on machine learning and materials science at the Computing in Engineering Forum 2018, Machine Ground Interaction Consortium (MaGIC) 2018, Madison, Wisconsin, December 4, 2018.
Physics informed deep learning for efficient b-mode ultrasound imaging
Shujaat Khan
A webinar on "Physics-Informed Deep Learning for Efficient B-Mode Ultrasound Imaging" organized by Center for Professional Training (C.P.T.) National University of Computer and Emerging Sciences (NUCES), Karachi.
Practical computer vision-- A problem-driven approach towards learning CV/ML/DL
Albert Y. C. Chen
Practical computer vision-- A problem-driven approach towards learning CV/ML/DL
Albert Chen, Ph.D., 2017-07-26, at Academia Sinica, Taiwan
Invited Speech during Academia Sinica's AI month
Enabling Real Time Analysis & Decision Making - A Paradigm Shift for Experime...
PyData
By Kerstin Kleese van Dam
PyData New York City 2017
New instrument technologies are enabling a new generation of in-situ and in-operando experiments, with extremely fine spatial and temporal resolution, that allow researchers to observe physics, chemistry, and biology as they happen. These new methodologies go hand in hand with an exponential growth in data volumes and rates - petabyte-scale data collections and terabyte-per-second streams. At the same time, scientists are pushing for a paradigm shift. As they can now observe processes in intricate detail, they want to analyze, interpret, and control those processes. Given the multitude of voluminous, heterogeneous data streams involved in every single experiment, novel real-time, data-driven analysis and decision-support approaches are needed to realize their vision. This talk will discuss state-of-the-art streaming analysis for experimental facilities, its challenges, and early successes. It will present where commercial technologies can be leveraged and how many of the novel approaches differ from commonly available solutions.
Transformer Architectures in Vision
[2018 ICML] Image Transformer
[2019 CVPR] Video Action Transformer Network
[2020 ECCV] End-to-End Object Detection with Transformers
[2021 ICLR] An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale
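The "image as 16x16 words" idea in the last paper above amounts to reshaping an image into flattened patches before feeding a standard Transformer; a minimal NumPy sketch of that patch extraction (the linear embedding and attention layers are omitted):

```python
import numpy as np

def image_to_patches(img, patch=16):
    """Split an (H, W, C) image into the flattened 16x16 'words' used by
    ViT-style models; H and W are assumed divisible by `patch`."""
    h, w, c = img.shape
    x = img.reshape(h // patch, patch, w // patch, patch, c)
    x = x.transpose(0, 2, 1, 3, 4)            # (nH, nW, patch, patch, C)
    return x.reshape(-1, patch * patch * c)   # (num_patches, patch*patch*C)

img = np.arange(224 * 224 * 3, dtype=float).reshape(224, 224, 3)
tokens = image_to_patches(img)   # 14 * 14 = 196 tokens of length 768
```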
A primer for the upcoming Developing Human Connectome Project data release; presented at the Big Data Little Brains conference in Chapel Hill, May 2018.
The Incorporation of Machine Learning into Scientific Simulations at Lawrence...
inside-BigData.com
In this deck from the Stanford HPC Conference, Katie Lewis from Lawrence Livermore National Laboratory presents: The Incorporation of Machine Learning into Scientific Simulations at Lawrence Livermore National Laboratory.
"Scientific simulations have driven computing at Lawrence Livermore National Laboratory (LLNL) for decades. During that time, we have seen significant changes in hardware, tools, and algorithms. Today, data science, including machine learning, is one of the fastest growing areas of computing, and LLNL is investing in hardware, applications, and algorithms in this space. While the use of simulations to focus and understand experiments is well accepted in our community, machine learning brings new challenges that need to be addressed. I will explore applications for machine learning in scientific simulations that are showing promising results and further investigation that is needed to better understand its usefulness."
Watch the video: https://youtu.be/NVwmvCWpZ6Y
Learn more: https://computing.llnl.gov/research-area/machine-learning
and
http://www.hpcadvisorycouncil.com/events/2020/stanford-workshop/
Sign up for our insideHPC Newsletter: http://insidehpc.com/newsletter
Generalizing Scientific Machine Learning and Differentiable Simulation Beyond...
Chris Rackauckas
The combination of scientific models with deep learning structures, commonly referred to as scientific machine learning (SciML), has made great strides in the last few years in incorporating models such as ODEs and PDEs into deep learning through differentiable simulation. However, the vast space of scientific simulation also includes models like jump diffusions, agent-based models, and more. Is SciML constrained to the simple continuous cases, or is there a way to generalize to more advanced model forms? This talk will dive into the mathematical aspects of generalizing differentiable simulation, discussing cases like chaotic simulations, differentiating stochastic simulations like particle filters and agent-based models, and solving Bayesian inverse problems (i.e., differentiating Markov chain Monte Carlo methods). We will then discuss the evolving numerical stability issues, implementation issues, and other interesting mathematical tidbits that are coming to light as these differentiable programming capabilities are being adopted.
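The core of differentiable simulation, propagating derivatives through a solver, can be illustrated on a scalar ODE (a hand-rolled forward-sensitivity sketch in NumPy, not the adjoint machinery used in practice for large systems):

```python
import numpy as np

def euler_with_sensitivity(f, dfdx, dfdp, x0, p, dt, n_steps):
    """Forward-sensitivity differentiation of an explicit Euler solve:
    propagate s = dx/dp alongside x so the whole simulation becomes
    differentiable with respect to the parameter p."""
    x, s = x0, 0.0
    for _ in range(n_steps):
        s = s + dt * (dfdx(x, p) * s + dfdp(x, p))   # sensitivity update
        x = x + dt * f(x, p)                          # state update
    return x, s

# dx/dt = p * x  =>  x(T) = x0 * exp(p T)  and  dx(T)/dp = T * x(T)
f = lambda x, p: p * x
dfdx = lambda x, p: p
dfdp = lambda x, p: x
xT, sT = euler_with_sensitivity(f, dfdx, dfdp, x0=1.0, p=0.5,
                                dt=1e-4, n_steps=10_000)
```

For the chosen example both the state and its parameter sensitivity converge to exp(0.5) as dt shrinks, which makes the sketch easy to check by hand.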
Bio: Dr. Chris Rackauckas is the VP of Modeling and Simulation at JuliaHub, the Director of Scientific Research at Pumas-AI, Co-PI of the Julia Lab at MIT, and the lead developer of the SciML Open Source Software Organization. His work in mechanistic machine learning is credited with the 15,000x acceleration of NASA Launch Services simulations and recently demonstrated a 60x-570x acceleration over Modelica tools in HVAC simulation, earning Chris the US Air Force Artificial Intelligence Accelerator Scientific Excellence Award. See more at https://chrisrackauckas.com/. He is the lead developer of the Pumas project and has received a top presentation award at every ACoP in the last 3 years for improving methods for uncertainty quantification, automated GPU acceleration of nonlinear mixed effects modeling (NLME), and machine learning assisted construction of NLME models with DeepNLME. For these achievements, Chris received the Emerging Scientist award from ISoP.
Recurrent Neural Networks (RNNs) represent the reference class of Deep Learning models for learning from sequential data. Despite their widespread success, a major downside of RNNs and the commonly derived ‘gating’ variants (LSTM, GRU) is the high cost of the training algorithms involved. In this context, an increasingly popular alternative is the Reservoir Computing (RC) approach, which limits the training algorithm to operate only on a restricted set of (output) parameters. RC is appealing for several reasons, including its amenability to implementation on low-power edge devices, enabling adaptation and personalization in IoT and cyber-physical systems applications.
This webinar will introduce Reservoir Computing from scratch, covering all the fundamental design topics as well as good practices. It is targeted at both researchers and practitioners interested in setting up fast-to-train Deep Learning models for sequential data.
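The restricted-training idea can be sketched as a minimal echo state network in NumPy (hyper-parameters and the delay-recall task are chosen for illustration only): the recurrent weights stay fixed and random, and only the linear readout is fit.

```python
import numpy as np

rng = np.random.default_rng(0)

# --- Fixed, random reservoir (never trained) ---
n_res, n_in = 100, 1
W_in = rng.uniform(-0.5, 0.5, size=(n_res, n_in))
W = rng.normal(size=(n_res, n_res))
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))   # spectral radius < 1

def run_reservoir(u):
    """Collect reservoir states for an input sequence u of shape (T, n_in)."""
    x = np.zeros(n_res)
    states = []
    for u_t in u:
        x = np.tanh(W @ x + W_in @ u_t)
        states.append(x)
    return np.array(states)

# --- Train ONLY the linear readout, via ridge regression ---
u = rng.uniform(-1, 1, size=(500, 1))
y = 0.5 * u[:-1]                    # toy target: recall the previous input
X = run_reservoir(u)[1:]
W_out = np.linalg.solve(X.T @ X + 1e-6 * np.eye(n_res), X.T @ y)
pred = X @ W_out
```

Because training reduces to one linear solve, this is the kind of model the webinar argues can run and adapt on constrained edge devices.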
Irina Rish, Researcher, IBM Watson, at MLconf NYC 2017
MLconf
Irina Rish is a researcher at the AI Foundations department of the IBM T.J. Watson Research Center. She received an MS in Applied Mathematics from the Moscow Gubkin Institute, Russia, and a PhD in Computer Science from the University of California, Irvine. Her areas of expertise include artificial intelligence and machine learning, with a particular focus on probabilistic graphical models, sparsity and compressed sensing, active learning, and their applications to various domains, ranging from diagnosis and performance management of distributed computer systems (“autonomic computing”) to predictive modeling and statistical biomarker discovery in neuroimaging and other biological data. Irina has published over 60 research papers, several book chapters, two edited books, and a monograph on Sparse Modeling, taught several tutorials, and organized multiple workshops at machine-learning conferences, including NIPS, ICML, and ECML. She holds 24 patents and several IBM awards. Irina currently serves on the editorial board of the Artificial Intelligence Journal (AIJ). As an adjunct professor at the EE Department of Columbia University, she taught several advanced graduate courses on statistical learning and sparse signal modeling.
Abstract Summary:
Learning About the Brain and Brain-Inspired Learning:
Quantifying mental states and identifying statistical biomarkers of mental disorders from neuroimaging data is an exciting and rapidly growing research area at the intersection of neuroscience and machine learning, with a particular focus on interpretability and reproducibility of learned models. We will discuss promises and limitations of machine-learning methods in such applications, focusing on recent applications of deep learning methods such as recurrent convnets to the analysis of “brain movies” (EEG) data. Besides this “AI to Brain” direction, we will also discuss the “Brain to AI” direction, namely borrowing ideas from neuroscience to improve machine learning, with a specific focus on adult neurogenesis and online model adaptation in representation learning.
The Face of Nanomaterials: Insightful Classification Using Deep Learning - An...
PyData
Artificial intelligence is emerging as a new paradigm in materials science. This talk describes how physical intuition and (insightful) machine learning can solve the complicated task of structure recognition in materials at the nanoscale.
Title: Sense of Taste
Presenter: Dr. Faiza, Assistant Professor of Physiology
Qualifications:
MBBS (Best Graduate, AIMC Lahore)
FCPS Physiology
ICMT, CHPE, DHPE (STMU)
MPH (GC University, Faisalabad)
MBA (Virtual University of Pakistan)
Learning Objectives:
Describe the structure and function of taste buds.
Describe the relationship between the taste threshold and taste index of common substances.
Explain the chemical basis and signal transduction of taste perception for each type of primary taste sensation.
Recognize different abnormalities of taste perception and their causes.
Key Topics:
Significance of Taste Sensation:
Differentiation between pleasant and harmful food
Influence on behavior
Selection of food based on metabolic needs
Receptors of Taste:
Taste buds on the tongue
Influence of sense of smell, texture of food, and pain stimulation (e.g., by pepper)
Primary and Secondary Taste Sensations:
Primary taste sensations: Sweet, Sour, Salty, Bitter, Umami
Chemical basis and signal transduction mechanisms for each taste
Taste Threshold and Index:
Taste threshold values for Sweet (sucrose), Salty (NaCl), Sour (HCl), and Bitter (Quinine)
Taste index relationship: Inversely proportional to taste threshold
Taste Blindness:
Inability to taste certain substances, particularly thiourea compounds
Example: Phenylthiocarbamide
Structure and Function of Taste Buds:
Composition: Epithelial cells, Sustentacular/Supporting cells, Taste cells, Basal cells
Features: Taste pores, Taste hairs/microvilli, and Taste nerve fibers
Location of Taste Buds:
Found in papillae of the tongue (Fungiform, Circumvallate, Foliate)
Also present on the palate, tonsillar pillars, epiglottis, and proximal esophagus
Mechanism of Taste Stimulation:
Interaction of taste substances with receptors on microvilli
Signal transduction pathways for Umami, Sweet, Bitter, Sour, and Salty tastes
Taste Sensitivity and Adaptation:
Decrease in sensitivity with age
Rapid adaptation of taste sensation
Role of Saliva in Taste:
Dissolution of tastants to reach receptors
Washing away the stimulus
Taste Preferences and Aversions:
Mechanisms behind taste preference and aversion
Influence of receptors and neural pathways
Impact of Sensory Nerve Damage:
Degeneration of taste buds if the sensory nerve fiber is cut
Abnormalities of Taste Detection:
Conditions: Ageusia, Hypogeusia, Dysgeusia (parageusia)
Causes: Nerve damage, neurological disorders, infections, poor oral hygiene, adverse drug effects, deficiencies, aging, tobacco use, altered neurotransmitter levels
Neurotransmitters and Taste Threshold:
Effects of serotonin (5-HT) and norepinephrine (NE) on taste sensitivity
Supertasters:
25% of the population with heightened sensitivity to taste, especially bitterness
Increased number of fungiform papillae
Effects of serotonin (5-HT) and norepinephrine (NE) on taste sensitivity
Supertasters:
25% of the population with heightened sensitivity to taste, especially bitterness
Increased number of fungiform papillae
Prix Galien International 2024 Forum ProgramLevi Shapiro
June 20, 2024, Prix Galien International and Jerusalem Ethics Forum in ROME. Detailed agenda including panels:
- ADVANCES IN CARDIOLOGY: A NEW PARADIGM IS COMING
- WOMEN’S HEALTH: FERTILITY PRESERVATION
- WHAT’S NEW IN THE TREATMENT OF INFECTIOUS, ONCOLOGICAL AND INFLAMMATORY SKIN DISEASES?
- ARTIFICIAL INTELLIGENCE AND ETHICS
- GENE THERAPY
- BEYOND BORDERS: GLOBAL INITIATIVES FOR DEMOCRATIZING LIFE SCIENCE TECHNOLOGIES AND PROMOTING ACCESS TO HEALTHCARE
- ETHICAL CHALLENGES IN LIFE SCIENCES
- Prix Galien International Awards Ceremony
Title: Sense of Smell
Presenter: Dr. Faiza, Assistant Professor of Physiology
Qualifications:
MBBS (Best Graduate, AIMC Lahore)
FCPS Physiology
ICMT, CHPE, DHPE (STMU)
MPH (GC University, Faisalabad)
MBA (Virtual University of Pakistan)
Learning Objectives:
Describe the primary categories of smells and the concept of odor blindness.
Explain the structure and location of the olfactory membrane and mucosa, including the types and roles of cells involved in olfaction.
Describe the pathway and mechanisms of olfactory signal transmission from the olfactory receptors to the brain.
Illustrate the biochemical cascade triggered by odorant binding to olfactory receptors, including the role of G-proteins and second messengers in generating an action potential.
Identify different types of olfactory disorders such as anosmia, hyposmia, hyperosmia, and dysosmia, including their potential causes.
Key Topics:
Olfactory Genes:
3% of the human genome accounts for olfactory genes.
400 genes for odorant receptors.
Olfactory Membrane:
Located in the superior part of the nasal cavity.
Medially: Folds downward along the superior septum.
Laterally: Folds over the superior turbinate and upper surface of the middle turbinate.
Total surface area: 5-10 square centimeters.
Olfactory Mucosa:
Olfactory Cells: Bipolar nerve cells derived from the CNS (100 million), with 4-25 olfactory cilia per cell.
Sustentacular Cells: Produce mucus and maintain ionic and molecular environment.
Basal Cells: Replace worn-out olfactory cells with an average lifespan of 1-2 months.
Bowman’s Gland: Secretes mucus.
Stimulation of Olfactory Cells:
Odorant dissolves in mucus and attaches to receptors on olfactory cilia.
Involves a cascade effect through G-proteins and second messengers, leading to depolarization and action potential generation in the olfactory nerve.
Quality of a Good Odorant:
Small (3-20 Carbon atoms), volatile, water-soluble, and lipid-soluble.
Facilitated by odorant-binding proteins in mucus.
Membrane Potential and Action Potential:
Resting membrane potential: -55mV.
Action potential frequency in the olfactory nerve increases with odorant strength.
Adaptation Towards the Sense of Smell:
Rapid adaptation within the first second, with further slow adaptation.
Psychological adaptation greater than receptor adaptation, involving feedback inhibition from the central nervous system.
Primary Sensations of Smell:
Camphoraceous, Musky, Floral, Pepperminty, Ethereal, Pungent, Putrid.
Odor Detection Threshold:
Examples: Hydrogen sulfide (0.0005 ppm), Methyl-mercaptan (0.002 ppm).
Some toxic substances are odorless at lethal concentrations.
Characteristics of Smell:
Odor blindness for single substances due to lack of appropriate receptor protein.
Behavioral and emotional influences of smell.
Transmission of Olfactory Signals:
From olfactory cells to glomeruli in the olfactory bulb, involving lateral inhibition.
Primitive, less old, and new olfactory systems with different pathways.
Flu Vaccine Alert in Bangalore Karnatakaaddon Scans
As flu season approaches, health officials in Bangalore, Karnataka, are urging residents to get their flu vaccinations. The seasonal flu, while common, can lead to severe health complications, particularly for vulnerable populations such as young children, the elderly, and those with underlying health conditions.
Dr. Vidisha Kumari, a leading epidemiologist in Bangalore, emphasizes the importance of getting vaccinated. "The flu vaccine is our best defense against the influenza virus. It not only protects individuals but also helps prevent the spread of the virus in our communities," she says.
This year, the flu season is expected to coincide with a potential increase in other respiratory illnesses. The Karnataka Health Department has launched an awareness campaign highlighting the significance of flu vaccinations. They have set up multiple vaccination centers across Bangalore, making it convenient for residents to receive their shots.
To encourage widespread vaccination, the government is also collaborating with local schools, workplaces, and community centers to facilitate vaccination drives. Special attention is being given to ensuring that the vaccine is accessible to all, including marginalized communities who may have limited access to healthcare.
Residents are reminded that the flu vaccine is safe and effective. Common side effects are mild and may include soreness at the injection site, mild fever, or muscle aches. These side effects are generally short-lived and far less severe than the flu itself.
Healthcare providers are also stressing the importance of continuing COVID-19 precautions. Wearing masks, practicing good hand hygiene, and maintaining social distancing are still crucial, especially in crowded places.
Protect yourself and your loved ones by getting vaccinated. Together, we can help keep Bangalore healthy and safe this flu season. For more information on vaccination centers and schedules, residents can visit the Karnataka Health Department’s official website or follow their social media pages.
Stay informed, stay safe, and get your flu shot today!
Explore natural remedies for syphilis treatment in Singapore. Discover alternative therapies, herbal remedies, and lifestyle changes that may complement conventional treatments. Learn about holistic approaches to managing syphilis symptoms and supporting overall health.
micro teaching on communication m.sc nursing.pdfAnurag Sharma
Microteaching is a unique model of practice teaching. It is a viable instrument for producing the desired change in teaching behavior, or the behavior potential that, in specified types of real classroom situations, tends to facilitate the achievement of specified types of objectives.
WHO recommendations on maternal and newborn care for a positive postnatal experience.
In line with the SDGs (Sustainable Development Goals) and the Global Strategy for Women's, Children's and Adolescents' Health, and applying a human-rights-based approach, postnatal care efforts must expand beyond coverage and mere survival to include quality care.
These guidelines aim to improve the quality of essential and routine postnatal care provided to women and newborns, with the ultimate goal of improving maternal and neonatal health and well-being.
A "positive postnatal experience" is an important outcome for all women who give birth and for their newborns, laying the foundation for improved short- and long-term health and well-being. A positive postnatal experience is defined as one in which women, birthing people, newborns, couples, parents, caregivers, and families receive consistent information, reassurance, and support from motivated health professionals, and in which a flexible, well-resourced health system recognizes the needs of women and babies and respects their cultural context.
These consolidated guidelines present both new and well-established recommendations on routine postnatal care for women and newborns receiving postpartum care in health facilities or in the community, regardless of available resources.
A comprehensive set of recommendations is provided for care during the puerperal period, with an emphasis on the essential care that all women and newborns should receive, and with due attention to quality of care, that is, the delivery and experience of the care received. These guidelines update and expand the 2014 WHO recommendations on postnatal care of the mother and newborn, and complement the current WHO guidelines on the management of postnatal complications.
The establishment of breastfeeding and the management of the main complications are also covered.
We highly recommend it.
We will discuss these recommendations in our postgraduate course on Breastfeeding at Instituto Ciclos.
At the moment, this publication is only available in English.
Prof. Marcus Renato de Carvalho
www.agostodourado.com
Ozempic: Preoperative Management of Patients on GLP-1 Receptor Agonists Saeid Safari
Preoperative Management of Patients on GLP-1 Receptor Agonists like Ozempic and Semaglutide
ASA GUIDELINE
NYSORA Guideline
2 Case Reports of Gastric Ultrasound
These lecture slides, by Dr Sidra Arshad, offer a quick overview of physiological basis of a normal electrocardiogram.
Learning objectives:
1. Define an electrocardiogram (ECG) and electrocardiography
2. Describe how dipoles generated by the heart produce the waveforms of the ECG
3. Describe the components of a normal electrocardiogram in a typical bipolar lead (lead II)
4. Differentiate between intervals and segments
5. Enlist some common indications for obtaining an ECG
Study Resources:
1. Chapter 11, Guyton and Hall Textbook of Medical Physiology, 14th edition
2. Chapter 9, Human Physiology - From Cells to Systems, Lauralee Sherwood, 9th edition
3. Chapter 29, Ganong’s Review of Medical Physiology, 26th edition
4. Electrocardiogram, StatPearls - https://www.ncbi.nlm.nih.gov/books/NBK549803/
5. ECG in Medical Practice by ABM Abdullah, 4th edition
6. ECG Basics, http://www.nataliescasebook.com/tag/e-c-g-basics
1. Jong Chul Ye
Bio-Imaging & Signal Processing Lab.
Dept. Bio & Brain Engineering
Dept. Mathematical Sciences
KAIST, Korea
Refresher Course
Deep Learning for CT Reconstruction:
From Concept to Practices
This can be downloaded from http://bispl.weebly.com
2. Course Overview
• Introduction
• CNN review
• Deep learning: biological origin
• Deep learning: mathematical origin
• Applications to CT
• Programming Example
3. Deep Learning Age
• Deep learning has been successfully used for classification, low-level computer vision, etc.
• It even outperforms human observers on some tasks.
6. Sparse-view CT with Variational Network
Chen et al, “LEARN”, arXiv:1707.09636
7. CT Filter design using Neural Network
Würfl, Tobias, et al. 2016.
8. • Successful demonstration of deep learning for various image reconstruction problems
– Low-dose x-ray CT (Kang et al, Chen et al, Wolterink et al)
– Sparse view CT (Jin et al, Han et al, Adler et al)
– Interior tomography (Han et al)
– Stationary CT for baggage inspection (Han et al)
– CS-MRI (Hammernik et al, Yang et al, Lee et al, Zhu et al)
– US imaging (Yoon et al )
– Diffuse optical tomography (Yoo et al)
– Elastic tomography (Yoo et al)
– etc
• Advantages
– Very fast reconstruction time
– Significantly improved results
Other works
9. WHY DOES DEEP LEARNING WORK FOR RECON?
DOES IT CREATE ANY ARTIFICIAL FEATURES?
19. Retina, V1 Layer
Receptive fields of two ganglion cells in the retina → convolution
Orientation columns in V1
http://darioprandi.com/docs/talks/image-reconstruction-recognition/graphics/pinwheels.jpg
Figure courtesy of distillery.com
23. Why does deep learning work for reconstruction?
Existing view 1: unfolded iteration
• The most prevalent view
• Direct connection to sparse recovery
– Cannot explain the filter channels
Jin et al., arXiv:1611.03679
24. Why does deep learning work for reconstruction?
Existing view 2: generative model
• Image reconstruction as distribution matching
– However, it is difficult to explain the role of the black-box network
Bora et al., Compressed Sensing using Generative Models, arXiv:1703.03208
25. Our Proposal:
Deep Learning == Deep Convolutional
Framelets
• Ye et al, “Deep convolutional framelets: A general deep
learning framework for inverse problems”, SIAM Journal
Imaging Sciences, 11(2), 991-1048, 2018.
30. Missing elements can be found by low-rank Hankel structured matrix completion:

min_m ‖H(m)‖_*  subject to  P_Ω(m) = P_Ω(b),   where Rank H(f) = k

(‖·‖_* is the nuclear norm; P_Ω is the projection onto the sampled positions)
* Jin KH et al., IEEE TCI, 2016
* Jin KH et al., IEEE TIP, 2015
* Ye JC et al., IEEE TIT, 2016
Annihilating filter-based low-rank Hankel matrix
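The low-rank property that drives this formulation can be checked numerically: a signal whose spectrum is k-sparse produces a Hankel matrix of rank exactly k. The NumPy sketch below (signal, sizes, and frequencies chosen only for illustration) verifies this for a sum of k = 2 complex exponentials:

```python
import numpy as np

def hankel_matrix(f, d):
    """Build the d-column Hankel matrix H_d(f) from a 1-D signal f."""
    n = len(f)
    return np.array([f[i:i + d] for i in range(n - d + 1)])

# Illustrative signal with a 2-sparse spectrum: sum of two complex exponentials.
n, d = 64, 8
t = np.arange(n)
f = np.exp(2j * np.pi * 0.10 * t) + 0.5 * np.exp(2j * np.pi * 0.23 * t)

H = hankel_matrix(f, d)                       # shape (57, 8)
rank = np.linalg.matrix_rank(H, tol=1e-8)
print(rank)  # 2: the Hankel rank equals the number of exponentials, k = 2
```

This rank deficiency is what the nuclear-norm minimization exploits to interpolate the missing entries.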
35. Algorithmic Flow: ALOHA (computationally heavy)
36. Key Observation
Data-Driven Hankel matrix decomposition
=> Deep Learning
• Ye et al, “Deep convolutional framelets: A general deep
learning framework for inverse problems”, SIAM Journal
Imaging Sciences, 11(2), 991-1048, 2018.
37. Deep Convolutional Framelets (Ye, Han, Cha; 2018)
Encoder: C = Φᵀ H_d(f) Ψ = Φᵀ (f ⊛ Ψ)   — convolution followed by pooling
Decoder (unlifting): f = (Φ̃ C) ⊛ τ(Ψ̃)   — un-pooling followed by convolution
Frame condition: Φ̃ Φᵀ = I
Low-rank condition: H_d(f) = U Σ Vᵀ,  Ψ Ψ̃ᵀ = P_R(V)
Φ: non-local basis (user-defined pooling);  Ψ: local basis (learnable filters)
Hankel expansion: H_{p_i}(g_i) = Σ_{k,l} [C_i]_{kl} B̃ᵏˡ_i
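The frame condition on this slide (Φ̃ Φᵀ = I) is exactly what makes the encoder/decoder pair lossless. A minimal NumPy sketch (random data standing in for the Hankel matrix, and an orthonormal Φ as the non-local basis so that Φ̃ = Φ) illustrates the resulting perfect reconstruction:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 8
X = rng.standard_normal((n, n))   # stand-in for the Hankel matrix H_d(f)

# Non-local basis Phi: any orthonormal matrix satisfies the frame
# condition  Phi_tilde @ Phi.T = I  with Phi_tilde = Phi.
Phi, _ = np.linalg.qr(rng.standard_normal((n, n)))

C = Phi.T @ X                     # encoder: framelet coefficients
X_hat = Phi @ C                   # decoder: unlifting

print(np.allclose(X_hat, X))      # True: perfect reconstruction
```

The local basis (the learnable convolution filters) is omitted here; the point is only that a frame-satisfying pooling operator loses nothing.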
40. Role of ReLU: Conic encoding
D. D. Lee and H. S. Seung, Nature, 1999
ReLU keeps only the positive framelet coefficients.
Conic encoding → parts-based representation, similar to the visual cortex.
55. Problem of U-Net
Pooling does NOT satisfy the frame condition:
Φ_extᵀ Φ_ext = I + Φ Φᵀ ≠ I
JC Ye et al., SIAM Journal on Imaging Sciences, 2018
Y. Han et al., IEEE TMI, 2018
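The frame-condition violation noted on this slide can be reproduced in a few lines: average pooling followed by naive unpooling (duplication) is not an identity operator, so high-frequency detail is lost. A NumPy sketch (sizes are arbitrary):

```python
import numpy as np

n = 8
# Average-pooling operator: maps length-n signals to length-n/2.
P = np.kron(np.eye(n // 2), [0.5, 0.5])      # shape (n/2, n)
# Naive unpooling by duplication.
U = np.kron(np.eye(n // 2), [1.0, 1.0]).T    # shape (n, n/2)

G = U @ P                                     # pool-then-unpool operator
print(np.allclose(G, np.eye(n)))              # False: frame condition violated

x = np.arange(n, dtype=float)
x_hat = U @ (P @ x)                           # reconstruction is not exact
print(np.allclose(x_hat, x))                  # False: detail within each pair is lost
```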
56. Improving U-net using Deep Conv Framelets
• Dual Frame U-net
• Tight Frame U-net
JC Ye et al, SIAM Journal Imaging Sciences, 2018
Y. Han and J. C. Ye, TMI, 2018
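One way to restore the frame condition, as a sketch of the tight-frame idea: keep both a low-pass (average) and a high-pass (detail) branch, as in a Haar decomposition. The stacked analysis operator is then orthonormal, so pooling becomes exactly invertible (NumPy illustration, not the authors' implementation):

```python
import numpy as np

n = 8
s = 1 / np.sqrt(2)
L = np.kron(np.eye(n // 2), [s, s])    # Haar low-pass (average) branch
H = np.kron(np.eye(n // 2), [s, -s])   # Haar high-pass (detail) branch
Phi = np.vstack([L, H])                # stacked analysis operator, shape (n, n)

print(np.allclose(Phi.T @ Phi, np.eye(n)))   # True: tight frame

x = np.random.default_rng(1).standard_normal(n)
x_hat = Phi.T @ (Phi @ x)                    # analyze, then synthesize
print(np.allclose(x_hat, x))                 # True: perfect reconstruction
```

Contrast this with the average-pooling example above the frame condition: the only change is that the detail branch is retained instead of discarded.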
73. Still Unresolved Problems..
• Cascaded geometry of deep neural network
• Generalization capability
• Optimization landscape
• Training procedure
• Extension to classification problems
75. Datasets of Natural Images
• MNIST (http://yann.lecun.com/exdb/mnist/) – Handwritten digits.
• SVHN (http://ufldl.stanford.edu/housenumbers/) – House numbers from Google Street View.
• ImageNet (http://www.image-net.org/) – The de facto image dataset.
• LSUN (http://lsun.cs.princeton.edu/2016/) – Large-scale scene understanding challenge.
• Pascal VOC (http://host.robots.ox.ac.uk/pascal/VOC/) – Standardized image dataset.
• MS COCO (http://cocodataset.org/#home) – Common Objects in Context.
• CIFAR-10 / -100 (https://www.cs.utoronto.ca/~kriz/cifar.html) – Tiny images dataset.
• BSDS300 / 500 (https://www2.eecs.berkeley.edu/Research/Projects/CS/vision/grouping/resources.html) – Contour detection and image segmentation resources.
https://en.wikipedia.org/wiki/List_of_datasets_for_machine_learning_research
https://www.kaggle.com/datasets
76. Datasets for Medical Images
• HCP (https://www.humanconnectome.org/) – Behavioral and 3T / 7T MR imaging dataset.
• MRI Data (http://mridata.org/) – Raw k-space dataset acquired on a GE clinical 3T scanner.
• LUNA (https://luna16.grand-challenge.org/data/) – Lung nodule analysis dataset acquired on a CT scanner.
• Data Science Bowl (https://www.kaggle.com/c/data-science-bowl-2017) – A thousand low-dose CT images.
• NIH Chest X-rays (https://nihcc.app.box.com/v/ChestXray-NIHCC) – X-ray images with disease labels.
• TCIA Collections (http://www.cancerimagingarchive.net/) – De-identifies and hosts a large archive of medical images of cancer, accessible for public download. The data are organized as “Collections”, typically patients related by a common disease or image modality (MRI, CT, etc.).
https://www.kaggle.com/datasets
77. Libraries for Deep Learning
• TensorFlow (https://www.tensorflow.org/) – Python
• Theano (http://deeplearning.net/software/theano/) – Python
• Keras (https://keras.io/) – Python
• Caffe (http://caffe.berkeleyvision.org/) – Python
• Torch (or PyTorch) (http://torch.ch/) – C / C++ (or Python)
• Deeplearning4J (https://deeplearning4j.org/) – Java
• Microsoft Cognitive Toolkit (CNTK) (https://www.microsoft.com/en-us/cognitive-toolkit/) – Python / C / C++
• MatConvNet (http://www.vlfeat.org/matconvnet/) – Matlab
79. Step 1: Compile the toolbox
1. Unzip the MatConvNet toolbox.
2. Open ‘vl_compilenn.m’ in Matlab.
80. Step 1: Compile the toolbox (cont.)
3. Check the options such as enableGpu and enableCudnn.
4. Run ‘vl_compilenn.m’.
* To enable GPU processing (enableGpu: false → true), you must have CUDA installed. (https://developer.nvidia.com/cuda-90-download-archive)
** To use the cuDNN library (enableCudnn: false → true), you must have cuDNN installed. (https://developer.nvidia.com/cudnn)
81. Step 2: Prepare dataset
As a classification example, the MNIST dataset is structured as follows:
Images % struct-type
  data   % handwritten digit images
  labels % [1, …, 10]
  set    % [1, 2, 3]; 1, 2, and 3 indicate the train, valid, and test set, respectively.
82. Step 2: Prepare dataset (cont.)
• As a segmentation example, the U-Net dataset is structured as follows:
– Images % struct-type
  Ø data   % cell images
  Ø labels % [1, 2], mask image; 1 and 2 indicate background and foreground, respectively.
  Ø set    % [1, 2, 3]
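For readers working in Python rather than Matlab, the same imdb layout can be mimicked with a dict of NumPy arrays. The field names (data, labels, set) come from the slides; the array shape convention (H × W × C × N, as MatConvNet uses) and the random contents here are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)
num = 100

# Hypothetical Python stand-in for MatConvNet's imdb struct (MNIST-style).
images = {
    "data": rng.random((28, 28, 1, num)),       # H x W x C x N image stack
    "labels": rng.integers(1, 11, size=num),    # digit classes 1..10
    "set": rng.choice([1, 2, 3], size=num),     # 1=train, 2=valid, 3=test
}

train_idx = np.where(images["set"] == 1)[0]     # select the training split
print(images["data"].shape, len(train_idx))
```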
83. Step 3: Implementation of the network architecture
• Developers only need to write the network-architecture code, because MatConvNet provides the training framework.
• It supports well-known network architectures such as AlexNet, VGGNet, ResNet, Inception, and so on.
84. Step 3: Implementation of the architecture (cont.)
– The implementation details of U-Net: U-Net can be implemented recursively.
Stage 0
Stage 1
Stage 2
Stage 3
Stage 4
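The recursive structure of the stages above can be sketched in plain Python: each stage wraps convolution blocks around a pooled inner stage, recursing until the deepest stage. The function and key names are hypothetical placeholders, not MatConvNet API:

```python
def unet_stage(depth, max_depth=4):
    """Recursive sketch of one U-Net stage: conv blocks around a pooled sub-stage."""
    stage = {"encoder": f"conv_block(stage{depth})"}
    if depth < max_depth:
        stage["pool"] = "pool2x2"
        stage["inner"] = unet_stage(depth + 1, max_depth)  # recurse into the next stage
        stage["unpool"] = "unpool2x2"
        stage["skip"] = f"concat(encoder{depth}, unpool{depth})"
    stage["decoder"] = f"conv_block(stage{depth})"
    return stage

net = unet_stage(0)

def count_stages(s):
    """Count how deeply the stages are nested."""
    return 1 + (count_stages(s["inner"]) if "inner" in s else 0)

print(count_stages(net))  # 5: stages 0 through 4
```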
85. Step 3: Implementation of the architecture (cont.)
1. Create objects of network and layers.
Encoder Part
Skip + Concat Part Decoder Part
• The structure of Stage 0
Network Part
86. Step 3: Implementation of the architecture (cont.)
2. Connect the layers.
• The structure of Stage 0
Layer Name (string-type)
Layer object (object)
Input Name (string-type)
Output Name (string-type)
Parameters Name (string-type)
All objects and names must be unique.
87. Step 3: Implementation of the architecture (cont.)
3. Implement each stage recursively and add a loss function.
The previous parts (3.1 and 3.2) are wrapped into the function ‘add_block_unet’.
88. Step 4: Network hyper-parameter set-up
• MatConvNet supports the following default hyper-parameters (refer to cnn_train.m or cnn_train_dag.m):
1. The size of the mini-batch
2. The number of epochs
3. Learning rate
4. Weight decay factor
5. Solvers, such as SGD, AdaDelta, AdaGrad, Adam, and RMSProp
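The hyper-parameter list above translates naturally to other frameworks. A hypothetical Python mirror of such an options struct (field names and values are illustrative, not MatConvNet's actual defaults), with a step-wise learning-rate schedule as an example of a per-epoch learning rate:

```python
# Hypothetical mirror of a training-options struct; values are illustrative.
opts = {
    "batchSize": 128,        # 1. size of the mini-batch
    "numEpochs": 50,         # 2. number of epochs
    "learningRate": 1e-3,    # 3. learning rate (may be a per-epoch schedule)
    "weightDecay": 5e-4,     # 4. weight decay factor
    "solver": "adam",        # 5. solver: sgd / adadelta / adagrad / adam / rmsprop
}

# Example schedule: halve the learning rate every 20 epochs.
schedule = [opts["learningRate"] * (0.5 ** (e // 20)) for e in range(opts["numEpochs"])]
print(schedule[0], schedule[20], schedule[40])  # 0.001 0.0005 0.00025
```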
89. Step 5: Run the training script
1. Training script
2. Training loss
3. Training loss graph
• Blue: train
• Orange: valid
90. Acknowledgements
CT Team
• Yoseob Han
• Eunhee Kang
• Jawook Goo
US Team
• Shujaat Khan
• Jaeyong Hur
MR Team
• Dongwook Lee
• Juyoung Lee
• Eunju Cha
• Byung-hoon Kim
Image Analysis Team
• Boa Kim
• Junyoung Kim
Optics Team
• Sungjun Lim
• Junyoung Kim
• Jungsol Kim
• Taesung Kwon