Slides for the Neural Networks and Deep Learning course taught by Professor Iqbal Mansouri at Shiraz University.
Deep Learning With Python | Deep Learning And Neural Networks | Deep Learning — Simplilearn
This presentation about Deep Learning with Python will help you understand what deep learning is, applications of deep learning, what a neural network is, biological versus artificial neural networks, introduction to TensorFlow, activation functions, cost functions, how neural networks work, and what gradient descent is. Deep learning is a machine learning technique based on neural networks. We will also look into how neural networks can help a machine mimic human behavior. We'll implement a neural network manually, and finally, we'll code a neural network in Python using TensorFlow.
Below topics are explained in this Deep Learning with Python presentation:
1. What is Deep Learning
2. Biological versus Artificial Intelligence
3. What is a Neural Network
4. Activation function
5. Cost function
6. How do Neural Networks work
7. How do Neural Networks learn
8. Implementing the Neural Network
9. Gradient descent
10. Deep Learning platforms
11. Introduction to TensorFlow
12. Implementation in TensorFlow
You can gain in-depth knowledge of Deep Learning by taking our Deep Learning certification training course. With Simplilearn’s Deep Learning course, you will prepare for a career as a Deep Learning engineer as you master concepts and techniques including supervised and unsupervised learning, mathematical and heuristic aspects, and hands-on modeling to develop algorithms. Those who complete the course will be able to:
1. Understand the concepts of TensorFlow, its main functions, operations, and the execution pipeline
2. Implement deep learning algorithms, understand neural networks and traverse the layers of data abstraction which will empower you to understand data like never before
3. Master and comprehend advanced topics such as convolutional neural networks, recurrent neural networks, training deep networks and high-level interfaces
4. Build deep learning models in TensorFlow and interpret the results
5. Understand the language and fundamental concepts of artificial neural networks
6. Troubleshoot and improve deep learning models
7. Build your own deep learning project
8. Differentiate between machine learning, deep learning, and artificial intelligence
There is booming demand for skilled deep learning engineers across a wide range of industries, making this deep learning course with TensorFlow training well-suited for professionals at the intermediate to advanced level of experience. We recommend this deep learning online course particularly for the following professionals:
1. Software engineers
2. Data scientists
3. Data analysts
4. Statisticians with an interest in deep learning
Learn more at https://www.simplilearn.com/deep-learning-course-with-tensorflow-training
An artificial neural network (ANN) is a computational model based on the structure and functions of biological neural networks. It operates on real-valued, discrete-valued, and vector-valued inputs.
Deep Learning Tutorial | Deep Learning Tutorial For Beginners | What Is Deep Learning — Simplilearn
This presentation about Deep Learning is designed for beginners who want to learn Deep Learning from scratch. We will look at where Deep Learning is applied and what exactly this term means. We'll see how Deep Learning, Machine Learning, and AI are different and why Deep Learning even came into the picture. We will then proceed to look at Neural Networks, which are the core of Deep Learning. Before we move into the working of Neural Networks, we'll cover activation and cost functions. The video will also introduce you to the most popular Deep Learning platforms. We wrap it up with a demo in TensorFlow to predict if a person receives a salary above or below 50k. Now, let us get started and understand Deep Learning in detail.
Below topics are explained in this Deep Learning presentation:
1. Applications of Deep Learning
2. What is Deep Learning
3. Why is Deep Learning important
4. What are Neural Networks
5. Activation function
6. Cost function
7. How do Neural Networks work
8. Deep Learning platforms
9. Introduction to TensorFlow
10. Use case implementation using TensorFlow
Why Deep Learning?
TensorFlow is one of the most popular software platforms used for Deep Learning and contains powerful tools to help you build and implement artificial Neural Networks.
Advancements in Deep Learning are being seen in smartphone applications, creating efficiencies in the power grid, driving advancements in healthcare, improving agricultural yields, and helping us find solutions to climate change. With this TensorFlow course, you’ll build expertise in Deep Learning models, learn to operate TensorFlow to manage Neural Networks, and interpret the results. According to payscale.com, the median salary for engineers with Deep Learning skills tops $120,000 per year.
You can gain in-depth knowledge of Deep Learning by taking our Deep Learning certification training course. With Simplilearn’s Deep Learning course, you will prepare for a career as a Deep Learning engineer as you master concepts and techniques including supervised and unsupervised learning, mathematical and heuristic aspects, and hands-on modeling to develop algorithms. Those who complete the course will be able to:
1. Understand the concepts of TensorFlow, its main functions, operations and the execution pipeline
2. Implement Deep Learning algorithms, understand Neural Networks and traverse the layers of data abstraction which will empower you to understand data like never before
3. Master and comprehend advanced topics such as convolutional Neural Networks, Recurrent Neural Networks, training deep networks and high-level interfaces
4. Build Deep Learning models in TensorFlow and interpret the results
5. Understand the language and fundamental concepts of Artificial Neural Networks
6. Troubleshoot and improve Deep Learning models
Learn more at https://www.simplilearn.com/deep-learning-course-with-tensorflow-training
Comparative study of ANNs and BNNs and mathematical modeling of a neuron — Saransh Choudhary
Comparative study of Artificial and Biological Neural Networks with respect to structure, functionalities, learning methods and information transmission method across fundamental units. Also includes mathematical modelling of an artificial neuron.
Deep Learning Interview Questions And Answers | AI & Deep Learning Interview — Simplilearn
This Deep Learning interview questions and answers presentation will help you prepare for Deep Learning interviews. This presentation is ideal for both beginners as well as professionals who are appearing for Deep Learning, Machine Learning or Data Science interviews. Learn what are the most important Deep Learning interview questions and answers and know what will set you apart in the interview process.
Some of the important Deep Learning interview questions are listed below:
1. What is Deep Learning?
2. What is a Neural Network?
3. What is a Multilayer Perceptron (MLP)?
4. What is Data Normalization and why do we need it?
5. What is a Boltzmann Machine?
6. What is the role of Activation Functions in neural network?
7. What is a cost function?
8. What is Gradient Descent?
9. What do you understand by Backpropagation?
10. What is the difference between Feedforward Neural Network and Recurrent Neural Network?
11. What are some applications of Recurrent Neural Network?
12. What are Softmax and ReLU functions?
13. What are hyperparameters?
14. What will happen if learning rate is set too low or too high?
15. What is Dropout and Batch Normalization?
16. What is the difference between Batch Gradient Descent and Stochastic Gradient Descent?
17. Explain Overfitting and Underfitting and how to combat them.
18. How are weights initialized in a network?
19. What are the different layers in CNN?
20. What is Pooling in CNN and how does it work?
Simplilearn’s Deep Learning course will transform you into an expert in deep learning techniques using TensorFlow, the open-source software library designed to conduct machine learning & deep neural network research. With our deep learning course, you’ll master deep learning and TensorFlow concepts, learn to implement algorithms, build artificial neural networks and traverse layers of data abstraction to understand the power of data and prepare you for your new role as deep learning scientist.
Why Deep Learning?
TensorFlow is one of the most popular software platforms used for deep learning and contains powerful tools to help you build and implement artificial neural networks.
Advancements in deep learning are being seen in smartphone applications, creating efficiencies in the power grid, driving advancements in healthcare, improving agricultural yields, and helping us find solutions to climate change.
There is booming demand for skilled deep learning engineers across a wide range of industries, making this deep learning course with TensorFlow training well-suited for professionals at the intermediate to advanced level of experience. We recommend this deep learning online course particularly for the following professionals:
1. Software engineers
2. Data scientists
3. Data analysts
4. Statisticians with an interest in deep learning
Learn more at: https://www.simplilearn.com
Part 1 of the Deep Learning Fundamentals Series, this session discusses the use cases and scenarios surrounding Deep Learning and AI; reviews the fundamentals of artificial neural networks (ANNs) and perceptrons; discuss the basics around optimization beginning with the cost function, gradient descent, and backpropagation; and activation functions (including Sigmoid, TanH, and ReLU). The demos included in these slides are running on Keras with TensorFlow backend on Databricks.
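The three activation functions named in this session (Sigmoid, TanH, and ReLU) can be written down in a few lines. This is a minimal stdlib-only sketch, not taken from the session's Keras demos:

```python
import math

def sigmoid(x):
    """Squashes any real input into the range (0, 1)."""
    return 1.0 / (1.0 + math.exp(-x))

def tanh(x):
    """Squashes any real input into the range (-1, 1)."""
    return math.tanh(x)

def relu(x):
    """Passes positive inputs through unchanged, zeroes out negatives."""
    return max(0.0, x)
```

All three are zero-centered or near-zero at the origin: sigmoid(0) is 0.5, tanh(0) is 0, and relu(0) is 0.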
Basic definitions, terminology, and the working of ANNs are explained. This PPT also shows how an ANN can be implemented in MATLAB. The material explains the feed-forward backpropagation algorithm in detail.
This PPT summarizes the entire content in brief. My book on ANN, titled "SOFT COMPUTING" (Watson Publication), can be referred to alongside it by my classmates.
Invited talk at Tsinghua University on "Applications of Deep Neural Network". As the technical lead of the deep learning task force at NIO USA Inc., I was invited to give this colloquium talk on general applications of deep neural networks.
Neural Network and Artificial Intelligence.
WHAT IS A NEURAL NETWORK?
The computation is based on the interaction of a plurality of processing elements, called neurons, inspired by the biological nervous system.
It is a powerful technique for solving real-world problems.
A neural network is composed of a number of nodes, or units, connected by links. Each link has a numeric weight associated with it.
Weights are the primary means of long-term storage in neural networks, and learning usually takes place by updating the weights.
Artificial neurons are the constitutive units of an artificial neural network.
WHY USE NEURAL NETWORKS?
They have the ability to learn from experience.
They can deal with incomplete information.
They can produce results for inputs they have not been explicitly taught to handle.
They can extract useful patterns from given data, e.g. pattern recognition.
Biological Neurons
Four parts of a typical nerve cell:
• DENDRITES: accept the inputs
• SOMA: processes the inputs
• AXON: turns the processed inputs into outputs
• SYNAPSES: the electrochemical contact between neurons
ARTIFICIAL NEURON MODEL
Inputs to the network are represented by the mathematical symbols x1, …, xn.
Each of these inputs is multiplied by a connection weight w1, …, wn:
sum = w1·x1 + … + wn·xn
These products are simply summed and fed through the transfer function f( ) to generate the output.
NEURON MODEL
A neuron consists of:
• Inputs (synapses): the input signal
• Weights (dendrites): determine the importance of each incoming value
• Output (axon): output to other neurons or of the NN
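The weighted-sum-plus-transfer-function model above can be sketched in a few lines of Python. The slide leaves f( ) unspecified, so a sigmoid is assumed here; the bias term is also an addition not shown on the slide:

```python
import math

def neuron(inputs, weights, bias=0.0):
    """Artificial neuron: weighted sum of inputs (plus bias),
    passed through a transfer function (sigmoid assumed here)."""
    s = sum(w * x for w, x in zip(weights, inputs)) + bias
    return 1.0 / (1.0 + math.exp(-s))
```

With zero accumulated input the sigmoid sits at its midpoint, so `neuron([0.0, 0.0], [1.0, 1.0])` returns 0.5; any real-valued input is squashed into (0, 1).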
Lecture conducted by me on Deep Learning concepts and applications. Discussed FNNs, CNNs, simple RNNs, and LSTM networks in detail. Finally, conducted a hands-on session on deep learning using Keras and scikit-learn.
Slides taught in the pattern recognition course of Professor Zohreh Azimifar at Shiraz University.
The slides are owned by the University of Texas.
Slides of the pattern recognition course of Professor Zohreh Azimifar at Shiraz University.
Ch 1-1 introduction
1. Neural Networks & Deep Learning
CSE & IT Department, ECE School, Shiraz University
ONLY GOD
Introduction to Neural Networks
2021
2. Biological Inspiration
• Brain is a highly complex, nonlinear, and parallel computer
• Simple processing units called neurons
• Cycle times in milliseconds
• Massive number of neurons (~10^11 in humans)
• Massive interconnection (~60×10^12 connections)
• Brain can perform certain tasks (pattern recognition, perception) many times faster than the fastest digital computers
3. Natural Intelligence
Structure of the human brain:
• Frontal lobe: reward, attention, planning, short-term memory, motivation
• Parietal lobe: sensory inputs from skin (touch, temperature, pain)
• Occipital lobe: visual processing center
• Temporal lobe: visual memories, language comprehension, emotion
5. Human Nervous System Operation
• May be broken down into three stages:
• Receptors
• Collect information from the environment (e.g. photons on the retina)
• Convert stimuli from the human body or environment into electrical pulses that convey information to the brain
6. Organization of Human Nervous System
• Neural network: represents the brain, which receives information and makes appropriate decisions
• Effectors: convert electrical impulses generated by the neural network into responses as system outputs; generate interactions with the environment (e.g. activate muscles)
• Arrows: represent the flow of information/activation (feed-forward and feed-back)
8. Biological Neuron
• Dendrites: receptive zones that receive activation from other neurons
• Cell body (soma): processes incoming activations and converts them into output activations
• Axons: transmission lines that send activation to other neurons
• Synapses: allow weighted transmission of signals (using neurotransmitters) between axons and dendrites
9. Brain: Amazingly Efficient Computer
~10^11 neurons
~10^4 synapses per neuron
~10 spikes go through each synapse per second
~10^16 operations per second
~25 Watts (very efficient)
~1.4 kg, 1.7 liters
~2500 cm^2 (unfolded cortex)
10. Action of Biological Neuron
• The majority of neurons encode their outputs/activations as a series of brief electrical pulses (spikes or action potentials)
11. Firing of Neuron
• Behavior is binary (a neuron either fires or it does not)
• A neuron does not fire if its accumulated activity stays below a threshold
• If the activity is above the threshold, the neuron fires (produces a spike)
• The firing frequency increases with accumulated activity until the maximum firing frequency is reached
• The firing frequency is limited by a refractory period of about 1-10 ms
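The firing behavior described above can be sketched as a simple rate model: silent below threshold, firing rate growing with accumulated activity, and saturating at a maximum set by the refractory period. The function name, gain, and numeric values are illustrative assumptions, not from the slides.

```python
# A sketch (illustrative values) of the firing behavior above: no spikes
# below threshold, rate rising with supra-threshold activity, and a cap on
# the rate imposed by the ~1-10 ms refractory period.

def firing_rate(activity, threshold=1.0, gain=100.0, refractory_s=0.002):
    """Spikes per second for a given accumulated activity level."""
    if activity <= threshold:
        return 0.0                        # below threshold: neuron stays silent
    max_rate = 1.0 / refractory_s         # refractory period caps the rate (500 Hz here)
    rate = gain * (activity - threshold)  # rate grows with accumulated activity
    return min(rate, max_rate)

print(firing_rate(0.5))   # 0.0   -> silent below threshold
print(firing_rate(2.0))   # 100.0 -> fires, rate proportional to activity
print(firing_rate(50.0))  # 500.0 -> saturated at the maximum firing rate
```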
12. Man vs. Machine (Hardware)

Feature                 | Human brain                  | Von Neumann computer
------------------------|------------------------------|------------------------------------------------
# elements              | 10^10 - 10^12 simple neurons | 10^7 - 10^8 transistors, 10^4 complex processors
# connections/element   | massive                      | little or no
Switching frequency     | slow (10^3 Hz)               | fast (10^9 - 10^10 Hz)
Energy/operation/sec.   | 10^-16 Joule                 | 10^-6 Joule
Power consumption       | 10 Watt                      | 100 - 500 Watt
Structure               | dynamic                      | static
Reliability of elements | low                          | reasonable
Reliability of system   | high                         | reasonable
Decision speed/power    | high                         | reasonable
13. Man vs. Machine (Software)

Feature               | Human brain          | Digital computer
----------------------|----------------------|---------------------
Data representation   | analog               | digital
Memory localization   | distributed          | localized
Control               | distributed          | localized
Processing            | parallel             | sequential
Skill acquisition     | adaptive (learnable) | pre-programmed
Fault/error tolerance | fault tolerant       | intolerant to errors
Structure             | trained              | designed
Activation            | nonlinear            | linear
14. What are NNs?
• Artificial neurons are crude approximations of the neurons found in brains
  • Physical devices
  • Mathematical constructs
• Artificial NNs (ANNs) are networks of artificial neurons, as crude approximations to parts of biological brains
15. Conceptual Definition of NNs
• An ANN
  • Is a machine designed to model the way in which the brain performs a particular task or function of interest
  • Is either implemented in hardware or simulated in software
  • Mimics the brain or nervous system in two senses:
    • Structure (simple processing units, massive interconnection, …)
    • Functionality (learning, adaptability, fault tolerance, …)
16. Pragmatic Definition of NNs
• An ANN
  • Is a massively connected parallel computational system
  • Made up of many simple processing units
  • Has a natural propensity for storing experiential knowledge
  • Can perform tasks analogously to biological brains
  • Resembles the brain in two respects:
    • Knowledge is acquired by the network from its environment through a learning process
    • Inter-neuron connection strengths are used to store the acquired knowledge
17. Why NNs?
• They are extremely powerful computational devices
• Massive parallelism makes them very efficient
• They can learn from training data and generalize to new situations
• They are particularly fault tolerant (like the "graceful degradation" seen in biological systems)
• They are robust and noise tolerant
• They can do anything a symbolic/logic system can do, and more
19. Characteristics of a Neuron
• A neuron can receive many inputs
• Inputs may be modified by synaptic weights at the receiving dendrites
• A neuron sums its weighted inputs
• An activation function limits the amplitude of the output (e.g. to {0, 1})
• A neuron can transmit an output signal
• The output can go to many other neurons
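The characteristics listed above can be sketched as a single artificial neuron: a weighted sum of inputs passed through a threshold activation that limits the output to {0, 1}. The function name, weights, and threshold are made-up illustrative values, not from the slides.

```python
# Minimal sketch of the neuron described above: many inputs, each scaled by
# a synaptic weight, summed, then passed through a threshold activation.

def neuron(inputs, weights, threshold=0.0):
    """Return 1 if the weighted sum of inputs reaches the threshold, else 0."""
    total = sum(x * w for x, w in zip(inputs, weights))  # sum of weighted inputs
    return 1 if total >= threshold else 0                # threshold activation

# Positive weights are excitatory, negative weights inhibitory:
print(neuron([1, 1, 1], [0.5, 0.5, -0.8], threshold=0.1))  # 0.2 >= 0.1 -> 1
print(neuron([1, 0, 1], [0.5, 0.5, -0.8], threshold=0.1))  # -0.3 < 0.1 -> 0
```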
22. ANN Features
• Neurons act nonlinearly
• Information processing is local
• Memory is distributed
• The dendrite weights learn through experience
• The weights may be inhibitory or excitatory
• Neurons can generalize to novel input stimuli
• Neurons are fault tolerant and can sustain damage
23. ANN Categories
• Direction of information (signal) flow
  • Feed-forward network (no feed-back connections)
  • Recurrent network (with at least one feed-back connection)
• Number of layers
  • Single-layer network
  • Single hidden-layer network
  • Multilayer network (shallow/deep network)
• Connectivity
  • Fully-connected network
  • Partially-connected network
24. ANN Categories
• Activation function
  • Threshold (binary, bipolar)
  • Linear (identity)
  • Nonlinear (sigmoid, radial-basis)
• Learning methodology
  • Supervised learning
  • Unsupervised learning
  • Reinforcement learning
• Training method
  • Static
  • Dynamic
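The activation-function families listed above can be sketched in a few lines. The function names are illustrative; the formulas are the standard ones (binary and bipolar thresholds, identity, logistic sigmoid).

```python
# Sketch of the activation-function families named above (illustrative names).
import math

def threshold(x):          # binary threshold: outputs in {0, 1}
    return 1.0 if x >= 0 else 0.0

def bipolar_threshold(x):  # bipolar threshold: outputs in {-1, 1}
    return 1.0 if x >= 0 else -1.0

def linear(x):             # identity: passes the net input through unchanged
    return x

def sigmoid(x):            # nonlinear: squashes the net input into (0, 1)
    return 1.0 / (1.0 + math.exp(-x))

print(threshold(-0.5), bipolar_threshold(-0.5), linear(-0.5), sigmoid(0.0))
# 0.0 -1.0 -0.5 0.5
```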
25. NN Design Decisions
• Architecture: pattern of connections between neurons
  • No. of layers, no. of neurons in each layer, no. of links
  • Connectivity
• Learning algorithm: method of determining the connection weights
  • Supervised, self-organizing, competitive
• Activation function: mapping of neurons' behavior
  • Threshold, linear, nonlinear
29. Knowledge Representation in ANN
• What is knowledge?
  • Information or models used to interpret, predict, and appropriately respond to the outside world (environment)
• Quality of knowledge representation generally translates into quality of solution
  • Better representation means better solution
• Knowledge representation in NNs is not well understood
  • There is little theory that relates a given weight to a particular piece of information
• Knowledge is encoded in the free parameters of the NN
  • Weights and thresholds
30. Knowledge Representation Rules
• Rule 1:
  • Similar inputs from similar classes should usually produce similar representations inside the network, and should therefore be classified as belonging to the same category
• Rule 2:
  • Inputs to be characterized as separate classes should be given widely different representations in the network
• Rule 3:
  • If a particular feature is important, then there should be a large number of neurons involved in the representation of that item in the network
• Rule 4:
  • Prior knowledge and invariances should be built into the design of the network, thus simplifying learning
31. Knowledge Representation Rules
• Advantage of building in prior knowledge
  • Specialized structure
• Benefits of a specialized structure
  • Biologically plausible, less complication, fewer free parameters, faster training, fewer examples needed, better generalization
• How to build in prior knowledge
  • No hard and fast rules
  • In general, use domain knowledge to reduce the complexity of the NN based on its performance characteristics
• Built-in invariances
  • Invariance?
    • Fault tolerance
    • Immunity to transformations
  • Invariance by structure, training, or feature space
32. NN Learning
• Learning:
  • To acquire and maintain knowledge of interest
  • Knowledge of the environment that will enable the NN to achieve its goals
    • Prior information
    • Current information
• Knowledge can be built into NNs from input-output examples via learning, using training algorithms
• The most powerful property of NNs: the ability to learn and generalize from a set of training data
33. NN Learning
• NNs adapt the weights of the connections between neurons so that the final output activations are correct
• Three broad types of learning:
  • Supervised learning (learning with a teacher)
  • Reinforcement learning (learning with environment feedback)
  • Unsupervised learning (learning with no help)
• Most human/animal learning is unsupervised
• If intelligence were a cherry ice-cream cake:
  • unsupervised learning would be the cake
  • supervised learning would be the icing on the cake
  • reinforcement learning would be the cherry on the cake
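As a minimal sketch of supervised learning (learning with a teacher), the classic perceptron rule nudges each connection weight whenever the teacher's target disagrees with the network's output. The AND dataset, learning rate, and epoch count here are illustrative choices, not from the slides.

```python
# Sketch of supervised learning via the perceptron rule: the teacher supplies
# target outputs, and the error (target - output) drives weight updates.
# Integer weights and lr=1 keep the arithmetic exact.

def train_perceptron(data, lr=1, epochs=10):
    w, b = [0, 0], 0                      # knowledge is stored in weights and bias
    for _ in range(epochs):
        for (x1, x2), target in data:
            y = 1 if w[0] * x1 + w[1] * x2 + b >= 0 else 0
            err = target - y              # teacher's correction signal
            w[0] += lr * err * x1         # strengthen/weaken each connection
            w[1] += lr * err * x2
            b += lr * err
    return w, b

AND = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = train_perceptron(AND)
preds = [1 if w[0] * x1 + w[1] * x2 + b >= 0 else 0 for (x1, x2), _ in AND]
print(preds)  # [0, 0, 0, 1] — the network has learned AND from labelled examples
```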
34. An Example
• Using an NN for signature verification
  • Prior knowledge
    • Architecture of the NN
  • Current knowledge
    • Input-output examples, used for NN training
  • Learning
    • Modification of free parameters (weights, thresholds, …)
  • Generalization
    • Using the trained NN to predict outputs for unseen inputs
35. Advantages of ANNs
• Efficiency
  • Inherently massively parallel
• Robustness
  • Can deal with incomplete and/or noisy data
• Fault tolerance
  • Still works when part of the net fails
• User friendly
  • Learning instead of programming
36. Drawbacks of ANNs
• Difficult to design
  – No clear design rules for arbitrary applications
• Hard or impossible to train
  – When training data are rare
• Difficult to assess internal operation (black box)
  – Difficult to find out what tasks are performed by different parts of the net
• Unpredictable
  – Difficult to estimate future network performance based on current behavior
37. Real-world NN applications
• Financial modeling (predicting stocks, currency exchange rates)
• Time series prediction (climate, weather, airline marketing)
• Computer games (intelligent agents, backgammon)
• Control systems (autonomous robots, microwave controllers)
• Pattern recognition (speech recognition, hand-writing recognition)
• Data analysis (data compression, data mining)
• Noise reduction (function approximation, ECG noise reduction)
• Bioinformatics (protein secondary structure, DNA sequencing)
39. History of NN
• 1943 McCulloch-Pitts neuron model
• 1949 Hebbian learning rule
• 1958 Single-layer network, Perceptron
• 1960 Adaline
• 1969 Limitations of Perceptrons
• 1982 Kohonen nets, associative memory, Hopfield net
• 1985 ART
40. History of NN
• 1986 Back-propagation learning algorithm for multi-layer Perceptron
• 1990s Radial basis function networks
• 2000 Ensembles of NNs, cascaded networks
• 2006 Deep Belief Networks
• 2010~ Convolutional networks, autoencoders, LSTMs
41. Some Useful Notations
x_i, y_j : outputs of units X_i and Y_j
w_ij : weight on the connection between X_i and Y_j
b_j, θ_j : bias and threshold of neuron Y_j
W = [w_ij], i = 1, …, n, j = 1, …, m : the n × m weight matrix
w_.j = (w_1j, …, w_nj)^T : vector of weights into Y_j
y_in_j : net input to neuron Y_j

y_in_j = b_j + Σ_{i=1}^{n} x_i w_ij = b_j + x · w_.j  ⇒  y_j = f(y_in_j)
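The net-input formula above can be checked numerically. A minimal sketch with made-up values: n = 3 input units and m = 2 output units, so W is a 3 × 2 matrix and each output unit Y_j computes y_in_j = b_j + Σ_i x_i w_ij followed by y_j = f(y_in_j).

```python
# Direct computation of the net input y_in_j and output y_j for each unit Y_j,
# following the notation above. All numeric values are illustrative.

x = [1.0, 0.5, -1.0]                  # outputs x_i of the input units X_i
W = [[0.2, -0.4],                     # w_ij: row i holds the weights from X_i
     [0.6,  0.1],
     [0.3,  0.5]]
b = [0.1, -0.2]                       # biases b_j of the output units Y_j

def f(v):                             # activation function: binary threshold
    return 1 if v >= 0 else 0

n, m = len(x), len(b)
y_in = [b[j] + sum(x[i] * W[i][j] for i in range(n)) for j in range(m)]
y = [f(v) for v in y_in]
print(y_in, y)                        # net inputs (~0.3 and ~-1.05), outputs [1, 0]
```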
42. "If the brain were so simple that we could understand it, then we would be so simple that we couldn't understand it."
— Lyall Watson