Machine Learning
1. Machine Learning
Deep Learning
Inas A. Yassine
Systems and Biomedical Engineering Department,
Faculty of Engineering - Cairo University
iyassine@eng.cu.edu.eg
3. Deep Learning
§ Biology aspect
§ Each neuron fires in response to a particular edge direction
§ New wiring experiment
§ BrainPort
§ Automate what we see as a face…
4. Self-taught learning
[Figure: unlabeled data → sparse coding, LCC, etc. → learned features f1, f2, …, fk → labeled training/test sets (Car vs. Motorcycle) re-represented by activations a1, a2, …, ak]
§ Use the learned f1, f2, …, fk to represent the training/test sets.
§ If the labeled training set is small, this can give a huge performance boost.
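The pipeline on this slide can be sketched as follows. This is a hedged illustration, not the slides' code: the names `learned_bases` and `encode` are hypothetical, and a simple inner-product encoding stands in for the real sparse-coding inference step.

```python
import numpy as np

rng = np.random.default_rng(0)

# Step 1 (done elsewhere, e.g. by sparse coding or LCC):
# k learned bases f1, ..., fk, here random placeholders.
k, n = 8, 16
learned_bases = rng.standard_normal((k, n))      # rows are f1, ..., fk

def encode(x, bases):
    """Represent x by its activations a1..ak on the learned features.
    (Illustrative: a real system would solve for sparse codes.)"""
    return bases @ x

# Step 2: re-represent a (small) labeled example with a1..ak,
# then train any ordinary classifier on these features.
x_labeled = rng.standard_normal(n)               # one labeled example
a = encode(x_labeled, learned_bases)             # features a1..ak
print(a.shape)                                   # (8,)
```

The point of the sketch is the shape change: the classifier never sees raw pixels, only the k learned activations.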
9. Deep learning with autoencoders
§ Logistic regression
§ Neural network
§ Sparse autoencoder
§ Deep autoencoder
10. Logistic regression
Logistic regression has a learned parameter vector θ. On input x, it outputs:

  h_θ(x) = 1 / (1 + exp(−θᵀx)),

where θᵀx is a weighted sum of the inputs.
[Figure: a logistic regression unit drawn with inputs x1, x2, x3 and a +1 bias feeding a single sigmoid output]
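A single logistic regression unit can be sketched in a few lines, assuming the standard sigmoid form h_θ(x) = 1 / (1 + exp(−θᵀx)); the weight values here are made up for illustration.

```python
import numpy as np

def logistic_unit(x, theta):
    """One logistic unit: inputs x1..x3 plus the +1 bias term,
    weighted by theta, squashed by the sigmoid."""
    x = np.append(x, 1.0)                    # append the +1 intercept input
    return 1.0 / (1.0 + np.exp(-theta @ x))

theta = np.array([0.5, -0.25, 0.1, 0.0])     # weights for x1, x2, x3, +1
p = logistic_unit(np.array([1.0, 2.0, 3.0]), theta)
print(round(p, 4))                           # 0.5744 (sigmoid of 0.3)
```

The output is always in (0, 1), which is what lets the same unit double as a probabilistic classifier and as a neuron in the networks that follow.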
11. Neural Network
String many logistic units together. Example: a 3-layer network:
[Figure: inputs x1, x2, x3 and a +1 bias (Layer 1) feed hidden units a1, a2, a3 and a +1 bias (Layer 2), which feed a single output unit (Layer 3)]
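The 3-layer network above amounts to two stacked logistic layers. A minimal forward-pass sketch, with made-up random weights (the slide does not specify any):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def forward(x, W1, W2):
    x = np.append(x, 1.0)        # Layer 1: inputs x1..x3 plus +1 bias
    a = sigmoid(W1 @ x)          # Layer 2: hidden activations a1..a3
    a = np.append(a, 1.0)        # hidden +1 bias
    return sigmoid(W2 @ a)       # Layer 3: single output h(x)

rng = np.random.default_rng(1)
W1 = rng.standard_normal((3, 4))     # hidden weights: 3 units x 4 inputs
W2 = rng.standard_normal((1, 4))     # output weights: 1 unit x 4 hidden
h = forward(np.array([0.2, -0.5, 1.0]), W1, W2)
print(h.shape)                       # (1,)
```

Each layer is just the logistic unit from the previous slide applied in parallel, then composed.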
13. Training a neural network
Given a training set (x1, y1), (x2, y2), (x3, y3), …
Adjust the parameters θ (for every node) to make:

  h_θ(x^(i)) ≈ y^(i)

(Use gradient descent: the "backpropagation" algorithm. Susceptible to local optima.)
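The training loop can be sketched as gradient descent on the squared error Σᵢ ||h_θ(x^(i)) − y^(i)||². For clarity this sketch takes the gradient by finite differences; backpropagation computes the same gradient analytically and far more efficiently. The toy data and learning rate are assumptions.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def loss(theta, X, y):
    h = sigmoid(X @ theta)                   # predictions h_theta(x_i)
    return np.sum((h - y) ** 2)              # squared-error objective

def grad(theta, X, y, eps=1e-6):
    """Finite-difference gradient (stand-in for backpropagation)."""
    g = np.zeros_like(theta)
    for j in range(theta.size):
        e = np.zeros_like(theta); e[j] = eps
        g[j] = (loss(theta + e, X, y) - loss(theta - e, X, y)) / (2 * eps)
    return g

rng = np.random.default_rng(0)
X = np.hstack([rng.standard_normal((20, 2)), np.ones((20, 1))])  # +1 bias col
y = (X[:, 0] > 0).astype(float)              # a toy labeling
theta = np.zeros(3)
for _ in range(200):                         # plain gradient descent
    theta -= 0.05 * grad(theta, X, y)
print(loss(theta, X, y))                     # well below the initial 5.0
```

The "susceptible to local optima" caveat on the slide is why initialization and pre-training (next slides) matter for deeper networks.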
14. Unsupervised feature learning
[Figure: an autoencoder with inputs x1–x6 and a +1 bias (Layer 1), hidden units a1, a2, a3 and a +1 bias (Layer 2), and outputs x1–x6 (Layer 3)]
The network is trained to output its input (i.e., to learn the identity function), minimizing the difference between the data and the network's output.
This has a trivial solution unless we:
- constrain the number of units in Layer 2 (learn a compressed representation), or
- constrain Layer 2 to be sparse.
15. Training a sparse autoencoder
Given an unlabeled training set x1, x2, …, do unsupervised feature learning with an ANN: minimize a reconstruction-error term

  Σᵢ ||x^(i) − x̂^(i)||²

plus a sparsity penalty on the hidden activations a1, a2, a3.
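The sparse-autoencoder objective can be sketched as a reconstruction-error term plus a sparsity penalty on the hidden activations. The tied-weight encoder/decoder, the L1 form of the penalty, and the λ value are illustrative assumptions; the slide does not pin these down.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def sparse_ae_loss(W, x, lam=0.1):
    a = sigmoid(W @ x)                   # hidden activations a1..a3
    x_hat = W.T @ a                      # reconstruction (tied weights, assumed)
    recon = np.sum((x - x_hat) ** 2)     # reconstruction-error term
    sparsity = lam * np.sum(np.abs(a))   # penalty pushing activations to zero
    return recon + sparsity

rng = np.random.default_rng(2)
W = 0.1 * rng.standard_normal((3, 6))    # 6 inputs -> 3 hidden units
x = rng.standard_normal(6)
print(sparse_ae_loss(W, x))              # non-negative scalar loss
```

Minimizing this over W is what forces the hidden layer to discover sparse features of the unlabeled data rather than copying the input through.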
21. First stage of visual processing in the brain: V1
[Figure: schematic of a simple cell next to an actual simple cell's receptive field, "Gabor functions."]
The first stage of visual processing in the brain (V1) does "edge detection."
22. Learning an image representation
Sparse coding (Olshausen & Field, 1996)
Input: images x(1), x(2), …, x(m) (each in R^{n×n})
Learn: a dictionary of bases f1, f2, …, fk (also in R^{n×n}), so that each input x can be approximately decomposed as

  x ≈ Σⱼ aⱼ fⱼ

s.t. the aⱼ's are mostly zero ("sparse").
Use this to represent a 14×14 image patch succinctly, e.g. as [a7 = 0.8, a36 = 0.3, a41 = 0.5]; i.e., the coefficients indicate which "basic edges" make up the image.
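The decomposition x ≈ Σⱼ aⱼ fⱼ for a 14×14 patch can be sketched directly; only three coefficients are nonzero (the slide's a7 = 0.8, a36 = 0.3, a41 = 0.5), so the patch is described by which "basic edges" it contains. The dictionary here is random, purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(3)
k = 64
bases = rng.standard_normal((k, 14, 14))     # dictionary f1..fk (placeholder)

a = np.zeros(k)                              # mostly-zero coefficient vector
a[6], a[35], a[40] = 0.8, 0.3, 0.5           # a7 = 0.8, a36 = 0.3, a41 = 0.5

x = np.tensordot(a, bases, axes=1)           # x = sum_j a_j * f_j
print(x.shape, int(np.count_nonzero(a)))     # (14, 14) 3
```

The 196-pixel patch is summarized by 3 numbers plus the shared dictionary, which is the "succinct" representation the slide refers to.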
24. More examples
[Figure: two image patches decomposed into weighted sums of basis edges]
  ≈ 0.6·f15 + 0.8·f28 + 0.4·f37 → represent as [0, 0, …, 0, 0.6, 0, …, 0, 0.8, 0, …, 0, 0.4, …]
  ≈ 1.3·f5 + 0.9·f18 + 0.3·f29 → represent as [0, 0, …, 0, 1.3, 0, …, 0, 0.9, 0, …, 0, 0.3, …]
• The method hypothesizes that edge-like patches are the most "basic" elements of a scene, and represents an image in terms of the edges that appear in it.
• Use this to obtain a more compact, higher-level representation of the scene than raw pixels.
25. Sparse Learning
§ Input: images x(1), x(2), …, x(m) (each in R^{n×n})
§ Objective: a reconstruction-error term

  Σᵢ ||x^(i) − Σⱼ aⱼ^(i) fⱼ||²

plus a regularization objective that encourages sparsity, the L1 norm λ·Σⱼ |aⱼ|.
§ Why sparsity?
  • Activations should stay small: it takes too much energy for all neurons to fire
  • Different neurons should respond to different features
  • The L1 norm achieves both
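Putting the two terms together, the full sparse-learning objective, reconstruction error plus L1 regularization, can be sketched as below. The λ value and the matrix shapes are illustrative assumptions.

```python
import numpy as np

def objective(X, A, F, lam=0.1):
    """Sparse-learning objective:
       sum_i ||x_i - sum_j a_ij f_j||^2 + lam * sum_ij |a_ij|.
    X: (m, n) flattened inputs, A: (m, k) codes, F: (k, n) bases."""
    recon = np.sum((X - A @ F) ** 2)     # reconstruction-error term
    l1 = lam * np.sum(np.abs(A))         # L1 norm drives codes toward zero
    return recon + l1

rng = np.random.default_rng(4)
m, n, k = 10, 25, 12
X = rng.standard_normal((m, n))          # 10 flattened 5x5 patches
F = rng.standard_normal((k, n))          # 12 bases
A = np.zeros((m, k))                     # all-zero codes: no reconstruction
print(objective(X, A, F) == np.sum(X ** 2))  # True: pure data energy, no penalty
```

The trade-off is visible in the two terms: making the codes A more active lowers the reconstruction error but raises the L1 penalty, and λ sets where the balance lands.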