JST Part 5
Transcript

  • 1. Back-Propagation Algorithm
    • A training procedure that allows multi-layer feed-forward neural networks to be trained
    • An error-correction learning algorithm
      – Back-propagates the error from the output layer to the hidden layers
    • An example of a gradient-descent technique
      – The back-propagation process emerges directly from a derivation of the overall error gradient
    • Can theoretically perform “any” input-output mapping
    BPA Process
    • Compute the Δ values for the output units, using the observed error
    • Starting with the output layer, repeat the following for each layer in the network, until the earliest hidden layer is reached (a sketch follows this slide):
      – Propagate the Δ values back to the previous layer
      – Update the weights between the two layers
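A minimal NumPy sketch of this process for a single training example (logistic activations, no bias terms; the function and variable names below are illustrative assumptions, not notation taken from the slides):

```python
import numpy as np

def sigmoid(v):
    return 1.0 / (1.0 + np.exp(-v))

def backprop_step(x, target, weights, eta=0.1):
    """One back-propagation step over a feed-forward net given as a list of weight matrices."""
    # Forward pass: keep the activation vector of every layer
    activations = [x]
    for W in weights:
        activations.append(sigmoid(W @ activations[-1]))

    # Output layer: turn the observed error into the "modified error" (delta)
    out = activations[-1]
    delta = (target - out) * out * (1 - out)

    # Walk back from the output layer toward the earliest hidden layer
    for layer in range(len(weights) - 1, -1, -1):
        a_prev = activations[layer]
        # Propagate the delta values back to the previous layer (using the old weights)
        delta_prev = (weights[layer].T @ delta) * a_prev * (1 - a_prev)
        # Update the weights between the two layers
        weights[layer] += eta * np.outer(delta, a_prev)
        delta = delta_prev
    return weights
```

The weights are stored as a list of matrices, one per pair of adjacent layers, so the same backward loop covers any number of hidden layers.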
  • 2. Error Back-Propagation
    • Sum-of-squares error: E = ½ · Σ_i (y_i − a_i)²
    [Figure: three-layer network — input units (activations a_k), hidden units (a_j), output units (a_i); weights w_{k,j} connect input to hidden units, w_{j,i} connect hidden to output units]
    Error Back-Propagation
    • Output layer
      – With multiple output units, Err_i is the i-th component of the error vector
      – Modified error: Δ_i = Err_i · g′(in_i)
    • Gradient in the output layer: ∂E/∂w_{j,i} = −a_j · Δ_i
    • Weight update in the output layer (a numeric sketch follows this slide):
        w_{j,i} ← w_{j,i} + η · a_j · Δ_i
      – η: learning rate
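A tiny scalar sketch of the output-layer update under these formulas; the concrete numbers are invented purely for illustration:

```python
# Output-layer update for one weight w_{j,i} (illustrative numbers)
eta = 0.5          # learning rate
a_j = 0.8          # activation of hidden unit j
a_i = 0.6          # activation of output unit i (logistic output)
y_i = 1.0          # target value for output unit i

err_i   = y_i - a_i                  # Err_i, i-th component of the error vector
delta_i = err_i * a_i * (1 - a_i)    # modified error: Err_i * g'(in_i), with g'(in_i) = a_i(1 - a_i)
w_ji_update = eta * a_j * delta_i    # weight change between hidden unit j and output unit i
print(delta_i, w_ji_update)          # 0.096 and 0.0384
```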
  • 3. Error Back-Propagation
    • Hidden layer
      – Hidden node j is “responsible” for some fraction of the error in each of the output nodes to which it connects
      – The error is divided according to the strength of the connection between the hidden node and the output node, and is propagated back to provide the Δ_j values for the hidden layer:
        Δ_j = g′(in_j) · Σ_i w_{j,i} · Δ_i
    Error Back-Propagation
    • Gradient in the hidden layer: ∂E/∂w_{k,j} = −a_k · Δ_j
    • Weight update in the hidden layer (a numeric sketch follows this slide):
        w_{k,j} ← w_{k,j} + η · a_k · Δ_j
      – η: learning rate
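A matching scalar sketch of the hidden-layer update, again with invented numbers; note how Δ_j gathers the output errors weighted by the connection strengths w_{j,i}:

```python
import numpy as np

# Hidden-layer update for one weight w_{k,j} (illustrative numbers)
eta   = 0.5
a_k   = 0.3                      # activation of input unit k
a_j   = 0.8                      # activation of hidden unit j
w_j   = np.array([0.2, -0.4])    # weights w_{j,i} from hidden unit j to the two output units
delta = np.array([0.096, 0.05])  # modified errors Delta_i of the output units

# Hidden unit j receives a share of each output error, weighted by the connection strength
delta_j = a_j * (1 - a_j) * np.dot(w_j, delta)   # g'(in_j) * sum_i w_{j,i} * Delta_i
w_kj_update = eta * a_k * delta_j                # weight change between input unit k and hidden unit j
print(delta_j, w_kj_update)
```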
  • 4. Activation Functions (logistic / sigmoid function)
    f_j(v_j(n)) = 1 / (1 + exp(−a·v_j(n))),  with a > 0 and −∞ < v_j(n) < ∞
    f′_j(v_j(n)) = a·exp(−a·v_j(n)) / [1 + exp(−a·v_j(n))]² = a·y_j(n)·[1 − y_j(n)]
    Activation Functions (hyperbolic tangent)
    f_j(v_j(n)) = a·tanh(b·v_j(n)),  with a, b > 0
    f′_j(v_j(n)) = a·b·sech²(b·v_j(n)) = a·b·[1 − tanh²(b·v_j(n))] = (b/a)·[a − y_j(n)]·[a + y_j(n)]
    (A code sketch of both functions follows this slide.)
    JST Parameters (1)
    • The JST parameters that are most important and most sensitive during training are: the number of neurons in the hidden layer, the learning rate, the number of iterations, and the error limit
    • There is no established formula for determining the optimal number of hidden-layer neurons
    • A formula for estimating the number of hidden-layer neurons is given in terms of:
      – Nh, the number of neurons in the hidden layer
      – Ni, the number of input-layer nodes (i.e. the number of inputs)
      – No, the number of neurons in the output layer
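A small Python sketch of both activation functions and their derivatives as reconstructed above; the default values of a and b are commonly cited choices, not values taken from the slides:

```python
import numpy as np

def logistic(v, a=1.0):
    """Logistic sigmoid: f(v) = 1 / (1 + exp(-a*v)), with a > 0."""
    return 1.0 / (1.0 + np.exp(-a * v))

def logistic_deriv(v, a=1.0):
    """f'(v) = a * y * (1 - y), expressed through the output y = f(v)."""
    y = logistic(v, a)
    return a * y * (1.0 - y)

def tanh_act(v, a=1.7159, b=2.0 / 3.0):
    """Hyperbolic tangent: f(v) = a * tanh(b*v), with a, b > 0."""
    return a * np.tanh(b * v)

def tanh_deriv(v, a=1.7159, b=2.0 / 3.0):
    """f'(v) = a*b*sech^2(b*v) = (b/a) * (a - y) * (a + y), with y = f(v)."""
    y = tanh_act(v, a, b)
    return (b / a) * (a - y) * (a + y)
```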
  • 5. JST Parameters (2)
    • The learning rate strongly influences the training process
      – Learning rate too large (e.g. 0.9): the MSE drops sharply in the early iterations, but may then oscillate or move up and down uncontrollably
      – Learning rate too small (e.g. 0.0001): the MSE decreases very slowly
    JST Parameters (3)
    • The number of iterations and the error limit are used as stopping conditions for training
    • If the error limit we define is too small, the network can overfit: it achieves high accuracy on the training set but very low accuracy on the test set
    • One way to avoid overfitting is to split the available data into three parts: a training set, a validation set, and a test set
    • During training, the training set and the validation set are used together (a sketch follows this slide)
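A minimal sketch of how a training set and a validation set might be used together to stop training before overfitting. The slides do not spell out the mechanics, so train_epoch, mse, and the patience parameter below are assumptions supplied by the caller:

```python
import copy

def train_with_early_stopping(net, train_set, val_set, train_epoch, mse,
                              max_epochs=1000, error_limit=1e-4, patience=10):
    """Stop when the iteration limit or error limit is reached, or when the
    validation error stops improving (a guard against overfitting)."""
    best_val, best_net, bad_epochs = float("inf"), copy.deepcopy(net), 0
    for epoch in range(max_epochs):
        train_epoch(net, train_set)        # caller-supplied: one back-propagation pass over the training set
        train_err = mse(net, train_set)    # caller-supplied: mean squared error on the training data
        val_err = mse(net, val_set)        # monitored error on the validation data
        if val_err < best_val:
            best_val, best_net, bad_epochs = val_err, copy.deepcopy(net), 0
        else:
            bad_epochs += 1                # validation error is no longer improving
        if train_err < error_limit or bad_epochs >= patience:
            break
    return best_net                        # network from the epoch with the lowest validation error
```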