Neural Computation



  1. 2806 Neural Computation Recurrent Networks Lecture 12 2005 Ari Visa
  2. Agenda <ul><li>Some historical notes </li></ul><ul><li>Some theory </li></ul><ul><li>Recurrent networks </li></ul><ul><li>Training </li></ul><ul><li>Conclusions </li></ul>
  3. Some Historical Notes <ul><li>The recurrent network: ”Automata Studies”, Kleene 1954 </li></ul><ul><li>Kalman filter theory (Rudolf E. Kalman, 1960) </li></ul><ul><li>Controllability and observability (Zadeh & Desoer, 1963), (Kailath, 1980), (Sontag, 1990), (Lewis & Syrmos, 1995) </li></ul><ul><li>The NARX model (Leontaritis & Billings, 1985) </li></ul><ul><li>The NARX model in the context of neural networks (Chen et al., 1990) </li></ul><ul><li>Recurrent network architectures (Jordan, 1986) </li></ul><ul><li>Omlin and Giles (1996) showed that second-order recurrent networks guarantee correct classification of temporal sequences of finite length. </li></ul>
  4. Some Historical Notes <ul><li>The idea behind back-propagation through time (Minsky & Papert, 1969), Werbos (1974), Rumelhart (1986) </li></ul><ul><li>The real-time recurrent learning algorithm (Williams & Zipser, 1989) <- compare with McBride & Narendra (1965), system identification for tuning the parameters of an arbitrary dynamical system </li></ul><ul><li>System identification (Ljung, 1987), (Ljung & Glad, 1994) </li></ul>
  5. Some Theory <ul><li>Recurrent networks are neural networks with one or more feedback loops. </li></ul><ul><li>The feedback can be of a local or global kind. </li></ul><ul><li>As input-output mapping networks, recurrent networks respond temporally to an externally applied input signal -> dynamically driven recurrent network. </li></ul><ul><li>The application of feedback enables recurrent networks to acquire state representations, which makes them suitable devices for such diverse applications as nonlinear prediction and modeling, adaptive equalization, speech processing, plant control, and automobile engine diagnostics. </li></ul>
  6. Some Theory <ul><li>Four specific network architectures will be presented. </li></ul><ul><li>They all incorporate a static multilayer perceptron or parts thereof. </li></ul><ul><li>They all exploit the nonlinear mapping capability of the multilayer perceptron. </li></ul>
  7. Some Theory <ul><li>Input-Output Recurrent Model -> nonlinear autoregressive with exogenous inputs model (NARX): y(n+1) = F(y(n), ..., y(n-q+1), u(n), ..., u(n-q+1)) </li></ul><ul><li>The model has a single input that is applied to a tapped-delay-line memory of q units. It has a single output that is fed back to the input via another tapped-delay-line memory, also of q units. </li></ul><ul><li>The contents of these two tapped-delay-line memories are used to feed the input layer of the multilayer perceptron. The present value of the model input is denoted u(n), and the corresponding value of the model output is denoted by y(n+1). The signal vector applied to the input layer of the multilayer perceptron consists of a data window made up as follows: present and past values of the input (exogenous inputs), and delayed values of the output (regressed outputs). </li></ul>
  8. NARX <ul><li>Consider a recurrent network with a single input and a single output. </li></ul><ul><li>y(n+q) = Φ(x(n), u_q(n)), where q is the dimensionality of the state space and Φ: R^2q -> R. </li></ul><ul><li>Provided that the recurrent network is observable, x(n) = Ψ(y_q(n), u_{q-1}(n)), where Ψ: R^{2q-1} -> R^q. </li></ul><ul><li>y(n+q) = F(y_q(n), u_q(n)), where u_{q-1}(n) is contained in u_q(n) as its first q-1 elements, and the nonlinear mapping F: R^2q -> R takes care of both Φ and Ψ. </li></ul><ul><li>Equivalently, y(n+1) = F(y(n), ..., y(n-q+1), u(n), ..., u(n-q+1)) </li></ul>
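The NARX recursion on this slide can be sketched in code. Below is a minimal numpy illustration (not from the lecture): `narx_step` and the one-layer stand-in `mlp` for the static network F are hypothetical names, and a real model would use a trained multilayer perceptron.

```python
import numpy as np

def narx_step(y_hist, u_hist, mlp):
    """One NARX prediction step:
    y(n+1) = F(y(n), ..., y(n-q+1), u(n), ..., u(n-q+1)).
    y_hist, u_hist: the q most recent outputs and inputs
    (the two tapped-delay-line memories); mlp: the static network F."""
    z = np.concatenate([y_hist, u_hist])   # data window fed to the input layer
    return mlp(z)

# Hypothetical stand-in for the trained multilayer perceptron F.
rng = np.random.default_rng(0)
q = 3
W = rng.standard_normal(2 * q)
mlp = lambda z: np.tanh(W @ z)

y_hist = np.zeros(q)                  # y(n), ..., y(n-q+1)
u_hist = np.array([1.0, 0.0, 0.0])    # u(n), ..., u(n-q+1)
y_next = narx_step(y_hist, u_hist, mlp)
```

In use, y_next would be shifted into `y_hist` before the next step, exactly as the output tapped-delay line on the slide feeds the prediction back to the input layer.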
  9. Some Theory <ul><li>State-Space Model </li></ul><ul><li>The hidden neurons define the state of the network. The output of the hidden layer is fed back to the input layer via a bank of unit delays. The input layer consists of a concatenation of feedback nodes and source nodes. The network is connected to the external environment via the source nodes. The number of unit delays used to feed the output of the hidden layer back to the input layer determines the order of the model. </li></ul><ul><li>x(n+1) = f(x(n), u(n)) </li></ul><ul><li>y(n) = C x(n) </li></ul><ul><li>The simple recurrent network (SRN) differs from the main model by replacing the output layer with a nonlinear one and by omitting the bank of unit delays at the output. </li></ul>
  10. State-Space Model <ul><li>The state of a dynamical system is defined as a set of quantities that summarizes all the information about the past behavior of the system that is needed to uniquely describe its future behavior, except for the purely external effects arising from the applied input (excitation). </li></ul><ul><li>Let the q-by-1 vector x(n) denote the state of a nonlinear discrete-time system. Let the m-by-1 vector u(n) denote the input applied to the system, and the p-by-1 vector y(n) denote the corresponding output of the system. </li></ul><ul><li>The dynamic behavior of the system (noise free) is described by x(n+1) = φ(W_a x(n) + W_b u(n)) (the process equation) and y(n) = C x(n) (the measurement equation), where W_a is a q-by-q matrix, W_b is a q-by-(m+1) matrix, C is a p-by-q matrix, and φ: R^q -> R^q is a diagonal map described by φ: [x_1, x_2, …, x_q]^T -> [φ(x_1), φ(x_2), …, φ(x_q)]^T for some memoryless component-wise nonlinearity φ: R -> R. </li></ul><ul><li>The spaces R^m, R^q, and R^p are called the input space, state space, and output space -> m-input, p-output recurrent model of order q. </li></ul>
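The process and measurement equations above translate directly into a short numpy sketch. This is an illustration only: the weights are random, `state_space_step` is a hypothetical name, and for simplicity W_b is taken as q-by-m (the slide's q-by-(m+1) form absorbs a bias term into the input vector).

```python
import numpy as np

def state_space_step(x, u, Wa, Wb, C, phi=np.tanh):
    """One step of the state-space recurrent model:
    x(n+1) = phi(Wa x(n) + Wb u(n))   (process equation)
    y(n)   = C x(n)                   (measurement equation)"""
    y = C @ x
    x_next = phi(Wa @ x + Wb @ u)
    return x_next, y

# q = 2 states, m = 1 input, p = 1 output; illustrative random weights.
rng = np.random.default_rng(1)
Wa = 0.5 * rng.standard_normal((2, 2))
Wb = rng.standard_normal((2, 1))
C = rng.standard_normal((1, 2))

x = np.zeros(2)
for n in range(5):                    # drive the model with a constant input
    x, y = state_space_step(x, np.array([1.0]), Wa, Wb, C)
```

Because φ = tanh acts component-wise (the diagonal map on the slide), every state component stays in (-1, 1) regardless of the input sequence.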
  11. Some Theory <ul><li>Recurrent multilayer perceptron (RMLP) </li></ul><ul><li>It has one or more hidden layers. Each computation layer of an RMLP has feedback around it. </li></ul><ul><li>x_I(n+1) = φ_I(x_I(n), u(n)) </li></ul><ul><li>x_II(n+1) = φ_II(x_II(n), x_I(n+1)), ..., </li></ul><ul><li>x_O(n+1) = φ_O(x_O(n), x_K(n+1)) </li></ul>
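The layer-by-layer recursion of the RMLP can be sketched as follows. This is a hypothetical parametrization for illustration: each φ is taken as tanh of a recurrent term plus a feedforward term, and `rmlp_step` and the weight layout are assumptions, not the lecture's notation.

```python
import numpy as np

def rmlp_step(states, u, layers, phi=np.tanh):
    """One RMLP time step: each computation layer has feedback around it
    and feeds forward to the next:
    x_I(n+1) = phi(x_I(n), u(n)), x_II(n+1) = phi(x_II(n), x_I(n+1)), ...
    `layers` is a list of (W_rec, W_in) pairs, one per computation layer."""
    new_states = []
    inp = u
    for x, (W_rec, W_in) in zip(states, layers):
        x_new = phi(W_rec @ x + W_in @ inp)   # feedback around this layer
        new_states.append(x_new)
        inp = x_new                           # feedforward to the next layer
    return new_states

# Two computation layers of sizes 4 and 3, with a 2-dimensional input.
rng = np.random.default_rng(3)
layers = [(0.3 * rng.standard_normal((4, 4)), rng.standard_normal((4, 2))),
          (0.3 * rng.standard_normal((3, 3)), rng.standard_normal((3, 4)))]
states = [np.zeros(4), np.zeros(3)]
states = rmlp_step(states, np.array([1.0, -1.0]), layers)
```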
  12. Some Theory <ul><li>Second-order network </li></ul><ul><li>When the induced local field v_k is combined using multiplications, we refer to the neuron as a second-order neuron. </li></ul><ul><li>A second-order recurrent network: </li></ul><ul><li>v_k(n) = b_k + Σ_i Σ_j w_kij x_i(n) u_j(n) </li></ul><ul><li>x_k(n+1) = φ(v_k(n)) = 1/(1 + exp(-v_k(n))) </li></ul><ul><li>Note that the product x_i(n) u_j(n) represents the pair {state, input}, and a positive weight w_kij represents the presence of the transition {state, input} -> {next state}, while a negative weight represents the absence of the transition. The state transition is described by δ(x_i, u_j) = x_k. </li></ul><ul><li>Second-order networks are used for representing and learning deterministic finite-state automata (DFA). </li></ul>
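The second-order update can be written compactly with an einsum over the three-index weight tensor w_kij. A minimal sketch (`second_order_step` is an illustrative name, and the zero weights below are just to show the computation, not a trained DFA):

```python
import numpy as np

def second_order_step(x, u, W, b):
    """Second-order neuron update:
    v_k(n) = b_k + sum_i sum_j w_kij x_i(n) u_j(n)
    x_k(n+1) = 1 / (1 + exp(-v_k(n)))   (logistic nonlinearity)"""
    v = b + np.einsum('kij,i,j->k', W, x, u)   # multiplicative combination
    return 1.0 / (1.0 + np.exp(-v))

# q = 3 state neurons, m = 2 input symbols (one-hot, as in DFA learning).
q, m = 3, 2
W = np.zeros((q, q, m))          # w_kij = 0: no transitions encoded yet
b = np.zeros(q)
x = np.ones(q)                   # current state activations
u = np.array([1.0, 0.0])         # one-hot input symbol
x_next = second_order_step(x, u, W, b)
```

With all weights zero every v_k is 0, so each next-state activation sits at the logistic midpoint 0.5; training pushes the weights positive for present transitions and negative for absent ones, as the slide describes.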
  13. Some Theory <ul><li>A recurrent network is said to be controllable if an initial state is steerable to any desired state within a finite number of time steps. </li></ul><ul><li>A recurrent network is said to be observable if the state of the network can be determined from a finite set of input/output measurements. </li></ul><ul><li>A state x̄ is said to be an equilibrium state if for an input ū it satisfies the condition x̄ = φ(A x̄ + B ū) </li></ul><ul><li>Set x̄ = 0 and ū = 0 -> 0 = φ(0). </li></ul><ul><li>Linearize x̄ = φ(A x̄ + B ū) by expanding it as a Taylor series around x̄ = 0 and ū = 0 and retaining first-order terms: </li></ul><ul><li>δx(n+1) = Φ'(0) W_a δx(n) + Φ'(0) w_b δu(n), where δx(n) and δu(n) are small displacements and the q-by-q matrix Φ'(0) is the Jacobian of φ(v) with respect to its argument v. </li></ul><ul><li>δx(n+1) = A δx(n) + b δu(n) and δy(n) = c^T δx(n) </li></ul><ul><li>The linearized system represented by δx(n+1) = A δx(n) + b δu(n) is controllable if the matrix M_c = [A^{q-1} b, …, A b, b] is of rank q, that is, full rank, because then the linearized process equation would have a unique solution. </li></ul><ul><li>The matrix M_c is called the controllability matrix of the linearized system. </li></ul>
  14. Some Theory <ul><li>In a similar way: </li></ul><ul><li>δy(n) = c^T δx(n) -> M_o = [c, A^T c, …, (A^T)^{q-1} c] </li></ul><ul><li>The linearized system represented by δx(n+1) = A δx(n) + b δu(n) and δy(n) = c^T δx(n) is observable if the matrix M_o = [c, A^T c, …, (A^T)^{q-1} c] is of rank q, that is, full rank. </li></ul><ul><li>The matrix M_o is called the observability matrix of the linearized system. </li></ul><ul><li>Consider a recurrent network and its linearized version around the origin. If the linearized system is controllable, then the recurrent network is locally controllable around the origin. </li></ul><ul><li>Consider a recurrent network and its linearized version around the origin. If the linearized system is observable, then the recurrent network is locally observable around the origin. </li></ul>
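The rank tests on these two slides are easy to carry out numerically. A sketch with numpy (the example A, b, c below are arbitrary illustrative values, not taken from the lecture):

```python
import numpy as np

def controllability_matrix(A, b):
    """M_c = [A^{q-1} b, ..., A b, b]; rank q <=> locally controllable."""
    q = A.shape[0]
    cols = [np.linalg.matrix_power(A, k) @ b for k in range(q - 1, -1, -1)]
    return np.column_stack(cols)

def observability_matrix(A, c):
    """M_o = [c, A^T c, ..., (A^T)^{q-1} c]; rank q <=> locally observable."""
    q = A.shape[0]
    cols = [np.linalg.matrix_power(A.T, k) @ c for k in range(q)]
    return np.column_stack(cols)

# Arbitrary linearized system of order q = 2.
A = np.array([[0.0, 1.0],
              [-0.5, 0.3]])
b = np.array([0.0, 1.0])
c = np.array([1.0, 0.0])

controllable = np.linalg.matrix_rank(controllability_matrix(A, b)) == 2
observable = np.linalg.matrix_rank(observability_matrix(A, c)) == 2
```

For this A, b, c both matrices have full rank 2, so the linearized system is controllable and observable, and by the two theorems the recurrent network would be locally controllable and observable around the origin.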
  15. Some Theory <ul><li>Computational power of recurrent networks </li></ul><ul><li>I. All Turing machines may be simulated by fully connected recurrent networks built on neurons with sigmoid activation functions. </li></ul><ul><li>The Turing machine: </li></ul><ul><li>1) control unit </li></ul><ul><li>2) linear tape </li></ul><ul><li>3) read-write head </li></ul>
  16. Some Theory <ul><li>II. NARX networks with one layer of hidden neurons with bounded, one-sided saturated activation functions and a linear output neuron can simulate fully connected recurrent networks with bounded, one-sided saturated activation functions, except for a linear slowdown. </li></ul><ul><li>Bounded, one-sided saturated (BOSS) activation functions: </li></ul><ul><li>a ≤ φ(x) ≤ b, a ≠ b, for all x ∈ R </li></ul><ul><li>There exist values s and S such that φ(x) = S for all x ≤ s. </li></ul><ul><li>φ(x_1) ≠ φ(x_2) for some x_1 and x_2. </li></ul><ul><li>NARX networks with one hidden layer of neurons with BOSS activation functions and a linear output neuron are Turing equivalent. </li></ul>
  17. Training <ul><li>Epochwise training: For a given epoch, the recurrent network starts running from some initial state until it reaches a new state, at which point the training is stopped and the network is reset to an initial state for the next epoch. </li></ul><ul><li>Continuous training: this is suitable for situations where there are no reset states available and/or on-line learning is required. The network learns while signal processing is being performed by the network. </li></ul>
  18. Training <ul><li>The back-propagation-through-time algorithm (BPTT) is an extension of the standard back-propagation algorithm. It may be derived by unfolding the temporal operation of the network into a layered feedforward network, the topology of which grows by one layer at every time step. </li></ul>
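The unfolding idea can be made concrete for the simple state-space model x(n+1) = tanh(W_a x(n) + W_b u(n)), y(n) = C x(n): run forward storing the trajectory, then back-propagate through one "layer" per time step. This is a minimal sketch under those assumptions (function names and the squared-error epoch cost over targets d(n) are illustrative), computing the gradient with respect to W_a only.

```python
import numpy as np

def bptt_loss(Wa, Wb, C, us, ds, x0):
    """Epoch cost E = 1/2 sum_n ||C x(n+1) - d(n)||^2 for the unfolded net."""
    x, E = x0, 0.0
    for u, d in zip(us, ds):
        x = np.tanh(Wa @ x + Wb @ u)
        e = C @ x - d
        E += 0.5 * float(e @ e)
    return E

def bptt_grad_Wa(Wa, Wb, C, us, ds, x0):
    """Back-propagate E through the unrolled network, one layer per time
    step, to obtain dE/dWa."""
    xs = [x0]                                   # forward pass: store trajectory
    for u in us:
        xs.append(np.tanh(Wa @ xs[-1] + Wb @ u))
    g = np.zeros_like(Wa)
    delta = np.zeros_like(x0)                   # dE/dx flowing back in time
    for n in range(len(us) - 1, -1, -1):
        x_next, x = xs[n + 1], xs[n]
        delta = delta + C.T @ (C @ x_next - ds[n])  # local output error
        pre = delta * (1.0 - x_next ** 2)           # back through tanh'
        g += np.outer(pre, x)
        delta = Wa.T @ pre                          # one time step further back
    return g

rng = np.random.default_rng(4)
Wa = 0.4 * rng.standard_normal((3, 3))
Wb = rng.standard_normal((3, 2))
C = rng.standard_normal((2, 3))
us = [rng.standard_normal(2) for _ in range(5)]
ds = [rng.standard_normal(2) for _ in range(5)]
g = bptt_grad_Wa(Wa, Wb, C, us, ds, np.zeros(3))
```

The stored trajectory `xs` is exactly the layered feedforward network the slide describes: one copy of the state per time step, and the backward loop visits the copies in reverse order.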
  19. Training <ul><li>Epochwise back-propagation through time </li></ul><ul><li>E_total(n_0, n_1) = ½ Σ_{n=n_0}^{n_1} Σ_{j∈A} e_j²(n), where A is the set of output neurons and e_j(n) is the error at neuron j at time n. </li></ul>
  20. Training <ul><li>Truncated back-propagation through time in real-time fashion. </li></ul><ul><li>E(n) = ½ Σ_{j∈A} e_j²(n) </li></ul><ul><li>We save only the relevant history of input data and network state for a fixed number of time steps -> </li></ul><ul><li>the truncation depth </li></ul>
  21. Training <ul><li>Real-time recurrent learning (RTRL) </li></ul><ul><li>concatenated input-feedback layer </li></ul><ul><li>processing layer of computational nodes </li></ul><ul><li>e(n) = d(n) - y(n) </li></ul><ul><li>E_total = Σ_n E(n), with E(n) = ½ Σ_{j∈A} e_j²(n) </li></ul>
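Unlike BPTT, RTRL carries the gradient forward in time via a sensitivity tensor P[k, i, j] = ∂x_k/∂w_ij, updated at every step. A compact numpy sketch for a fully connected network x(n+1) = tanh(W z(n)) with z(n) the concatenated input-feedback vector (`rtrl_step` and the weight layout are illustrative assumptions):

```python
import numpy as np

def rtrl_step(W, x, u, P):
    """One RTRL step for x(n+1) = tanh(W z(n)), z(n) = [x(n); u(n)].
    Sensitivities are updated as
    P[k,i,j](n+1) = tanh'(v_k) * (sum_l w_kl P[l,i,j](n) + delta_ki z_j(n))."""
    q = x.size
    z = np.concatenate([x, u])
    x_new = np.tanh(W @ z)
    d = 1.0 - x_new ** 2                       # tanh'(v)
    # recurrent part: only the first q columns of W multiply state
    # sensitivities; input entries of z have zero sensitivity
    P_new = np.einsum('kl,lij->kij', W[:, :q], P)
    for i in range(q):                         # explicit part: delta_ki * z_j
        P_new[i, i, :] += z
    P_new *= d[:, None, None]
    return x_new, P_new

# Run a few steps from zero state and zero sensitivities.
rng = np.random.default_rng(5)
q, m = 2, 1
W = 0.5 * rng.standard_normal((q, q + m))
us = [rng.standard_normal(m) for _ in range(4)]
x = np.zeros(q)
P = np.zeros((q, q, q + m))
for u in us:
    x, P = rtrl_step(W, x, u, P)
# The weight gradient of E(n) = 1/2 sum_j e_j(n)^2 is then -sum_k e_k P[k].
```

Because P is updated alongside the state, the gradient of the instantaneous error E(n) is available at every step, which is what makes the algorithm suitable for the continuous (real-time) training mode described earlier.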
  22. Training
  23. Summary <ul><li>The subject was recurrent networks that involve the use of global feedback applied to a static (memoryless) multilayer perceptron. </li></ul><ul><li>1) Nonlinear autoregressive with exogenous inputs (NARX) network, using feedback from the output layer to the input layer </li></ul><ul><li>2) Fully connected recurrent networks with feedback from the hidden layer to the input layer </li></ul><ul><li>3) Recurrent multilayer perceptron with more than one hidden layer, using feedback from the output of each computation layer to its own input </li></ul><ul><li>4) Second-order recurrent networks using second-order neurons </li></ul><ul><li>All these recurrent networks use tapped-delay-line memories as a feedback channel. </li></ul><ul><li>Methods 1-3 use a state-space framework. </li></ul>
  24. Summary <ul><li>Three basic learning algorithms for the training of recurrent networks: </li></ul><ul><li>1) back-propagation through time (BPTT) </li></ul><ul><li>2) real-time recurrent learning (RTRL) </li></ul><ul><li>3) decoupled extended Kalman filter (DEKF) </li></ul><ul><li>Recurrent networks may also be used to process sequentially ordered data that do not have a straightforward temporal interpretation. </li></ul>