Learning Algorithms For A Specific Configuration Of The Quantron
1. Learning Algorithms for a Specific Configuration of the Quantron. Simon de Montigny and Richard Labib, École Polytechnique de Montréal, Département de Mathématiques et Génie Industriel.
3. Introduction. The Quantron is a new neuron model: it solves nonlinear problems, but it has no efficient learning algorithm. Goal: retrieve the parameters from the output image. For a specific configuration using surrogate potential functions, we can express the activation function analytically, parameterize the decision boundary, and develop convergent learning algorithms.
3. Outline Review of the Quantron Surrogate potential functions Decision boundary and image analysis Learning algorithms and results Summary
4. Review of the Quantron. Hybrid neuron model. Spatial and temporal summation of potentials. Input value = delay between potentials. Fires when the sum of potentials reaches the threshold. [Figure: the sum of potentials crosses the threshold and the neuron fires.]
5. Review of the Quantron. Activation function:

𝐴(𝑡) = Σ_{𝑖=0}^{𝑁1−1} 𝑝1(𝑡 − 𝜃1 − 𝑖𝑥) + Σ_{𝑗=0}^{𝑁2−1} 𝑝2(𝑡 − 𝜃2 − 𝑗𝑦)

where 𝜃1, 𝜃2 are the shift parameters, 𝑥, 𝑦 the inputs, and 𝑝1, 𝑝2 the potentials. Use in classification: the output is 1 if max 𝐴(𝑡) ≥ Γ and 0 otherwise.
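The activation and classification rule above can be sketched numerically. This is a minimal illustration, not the paper's implementation: a rectangular pulse stands in for the surrogate potentials, the time grid is arbitrary, and all parameter names are assumptions.

```python
import numpy as np

def rect_potential(t, w, r):
    """Rectangular pulse of height w on [0, r) -- one possible surrogate shape."""
    return np.where((t >= 0) & (t < r), w, 0.0)

def activation(t, x, y, theta1=0.0, theta2=0.0, N1=3, N2=3,
               w1=1.0, r1=2.0, w2=1.0, r2=2.0):
    """A(t) = sum_{i<N1} p1(t - theta1 - i*x) + sum_{j<N2} p2(t - theta2 - j*y)."""
    A = np.zeros_like(t, dtype=float)
    for i in range(N1):
        A += rect_potential(t - theta1 - i * x, w1, r1)
    for j in range(N2):
        A += rect_potential(t - theta2 - j * y, w2, r2)
    return A

def quantron_output(x, y, Gamma, **params):
    """Classify: 1 if max A(t) >= Gamma, else 0 (maximum taken on a time grid)."""
    t = np.linspace(0.0, 20.0, 2001)
    return int(activation(t, x, y, **params).max() >= Gamma)
```

For example, with three unit-height pulses of width 2 per input and delays 𝑥 = 𝑦 = 0.5, the pulses of each train overlap and the peak of 𝐴(𝑡) reaches 6, so the output flips as Γ crosses that value.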
7. Surrogate potential functions. Specific configuration of the Quantron: two inputs 𝑥 > 0 and 𝑦 > 0; weights 𝑤1 ≥ 0 and 𝑤2 ≥ 0; widths 𝑟1 ≥ 0 and 𝑟2 ≥ 0; shift parameters 𝜃1 and 𝜃2, used to synchronize the end time of the first potential from each input; infinite number of potentials. We obtain analytical expressions for max 𝐴(𝑡) involving ceiling functions.
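One plausible reading of the ceiling-function expression, for rectangular potentials, is an overlap count per input train: pulses of width 𝑟1 arriving every 𝑥 time units can stack at most ⌈𝑟1/𝑥⌉ deep, and the shifts let both trains peak together. The slide does not give the formula explicitly, so the sketch below is a reconstruction under that assumption.

```python
import math

def max_activation_rect(x, y, w1, r1, w2, r2):
    """Assumed closed form of max A(t) for infinite trains of rectangular
    potentials whose peaks are synchronized by the shift parameters:
    at most ceil(r1/x) pulses of the first train overlap at once,
    and likewise ceil(r2/y) for the second train."""
    return w1 * math.ceil(r1 / x) + w2 * math.ceil(r2 / y)
```

Under this form the decision boundary max 𝐴(𝑡) = Γ is a staircase in the (𝑥, 𝑦) plane, which is consistent with the corner-based analysis on the following slides.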
9. Decision boundary and image analysis. Rectangular potentials. Corner coordinates (𝑎,𝑏) and (𝑐,𝑑) are linked to 𝑤1, 𝑤2, 𝑟1, 𝑟2 by a non-invertible equation system. On a pixel grid, corners are located inside a square found by analyzing pixel rows and columns.
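A row-and-column scan of the kind mentioned above can be sketched as follows. This is a hypothetical illustration, not the authors' algorithm: it assumes the region is staircase-shaped and binary, and reports one corner candidate wherever the boundary column jumps between rows.

```python
import numpy as np

def staircase_corners(img):
    """Locate corner candidates of a staircase-shaped region in a binary
    image: for each row, record the rightmost column of the region; a
    corner sits where that column changes between adjacent rows."""
    edge = [int(row.nonzero()[0].max()) if row.any() else -1 for row in img]
    corners = []
    for i in range(1, len(edge)):
        if edge[i] != edge[i - 1]:
            corners.append((i, max(edge[i], edge[i - 1])))
    return corners
```

On a real pixel image the detected positions are only accurate to one pixel, which is why the slide places each corner inside a square rather than at a point.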
10. Decision boundary and image analysis. Ramp potentials. Corner coordinates (𝑎,𝑏) and (𝑐,𝑑) are linked to 𝑤1, 𝑤2, 𝑟1, 𝑟2 by an invertible equation system. On a pixel grid, corners are located inside a polygon provided by a custom image analysis algorithm.
11. Learning algorithm (rect.). With rectangular potentials, if 𝑤1 ≤ 𝑤2: we have 𝑟2 = 𝑏, and there is an integer 𝑚 for which 𝑟1 = 𝑎/𝑚. If 𝑤1 ≥ 𝑤2: we have 𝑟1 = 𝑐, and there is an integer 𝑚 for which 𝑟2 = 𝑑/𝑚. Here 𝑚 ≥ total number of corners in the boundary. For a fixed value of 𝑚, we select corner coordinates randomly in the squares and set the values of 𝑟1 and 𝑟2.
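One sampling step of this procedure can be sketched as below. The function name and the exact width formulas (𝑟1 = 𝑎/𝑚 in the case 𝑤1 ≤ 𝑤2, and symmetrically otherwise) are assumptions based on the slide.

```python
import random

def sample_widths(square_ab, square_cd, m, w1_le_w2=True):
    """One sampling step of the rectangular-potential algorithm (sketch):
    draw the corners (a, b) and (c, d) uniformly in their bounding squares,
    then set the widths.  Case w1 <= w2: r2 = b and r1 = a/m for the chosen
    integer m; case w1 >= w2: r1 = c and r2 = d/m."""
    (a_lo, a_hi), (b_lo, b_hi) = square_ab
    (c_lo, c_hi), (d_lo, d_hi) = square_cd
    a = random.uniform(a_lo, a_hi)
    b = random.uniform(b_lo, b_hi)
    c = random.uniform(c_lo, c_hi)
    d = random.uniform(d_lo, d_hi)
    if w1_le_w2:
        r1, r2 = a / m, b
    else:
        r1, r2 = c, d / m
    return (a, b), (c, d), r1, r2
```

With degenerate (single-point) squares the draw is deterministic, which makes the width formulas easy to check.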
12. Learning algorithm (rect.). To train the Quantron efficiently: we consider both 𝑤1 ≤ 𝑤2 and 𝑤1 ≥ 𝑤2; we minimize sum-of-squares error functions sequentially for different values of 𝑚; we stop if a zero misclassification rate is reached. Convergence of the algorithm: since max 𝐴(𝑡) is linear in 𝑤1 and 𝑤2, the error function is unimodal.
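Because max 𝐴(𝑡) is linear in the weights once the widths are fixed, one sum-of-squares step reduces to ordinary least squares. The sketch below assumes this setup; the margin targets (Γ(1 ± ε)) and the matrix layout are illustrative choices, not taken from the slides.

```python
import numpy as np

def fit_weights(M, targets, Gamma, eps=0.2):
    """With widths fixed, max A(t) = M @ [w1, w2], where row k of M holds the
    per-input contributions for sample k.  Minimize the sum-of-squares error
    toward Gamma*(1+eps) for class 1 and Gamma*(1-eps) for class 0 (an
    assumed target scheme), then clamp to the nonnegative weight constraint."""
    rhs = np.where(targets == 1, Gamma * (1 + eps), Gamma * (1 - eps))
    w, *_ = np.linalg.lstsq(M, rhs, rcond=None)
    return np.maximum(w, 0.0)

def misclassification(M, w, targets, Gamma):
    """Fraction of samples on the wrong side of the threshold Gamma."""
    pred = (M @ w >= Gamma).astype(int)
    return float(np.mean(pred != targets))
```

The quadratic error in (𝑤1, 𝑤2) has a single minimum, which is the unimodality the slide relies on for convergence.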
14. Learning algorithm (ramp) Using ramp potentials, we can train the Quantron in a single step by inverting the system linking the corner coordinates and the parameters. We choose corner coordinates randomly in polygons and obtain parameter values. We repeat this procedure to obtain a solution with a low misclassification rate.
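The repeat-and-keep-the-best loop can be sketched as follows. `invert_corners` stands for the closed-form inversion of the corner/parameter system and `error_rate` for the misclassification rate on the training image; both are placeholders, and axis-aligned rectangles stand in for the polygons for simplicity.

```python
import random

def sample_point(rect):
    """Draw a uniform point in an axis-aligned rectangle (a simplification
    of sampling inside the corner polygons)."""
    (x_lo, x_hi), (y_lo, y_hi) = rect
    return (random.uniform(x_lo, x_hi), random.uniform(y_lo, y_hi))

def train_ramp(regions, invert_corners, error_rate, trials=50, tol=0.0):
    """Repeat: sample one corner per region, invert the corner/parameter
    system in closed form, and keep the parameters with the lowest
    misclassification rate; stop early once the rate reaches `tol`."""
    best, best_err = None, float("inf")
    for _ in range(trials):
        corners = [sample_point(region) for region in regions]
        params = invert_corners(corners)
        err = error_rate(params)
        if err < best_err:
            best, best_err = params, err
        if best_err <= tol:
            break
    return best, best_err
```

Each trial is a single closed-form step, so the cost of the loop is dominated by evaluating the misclassification rate.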
15. Results on test problem (ramp) Misclassification rate for 50 random trials
16. Summary. We obtained convergent learning algorithms for a specific configuration of the Quantron. Rect.: a sequence of unimodal error functions. Ramp: an analytical solution to a system of equations. These algorithms depend on precise geometric characteristics. Future research: generalization to real classification data.