Threshold Logic Units
Christian Borgelt, Artificial Neural Networks and Deep Learning
Threshold Logic Units
A Threshold Logic Unit (TLU) is a processing unit for numbers with n inputs
x1, . . . , xn and one output y. The unit has a threshold θ and each input xi is
associated with a weight wi. A threshold logic unit computes the function
y = 1, if w1x1 + . . . + wnxn ≥ θ, and y = 0 otherwise.
[Diagram: a TLU drawn as a unit with threshold θ, inputs x1, . . . , xn arriving over connections with weights w1, . . . , wn, and a single output y.]
TLUs mimic the thresholding behavior of biological neurons in a (very) simple fashion.
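As a quick illustration (not part of the original slides), a threshold logic unit is easy to express in code. The following Python sketch evaluates a TLU and prints the table of a TLU for the conjunction x1 ∧ x2 (weights 3 and 2, threshold 4, as in the example that follows):

```python
def tlu(weights, theta, inputs):
    """Threshold logic unit: output 1 if the weighted input sum reaches the threshold."""
    s = sum(w * x for w, x in zip(weights, inputs))
    return 1 if s >= theta else 0

# Conjunction x1 AND x2: weights 3, 2 and threshold 4.
for x1 in (0, 1):
    for x2 in (0, 1):
        print(x1, x2, "->", tlu([3, 2], 4, [x1, x2]))
```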
Threshold Logic Units: Examples
Threshold logic unit for the conjunction x1 ∧ x2.
[Diagram: TLU with inputs x1, x2, weights 3 and 2, and threshold θ = 4.]
x1 x2 3x1 + 2x2 y
0 0 0 0
1 0 3 0
0 1 2 0
1 1 5 1
Threshold logic unit for the implication x2 → x1.
[Diagram: TLU with inputs x1, x2, weights 2 and −2, and threshold θ = −1.]
x1 x2 2x1 − 2x2 y
0 0 0 1
1 0 2 1
0 1 −2 0
1 1 0 1
Threshold Logic Units: Examples
Threshold logic unit for (x1 ∧ ¬x2) ∨ (x1 ∧ x3) ∨ (¬x2 ∧ x3).
[Diagram: TLU with inputs x1, x2, x3, weights 2, −2, 2, and threshold θ = 1.]
x1 x2 x3 Σi wixi y
0 0 0 0 0
1 0 0 2 1
0 1 0 −2 0
1 1 0 0 0
0 0 1 2 1
1 0 1 4 1
0 1 1 0 0
1 1 1 2 1
Rough Intuition:
• Positive weights are analogous to excitatory synapses.
• Negative weights are analogous to inhibitory synapses.
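To connect the weighted sum with the Boolean formula, the short sketch below (illustrative, not from the slides) checks the TLU with weights 2, −2, 2 and threshold 1 against (x1 ∧ ¬x2) ∨ (x1 ∧ x3) ∨ (¬x2 ∧ x3) for all eight inputs:

```python
from itertools import product

def tlu(weights, theta, inputs):
    return 1 if sum(w * x for w, x in zip(weights, inputs)) >= theta else 0

for x1, x2, x3 in product((0, 1), repeat=3):
    net = tlu([2, -2, 2], 1, [x1, x2, x3])
    formula = int((x1 and not x2) or (x1 and x3) or (not x2 and x3))
    assert net == formula  # the TLU realizes the formula exactly
    print(x1, x2, x3, net)
```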
Threshold Logic Units: Geometric Interpretation
Review of line representations
Straight lines are usually represented in one of the following forms:
Explicit Form: g ≡ x2 = bx1 + c
Implicit Form: g ≡ a1x1 + a2x2 + d = 0
Point-Direction Form: g ≡ ~x = ~p + k~r
Normal Form: g ≡ (~x − ~p)⊤~n = 0
with the parameters:
b : Gradient (slope) of the line
c : Intercept on the x2 axis
~p : Position vector of a point on the line (base vector)
~r : Direction vector of the line
~n : Normal vector of the line
Threshold Logic Units: Geometric Interpretation
A straight line and its defining parameters:
[Figure: a line g in the (x1, x2) plane with intercept c, angle ϕ, base vector ~p, direction vector ~r, and normal vector ~n = (a1, a2); the perpendicular from the origin onto g ends in ~q = (~p⊤~n / |~n|) · (~n / |~n|); further d = −~p⊤~n and b = r2/r1.]
Threshold Logic Units: Geometric Interpretation
How to determine the side on which a point ~y lies:
[Figure: the point ~y is projected onto the unit normal of g, giving ~z = (~y⊤~n / |~n|) · (~n / |~n|); comparing ~z with ~q = (~p⊤~n / |~n|) · (~n / |~n|) shows on which side of g the point lies. In particular, ~y lies on the side the normal vector points to iff ~y⊤~n ≥ ~p⊤~n.]
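The side test can be written down directly from the implicit form g ≡ a1x1 + a2x2 + d = 0. A minimal numeric sketch (assumed names, not from the slides):

```python
import math

def side_of_line(a1, a2, d, point):
    """Value of a1*x1 + a2*x2 + d for the line a1*x1 + a2*x2 + d = 0.
    Positive means the point lies on the side the normal (a1, a2) points to."""
    x1, x2 = point
    return a1 * x1 + a2 * x2 + d

def signed_distance(a1, a2, d, point):
    """Signed distance of the point from the line (projection onto the unit normal)."""
    return side_of_line(a1, a2, d, point) / math.hypot(a1, a2)

# Line x2 = x1, written implicitly as x1 - x2 = 0; the point (2, 0) lies on the positive side.
print(side_of_line(1.0, -1.0, 0.0, (2.0, 0.0)), signed_distance(1.0, -1.0, 0.0, (2.0, 0.0)))
```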
Threshold Logic Units: Geometric Interpretation
Threshold logic unit for x1 ∧ x2.
[Diagram: TLU with weights 3, 2 and threshold 4; in the (x1, x2) unit square the line 3x1 + 2x2 = 4 separates the corner (1, 1), which is mapped to 1, from the other three corners, which are mapped to 0.]
Threshold logic unit for x2 → x1.
[Diagram: TLU with weights 2, −2 and threshold −1; the line 2x1 − 2x2 = −1 separates the corner (0, 1), which is mapped to 0, from the other three corners, which are mapped to 1.]
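The separating line of a two-input TLU can be read off from its parameters: w1x1 + w2x2 = θ, i.e. x2 = (θ − w1x1)/w2 whenever w2 ≠ 0. A small illustrative helper (not from the slides) converts a TLU into the explicit form of its line:

```python
def separating_line(w1, w2, theta):
    """Return slope b and intercept c of x2 = b*x1 + c for the line w1*x1 + w2*x2 = theta.
    Requires w2 != 0 (otherwise the separating line is vertical)."""
    return -w1 / w2, theta / w2

# Conjunction TLU (weights 3, 2, threshold 4): line x2 = -1.5*x1 + 2.
# Implication TLU (weights 2, -2, threshold -1): line x2 = x1 + 0.5.
print(separating_line(3, 2, 4))
print(separating_line(2, -2, -1))
```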
Threshold Logic Units: Geometric Interpretation
Visualization of 3-dimensional Boolean functions:
[Diagram: the unit cube spanned by x1, x2, x3, with corners from (0, 0, 0) to (1, 1, 1).]
Threshold logic unit for (x1 ∧ ¬x2) ∨ (x1 ∧ x3) ∨ (¬x2 ∧ x3).
[Diagram: TLU with weights 2, −2, 2 and threshold 1; inside the unit cube, the plane 2x1 − 2x2 + 2x3 = 1 separates the corners mapped to 1 from those mapped to 0.]
Threshold Logic Units: Limitations
The biimplication problem x1 ↔ x2: There is no separating line.
x1 x2 y
0 0 1
1 0 0
0 1 0
1 1 1
[Diagram: the unit square with corners (0, 0) and (1, 1) mapped to 1 and corners (1, 0) and (0, 1) mapped to 0; no straight line can separate the two sets.]
Formal proof by reductio ad absurdum:
since (0, 0) 7→ 1: 0 ≥ θ, (1)
since (1, 0) 7→ 0: w1 < θ, (2)
since (0, 1) 7→ 0: w2 < θ, (3)
since (1, 1) 7→ 1: w1 + w2 ≥ θ. (4)
(2) and (3): w1 + w2 < 2θ. With (4): 2θ > θ, or θ > 0. Contradiction to (1).
Linear Separability
Definition: Two sets of points in a Euclidean space are called linearly separable,
iff there exists at least one point, line, plane or hyperplane (depending on the dimension
of the Euclidean space), such that all points of the one set lie on one side and all points
of the other set lie on the other side of this point, line, plane or hyperplane (or on it).
That is, the point sets can be separated by a linear decision function. Formally:
Two sets X, Y ⊂ IRm are linearly separable iff ~w ∈ IRm and θ ∈ IR exist such that
∀~x ∈ X : ~w⊤~x < θ and ∀~y ∈ Y : ~w⊤~y ≥ θ.
• Boolean functions define two point sets, namely the set of points that are mapped to the function value 0 and the set of points that are mapped to 1.
⇒ The term “linearly separable” can be transferred to Boolean functions.
• As we have seen, conjunction and implication are linearly separable
(as are disjunction, NAND, NOR etc.).
• The biimplication is not linearly separable
(and neither is the exclusive or (XOR)).
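These separability claims can be checked computationally. Below is a minimal sketch (not from the slides; the helper name and the use of scipy's linear-programming solver are my own choices). Strict separation of two finite point sets exists iff, after rescaling, weights satisfy ~w⊤~x ≤ θ − 1 on one set and ~w⊤~y ≥ θ on the other, which is a linear-programming feasibility problem:

```python
import numpy as np
from scipy.optimize import linprog

def linearly_separable(X, Y):
    """Return True iff the finite point sets X (class 0) and Y (class 1) are linearly
    separable, i.e. some w, theta satisfy w.x < theta for all x in X and w.y >= theta
    for all y in Y."""
    X, Y = np.atleast_2d(X), np.atleast_2d(Y)
    m = X.shape[1]
    # Variables: (w_1, ..., w_m, theta). Constraints as A_ub @ v <= b_ub:
    #    w.x - theta <= -1   for x in X   (strict inequality, after rescaling)
    #   -w.y + theta <=  0   for y in Y
    A = np.vstack([np.hstack([X, -np.ones((len(X), 1))]),
                   np.hstack([-Y, np.ones((len(Y), 1))])])
    b = np.hstack([-np.ones(len(X)), np.zeros(len(Y))])
    res = linprog(np.zeros(m + 1), A_ub=A, b_ub=b,
                  bounds=[(None, None)] * (m + 1), method="highs")
    return res.success

# Biimplication: {(1,0),(0,1)} -> 0 versus {(0,0),(1,1)} -> 1 is not separable; AND is.
print(linearly_separable([(1, 0), (0, 1)], [(0, 0), (1, 1)]))   # False
print(linearly_separable([(0, 0), (1, 0), (0, 1)], [(1, 1)]))   # True
```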
Linear Separability
Definition: A set of points in a Euclidean space is called convex if it is non-empty and connected (that is, if it is a region) and if, for every pair of its points, every point on the straight line segment connecting the two points is also in the set.
Definition: The convex hull of a set of points X in a Euclidean space is the
smallest convex set of points that contains X. Alternatively, the convex hull of a
set of points X is the intersection of all convex sets that contain X.
Theorem: Two sets of points in a Euclidean space are linearly separable
if and only if their convex hulls are disjoint (that is, have no point in common).
• For the biimplication problem, the convex hulls are the diagonal line segments.
• They share their intersection point and are thus not disjoint.
• Therefore the biimplication is not linearly separable.
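As a tiny concrete check (illustrative, not from the slides): the convex hull of the 1-points {(0, 0), (1, 1)} and the convex hull of the 0-points {(1, 0), (0, 1)} are the two diagonals of the unit square, and both contain the point (0.5, 0.5).

```python
# Convex hull of the 1-points is the segment (0,0)-(1,1);
# convex hull of the 0-points is the segment (1,0)-(0,1).
mid_ones  = (0.5 * (0 + 1), 0.5 * (0 + 1))   # midpoint of (0,0) and (1,1)
mid_zeros = (0.5 * (1 + 0), 0.5 * (0 + 1))   # midpoint of (1,0) and (0,1)
print(mid_ones, mid_zeros, mid_ones == mid_zeros)   # both are (0.5, 0.5): the hulls share a point
```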
Threshold Logic Units: Limitations
Total number and number of linearly separable Boolean functions
(On-Line Encyclopedia of Integer Sequences, oeis.org, A001146 and A000609):
inputs Boolean functions linearly separable functions
1 4 4
2 16 14
3 256 104
4 65,536 1,882
5 4,294,967,296 94,572
6 18,446,744,073,709,551,616 15,028,134
n 2^(2^n) no general formula known
• For larger numbers of inputs, a single threshold logic unit can compute only a vanishing fraction of all Boolean functions.
• Networks of threshold logic units are needed to overcome the limitations.
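The small counts in the table above can be reproduced by brute force. The Python sketch below is my own (not from the slides); it enumerates integer weight/threshold combinations and collects the distinct Boolean functions a single TLU realizes. The weight range is an assumption that is generous enough for n ≤ 3, and the approach does not scale to larger n:

```python
from itertools import product

def count_threshold_functions(n, wmax=3):
    """Count distinct Boolean functions of n inputs realizable by a single TLU
    with integer weights in [-wmax, wmax] (an assumed, generous range for small n)."""
    points = list(product((0, 1), repeat=n))
    realized = set()
    weight_range = range(-wmax, wmax + 1)
    theta_range = range(-n * wmax, n * wmax + 2)   # covers all distinct thresholds
    for weights in product(weight_range, repeat=n):
        sums = [sum(w * x for w, x in zip(weights, p)) for p in points]
        for theta in theta_range:
            realized.add(tuple(1 if s >= theta else 0 for s in sums))
    return len(realized)

for n in (1, 2, 3):
    print(n, count_threshold_functions(n))   # expected: 4, 14, 104 (values from the table)
```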
Networks of Threshold Logic Units
Solving the biimplication problem with a network.
Idea: logical decomposition x1 ↔ x2 ≡ (x1 → x2) ∧ (x2 → x1)
[Diagram: two-layer network. The two hidden TLUs each have threshold −1: the first receives x1 with weight −2 and x2 with weight 2 and computes y1 = x1 → x2; the second receives x1 with weight 2 and x2 with weight −2 and computes y2 = x2 → x1. The output TLU has threshold 3 and weights 2, 2 and computes y = y1 ∧ y2 = (x1 ↔ x2).]
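A direct transcription of this network into code (illustrative; the weights and thresholds are those given above) confirms that the two-layer construction computes x1 ↔ x2:

```python
def tlu(weights, theta, inputs):
    return 1 if sum(w * x for w, x in zip(weights, inputs)) >= theta else 0

def biimplication_net(x1, x2):
    y1 = tlu([-2, 2], -1, [x1, x2])   # hidden neuron: y1 = x1 -> x2
    y2 = tlu([2, -2], -1, [x1, x2])   # hidden neuron: y2 = x2 -> x1
    return tlu([2, 2], 3, [y1, y2])   # output neuron: y = y1 AND y2

for x1 in (0, 1):
    for x2 in (0, 1):
        print(x1, x2, "->", biimplication_net(x1, x2))   # 1 exactly when x1 == x2
```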
Networks of Threshold Logic Units
Solving the biimplication problem: Geometric interpretation
[Diagram: in the (x1, x2) unit square the two hidden neurons define the lines g1 and g2. Under the mapping to the new coordinates (y1, y2), the inputs (0, 0) and (1, 1) are both sent to (1, 1), while (1, 0) goes to (0, 1) and (0, 1) goes to (1, 0); a single line g3 then separates (1, 1) from the other two image points.]
• The first layer computes new Boolean coordinates for the points.
• After the coordinate transformation the problem is linearly separable.
Representing Arbitrary Boolean Functions
Algorithm: Let y = f(x1, . . . , xn) be a Boolean function of n variables.
(i) Represent the given function f(x1, . . . , xn) in disjunctive normal form. That is,
determine Df = C1 ∨. . .∨Cm, where all Cj are conjunctions of n literals, that is,
Cj = lj1 ∧ . . . ∧ ljn with lji = xi (positive literal) or lji = ¬xi (negative literal).
(ii) Create a neuron for each conjunction Cj of the disjunctive normal form
(having n inputs — one input for each variable), where
wji = 2, if lji = xi, and wji = −2, if lji = ¬xi,
and θj = n − 1 + (wj1 + . . . + wjn)/2.
(iii) Create an output neuron (having m inputs — one input for each neuron
that was created in step (ii)), where
w(n+1)k = 2, k = 1, . . . , m, and θn+1 = 1.
Remark: weights are set to ±2 instead of ±1 in order to ensure integer thresholds.
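The construction can be implemented directly from a truth table. The Python sketch below is my own (function names and data layout are assumptions); it creates one hidden TLU per true row, following steps (ii) and (iii):

```python
from itertools import product

def build_tlu_network(f, n):
    """Build a two-layer TLU network for the Boolean function f of n variables:
    one hidden neuron per true row of the truth table (its conjunction), plus an output neuron."""
    hidden = []
    for row in product((0, 1), repeat=n):
        if f(*row):
            weights = [2 if bit == 1 else -2 for bit in row]   # step (ii): +-2 per literal
            theta = n - 1 + sum(weights) // 2                  # step (ii): integer threshold
            hidden.append((weights, theta))
    output = ([2] * len(hidden), 1)                            # step (iii): disjunction
    return hidden, output

def run(network, inputs):
    hidden, (out_w, out_theta) = network
    hs = [1 if sum(w * x for w, x in zip(ws, inputs)) >= th else 0 for ws, th in hidden]
    return 1 if sum(w * h for w, h in zip(out_w, hs)) >= out_theta else 0

# Example: the ternary function from the following slides, given by its true rows.
f = lambda x1, x2, x3: (x1, x2, x3) in {(1, 0, 0), (0, 1, 1), (1, 1, 1)}
net = build_tlu_network(f, 3)
print(sorted(th for _, th in net[0]))                                 # hidden thresholds: [1, 3, 5]
print(all(run(net, r) == f(*r) for r in product((0, 1), repeat=3)))   # True
```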
Representing Arbitrary Boolean Functions
Example:
ternary Boolean function:
x1 x2 x3 y Cj
0 0 0 0
1 0 0 1 x1 ∧ ¬x2 ∧ ¬x3
0 1 0 0
1 1 0 0
0 0 1 0
1 0 1 0
0 1 1 1 ¬x1 ∧ x2 ∧ x3
1 1 1 1 x1 ∧ x2 ∧ x3
Df = C1 ∨ C2 ∨ C3
One conjunction for each row
where the output y is 1 with
literals according to input values.
First layer (conjunctions):
C1 = x1 ∧ ¬x2 ∧ ¬x3
C2 = ¬x1 ∧ x2 ∧ x3
C3 = x1 ∧ x2 ∧ x3
Second layer (disjunction):
Df = C1 ∨ C2 ∨ C3
[Diagrams: unit cubes illustrating the corner picked out by each conjunction C1, C2, C3 and the three corners picked out by the disjunction Df.]
Representing Arbitrary Boolean Functions
Example (continued): the same ternary Boolean function as above, with Df = C1 ∨ C2 ∨ C3.
Resulting network of threshold logic units:
[Diagram: resulting two-layer network. The three hidden TLUs receive x1, x2, x3: the neuron for C1 = x1 ∧ ¬x2 ∧ ¬x3 has weights 2, −2, −2 and threshold 1; the neuron for C2 = ¬x1 ∧ x2 ∧ x3 has weights −2, 2, 2 and threshold 3; the neuron for C3 = x1 ∧ x2 ∧ x3 has weights 2, 2, 2 and threshold 5. The output TLU computes Df = C1 ∨ C2 ∨ C3 with weights 2, 2, 2 and threshold 1, yielding y.]
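As a final check, the following illustrative snippet evaluates this concrete network with the weights and thresholds listed above and compares it against the truth table of the example function:

```python
from itertools import product

def tlu(weights, theta, inputs):
    return 1 if sum(w * x for w, x in zip(weights, inputs)) >= theta else 0

def df(x1, x2, x3):
    c1 = tlu([2, -2, -2], 1, [x1, x2, x3])   # C1 = x1 AND NOT x2 AND NOT x3
    c2 = tlu([-2, 2, 2], 3, [x1, x2, x3])    # C2 = NOT x1 AND x2 AND x3
    c3 = tlu([2, 2, 2], 5, [x1, x2, x3])     # C3 = x1 AND x2 AND x3
    return tlu([2, 2, 2], 1, [c1, c2, c3])   # Df = C1 OR C2 OR C3

expected = {(1, 0, 0), (0, 1, 1), (1, 1, 1)}   # rows with y = 1 in the truth table
print(all(df(*r) == (1 if r in expected else 0) for r in product((0, 1), repeat=3)))   # True
```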
Reminder: Convex Hull Theorem
Theorem: Two sets of points in a Euclidean space are linearly separable
if and only if their convex hulls are disjoint (that is, have no point in common).
Example function on the preceding slide:
y = f(x1, x2, x3) = (x1 ∧ ¬x2 ∧ ¬x3) ∨ (¬x1 ∧ x2 ∧ x3) ∨ (x1 ∧ x2 ∧ x3)
[Diagrams: convex hull of the points with y = 0 and convex hull of the points with y = 1 in the unit cube.]
• The convex hulls of the two point sets are not disjoint (red: intersection).
• Therefore the function y = f(x1, x2, x3) is not linearly separable.