This document provides an introduction to neural networks and connectionism. It discusses what neural networks are, including their basic components and how they learn through gradual changes in connection strengths. The history of neural networks is reviewed, from early work in the 1950s to today's applications in areas like computer vision and autonomous vehicles. Biological inspiration for neural networks from the brain is described. Early models like perceptrons and limitations like the XOR problem are covered. Backpropagation networks that can learn any function are introduced. Finally, unsupervised learning techniques like Hopfield networks and self-organizing maps are briefly discussed.
2. What are connectionist neural networks?
• Connectionism refers to a computer modeling approach to computation that is loosely based upon the architecture of the brain.
• Many different models, but all include:
– Multiple, individual "nodes" or "units" that operate at the same time (in parallel)
– A network that connects the nodes together
– Information is stored in a distributed fashion among the links that connect the nodes
– Learning can occur with gradual changes in connection strength
3. Neural Network History
• History traces back to the 50's, but neural networks became popular in the 80's with work by Rumelhart, Hinton, and McClelland
– A General Framework for Parallel Distributed Processing, in Parallel Distributed Processing: Explorations in the Microstructure of Cognition
• Peaked in the 90's. Today:
– Hundreds of variants
– Less a model of the actual brain than a useful tool, but still some debate
• Numerous applications
– Handwriting, face, speech recognition
– Vehicles that drive themselves
– Models of reading, sentence production, dreaming
• Debate for philosophers and cognitive scientists
– Can human consciousness or cognitive abilities be explained by a connectionist model, or does it require the manipulation of symbols?
4. Comparison of Brains and Traditional Computers
Brain:
• 200 billion neurons, 32 trillion synapses
• Element size: 10^-6 m
• Energy use: 25 W
• Processing speed: 100 Hz
• Parallel, distributed
• Fault tolerant
• Learns: yes
• Intelligent/conscious: usually
Traditional computer:
• 1 billion bytes RAM, but trillions of bytes on disk
• Element size: 10^-9 m
• Energy use: 30-90 W (CPU)
• Processing speed: 10^9 Hz
• Serial, centralized
• Generally not fault tolerant
• Learns: some
• Intelligent/conscious: generally no
5. Biological Inspiration
"My brain: it's my second favorite organ."
- Woody Allen, from the movie Sleeper
Idea: to make the computer more robust, intelligent, and able to learn, let's model our computer software (and/or hardware) after the brain.
6. Neurons in the Brain
• Although heterogeneous, at a low level the brain is composed of neurons
– A neuron receives input from other neurons (generally thousands) through its synapses
– Inputs are approximately summed
– When the input exceeds a threshold, the neuron sends an electrical spike that travels from the cell body, down the axon, to the next neuron(s)
7. Learning in the Brain
• Brains learn by:
– Altering the strength of connections between neurons
– Creating/deleting connections
• Hebb's Postulate (Hebbian Learning)
– When an axon of cell A is near enough to excite a cell B and repeatedly or persistently takes part in firing it, some growth process or metabolic change takes place in one or both cells such that A's efficiency, as one of the cells firing B, is increased.
• Long Term Potentiation (LTP)
– Cellular basis for learning and memory
– LTP is the long-lasting strengthening of the connection between two nerve cells in response to stimulation
– Discovered in many regions of the cortex
8. Perceptrons
• Initial proposal of connectionist networks
• Rosenblatt, 50's and 60's
• Essentially a linear discriminant composed of nodes, weights
[Diagram: inputs I1, I2, I3 with weights W1, W2, W3 feed a threshold unit θ that produces output O]
Activation function:
O = 1 if Σ_i w_i I_i + θ > 0, otherwise O = 0
9. Perceptron Example
[Diagram: inputs 2 and 1 with weights 0.5 and 0.3, threshold θ = -1]
2(0.5) + 1(0.3) + (-1) = 0.3 > 0, so O = 1
Learning Procedure:
Randomly assign weights (between 0-1)
Present inputs from training data
Get output O, nudge weights to give results toward our desired output T
Repeat; stop when no errors, or enough epochs completed
10. Perceptron Training
w_i(t+1) = w_i(t) + Δw_i(t)
Δw_i(t) = (T − O) I_i
Weights include the threshold. T = desired output, O = actual output.
Example: T=0, O=1, W1=0.5, W2=0.3, I1=2, I2=1, θ=-1
w_1(t+1) = 0.5 + (0−1)(2) = −1.5
w_2(t+1) = 0.3 + (0−1)(1) = −0.7
w_θ(t+1) = −1 + (0−1)(1) = −2
If we present this input again, we'd output 0 instead
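A minimal Python sketch of the perceptron and training rule above (the threshold θ is treated as a weight on a constant input of 1, as in the worked example; function and variable names are illustrative, not from the slides):

def perceptron_output(weights, theta, inputs):
    # Threshold unit: O = 1 if sum_i(w_i * I_i) + theta > 0, else 0
    total = sum(w * x for w, x in zip(weights, inputs)) + theta
    return 1 if total > 0 else 0

def train_step(weights, theta, inputs, target):
    # Rule from the slide: delta w_i = (T - O) * I_i; theta is updated with input 1
    output = perceptron_output(weights, theta, inputs)
    error = target - output
    new_weights = [w + error * x for w, x in zip(weights, inputs)]
    new_theta = theta + error * 1
    return new_weights, new_theta

# Worked example from slides 9-10: W = (0.5, 0.3), theta = -1, input (2, 1), T = 0
weights, theta = [0.5, 0.3], -1
print(perceptron_output(weights, theta, [2, 1]))   # 1, since 2(0.5) + 1(0.3) - 1 = 0.3 > 0
weights, theta = train_step(weights, theta, [2, 1], target=0)
print(weights, theta)                              # approximately [-1.5, -0.7] and -2
print(perceptron_output(weights, theta, [2, 1]))   # now 0, as the slide says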
11. How might you use a perceptron network?
• Networks like this are generally used to learn how to make classifications
• Say you have collected some data regarding the diagnosis of patients with heart disease
– Age, Sex, Chest Pain Type, Resting BPS, Cholesterol, …, Diagnosis (<50% diameter narrowing, >50% diameter narrowing)
– 67, 1, 4, 120, 229, …, 1
– 37, 1, 3, 130, 250, …, 0
– 41, 0, 2, 130, 204, …, 0
• Train the network to predict heart disease in a new patient
12. Perceptrons
• Can add a learning rate to speed up the learning process; just multiply it in with the delta computation
• Essentially a linear discriminant
• Perceptron theorem: If a linear discriminant exists that can separate the classes without error, the training procedure is guaranteed to find that line or plane.
[Figure: two classes, Class1 and Class2, separated by a linear boundary]
13. Exclusive Or (XOR) Problem
Input: 0,0 Output: 0
Input: 0,1 Output: 1
Input: 1,0 Output: 1
Input: 1,1 Output: 0
[Figure: the four XOR input points plotted on a 2D grid; no single line separates the two output classes]
XOR Problem: Not Linearly Separable!
We could however construct multiple layers of perceptrons to get around this problem. A typical multi-layered system minimizes LMS Error.
14. LMS Learning
LMS = Least Mean Square learning systems, more general than the previous perceptron learning rule. The concept is to minimize the total error, as measured over all training examples P. O is the raw output, as calculated by Σ_i w_i I_i + θ.
Distance_LMS = (1/2) Σ_P (T_P − O_P)^2
E.g. if we have two patterns and T1=1, O1=0.8, T2=0, O2=0.5 then D = (0.5)[(1−0.8)^2 + (0−0.5)^2] = 0.145
We want to minimize the LMS:
[Figure: error E plotted against a weight W, with gradient descent stepping from W(old) to W(new); C is the learning rate]
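As a quick check of the worked example, a small Python version of the LMS error (the function name is illustrative):

def lms_error(targets, outputs):
    # Distance_LMS = 1/2 * sum over patterns P of (T_P - O_P)^2
    return 0.5 * sum((t - o) ** 2 for t, o in zip(targets, outputs))

print(lms_error([1, 0], [0.8, 0.5]))   # approximately 0.145, matching the example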
15. LMS Gradient Descent
• Using LMS, we want to minimize the error. We can do this by finding the direction on the error surface that most rapidly reduces the error rate; this is finding the slope of the error function by taking the derivative. The approach is called gradient descent (similar to hill climbing).
To compute how much to change the weight for link k:
Δw_k = −c · ∂Error/∂w_k
Chain rule:
∂Error/∂w_k = (∂Error/∂O_j) · (∂O_j/∂w_k)
∂Error/∂O_j = ∂/∂O_j [ (1/2) Σ_P (T_P − O_P)^2 ] = ∂/∂O_j [ (1/2)(T_j − O_j)^2 ] = −(T_j − O_j)
We can remove the sum since we are taking the partial derivative with respect to O_j.
With O_j = f(W·I):
∂O_j/∂w_k = I_k · f'(ActivationFunction)
Putting the pieces together:
Δw_k = −c · (−(T_j − O_j)) · I_k · f'(ActivationFunction) = c · I_k · (T_j − O_j) · f'(ActivationFunction)
16. Activation Function
• To apply the LMS learning rule, also known as the delta rule, we need a differentiable activation function.
Δw_k = c · I_k · (T_j − O_j) · f'(ActivationFunction)
Old: O = 1 if Σ_i w_i I_i + θ > 0, otherwise O = 0
New: O = 1 / (1 + e^−(Σ_i w_i I_i + θ))
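A minimal sketch of one delta-rule update for a single sigmoid unit, following the formulas above; for the sigmoid, f'(net) = f(net)(1 − f(net)), and the names and learning rate value are illustrative:

import math

def sigmoid(net):
    return 1.0 / (1.0 + math.exp(-net))

def delta_rule_update(weights, theta, inputs, target, c=0.5):
    # delta w_k = c * I_k * (T - O) * f'(net), where f'(net) = O * (1 - O) for the sigmoid
    net = sum(w * x for w, x in zip(weights, inputs)) + theta
    out = sigmoid(net)
    grad = out * (1.0 - out)
    weights = [w + c * x * (target - out) * grad for w, x in zip(weights, inputs)]
    theta = theta + c * 1.0 * (target - out) * grad   # threshold treated as a bias weight
    return weights, theta, out

weights, theta, out = delta_rule_update([0.5, 0.3], -1.0, [2, 1], target=0)
print(out, weights, theta)   # the output before the update, then the nudged weights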
17. LMS vs. Limiting Threshold
• With the new sigmoidal function that is differentiable, we can apply the delta rule toward learning.
• Perceptron Method
– Forced output to 0 or 1, while LMS uses the net output
– Guaranteed to separate the classes if they are linearly separable (i.e. a zero-error solution exists)
• Otherwise it may not converge
• Gradient Descent Method:
– May oscillate and not converge
– May converge to the wrong answer
– Will converge to some minimum even if the classes are not linearly separable, unlike the earlier perceptron training method
18. Backpropagation Networks
• Attributed to Rumelhart and McClelland, late 70's
• To bypass the linear classification problem, we can construct multilayer networks. Typically we have fully connected, feedforward networks.
[Diagram: input layer I1, I2, I3 plus a constant-1 bias unit, hidden layer H1, H2 plus a constant-1 bias unit, and output layer O1, O2; weights W_i,j connect inputs to hidden units and W_j,k connect hidden units to outputs]
H(x) = 1 / (1 + e^−Σ_i w_i,x I_i)
O(x) = 1 / (1 + e^−Σ_j w_j,x H_j)
19. Backprop - Learning
Learning Procedure:
Randomly assign weights (between 0-1)
Present inputs from training data, propagate to outputs
Compute outputs O, adjust weights according to the delta rule, backpropagating the errors. The weights will be nudged closer so that the network learns to give the desired output.
Repeat; stop when no errors, or enough epochs completed
20. Backprop - Modifying Weights
We had computed:
Δw_k = c · I_k · (T_j − O_j) · f'(ActivationFunction), with f(sum) = 1 / (1 + e^−sum)
Since f'(sum) = f(sum)(1 − f(sum)), this is:
Δw_k = c · I_k · (T_j − O_j) · f(sum)(1 − f(sum))
[Diagram: layers I → H → O connected by weights W_i,j and W_j,k]
For an Output unit k, f(sum) = O_k. For the output units, this is:
Δw_j,k = c · H_j · (T_k − O_k) · O_k(1 − O_k)
For the Hidden units (skipping some math), this is:
Δw_i,j = c · I_i · H_j(1 − H_j) · Σ_k (T_k − O_k) O_k(1 − O_k) w_j,k
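A compact sketch of such a network: one hidden layer of sigmoid units trained with the update rules above on the XOR data from slide 13. The layer sizes, learning rate, epoch count, and seed are illustrative choices, not from the slides:

import math
import random

def sigmoid(net):
    return 1.0 / (1.0 + math.exp(-net))

# 2 inputs -> 2 hidden sigmoid units -> 1 sigmoid output; the extra "1" inputs are the bias units
random.seed(0)
w_ih = [[random.uniform(-1, 1) for _ in range(3)] for _ in range(2)]   # input->hidden weights
w_ho = [random.uniform(-1, 1) for _ in range(3)]                       # hidden->output weights
data = [([0, 0], 0), ([0, 1], 1), ([1, 0], 1), ([1, 1], 0)]            # XOR (slide 13)
c = 0.5                                                                 # learning rate

for epoch in range(20000):
    for inputs, target in data:
        x = inputs + [1]
        h = [sigmoid(sum(w * xi for w, xi in zip(ws, x))) for ws in w_ih]
        hb = h + [1]
        o = sigmoid(sum(w * hi for w, hi in zip(w_ho, hb)))
        delta_o = (target - o) * o * (1 - o)                   # (T - O) O (1 - O)
        # Hidden deltas: H_j (1 - H_j) * sum over outputs of delta_k * w_j,k (one output here)
        delta_h = [h[j] * (1 - h[j]) * delta_o * w_ho[j] for j in range(2)]
        w_ho = [w + c * delta_o * hi for w, hi in zip(w_ho, hb)]
        for j in range(2):
            w_ih[j] = [w + c * delta_h[j] * xi for w, xi in zip(w_ih[j], x)]

# Outputs should approach 0, 1, 1, 0; re-run with another seed if it settles in a local minimum
for inputs, target in data:
    x = inputs + [1]
    hb = [sigmoid(sum(w * xi for w, xi in zip(ws, x))) for ws in w_ih] + [1]
    print(inputs, round(sigmoid(sum(w * hi for w, hi in zip(w_ho, hb))), 2), target)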
21. Backprop
• Very powerful - with enough hidden units, it can learn to approximate any function!
• Has the usual problem of Generalization vs. Memorization. With too many units, we will tend to memorize the input and not generalize well. Some schemes exist to "prune" the neural network.
• Networks require extensive training and have many parameters to fiddle with. Can be extremely slow to train. May also fall into local minima.
• Inherently parallel algorithm, ideal for multiprocessor hardware.
• Despite the cons, a very powerful algorithm that has seen widespread successful deployment.
23. Unsupervised Learning
• We just discussed a form of supervised learning
– A "teacher" tells the network what the correct output is based on the input until the network learns the target concept
• We can also train networks where there is no teacher. This is called unsupervised learning. The network learns a prototype based on the distribution of patterns in the training data. Such networks allow us to:
– Discover underlying structure of the data
– Encode or compress the data
– Transform the data
24. Unsupervised Learning – Hopfield Networks
• A Hopfield network is a type of content-addressable memory
– Non-linear system with attractor points that represent concepts
– Given a fuzzy input, the system converges to the nearest attractor
• Possible to have "spurious" attractors that are a blend of multiple stored patterns
• Also possible to have chaotic patterns that never converge
25. Standard Binary Hopfield Network
• Recurrent; every unit is connected to every other unit
• Weights connecting units are symmetrical
– w_ij = w_ji
• If the weighted sum of the inputs exceeds a threshold, its output is 1, otherwise its output is -1
• Units update themselves asynchronously as their inputs change
[Diagram: four fully connected units A, B, C, D with weights w_AB, w_AC, w_AD, w_BC, w_BD, w_CD]
26. Hopfield Memories
• Setting the weights:
– A pattern is a setting of on or off for each unit
– Given a set of Q patterns to store
• For every weight connecting units i and j:
w_ij = Σ_{p=1..Q} x_i^p x_j^p
• This is a form of a Hebbian rule which makes the weight strength proportional to the product of the firing rates of the two interconnected units
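A minimal sketch of this weight-setting rule together with the asynchronous ±1 update from the previous slide (patterns are assumed to use +1/-1 values; names are illustrative):

import random

def hopfield_weights(patterns):
    # w_ij = sum over stored patterns p of x_i^p * x_j^p (no self-connections)
    n = len(patterns[0])
    w = [[0.0] * n for _ in range(n)]
    for p in patterns:
        for i in range(n):
            for j in range(n):
                if i != j:
                    w[i][j] += p[i] * p[j]
    return w

def recall(w, state, steps=200):
    # Asynchronous updates: a unit outputs +1 if its weighted input exceeds 0, otherwise -1
    state = list(state)
    for _ in range(steps):
        i = random.randrange(len(state))
        total = sum(w[i][j] * state[j] for j in range(len(state)))
        state[i] = 1 if total > 0 else -1
    return state

stored = [1, -1, 1, -1, 1, -1]
w = hopfield_weights([stored])
noisy = [1, -1, 1, -1, -1, -1]    # a "fuzzy" input: one unit flipped
print(recall(w, noisy))           # should settle back into the stored pattern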
27. Hopfield Network Demo
• http://www.cbu.edu/~pong/ai/hopfield/hopfieldapplet.html
• Properties
– Settles into a minimal energy state
– Storage capacity low, only 13% of the number of units
– Can retrieve information even in the presence of noisy data, similar to associative memory of humans
28. Unsupervised Learning – Self Organizing Maps
• Self-organizing maps (SOMs) are a data visualization technique invented by Professor Teuvo Kohonen
– Also called Kohonen Networks, Competitive Learning, Winner-Take-All Learning
– Generally reduces the dimensions of data through the use of self-organizing neural networks
– Useful for data visualization; humans cannot visualize high dimensional data, so this is often a useful technique to make sense of large data sets
29. Basic "Winner Take All" Network
• Two layer network
– Input units, output units; each input unit is connected to each output unit
[Diagram: input layer I1, I2, I3 fully connected by weights W_i,j to output layer O1, O2]
30. Basic Algorithm
– Initialize Map (randomly assign weights)
– Loop over training examples
• Assign input unit values according to the values in the current example
• Find the "winner", i.e. the output unit that most closely matches the input units, using some distance metric: for all output units j = 1 to m and input units i = 1 to n, find the j that minimizes Σ_{i=1..n} (W_ij − I_i)^2
• Modify weights on the winner to more closely match the input: ΔW^(t+1) = c (X_i^t − W^t), where c is a small positive learning constant that usually decreases as the learning proceeds
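A small sketch of this winner-take-all procedure, squared-distance winner selection plus the weight nudge above, on illustrative two-dimensional data:

import random

def winner(weights, example):
    # Index of the output unit minimizing sum_i (W_ij - I_i)^2
    return min(range(len(weights)),
               key=lambda j: sum((w - x) ** 2 for w, x in zip(weights[j], example)))

def train(examples, n_outputs, epochs=50, c=0.5):
    n_inputs = len(examples[0])
    weights = [[random.random() for _ in range(n_inputs)] for _ in range(n_outputs)]
    for epoch in range(epochs):
        for x in examples:
            j = winner(weights, x)
            # Move only the winner toward the input: delta W = c * (X - W)
            weights[j] = [w + c * (xi - w) for w, xi in zip(weights[j], x)]
        c *= 0.95   # the learning constant decreases as learning proceeds
    return weights

data = [[0.1, 0.2], [0.15, 0.1], [0.9, 0.85], [0.8, 0.9]]   # two loose clusters
print(train(data, n_outputs=2))   # each prototype typically ends up near one cluster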
31. Result of Algorithm
• Initially, some output nodes will randomly be a little closer to some particular type of input
• These nodes become "winners" and the weights move them even closer to the inputs
• Over time, nodes in the output become representative prototypes for examples in the input
• Note there is no supervised training here
• Classification:
– Given a new input, the class is the output node that is the winner
32. Typical Usage: 2D Feature Map
• In typical usage the output nodes form a 2D "map" organized in a grid-like fashion, and we update weights in a neighborhood around the winner
[Diagram: input layer I1, I2, I3 fully connected to a 5x5 grid of output nodes O11 through O55]
33. Modified Algorithm
– Initialize Map (randomly assign weights)
– Loop over training examples
• Assign input unit values according to the values in the current example
• Find the "winner", i.e. the output unit that most closely matches the input units, using some distance metric (as before)
• Modify weights on the winner to more closely match the input
• Modify weights in a neighborhood around the winner so the neighbors on the 2D map also become closer to the input
– Over time this will tend to cluster similar items closer on the map
34. Updating the Neighborhood
• Node O44 is the winner
– Color indicates the scaling used to update neighbors: c = 1 at the winner, c = 0.75 for near neighbors, c = 0.5 farther out
[Diagram: the 5x5 output grid with the update ΔW^(t+1) = c (X_i^t − W^t) applied with decreasing c around O44]
• Consider if O42 is the winner for some other input; the two winners "fight" over claiming O43, O33, O53
35. Selecting the Neighborhood
• Typically, a "Sombrero Function" or Gaussian function is used
• Neighborhood size usually decreases over time to allow initial "jockeying for position" and then "fine-tuning" as the algorithm proceeds
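A sketch of one neighborhood update on the 2D map, using a Gaussian scaling of the learning constant around the winner as described above; the grid size matches the earlier 5x5 example, while sigma and the example values are illustrative:

import math
import random

def neighborhood_update(weights, winner_rc, example, c=0.5, sigma=1.0, rows=5, cols=5):
    # Scale the winner-take-all update by a Gaussian of grid distance from the winning node
    wr, wc = winner_rc
    for r in range(rows):
        for col in range(cols):
            j = r * cols + col
            d2 = (r - wr) ** 2 + (col - wc) ** 2
            scale = c * math.exp(-d2 / (2 * sigma ** 2))   # Gaussian neighborhood
            weights[j] = [w + scale * (xi - w) for w, xi in zip(weights[j], example)]

# 5x5 map (O11..O55) of 2-dimensional prototypes; node O44 is row 3, column 3 (0-indexed)
weights = [[random.random(), random.random()] for _ in range(25)]
neighborhood_update(weights, winner_rc=(3, 3), example=[0.2, 0.9])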
39. SOM for Classification
• A generated map can also be used for classification
• A human can assign a class to a data point, or use the strongest weight as the prototype for the data point
• For a new test case, calculate the winning node and classify it as the class it is closest to
• Handwriting recognition example: http://fbim.fh-regensburg.de/~saj39122/begrolu/kohonen.html
40. Psychological and Biological Considerations of Neural Networks
• Psychological
– Neural network models learn and exhibit some behavior similar to humans, based loosely on brains
– Create their own algorithms instead of being explicitly programmed
– Operate under noisy data
– Fault tolerant and graceful degradation
– Knowledge is distributed, yet still some localization
• Lashley's search for engrams
• Biological
– Learning in the visual cortex shortly after birth seems to correlate with the pattern discrimination that emerges from Kohonen Networks
– Criticisms of the mechanism to update weights; mathematically driven; feedforward supervised network unrealistic
41. Connectionism
• What's hard for neural networks? Activities beyond recognition, e.g.:
– Variable binding
– Recursion
– Reflection
– Structured representations
• Connectionist and Symbolic Models
– The Central Paradox of Cognition (Smolensky et al., 1992):
– "Formal theories of logical reasoning, grammar, and other higher mental faculties compel us to think of the mind as a machine for rule-based manipulation of highly structured arrays of symbols. What we know of the brain compels us to think of human information processing in terms of manipulation of a large unstructured set of numbers, the activity levels of interconnected neurons. Finally, the full richness of human behavior, both in everyday environments and in the controlled environments of the psychological laboratory, seems to defy rule-based description, displaying strong sensitivity to subtle statistical factors in experience, as well as to structural properties of information. To solve the Central Paradox of Cognition is to resolve these contradictions with a unified theory of the organization of the mind, of the brain, of behavior, and of the environment."
42. Possible Relationships?
• Symbolic systems implemented via connectionism
– Possible to create hierarchies of networks with subnetworks to implement symbolic systems
• Hybrid model
– System consists of two separate components; low-level tasks via connectionism, high-level tasks via symbols
43. Proposed Hierarchical Model
• Jeff Hawkins
• Founder: Palm Computing, Handspring
• Deep interest in the brain all his life
• Book: “On Intelligence”
– Variety of neuroscience research as input
– Includes his own ideas, theories, guesses
– Increasingly accepted view of the brain
44. The Cortex
• Hawkins’s point of interest in the brain
– “Where the magic happens”
• Hierarchically-arranged in regions
• Communication up the hierarchy
– Regions classify patterns of their inputs
– Regions output a ‘named’ pattern up the hierarchy
• Communication down the hierarchy
– A high-level region has made a prediction
– Alerts lower-level regions what to expect
45. Hawkins Quotes
"The human cortex is particularly large and therefore has a massive memory capacity. It is constantly predicting what you will see, hear and feel, mostly in ways you are unconscious of. These predictions are our thoughts, and when combined with sensory inputs, they are our perceptions. I call this view of the brain the memory-prediction framework of intelligence."
46. Hawkins Quotes
"Your brain constantly makes predictions about the very fabric of the world we live in, and it does so in a parallel fashion. It will just as readily detect an odd texture, a misshapen nose, or an unusual motion. It isn't obvious how pervasive these mostly unconscious predictions are, which is perhaps why we missed their importance."
48. Hawkins Quotes
"Suppose when you are out, I sneak over to your home and change something about your door. It could be almost anything. I could move the knob over by an inch, change a round knob into a thumb latch, or turn it from brass to chrome…. When you come home that day and attempt to open the door, you will quickly detect that something is wrong."
49. Prediction
• Prediction means that the neurons involved in sensing your door become active in advance of them actually receiving sensory input.
– When the sensory input does arrive, it is compared with what is expected.
– Two way communication; classification up the hierarchy, prediction down the hierarchy
50. Prediction
• Prediction is not limited to patterns of low-level sensory information like hearing and seeing
• Mountcastle's principle: we have lots of different neurons, but they basically do the same thing (particularly in the neocortex)
– "What is true of low-level sensory areas must be true for all cortical areas. The human brain is more intelligent than that of other animals because it can make predictions about more abstract kinds of patterns and longer temporal pattern sequences."
51. Visual Hierarchies
• Lowest visual level inputs pixels
• Second level recognizes edges, lines, etc. from known patterns of pixels
• Third level recognizes shapes from known patterns of edges, lines, etc.
• Fourth level recognizes objects from known patterns of shapes
53. Not there yet…
• Many issues remain to be addressed by Hawkins' model
– Missing lots of details on how his model could be implemented in a computer
– Creativity?
– Evolution?
– Planning?
– Rest of the brain, not just neocortex?