DR. ASHER IDAN, 0505288739
FROM WEB 2.0 TO WEB 3.0
Published in: Technology, Education

From Social Networks to Artificial Neural Networks: how neuromorphic computation will solve the big problems of Big Data and the Internet of Things, in the age of post-programming.

  1. 1. DR. ASHER IDAN, 0505288739. FROM WEB 2.0 TO WEB 3.0
  2. 2. THE THREE BASIC LAWS OF SOCIAL NETWORKS
  3. 3. Users vs. links: 2 users → 1 link; 3 → 3; 4 → 6; 10 → 50; N → N*N/2. 1. Metcalfe's Law: The value of a telecommunications network is proportional to the square of the number of connected users of the system (n²).
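The users-vs.-links relationship above can be sketched numerically (a minimal illustration, not from the slides; note that the slide's N*N/2 is the common approximation of the exact pair count n(n-1)/2):

```python
def links(n: int) -> int:
    # Exact number of distinct links among n users: n*(n-1)/2.
    # (The slide's N*N/2 is an approximation of this count.)
    return n * (n - 1) // 2

def metcalfe_value(n: int) -> int:
    # Metcalfe's Law: network value grows in proportion to n^2.
    return n * n

for n in (2, 3, 4, 10):
    print(f"{n} users -> {links(n)} links, value ~ {metcalfe_value(n)}")
```

For 10 users the exact count is 45 links, while the slide's N*N/2 shorthand gives 50; the quadratic growth, which is Metcalfe's point, is the same either way.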
  4. 4. 2. Shirky's Law: The transaction costs and collaboration costs between individuals have become much lower than organizational costs.
  5. 5. 3. Axelrod's Law: The trust between users is proportional to the number and frequency of the interactions between them.
  6. 6. Web 3.0: Big Data and Neural Networks
  7. 7. Deep-learning software attempts to mimic the activity in layers of neurons in the neocortex, the wrinkly 80 percent of the brain where thinking occurs. The software learns, in a very real sense, to recognize patterns in digital representations of sounds, images, and other data.
  8. 8. Google has been working on ways to use machine learning and deep neural networks to solve some of the toughest problems Google has, such as: 1. Natural language processing, 2. Speech recognition, 3. Computer vision, 4. Ranking.
  9. 9. A program maps out a set of virtual neurons and then assigns random numerical values, or “weights,” to connections between them. These weights determine how each simulated neuron responds—with a mathematical output between 0 and 1—to a digitized feature such as an edge or a shade of blue in an image, or a particular energy level at one frequency in a phoneme, the individual unit of sound in spoken syllables
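The mechanism described here can be sketched minimally (the `VirtualNeuron` class and its parameters are illustrative, not any particular system's API):

```python
import math
import random

random.seed(0)  # reproducible "random" weights

def sigmoid(x: float) -> float:
    # Squashes any input into (0, 1), the output range the slide describes.
    return 1.0 / (1.0 + math.exp(-x))

class VirtualNeuron:
    def __init__(self, n_inputs: int):
        # Connections start with random numerical values ("weights").
        self.weights = [random.uniform(-1.0, 1.0) for _ in range(n_inputs)]

    def respond(self, features):
        # Weighted sum of digitized features (an edge, a shade of blue,
        # an energy level in a phoneme), squashed to a value in (0, 1).
        return sigmoid(sum(w * f for w, f in zip(self.weights, features)))

neuron = VirtualNeuron(3)
output = neuron.respond([0.2, 0.7, 0.1])
```

The weighted sum plus squashing function is the standard simulated-neuron response the slide is describing.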
  10. 10. Programmers would train a neural network to detect an object or phoneme by blitzing the network with digitized versions of images containing those objects or sound waves containing those phonemes. If the network didn’t accurately recognize a particular pattern, an algorithm would adjust the weights. The eventual goal of this training was to get the network to consistently recognize the patterns in speech or sets of images that we humans know as, say, the phoneme “d” or the image of a dog. This is much the same way a child learns what a dog is by noticing the details of head shape, behavior, and the like in furry, barking animals that other people call dogs.
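A toy version of that training loop, assuming a single sigmoid neuron and a trivially separable pattern (logical OR standing in for "phoneme vs. not-phoneme"); the delta-rule weight update here is one simple choice, not the deck's specific algorithm:

```python
import math
import random

def sigmoid(x: float) -> float:
    return 1.0 / (1.0 + math.exp(-x))

random.seed(1)
weights = [random.uniform(-1.0, 1.0), random.uniform(-1.0, 1.0)]
bias = 0.0
learning_rate = 0.5

# Toy labeled data: inputs and the pattern label we want recognized (OR).
examples = [([0, 0], 0), ([0, 1], 1), ([1, 0], 1), ([1, 1], 1)]

for _ in range(2000):
    for inputs, target in examples:
        prediction = sigmoid(sum(w * x for w, x in zip(weights, inputs)) + bias)
        error = target - prediction
        # If the network got the pattern wrong, nudge the weights toward it.
        for i, x in enumerate(inputs):
            weights[i] += learning_rate * error * x
        bias += learning_rate * error

predictions = [
    round(sigmoid(sum(w * x for w, x in zip(weights, inputs)) + bias))
    for inputs, _ in examples
]
```

After training, the rounded outputs match the target pattern on all four examples, which is the "consistently recognize the pattern" goal the slide describes, in miniature.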
  11. 11. Neural nets (networks of functions that behave like neurons in the human brain) have been around for a long time, since the late '60s, but they're coming back into vogue for several reasons. 1. Neural nets, especially deep ones, build features that describe the data well automatically, without humans having to get involved. 2. There's a lot more computational power available. 3. There's a lot more labeled data. 4. People have figured out how to train very deep networks. Until four or five years ago, it was impossible to get more than about a three-layer network to train well because, since each computer neuron is a non-linear function, the output gets more and more irregular as you go deeper. The deeper the network, the more difficult the optimization process. But people have now figured out ways around that: you can pre-train on the first layer, do your optimization there, get it into a good state, and then add a layer. You can do it layer by layer now.
  12. 12. Neuromorphic Computing 1. Low power consumption (human brains use about 20 watts, whereas the supercomputers currently used to try to simulate them need megawatts); 2. Fault tolerance (losing just one transistor can wreck a microprocessor, but brains lose neurons all the time); 3. No need to be programmed (brains learn and change spontaneously as they interact with the world, instead of following the fixed paths and branches of a predetermined algorithm).
  13. 13. Money is starting to be thrown at the question. 1. The European Human Brain Project has a €1 billion ($1.3 billion) budget over a decade. http://www.humanbrainproject.eu/ 2. The American BRAIN initiative's first-year budget is $100m. http://www.nih.gov/science/brain/index.htm
  14. 14. Two of the most advanced neuromorphic programmes are being conducted under the auspices of the Human Brain Project (HBP): 1. One, called SpiNNaker, is a digital computer—ie, the sort familiar in the everyday world, which processes information as a series of ones and zeros represented by the presence or absence of a voltage. It thus has at its core a network of bespoke microprocessors. To test the idea they built, two years ago, a version that had a mere 18 processors. They are now working on a bigger one. Much bigger. Their 1m-processor machine is due for completion in 2014. With that number of chips, Dr Furber reckons, he will be able to model about 1% of the human brain. 2. The other machine, Spikey, harks back to an earlier age of computing. Several of the first computers were analogue machines. These represent numbers as points on a continuously varying voltage range—so 0.5 volts would have a different meaning to 1 volt, and 1.5 volts would have a different meaning again. In part, Spikey works like that. Analogue computers lost out to digital ones because the lack of ambiguity a digital system brings makes errors less likely. But Dr Meier thinks that because they operate in a way closer to some
  15. 15. Boeing and General Motors Narayan Srinivasa, the project’s leader, says his neuromorphic chip requires not a single line of programming code to function. Instead, it learns by doing, in the way that real brains do. An important property of a real brain is that it is what is referred to as a small-world network. Each neuron within it has tens of thousands of synaptic connections with other neurons. This means that, even though a human brain contains about 86 billion neurons, each is within two or three connections of all the others via myriad potential routes.
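The small-world claim above (every neuron within two or three connections of all the others) can be checked on a toy random graph; the sizes here are illustrative, since real neurons have tens of thousands of synapses, not fifty:

```python
import random
from collections import deque

random.seed(2)

# Hypothetical network: 1,000 "neurons", each wired to ~50 random partners.
N, K = 1000, 50
adjacency = [set() for _ in range(N)]
for u in range(N):
    for v in random.sample(range(N), K):
        if v != u:
            adjacency[u].add(v)
            adjacency[v].add(u)

def farthest_hop_count(source: int) -> int:
    # Breadth-first search: hops to the most distant reachable neuron.
    distance = {source: 0}
    queue = deque([source])
    while queue:
        u = queue.popleft()
        for v in adjacency[u]:
            if v not in distance:
                distance[v] = distance[u] + 1
                queue.append(v)
    return max(distance.values())

# Sample a few sources; in such a densely random-wired graph every
# neuron turns out to be within a couple of hops of every other.
worst = max(farthest_hop_count(s) for s in range(0, N, 100))
```

With random wiring, reachability grows multiplicatively per hop (roughly degree^hops nodes), which is why even 86 billion neurons can sit only a few connections apart.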
  16. 16. The other SyNAPSE project is run by Dharmendra Modha at IBM’s Almaden laboratory in San Jose. In collaboration with four American universities (Columbia, Cornell, the University of California, Merced and the University of Wisconsin-Madison), he and his team have built a prototype neuromorphic computer that has 256 “integrate-and-fire” neurons—so called because they add up (ie, integrate) their inputs until they reach a threshold, then spit out a signal and reset themselves. In this they are like the neurons in Spikey, though the electronic details are different because a digital memory is used instead of capacitors to record the incoming signals.
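The "integrate-and-fire" behaviour described here is easy to sketch (a schematic model, not IBM's actual chip logic):

```python
class IntegrateAndFireNeuron:
    """Adds up (integrates) its inputs until a threshold is reached,
    then emits a spike and resets itself."""

    def __init__(self, threshold: float = 1.0):
        self.threshold = threshold
        self.potential = 0.0

    def receive(self, signal: float) -> bool:
        self.potential += signal            # integrate the incoming signal
        if self.potential >= self.threshold:
            self.potential = 0.0            # reset after firing
            return True                     # spike
        return False

neuron = IntegrateAndFireNeuron(threshold=1.0)
spikes = [neuron.receive(0.3) for _ in range(10)]
# Fires on every 4th input of strength 0.3, since 0.3 * 4 >= 1.0.
```

A real neuromorphic implementation would hold the accumulating potential in digital memory (as in Modha's chip) or in a capacitor (as in Spikey); the integrate, threshold, spike, reset cycle is the same.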
  17. 17. From Web 2.0 to Webs 3.0 and 4.0: Swarms of things
  18. 18. Quantum
  19. 19. The Molecular Level: Web 4.0 Nanotech + Biotech + Neurotech
  20. 20. Nanotech 1: Pizza iPhone Video
  21. 21. Nanotech 2: From PC to PM (Personal Manufacturing), Baby Video, 5th Element
  22. 22. Neurotech: Cyberkinetics' BrainGate brain-computer interface consists of a computer chip that is a 2-mm-by-2-mm, 100-electrode array. Surgeons attach the array, like Velcro, to neurons in the motor cortex. The electrodes send information from 50 to 150 neurons at once, traveling through a fiber-optic cable to a device about the size of a VHS tape (seen on the back of the wheelchair) that digitizes the neuronal signals. Another cable from the digitizer runs to a computer that translates the signal.
  23. 23. Biotech
  24. 24. Web 5.0 Microtubules are protein structures found within cells. They have a diameter of ~24 nm and varying lengths, from several micrometers to possibly millimeters in the axons of nerve cells. Roger Penrose has proposed a theory of the quantum mind in which the hollow cores of microtubules inside neurons form an environment capable of supporting quantum-scale information processing and conscious awareness.
  25. 25. From Disk on Key to Hospital on Key An Integrated Digital Microfluidic Lab-On-A-Chip for Clinical Diagnostics on Human Physiological Fluids
