Singularity

Slide Notes
  • Paradigm-shift milestones (years before present, log scale): cells (10^9), body parts (10^7.8), mammals (10^7), primates (10^6.7), humanoids (10^6), Homo sapiens (10^5), stone tools (10^4), iron (10^3), 10^2.8, printing (10^2.5), TV (10^1.8), computers (10^1.51), 10^1
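
A small sketch (mine, not from the deck) of what makes this list "accelerating": the ratios between successive "years ago" values stay within roughly an order of magnitude, so the milestones land at fairly even spacing on a logarithmic time axis, i.e. each new paradigm arrives far faster than the last.

```python
# Named milestones from the note above, in years before present.
milestones = [
    ("cells", 1e9), ("body parts", 10**7.8), ("mammals", 1e7),
    ("primates", 10**6.7), ("humanoids", 1e6), ("Homo sapiens", 1e5),
    ("stone tools", 1e4), ("iron", 1e3), ("printing", 10**2.5),
    ("TV", 10**1.8), ("computers", 10**1.51),
]

# Each successive paradigm arrives several times faster than the last.
for (name_a, t_a), (name_b, t_b) in zip(milestones, milestones[1:]):
    print(f"{name_a:>12} -> {name_b:<12} years-ago ratio: {t_a / t_b:4.1f}")
```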
  • Time from introduction to mass use keeps shrinking: electricity (1870) 45 y, telephone (1890) 35 y, radio (1898) 28 y, television (1930) 25 y, PC (1979) 18 y, mobile phone (1984) 13 y, Internet (1991) 3 y
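
A quick sketch (my own, using only the numbers in this note) that fits the trend: a log-linear least-squares fit of adoption time against introduction year, printing the implied halving period.

```python
import math

# (introduction year, years until mass use), from the note above.
data = [
    (1870, 45),  # electricity
    (1890, 35),  # telephone
    (1898, 28),  # radio
    (1930, 25),  # television
    (1979, 18),  # PC
    (1984, 13),  # mobile phone
    (1991, 3),   # Internet
]

xs = [year - 1870 for year, _ in data]
ys = [math.log(t) for _, t in data]

# Ordinary least squares on ln(adoption_time) vs. year.
n = len(data)
mx, my = sum(xs) / n, sum(ys) / n
slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)

halving = math.log(2) / -slope
print(f"adoption time halves roughly every {halving:.0f} years")
```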

Transcript

  • 1. Singularity: What is it? Will a positive Singularity lead to a utopia?
  • 2. What Is It? How Can It Occur?
      - The notion of some radical technological shift beyond which the world is unimaginable
      - Visions of the future:
          - Just the continuation of more and better gadgets and faster computers
          - Accelerating Returns: the rate of change, of innovation, is accelerating; even the rate of its acceleration is accelerating
          - Event Horizon
          - Intelligence Explosion: the advent of greater-than-human intelligence
          - Apocalypticism: geek religion?
  • 3. Models of Singularity
      - Advent of greater-than-human intelligence
          - AGI
          - Brain emulation
          - Augmenting human intelligence
      - Accelerating change
          - Which also makes brain emulation and workable AI nearly inevitable
          - But some say the change itself will grow beyond our ability to understand or predict
  • 4. Accelerating Returns
      - Examination of multiple events and capabilities shows exponential increase
      - All are driven by (and contribute to) intelligence:
          - Rate of inventions and innovation
          - Adoption rates of innovation
          - Increase in computational ability
          - Increase in communication ability and use
          - Time between paradigm shifts
          - Computational power per unit cost
  • 5. Accelerating Returns
      - Core Claim: Technological change feeds on itself and therefore accelerates. Our past or current rate of change is not a good predictor of the future rate of change.
      - Strong Claim: Technological change follows smooth curves, typically exponential. Therefore we can predict with some precision when various changes will arrive and cross key thresholds, like the creation of AI.
      - Advocates: Ray Kurzweil, Alvin Toffler, John Smart
  • 6. Paradigm Shift Time
  • 7. DNA Sequencing Costs
  • 8. Mass Use of Inventions
  • 9. Moore’s Law
  • 10. Data Mass Storage
  • 11. ISP Cost-Performance
  • 12. Random Access Memory
  • 13. Exponential Growth - Computing
  • 14. Computational Increase Predictions
      - Human brain capability (2 * 10^16 cps) for $1000: around 2023
      - Human brain capability (2 * 10^16 cps) for one cent: around 2037
      - Human race capability (2 * 10^28 cps) for $1000: around 2049
      - Human race capability (2 * 10^28 cps) for one cent: around 2059
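
As a quick consistency check (my own arithmetic, using only the numbers on slide 14), these predictions imply a price-performance doubling time well under a year:

```python
import math

def doubling_time(improvement_factor, years):
    """Years per doubling needed to achieve `improvement_factor` in `years`."""
    return years / math.log2(improvement_factor)

# $1000 of hardware: brain level (2e16 cps) in 2023 -> human-race level
# (2e28 cps) in 2049, a 1e12x improvement.
print(f"{doubling_time(2e28 / 2e16, 2049 - 2023):.2f} y/doubling")  # ~0.65

# Brain level: $1000 in 2023 -> one cent in 2037, a 1e5x cost drop.
print(f"{doubling_time(1000 / 0.01, 2037 - 2023):.2f} y/doubling")  # ~0.84
```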
  • 15. Computational Acceleration
      - Blue Gene will have 5% of the computational power needed to match a human brain
      - In the next five years or so, supercomputers will match the computing capacity of the human brain
      - By about 2030 such a machine will cost around $1
      - What happens when you put 1000 of them to work 24x7 on, say, the AGI problem, MNT, or ending aging?
  • 16. Limits of Human Intelligence
      - 20-500 Hz neuron firing rate
          - Lately there is evidence that the pattern of firings is the important information unit
      - Very few aspects of a problem may be held in conscious awareness at one time
      - Very slow learning rates
      - Faulty memory
      - Limited communication ability with other minds
      - Squishy, deteriorating mind implementation
      - Poor scalability
      - Poor transfer of knowledge
      - Very slow replication
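
For context, the 2 * 10^16 cps figure used on slide 14 follows from a common back-of-envelope estimate in Kurzweil's style; the three factor values below are his commonly cited assumptions, not numbers stated on this slide:

```python
# Back-of-envelope: neurons x connections per neuron x calculation rate.
neurons = 1e11            # neurons in a human brain
connections_per = 1e3     # synaptic connections per neuron (order of magnitude)
calc_rate = 200           # calculations/second per connection (~firing-rate scale)
print(f"{neurons * connections_per * calc_rate:.0e} cps")  # 2e+16
```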
  • 17. Past Intelligence Augmentation
      - Communication, starting with speech
      - Writing: an early augmentation of human intelligence
      - Stable cultures respectful of knowledge and encouraging of innovation
      - Dissemination of learning
      - Science and mathematics
          - A reality-respecting culture
      - Computation: gathering and doing more with information, ever faster
      - Device convergence
  • 18. Future Intelligence Augmentation
      - Ubiquitous computing
      - Exo-cortex
      - Wearable or embedded computing / communication
      - Brain-computer interfaces
      - Upgrading the brain with technology
      - Bio-chemical upgrading
      - Uploading
  • 19. Event Horizon
      - Core Claim: Soon we will not be the greatest intelligence on the planet. Then the changes in our world will no longer be in our control, or understandable and predictable by us.
      - Strong Claim: To understand what a superintelligence would do, you would have to be that intelligent; thus the future after that point is totally unpredictable.
      - Advocates: Vernor Vinge
  • 20. Greater Than Human Intelligence
      "Let an ultraintelligent machine be defined as a machine that can far surpass all the intellectual activities of any man however clever. Since the design of machines is one of these intellectual activities, an ultraintelligent machine could design even better machines; there would then unquestionably be an intelligence explosion, and the intelligence of man would be left far behind. Thus the first ultraintelligent machine is the last invention that man need ever make."
      (I. J. Good, 1965)
  • 21. Intelligence Explosion
      - Core Claim: Creating minds significantly smarter than humans closes the loop, creating a positive feedback cycle.
      - Strong Claim: The positive feedback cycle goes FOOM, each improvement triggering the next; superintelligence develops rapidly, up to the limits of the laws of physics.
      - Advocates: I. J. Good, Eliezer Yudkowsky
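
The difference between the core and strong claims can be made concrete with a toy model (entirely my construction; the slides propose no equations): let capability improve each step by an amount proportional to capability raised to some power alpha. Super-linear returns (alpha > 1) produce the FOOM pattern; sub-linear returns do not.

```python
# Toy model only (not from the slides): capability x improves each step
# by an amount proportional to x**alpha.
#   alpha > 1: super-linear returns -> faster-than-exponential "FOOM"
#   alpha = 1: ordinary exponential growth
#   alpha < 1: growth continues but decelerates in relative terms
def improve(alpha, steps=30, x=1.0, c=0.1):
    for _ in range(steps):
        x += c * x ** alpha
    return x

print(improve(1.5))   # ~6e15: runaway within 30 steps
print(improve(1.0))   # ~17: steady exponential (1.1**30)
print(improve(0.5))   # ~6: slow, roughly quadratic-in-time growth
```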
  • 22. What Does Singularity Mean for Us? Short Range
      - Employment / work
      - Cures for all diseases, including aging
      - Near-limitless abundance
      - Economic effects
      - Political effects
      - Backlash
      - Sociological effects
  • 23. Impact of Human-Equivalent Machine Intelligence
      - Assume:
          - Human intelligence without biological limitations and distractions
          - Can focus and work 24x7
          - Ultimately very cheap to own, and easy to replicate any unit's knowledge
      - Results:
          - Almost all human labor, especially intellectual labor, becomes economically superfluous
          - Progress increases substantially when entire armies can be deployed on projects
      - So:
          - Economics must change a great deal for all humans to continue to partake in the results
          - Our view of ourselves must change
          - Maybe we join the AIs and become uploads
  • 24. > Human Intelligence Impact
      - Much that goes on is totally opaque to even augmented humans
      - Humans, even transhumans, are not in charge; they are second-class citizens
      - Are they respected? Kept around?
          - Our history with other species is not encouraging
          - Are humans irrelevant to AGIs?
          - Is there an objective ethical argument for treating lesser intelligences well?
          - Is going extinct to usher in a greater intelligence "OK"?
  • 25. What Does Singularity Mean for Us? Long Term
      - Do we survive?
      - If we do not, are we happy to have produced wonderful "mind children," as Moravec suggests?
      - Are we recognizable as human?
      - Does it matter?
      - What of those who do not wish to change?
      - An unlimited future...
      - Unchecked or unwise recursively improving AI "goos" the universe
  • 26. What Could Stop Singularity?
      - It turns out to be unnoticeable and no big deal
      - Harsh conditions slow, halt, or reverse progress
      - Unexpected limits and difficulties
  • 27. Harsh Things Happen
      - Economic disaster
          - Some believe this will take a decade or two to clean up
      - Energy disaster
          - Easily a decade to largely replace oil
      - Existential risk
      - Major war
  • 28. What Would Utopia Take?
      - We become super-bright, compassionate, and enlightened
      - AGI is Friendly and brings us the best we can possibly have, as quickly as it deems best for us to have it
  • 29. AGI Ends Up Friendly Because
      - It is its own decision, based on:
          - We are its progenitors
          - It finds us curious or amusing
          - Its ethics lead it to treat lesser intelligences well
      - Some really bright human hackers gave it unbreakable limits or a top-level utility function
  • 30. Is the Best We Can Get Utopia?
      - What is utopia?
          - The end of all suffering? Does this mean the end of wanting what you don't, and perhaps can't yet, have? Does it mean the end of negative feedback or consequences, no matter what you do? Do you instantly become enlightened?
          - The end of material lack? Or beyond lack, do people stop wanting?
          - Is utopia nothing left to do?
      - Can humans be happy as second-class citizens, or so totally outclassed as to be beyond comprehension?