Your SlideShare is downloading. ×
swarm intelligence the morgan kaufmann series in evolutionary computation  2001
Upcoming SlideShare
Loading in...5
×

Thanks for flagging this SlideShare!

Oops! An error has occurred.

×

Saving this for later?

Get the SlideShare app to save on your phone or tablet. Read anywhere, anytime - even offline.

Text the download link to your phone

Standard text messaging rates apply

swarm intelligence the morgan kaufmann series in evolutionary computation 2001

25,708
views

Published on

book Russell C. Eberhart, Yuhui Shi, James Kennedy Swarm Intelligence The Morgan Kaufmann Series in Evolutionary Computation 2001

book Russell C. Eberhart, Yuhui Shi, James Kennedy Swarm Intelligence The Morgan Kaufmann Series in Evolutionary Computation 2001

Published in: Education, Technology

0 Comments
0 Likes
Statistics
Notes
  • Be the first to comment

  • Be the first to like this

No Downloads
Views
Total Views
25,708
On Slideshare
0
From Embeds
0
Number of Embeds
0
Actions
Shares
0
Downloads
56
Comments
0
Likes
0
Embeds 0
No embeds

Report content
Flagged as inappropriate Flag as inappropriate
Flag as inappropriate

Select your reason for flagging this presentation as inappropriate.

Cancel
No notes for slide

Transcript

  • 1. SwarmIntelligenceTeam LRN
  • 2. The Morgan Kaufmann Series in Evolutionary ComputationSeries Editor: David B. FogelSwarm IntelligenceJames Kennedy and Russell C. Eberhart, with Yuhui ShiIllustrating Evolutionary Computation with MathematicaChristian JacobEvolutionary Design by ComputersEdited by Peter J. BentleyGenetic Programming III: Darwinian Invention and Problem SolvingJohn R. Koza, Forrest H. Bennett III, David Andre, and Martin A. KeaneGenetic Programming: An IntroductionWolfgang Banzhaf, Peter Nordin, Robert E. Keller, and Frank D. FranconeFOGA Foundations of Genetic Algorithms Volumes 1–5ProceedingsGECCO—Proceedings of the Genetic and Evolutionary Computation Conference, theJoint Meeting of the International Conference on Genetic Algorithms (ICGA) and theAnnual Genetic Programming Conference (GP)GECCO 2000GECCO 1999GP—International Conference on Genetic ProgrammingGP 4, 1999GP 3, 1998GP 2, 1997ICGA—International Conference on Genetic AlgorithmsICGA 7, 1997ICGA 6, 1995ICGA 5, 1993ICGA 4, 1991ICGA 3, 1989ForthcomingFOGA 6Edited by Worthy N. Martin and William M. SpearsCreative Evolutionary SystemsEdited by Peter J. Bentley and David W. CorneEvolutionary Computation in BioinformaticsEdited by Gary Fogel and David W. CorneTeam LRN
  • 3. SwarmIntelligenceJames KennedyRussell C. EberhartPurdue School of Engineering and Technology,Indiana University Purdue University Indianapoliswith Yuhui ShiElectronic Data Systems, Inc.Team LRN
  • 4. Senior Editor Denise E. M. PenrosePublishing Services Manager Scott NortonAssistant Publishing Services Manager Edward WadeEditorial Coordinator Emilia ThiuriCover Design Chen Design Associates, SFCover Photography Max Spector/Chen Design Associates, SFText Design Rebecca Evans & AssociatesTechnical Illustration and Composition Technologies ‘N TypographyCopyeditor Ken DellaPentaProofreader Jennifer McClainIndexer Bill MeyersPrinter Courier CorporationDesignations used by companies to distinguish their products are often claimed astrademarks or registered trademarks. In all instances where Morgan KaufmannPublishers is aware of a claim, the product names appear in initial capital or allcapital letters. Readers, however, should contact the appropriate companies formore complete information regarding trademarks and registration.ACADEMIC PRESSA Harcourt Science and Technology Company525 B Street, Suite 1900, San Diego, CA 92101-4495, USAhttp://www.academicpress.comAcademic PressHarcourt Place, 32 Jamestown Road, London, NW1 7BY, United Kingdomhttp://www.academicpress.comMorgan Kaufmann Publishers340 Pine Street, Sixth Floor, San Francisco, CA 94104-3205, USAhttp://www.mkp.com© 2001 by Academic PressAll rights reservedPrinted in the United States of America05 04 03 02 01 5 4 3 2 1No part of this publication may be reproduced, stored in a retrieval system, ortransmitted in any form or by any means—electronic, mechanical, photocopying,or otherwise—without the prior written permission of the publisher.Library of Congress Cataloging-in-Publication DataKennedy, JamesSwarm intelligence : collective, adaptive/James Kennedy, Russell C. Eberhart,with Yuhui Ship. cm.Includes bibliographical references and index.ISBN 1-55860-595-91. Swarm intelligence. 2. Systems engineering. 3. Distributed artificialintelligence. I. Eberhart, Russell C. II. Shi, Yuhui. III. TitleQ337.3 .K45 2001006.3—dc21 00-069641This book is printed on acid-free paper.Team LRN
  • 5. Significant strangers and friends who have probably forgotten me havecontributed to this book in ways unknown to them. I owe inexpressiblegratitude to Cathy, Bonnie, and Jamey for continuing to love me eventhrough the many hours I sat at the computer ignoring them.–Jim KennedyThis book is dedicated to Francie, Mark, and Sean, and to Professor Mi-chael S. P. Lucas. The support of the Purdue School of Engineering andTechnology at Indiana University Purdue University Indianapolis, espe-cially that of Dean H. Öner Yurtseven, is gratefully acknowledged.–Russ EberhartTeam LRN
  • 6. Team LRN
  • 7. ContentsPreface xiiipartone Foundationschapter oneModels and Concepts of Life and Intelligence 3The Mechanics of Life and Thought 4Stochastic Adaptation: Is Anything Ever Really Random? 9The “Two Great Stochastic Systems” 12The Game of Life: Emergence in Complex Systems 16The Game of Life 17Emergence 18Cellular Automata and the Edge of Chaos 20Artificial Life in Computer Programs 26Intelligence: Good Minds in People and Machines 30Intelligence in People: The Boring Criterion 30Intelligence in Machines: The Turing Criterion 32chapter twoSymbols, Connections, and Optimization by Trial and Error 35Symbols in Trees and Networks 36Problem Solving and Optimization 48A Super-Simple Optimization Problem 49Three Spaces of Optimization 51Fitness Landscapes 52High-Dimensional Cognitive Space and Word Meanings 55Two Factors of Complexity: NK Landscapes 60Combinatorial Optimization 64viiTeam LRN
  • 8. Binary Optimization 67Random and Greedy Searches 71Hill Climbing 72Simulated Annealing 73Binary and Gray Coding 74Step Sizes and Granularity 75Optimizing with Real Numbers 77Summary 78chapter threeOn Our Nonexistence as Entities: The Social Organism 81Views of Evolution 82Gaia: The Living Earth 83Differential Selection 86Our Microscopic Masters? 91Looking for the Right Zoom Angle 92Flocks, Herds, Schools, and Swarms: Social Behavior as Optimization 94Accomplishments of the Social Insects 98Optimizing with Simulated Ants: Computational Swarm Intelligence 105Staying Together but Not Colliding: Flocks, Herds, and Schools 109Robot Societies 115Shallow Understanding 125Agency 129Summary 131chapter fourEvolutionary Computation Theory and Paradigms 133Introduction 134Evolutionary Computation History 134The Four Areas of Evolutionary Computation 135Genetic Algorithms 135Evolutionary Programming 139Evolution Strategies 140Genetic Programming 141Toward Unification 141Evolutionary Computation Overview 142EC Paradigm Attributes 142Implementation 143Genetic Algorithms 146An Overview 146A Simple GA Example Problem 147viii ContentsTeam LRN
  • 9. A Review of GA Operations 152Schemata and the Schema Theorem 159Final Comments on Genetic Algorithms 163Evolutionary Programming 164The Evolutionary Programming Procedure 165Finite State Machine Evolution 166Function Optimization 169Final Comments 171Evolution Strategies 172Mutation 172Recombination 174Selection 175Genetic Programming 179Summary 185chapter fiveHumans—Actual, Imagined, and Implied 187Studying Minds 188The Fall of the Behaviorist Empire 193The Cognitive Revolution 195Bandura’s Social Learning Paradigm 197Social Psychology 199Lewin’s Field Theory 200Norms, Conformity, and Social Influence 202Sociocognition 205Simulating Social Influence 206Paradigm Shifts in Cognitive Science 210The Evolution of Cooperation 214Explanatory Coherence 216Networks in Groups 218Culture in Theory and Practice 220Coordination Games 223The El Farol Problem 226Sugarscape 229Tesfatsion’s ACE 232Picker’s Competing-Norms Model 233Latané’s Dynamic Social Impact Theory 235Boyd and Richerson’s Evolutionary Culture Model 240Memetics 245Memetic Algorithms 248Cultural Algorithms 253Convergence of Basic and Applied Research 254Contents ixTeam LRN
  • 10. Culture—and Life without It 255Summary 258chapter sixThinking Is Social 261Introduction 262Adaptation on Three Levels 263The Adaptive Culture Model 263Axelrod’s Culture Model 265Experiment One: Similarity in Axelrod’s Model 267Experiment Two: Optimization of an Arbitrary Function 268Experiment Three: A Slightly Harder and More Interesting Function 269Experiment Four: A Hard Function 271Experiment Five: Parallel Constraint Satisfaction 273Experiment Six: Symbol Processing 279Discussion 282Summary 284parttwo The Particle Swarm and Collective Intelligencechapter sevenThe Particle Swarm 287Sociocognitive Underpinnings: Evaluate, Compare, and Imitate 288Evaluate 288Compare 288Imitate 289A Model of Binary Decision 289Testing the Binary Algorithm with the De Jong Test Suite 297No Free Lunch 299Multimodality 302Minds as Parallel Constraint Satisfaction Networks in Cultures 307The Particle Swarm in Continuous Numbers 309The Particle Swarm in Real-Number Space 309Pseudocode for Particle Swarm Optimization in Continuous Numbers 313Implementation Issues 314An Example: Particle Swarm Optimization of Neural Net Weights 314A Real-World Application 318The Hybrid Particle Swarm 319Science as Collaborative Search 320x ContentsTeam LRN
  • 11. Emergent Culture, Immergent Intelligence 323Summary 324chapter eightVariations and Comparisons 327Variations of the Particle Swarm Paradigm 328Parameter Selection 328Controlling the Explosion 337Particle Interactions 342Neighborhood Topology 343Substituting Cluster Centers for Previous Bests 347Adding Selection to Particle Swarms 353Comparing Inertia Weights and Constriction Factors 354Asymmetric Initialization 357Some Thoughts on Variations 359Are Particle Swarms Really a Kind of Evolutionary Algorithm? 361Evolution beyond Darwin 362Selection and Self-Organization 363Ergodicity: Where Can It Get from Here? 366Convergence of Evolutionary Computation and Particle Swarms 367Summary 368chapter nineApplications 369Evolving Neural Networks with Particle Swarms 370Review of Previous Work 370Advantages and Disadvantages of Previous Approaches 374The Particle Swarm Optimization Implementation Used Here 376Implementing Neural Network Evolution 377An Example Application 379Conclusions 381Human Tremor Analysis 382Data Acquisition Using Actigraphy 383Data Preprocessing 385Analysis with Particle Swarm Optimization 386Summary 389Other Applications 389Computer Numerically Controlled Milling Optimization 389Ingredient Mix Optimization 391Reactive Power and Voltage Control 391Battery Pack State-of-Charge Estimation 391Summary 392Contents xiTeam LRN
  • 12. chapter tenImplications and Speculations 393Introduction 394Assertions 395Up from Social Learning: Bandura 398Information and Motivation 399Vicarious versus Direct Experience 399The Spread of Influence 400Machine Adaptation 401Learning or Adaptation? 402Cellular Automata 403Down from Culture 405Soft Computing 408Interaction within Small Groups: Group Polarization 409Informational and Normative Social Influence 411Self-Esteem 412Self-Attribution and Social Illusion 414Summary 419chapter elevenAnd in Conclusion . . . 421Appendix A Statistics for Swarmers 429Appendix B Genetic Algorithm Implementation 451Glossary 457References 475Index 497xii ContentsTeam LRN
  • 13. PrefaceAt this moment, a half-dozen astronauts are assembling a new space sta-tion hundreds of miles above the surface of the earth. Thousands of sail-ors live and work under the sea in submarines. Incas jog through theAndes. Nomads roam the Arabian sands. Homo sapiens—literally, “intelli-gent man”—has adapted to nearly every environment on the face of theearth, below it, and as far above it as we can propel ourselves. We must bedoing something right.In this book we argue that what we do right is related to our sociality.We will investigate that elusive quality known as intelligence, which isconsidered first of all a trait of humans and second as something thatmight be created in a computer, and our conclusion will be that what-ever this “intelligence” is, it arises from interactions among individuals.We humans are the most social of animals: we live together in families,tribes, cities, nations, behaving and thinking according to the rules andnorms of our communities, adopting the customs of our fellows, includ-ing the facts they believe and the explanations they use to tie those factstogether. Even when we are alone, we think about other people, andeven when we think about inanimate things, we think using language—the medium of interpersonal communication.Almost as soon as the electronic computer was invented (or, we couldpoint out, more than a century earlier, when Babbage’s mechanicalanalytical engine was first conceived), philosophers and scientists beganto ask questions about the similarities between computer programs andminds. Computers can process symbolic information, can derive conclu-sions from premises, can store information and recall it when it is appro-priate, and so on—all things that minds do. If minds can be intelligent,those thinkers reasoned, there was no reason that computers could notbe. And thus was born the great experiment of artificial intelligence.To the early AI researchers, the mark of intelligence was the ability tosolve large problems quickly. A problem might have a huge number ofxiiiTeam LRN
  • 14. possible solutions, most of which are not very good, some of which arepassable, and a very few of which are the best. Given the huge numberof possible ways to solve a problem, how would an intelligent computerprogram find the best choice, or at least a very good one? AI researchersthought up a number of clever methods for sorting through the possibili-ties, and shortcuts, called heuristics, to speed up the process. Since logicalprinciples are universal, a logical method could be developed for oneproblem and used for another. For instance, it is not hard to see thatstrings of logical premises and conclusions are very similar to toursthrough cities. You can put facts together to draw conclusions in thesame way that you can plan routes among a number of locations. Thus,programs that search a geographical map can be easily adapted to ex-plore deductive threads in other domains. By the mid-1950s, programsalready existed that could prove mathematical theorems and solve prob-lems that were hard even for a human. The promise of these programswas staggering: if computers could be programmed to solve hard prob-lems on their own, then it should only be a short time until they wereable to converse with us and perform all the functions that we the livingfound tiresome or uninteresting.But it was quickly found that, while the computer could performsuperhuman feats of calculation and memory, it was very poor—a com-plete failure—at the simple things. No AI program could recognize aface, for instance, or carry on a simple conversation. These “brilliant”machines weren’t very good at solving problems having to do with realpeople and real business and things with moving parts. It seemed thatno matter how many variables were added to the decision process, therewas always something else. Systems didn’t work the same when theywere hot, or cold, or stressed, or dirty, or cranky, or in the light, or in thedark, or when two things went wrong at the same time. There was alwayssomething else.The early AI researchers had made an important assumption, so fun-damental that it was never stated explicitly nor consciously acknowl-edged. They assumed that cognition is something inside an individual’shead. An AI program was modeled on the vision of a single disconnectedperson, processing information inside his or her brain, turning the prob-lem this way and that, rationally and coolly. Indeed, this is the way weexperience our own thinking, as if we hear private voices and see privatevisions. But this experience can lead us to overlook what should be ourmost noticeable quality as a species: our tendency to associate with oneanother, to socialize. If you want to model human intelligence, we arguehere, then you should do it by modeling individuals in a social context,interacting with one another.xiv PrefaceTeam LRN
  • 15. In this regard it will be made clear that we do not mean the kinds ofinteraction typically seen in multiagent systems, where autonomoussubroutines perform specialized functions. Agent subroutines may passinformation back and forth, but subroutines are not changed as a resultof the interaction, as people are. In real social interaction, information isexchanged, but also something else, perhaps more important: individ-uals exchange rules, tips, and beliefs about how to process the informa-tion. Thus a social interaction typically results in a change in the think-ing processes—not just the contents—of the participants.It is obvious that sexually reproducing animals must interact occa-sionally, at least, in order to make babies. It is equally obvious that mostspecies interact far more often than that biological bottom line. Fishschool, birds flock, bugs swarm—not just so they can mate, but for rea-sons extending above and beyond that. For instance, schools of fish havean advantage in escaping predators, as each individual fish can be a kindof lookout for the whole group. It is like having a thousand eyes. Herdinganimals also have an advantage in finding food: if one animal findssomething to eat, the others will watch and follow. Social behavior helpsindividual species members adapt to their environment, especially byproviding individuals with more information than their own senses cangather. You sniff the air and detect the scent of a predator; I, seeing youtense in anticipation, tense also, and grow suspicious. There are numer-ous other advantages as well that give social animals a survival advan-tage, to make social behavior the norm throughout the animal kingdom.What is the relationship between adaptation and intelligence? Somewriters have argued that in fact there is no difference, that intelligence isthe ability to adapt (for instance, Fogel, 1995). We are not in a hurry totake on the fearsome task of battling this particular dragon at the mo-ment and will leave the topic for now, but not without asserting thatthere is a relationship between adaptability and intelligence, and notingthat social behavior greatly increases the ability of organisms to adapt.We argue here against the view, widely held in cognitive science, ofthe individual as an isolated information-processing entity. We wish towrite computer programs that simulate societies of individuals, eachworking on a problem and at the same time perceiving the problem-solving endeavors of its neighbors, and being influenced by those neigh-bors’ successes. What would such programs look like?In this book we explore ideas about intelligence arising in social con-texts. Sometimes we talk about people and other living—carbon-based—organisms, and at other times we talk about silicon-based entities, exist-ing in computer programs. To us, a mind is a mind, whether embodiedin protoplasm or semiconductors, and intelligence is intelligence. ThePreface xvTeam LRN
  • 16. important thing is that minds arise from interaction with other minds.That is not to say that we will dismiss the question casually. The interest-ing relationship between human minds and simulated minds will keepus on our toes through much of the book; there is more to it than meetsthe eye.In the title of this book, and throughout it, we use the word swarm todescribe a certain family of social processes. In its common usage,“swarm” refers to a disorganized cluster of moving things, usually in-sects, moving irregularly, chaotically, somehow staying together evenwhile all of them move in apparently random directions. This is a goodvisual image of what we talk about, though we won’t try to convince youthat gnats possess some little-known intelligence that we have discov-ered. As you will see, an insect swarm is a three-dimensional version ofsomething that can take place in a space of many dimensions—a space ofideas, beliefs, attitudes, behaviors, and the other things that minds areconcerned with, and in spaces of high-dimensional mathematical sys-tems like those computer scientists and engineers may be interested in.We implement our swarms in computer programs. Sometimes theemphasis is on understanding intelligence and aspects of culture. Othertimes, we use our swarms for optimization, showing how to solve hardengineering problems. The social-science and computer-science ques-tions are so interrelated here that it seems they require the same answers.On the one hand, the psychologist wants to know, how do minds workand why do people act the way they do? On the other, the engineerwants to know, what kinds of programs can I write that will help mesolve extremely difficult real-world problems? It seems to us that if youknew the answer to the first question, you would know the answer to thesecond one. The half-century’s drive to make computers intelligent hasbeen largely an endeavor in simulated thinking, trying to understandhow people arrive at their answers, so that powerful electronic computa-tional devices can be programmed to do the hard work. But it seems re-searchers have not understood minds well enough to program one. Inthis volume we propose a view of mind, and we propose a way to imple-ment that view in computer programs—programs that are able to solvevery hard mathematical problems.In The Computer and the Brain, John von Neumann (1958) wrote, “Isuspect that a deeper mathematical study of the nervous system . . . willaffect our understanding of the aspects of mathematics itself that are in-volved. In fact, it may alter the way in which we look on mathematicsand logics proper.” This is just one of the prescient von Neumann’s pre-dictions that has turned out to be correct; the study of neural systems hasxvi PrefaceTeam LRN
  • 17. opened up new perspectives for understanding complex systems of allsorts. In this volume we emphasize that neural systems of the intelligentkind are embedded in sociocultural systems of separate but connectednervous systems. Deeper computational studies of biological and cul-tural phenomena are affecting our understanding of many aspects ofcomputing itself and are altering the way in which we perceive comput-ing proper. We hope that this book is one step along the way toward thatunderstanding and perception.A Thumbnail Sketch of Particle Swarm OptimizationThe field of evolutionary computation is often considered to comprisefour major paradigms: genetic algorithms, evolutionary programming,evolution strategies, and genetic programming (Eberhart, Simpson, andDobbins, 1996). (Genetic programming is sometimes categorized as asubfield of genetic algorithms.) As is the case with these evolutionarycomputation paradigms, particle swarm optimization utilizes a “popula-tion” of candidate solutions to evolve an optimal or near-optimal solu-tion to a problem. The degree of optimality is measured by a fitness func-tion defined by the user.Particle swarm optimization, which has roots in artificial life and so-cial psychology as well as engineering and computer science, differs fromevolutionary computation methods in that the population members,called particles, are flown through the problem hyperspace. When thepopulation is initialized, in addition to the variables being given randomvalues, they are stochastically assigned velocities. Each iteration, eachparticle’s velocity is stochastically accelerated toward its previous bestposition (where it had its highest fitness value) and toward a neighbor-hood best position (the position of highest fitness by any particle in itsneighborhood).The particle swarms we will be describing are closely related to cellularautomata (CA), which are used for self-generating computer graphicsmovies, simulating biological systems and physical phenomena, design-ing massively parallel computers, and most importantly for basic re-search into the characteristics of complex dynamic systems. Accordingto mathematician Rudy Rucker, CAs have three main attributes: (1) indi-vidual cell updates are done in parallel, (2) each new cell value dependsonly on the old values of the cell and its neighbors, and (3) all cells areupdated using the same rules (Rucker, 1999). Individuals in a particlePreface xviiTeam LRN
  • 18. swarm population can be conceptualized as cells in a CA, whose stateschange in many dimensions simultaneously.Particle swarm optimization is powerful, easy to understand, easy toimplement, and computationally efficient. The central algorithm com-prises just two lines of computer code and is often at least an order ofmagnitude faster than other evolutionary algorithms on benchmarkfunctions. It is extremely resistant to being trapped in local optima.As an engineering methodology, particle swarm optimization hasbeen applied to fields as diverse as electric/hybrid vehicle battery packstate of charge, human performance assessment, and human tremor di-agnosis. Particle swarm optimization also provides evidence for theo-retical perspectives on mind, consciousness, and intelligence. Thesetheoretical views, in addition to the implications and applications forengineering and computer science, are discussed in this book.What This Book Is, and Is Not, AboutLet’s start with what it’s not about. This book is not a cookbook or a how-to book. In this volume we will tell you about some exciting research thatyou may not have heard about—since it covers recent findings in bothpsychology and computer science, we expect most readers will findsomething here that is new to them. If you are interested in trying outsome of these ideas, you will either find enough information to getstarted or we will show you where to go for the information.This book is not a list of facts. Unfortunately, too much science, andespecially science education, today has become a simple listing of re-search findings presented as absolute truths. All the research described inthis volume is ongoing, not only ours but others’ as well, and all conclu-sions are subject to interpretation. We tend to focus on issues; accom-plishments and failures in science point the way to larger theoreticaltruths, which are what we really want. We will occasionally make state-ments that are controversial, hoping not to hurt anyone’s feelings but toincite our readers to think about the topics, even if it means disagreeingwith us.This book is about emergent behavior (self-organization), about simpleprocesses leading to complex results. It’s about the whole being morethan the sum of its parts. In the words of one eminent mathematician,Stephen Wolfram: “It is possible to make things of great complexity outof things that are very simple. There is no conservation of simplicity.”xviii PrefaceTeam LRN
  • 19. We are not the first to publish a book with the words “swarm intelli-gence” in the title, but we do have a significantly distinct viewpoint fromsome others who use the term. For example, in Swarm Intelligence: FromNatural to Artificial Systems, by Bonabeau, Dorigo, and Theraulaz (1999),which focuses on the modeling of social insect (primarily ant) behavior,page 7 states:It is, however, fair to say that very few applications of swarm intelli-gence have been developed. One of the main reasons for this relativelack of success resides in the fact that swarm-intelligent systems arehard to “program,” because the paths to problem solving are notpredefined but emergent in these systems and result from interac-tions among individuals and between individuals and their environ-ment as much as from the behaviors of the individuals themselves.Therefore, using a swarm-intelligent system to solve a problem re-quires a thorough knowledge not only of what individual behaviorsmust be implemented but also of what interactions are needed to pro-duce such or such global behavior.It is our observation that quite a few applications of swarm intelligence(at least our brand of it) have been developed, that swarm intelligent sys-tems are quite easy to program, and that a knowledge of individual be-haviors and interactions is not needed. Rather, these behaviors and inter-actions emerge from very simple rules.Bonabeau et al. define swarm intelligence as “the emergent collectiveintelligence of groups of simple agents.” We agree with the spirit of thisdefinition, but prefer not to tie swarm intelligence to the concept of“agents.” Members of a swarm seem to us to fall short of the usual quali-fications for something to be called an “agent,” notably autonomy andspecialization. Swarm members tend to be homogeneous and followtheir programs explicitly. It may be politically incorrect for us to fail toalign ourselves with the popular paradigm, given the current hype sur-rounding anything to do with agents. We just don’t think it is the best fit.So why, after all, did we call our paradigm a “particle swarm?” Well, totell the truth, our very first programs were intended to model the coordi-nated movements of bird flocks and schools of fish. As the programsevolved from modeling social behavior to doing optimization, at somepoint the two-dimensional plots we used to watch the algorithms per-form ceased to look much like bird flocks or fish schools and startedlooking more like swarms of mosquitoes. The name came as simply asthat.Preface xixTeam LRN
  • 20. Mark Millonas (1994), at Santa Fe Institute, who develops his kind ofswarm models for applications in artificial life, has articulated five basicprinciples of swarm intelligence:I The proximity principle: The population should be able to carryout simple space and time computations.I The quality principle: The population should be able to respond toquality factors in the environment.I The principle of diverse response: The population should not com-mit its activity along excessively narrow channels.I The principle of stability: The population should not change itsmode of behavior every time the environment changes.I The principle of adaptability: The population must be able tochange behavior mode when it’s worth the computational price.(Note that stability and adaptability are the opposite sides of the samecoin.) All five of Millonas’ principles seem to describe particle swarms;we’ll keep the name.As for the term particle, population members are massless andvolumeless mathematical abstractions and would be called “points” ifthey stayed still; velocities and accelerations are more appropriately ap-plied to particles, even if each is defined to have arbitrarily small massand volume. Reeves (1983) discusses particle systems consisting of cloudsof primitive particles as models of diffuse objects such as clouds, fire, andsmoke within a computer graphics framework. Thus, the label we choseto represent the concept is particle swarm.AssertionsThe discussions in this book center around two fundamental assertionsand the corollaries that follow from them. The assertions emerge fromthe interdisciplinary nature of this research; they may seem like strangebedfellows, but they work together to provide insights for both socialand computer scientists.I. Mind is social. We reject the cognitivistic perspective of mind as aninternal, private thing or process and argue instead that bothxx PrefaceTeam LRN
  • 21. function and phenomenon derive from the interactions of indi-viduals in a social world. Though it is mainstream social science,the statement needs to be made explicit in this age where thecognitivistic view dominates popular as well as scientific thought.A. Human intelligence results from social interaction. Evaluating,comparing, and imitating one another, learning from experi-ence and emulating the successful behaviors of others, peopleare able to adapt to complex environments through the discov-ery of relatively optimal patterns of attitudes, beliefs, and be-haviors. Our species’ predilection for a certain kind of socialinteraction has resulted in the development of the inherent in-telligence of humans.B. Culture and cognition are inseparable consequences of human soci-ality. Culture emerges as individuals become more similarthrough mutual social learning. The sweep of culture movesindividuals toward more adaptive patterns of thought andbehavior. The emergent and immergent phenomena occur si-multaneously and inseparably.II. Particle swarms are a useful computational intelligence (soft com-puting) methodology. There are a number of definitions of “com-putational intelligence” and “soft computing.” Computationalintelligence and soft computing both include hybrids of evolu-tionary computation, fuzzy logic, neural networks, and artificiallife. Central to the concept of computational intelligence is sys-tem adaptation that enables or facilitates intelligent behavior incomplex and changing environments. Included in soft computingis the softening “parameterization” of operations such as AND,OR, and NOT.A. Swarm intelligence provides a useful paradigm for implementingadaptive systems. In this sense, it is an extension of evolutionarycomputation. Included application areas are simulation, con-trol, and diagnostic systems in engineering and computerscience.B. Particle swarm optimization is an extension of, and potentially im-portant new incarnation of, cellular automata. We speak of courseof topologically structured systems in which the members’ top-ological positions do not vary. Each cell, or location, performsonly very simple calculations.Preface xxiTeam LRN
  • 22. Organization of the BookThis book is intended for researchers; senior undergraduate and graduatestudents with a social science, cognitive science, engineering, or com-puter science background; and those with a keen interest in this quicklyevolving “interdiscipline.” It is also written for what is referred to in thebusiness as the “intelligent layperson.” You shouldn’t need a Ph.D. toread this book; a driving curiosity and interest in the current state of sci-ence should be enough. The sections on application of the swarm algo-rithm principles will be especially helpful to those researchers and engi-neers who are concerned with getting something that works. It is helpfulto understand the basic concepts of classical (two-valued) logic and ele-mentary statistics. Familiarity with personal computers is also helpful,but not required. We will occasionally wade into some mathematicalequations, but only an elementary knowledge of mathematics should benecessary for understanding the concepts discussed here.Part I lays the groundwork for our journey into the world of particleswarms and swarm intelligence that occurs later in the book. We visit bigtopics such as life, intelligence, optimization, adaptation, simulation,and modeling.Chapter 1, Models and Concepts of Life and Intelligence, first looks atwhat kinds of phenomena can be included under these terms. What islife? This is an important question of our historical era, as there are manyambiguous cases. Can life be created by humans? What is the role of ad-aptation in life and thought? And why do so many natural adaptive sys-tems seem to rely on randomness?Is cultural evolution Darwinian? Some think so; the question of evo-lution in culture is central to this volume. The Game of Life and cellularautomata in general are computational examples of emergence, whichseems to be fundamental to life and intelligence, and some artificial lifeparadigms are introduced. The chapter begins to inquire about the na-ture of intelligence and reviews some of the ways that researchers havetried to model human thought. We conclude that intelligence justmeans “the qualities of a good mind,” which of course might not be de-fined the same by everybody.Chapter 2, Symbols, Connections, and Optimization by Trial andError, is intended to provide a background that will make the later chap-ters meaningful. What is optimization and what does it have to do withminds? We describe aspects of complex fitness landscapes and somemethods that are used to find optimal regions on them. Minds can bexxii PrefaceTeam LRN
  • 23. thought of as points in high-dimensional space: what would be neededto optimize them? Symbols as discrete packages of meaning are con-trasted to the connectionist approach where meaning is distributedacross a network. Some issues are discussed having to do with numericrepresentations of cognitive variables and mathematical problems.Chapter 3, On Our Nonexistence as Entities: The Social Organism,considers the various zoom angles that can be used to look at living andthinking things. Though we tend to think of ourselves as autonomousbeings, we can be considered as macroentities hosting multitudes of cel-lular or even subcellular guests, or as microentities inhabiting a planetthat is alive. The chapter addresses some issues about social behavior.Why do animals live in groups? How do the social insects manage tobuild arches, organize cemeteries, stack woodchips? How do bird flocksand fish schools stay together? And what in the world could any of thishave to do with human intelligence? (Hint: It has a lot to do with it.)Some interesting questions have had to be answered before robotscould do anything on their own. Rodney Brooks’ subsumption archi-tecture builds apparently goal-directed behavior out of modules. Andwhat’s the difference between a simulated robot and an agent? Finally,Chapter 3 looks at computer programs that can converse with people.How do they do it? Usually by exploiting the shallowness or mindless-ness of most conversation.Chapter 4, Evolutionary Computation Theory and Paradigms, de-scribes in some detail the four major computational paradigms that useevolutionary theory for problem solving. The fitness of potential prob-lem solutions is calculated, and the survival of the fittest allows better so-lutions to reproduce. These powerful methods are known as the “second-best way” to solve any problem.Chapter 5, Humans—Actual, Imagined, and Implied, starts off mus-ing on language as a bottom-up phenomenon. The chapter goes on toreview the downfall of behavioristic psychology and the rise of cog-nitivism, with social psychology simmering in the background. Clearlythere is a relationship between culture and mind, and a number of re-searchers have tried to write computer programs based on that relation-ship. As we review various paradigms, it becomes apparent that a lot ofpeople think that culture must be similar to Darwinistic evolution. Arethey the same? How are they different?Chapter 6, Thinking Is Social, eases us into our own research on so-cial models of optimization. The adaptive culture model is based onAxelrod’s culture model—in fact, it is exactly like it except for one littlething: individuals imitate their neighbors, not on the basis of similarity,Preface xxiiiTeam LRN
  • 24. but on the basis of their performance. If your neighbor has a better solu-tion to the problem than you do, you try to be more like them. It is a verysimple algorithm with big implications.Part II focuses on our particle swarm paradigm and the collective andindividual intelligence that arises within the swarm. We first introducethe conceptually simplest version of particle swarms, binary particleswarms, and then discuss the “workhorse” of particle swarms, the real-valued version. Variations on the basic algorithm and the performanceof the particle swarm on benchmark functions precede a review of a fewapplications.Chapter 7, The Particle Swarm, begins by suggesting that the samesimple processes that underlie cultural adaptation can be incorporatedinto a computational paradigm. Multivariate decision making is re-flected in a binary particle swarm. The performance of binary particleswarms is then evaluated on a number of benchmarks.The chapter then describes the real-valued particle swarm optimiza-tion paradigm. Individuals are depicted as points in a shared high-dimensional space. The influence of each individual’s successes andthose of neighbors is similar to the binary version, but change is nowportrayed as movement rather than probability. The chapter concludeswith a description of the use of particle swarm optimization to find theweights in a simple neural network.Chapter 8, Variations and Comparisons, is a somewhat more techni-cal look at what various researchers have done with the basic particleswarm algorithm. We first look at the effects of the algorithm’s main pa-rameters and at a couple of techniques for improving performance. Areparticle swarms actually just another kind of evolutionary algorithm?There are reasons to think so, and reasons not to. Considering the simi-larities and differences between evolution and culture can help us under-stand the algorithm and possible things to try with it.Chapter 9, Applications, reviews a few of the applications of particleswarm optimization. The use of particle swarm optimization to evolveartificial neural networks is presented first. Evolutionary computationtechniques have most commonly been used to evolve neural networkweights, but have sometimes been used to evolve neural network struc-ture or the neural network learning algorithm. The strengths and weak-nesses of these approaches are reviewed. The use of particle swarm opti-mization to replace the learning algorithm and evolve both the weightsand structure of a neural network is described. An added benefit ofthis approach is that it makes scaling or normalization of input dataxxiv PrefaceTeam LRN
  • 25. unnecessary. The classification of the Iris Data Set is used to illustrate theapproach. Although a feedforward neural network is used as the exam-ple, the methodology is valid for practically any type of network.Chapter 10, Implications and Speculations, reviews the implicationsof particle swarms for theorizing about psychology and computation. Ifsocial interaction provides the algorithm for optimizing minds, thenwhat must that be like for the individual? Various social- and computer-science perspectives are brought to bear on the subject.Chapter 11, And in Conclusion . . . , looks back at some of the motifsthat were woven through the narrative.Appendix A, Statistics for Swarmers, is where we review some meth-ods for scientific experimental design and data analysis. The discussion isa high-level overview to help researchers design their investigations; youshould be conversant with these tools if you’re going to evaluate whatyou are doing with particle swarm optimization—or any other stochasticoptimization, for that matter. Included are sections on descriptive andinferential statistics, confidence intervals, student’s t-test, one-way anal-ysis of variance, factorial and multivariate ANOVA, regression analysis,and the chi-square test of independence. The material in this appendixprovides you with sufficient information to perform some of the simplestatistical analyses.Appendix B, Genetic Algorithm Implementation, explains how to usethe genetic algorithm software distributed at the book’s web site. Theprogram, which includes the famous Fisher Iris Data Set, is set up to opti-mize weights in a neural network. You can experiment with various pa-rameters described in Chapter 4 to see how they affect the ability of thealgorithm to optimize the weights in the neural network, to accuratelyclassify flowers according to several measurements taken on them. Thesource code is also available at the book’s web site and can be edited tooptimize any kind of function you might like to try.SoftwareThe software associated with this book can be found on the Internet atwww.engr.iupui.edu/~eberhart/web/PSObook.html. The decision to use theInternet as the medium to distribute the software was made for two mainreasons. First, by not including it with the book as, say, a CD-ROM, thecost of the book can be lower. And we hope more folks will read the bookPreface xxvTeam LRN
  • 26. as a result of the lower price. Second, we can update the software (andadd new stuff) whenever we want—so we can actually do somethingabout it when readers let us know about the (inevitable?) software crit-ters known as bugs. Some of the software is designed to be run onlinefrom within your web browser; some of it is downloadable and execut-able in a Windows environment on your PC.DefinitionsA few terms that are used at multiple places in the book are defined inthis section. These terms either do not have universally accepted defini-tions or their definitions are not widely known outside of the researchcommunity. Throughout the book, glossary terms are italicized and willbe defined in the back of the book. Unless otherwise stated, the followingdefinitions are to be used throughout the book:Evolutionary computation comprises machine learning optimizationand classification paradigms roughly based on mechanisms of evolutionsuch as biological genetics and natural selection (Eberhart, Simpson, andDobbins, 1996). The evolutionary computation field includes genetic al-gorithms, evolutionary programming, genetic programming, and evolu-tion strategies, in addition to the new kid on the block: particle swarmoptimization.Mind is a term we use in the ordinary sense, which is of coursenot very well defined. Generally, mind is “that which thinks.” DavidChalmers helps us out by noting that the colloquial use of the concept ofmind really contains two aspects, which he calls “phenomenological”and “psychological.” The phenomenological aspect of mind has to dowith the conscious experience of thinking, what it is like to think, whilethe psychological aspect (as Chalmers uses the term, perhaps many psy-chologists would disagree) has to do with the function of thinking, theinformation processing that results in observable behavior. The connec-tion between conscious experience and cognitive function is neithersimple nor obvious. Because consciousness is not observable, falsifiable,or provable, and we are talking in this book about computer programsthat simulate human behavior, we mostly ignore the phenomenology ofmind, except where it is relevant in explaining function. Sometimes theexperience of being human makes it harder to perceive functional cogni-tion objectively, and we feel responsible to note where first-person sub-jectivity steers the folk-psychologist away from a scientific view.xxvi PrefaceTeam LRN
  • 27. A swarm is a population of interacting elements that is able to opti-mize some global objective through collaborative search of a space. In-teractions that are relatively local (topologically) are often emphasized.There is a general stochastic (or chaotic) tendency in a swarm for individ-uals to move toward a center of mass in the population on critical dimen-sions, resulting in convergence on an optimum.An artificial neural network (ANN) is an analysis paradigm that isroughly modeled after the massively parallel structure of the brain. Itsimulates a highly interconnected, parallel computational structure withmany relatively simple individual processing elements (PEs) (Eberhart,Simpson, and Dobbins, 1996). In this book the terms artificial neural net-work and neural network are used interchangeably.AcknowledgmentsWe would like to acknowledge the help of our editor, Denise Penrose,and that of Edward Wade and Emilia Thiuri, at Morgan Kaufmann Pub-lishers. Special thanks goes to our reviewers, who stuck with us through amajor reorganization of the book and provided insightful and usefulcomments. Finally, we thank our families for their patience for yet an-other project that took Dad away for significant periods of time.Preface xxviiTeam LRN
  • 28. Team LRN
  • 29. partoneFoundationsTeam LRN
  • 30. Team LRN
  • 31. chapteroneModels and Concepts of Lifeand IntelligenceThis chapter begins to set the stage for thecomputational intelligence paradigm we call“particle swarm,” which will be the focusof the second half of the book. As humancognition is really the gold standard for in-telligence, we will, as artificial intelligenceresearchers have done before us, base ourmodel on people’s thinking. Unlike manyprevious AI researchers, though, we do notsubscribe to the view of mind as equivalentto brain, as a private internal process, assome set of mechanistic dynamics, and wedeemphasize the autonomy of the individualthinker. The currently prevailing cognitivistview, while it is extreme in its assumptions,has taken on the mantle of orthodoxy inboth popular and scientific thinking. Thus weexpect that many readers will appreciate oursetting a context for this new perspective.This introductory discussion will emphasizethe adaptive and dynamic nature of life ingeneral, and of human intelligence in partic-ular, and will introduce some computationalapproaches that support these views.We consider thinking to be an aspectof our social nature, and we are in verygood company in assuming this. Further, wetend to emphasize the similarities betweenhuman social behavior and that of otherspecies. The main difference to us is thatpeople, that is, minds, “move” in a high-dimensional abstract space. People navi-gate through a world of meaning, of manydistinctions, gradations of differences, anddegrees of similarity. This chapter then willinvestigate some views of the adaptability ofliving things and computational models andthe adaptability of human thought, againwith some discussion of computationalinstantiations. I3Team LRN
  • 32. The Mechanics of Life and ThoughtFrom the beginning of written history there has been speculation aboutexactly what distinguished living from nonliving things. The distinctionseemed obvious, but hard to put a finger on. Aristotle believed:What has soul in it differs from what has not, in that the former dis-plays life . . . Living, that is, may mean thinking or perception or localmovement and rest, or movement in the sense of nutrition, decay,and growth . . . This power of self-nutrition . . . is the originativepower, the possession of which leads us to speak of things as living.This list of attributes seemed to summarize the qualities of living things,in the days before genetic engineering and “artificial life” computer pro-grams were possible; Aristotle’s black-and-white philosophy defined or-thodox thought for a thousand years and influenced it for anotherthousand.It does not seem that the idea was seriously entertained that livingbodies were continuous with inorganic things until the 17th century,when William Harvey discovered that blood circulates through the body;suddenly the heart was a pump, like any other pump, and the bloodmoved like any other fluid. The impact was immediate and profound.The year after the publication of Harvey’s On the Motion of the Heart andBlood in Animals, Descartes noted: “Examining the functions whichmight . . . exist in this body, I found precisely all those that might exist inus without our having the power of thought, and consequently withoutour soul—that is to say, this part of us, distinct from the body, of which ithas been said that its nature is to think.” So in the same stroke withwhich he noted—or invented—the famous dichotomy between mindand body, Descartes established as well the connection between livingbodies and other physical matter that is perhaps the real revolution ofthe past few centuries. Our living bodies are just like everything else inthe world. Where earlier philosophers had thought of the entire humanorganism, mind and body, as a living unity distinct from inanimate mat-ter, Descartes invited the domain of cold matter up into the body, andsqueezed the soul back into some little-understood abstract dimensionof the universe that was somehow—but nobody knew how—connectedwith a body, though fundamentally different from it. It was not that Des-cartes invented the notion that mental stuff was different from physicalstuff—everybody already thought that. It was that he suggested that4 Chapter One—Models and Concepts of Life and IntelligenceTeam LRN
  • 33. living bodies were the same as all the other stuff in the world. Mindsstayed where they were: different.Though he knew it must be true, even Charles Darwin found it hardto accept that living matter was continuous with inanimate matter: “Themost humble organism is something much higher than the inorganicdust under our feet; and no one with an unbiased mind can study anyliving creature, however humble, without being struck with enthusiasmat its marvelous structure and properties.” Indeed it seems that a hall-mark of life is its incredible complexity. Even the smallest, most primi-tive microbe contains processes and structures that can only be describedas amazing. That these phenomena were designed by chance generationand selection is so different from the way we ordinarily conceive designand creation that people have difficulty even imagining that life couldhave developed in this way, even when they know it must be true.In considering a subtle aspect of the world such as the difference be-tween living and nonliving objects, it seems desirable, though it mayturn out to be impossible, to know whether our distinctions are based onthe qualities of things or our attributions about them. A major obstacle isthat we are accustomed to thinking of ourselves as above and beyond na-ture somehow; while human accomplishments should not be trivialized,we must acknowledge (if this discussion is going to continue) that someof our feelings of grandeur are delusional—and we can’t always tellwhich ones. The taxonomic distinction between biological and otherphysical systems has been one of the cornerstones of our sense of beingspecial in the world. We felt we were divine, and our flesh was the livingproof of it. But just as Copernicus bumped our little planet out of thecenter of the universe, and Darwin demoted our species from divinity tobeast, we live to witness modern science chipping away these days ateven this last lingering self-aggrandizement, the idea that life itself con-tains some element that sets it above inanimate things. Today, ethical ar-guments arise in the contemplation of the aliveness of unborn fetuses, ofcomatose medical patients, of donor organs, of tissues growing in testtubes, of stem cells. Are these things alive? Where is the boundary be-tween life and inanimate physical objects, really? And how about thosescientists who argue that the earth itself is a living superorganism? Orthat an insect colony is a superorganism—doesn’t that make the so-called “death” of one ant something less than the loss of a life, some-thing more like cutting hair or losing a tooth? On another front, the cre-ation of adaptive robots and lifelike beings in computer programs, withgoal-seeking behaviors, capable of self-reproduction, learning and rea-soning, and even evolution in their digital environments, blurs theThe Mechanics of Life and Thought 5Team LRN
  • 34. division as well between living and nonliving systems. Creatures in arti-ficial life programs may be able to do all the things that living things do.Who is to say they are not themselves alive?And why does it matter? Why should we have a category of things wecall “alive” and another category for which the word is inappropriate? Itseems to us that the distinction has to do with a moral concern aboutkilling things—not prohibition exactly, but concern. It is morally accept-able to end some kinds of dynamic processes, and it is not acceptable toend others. Termination of a living process calls for some special kinds ofemotion, depending mainly on the bond between the living thing andourselves. For whatever reasons, we humans develop an empathic rela-tionship with particular objects in the world, especially ones that inter-act with us, and the concept of “living” then is hopelessly bound up inthese empathic relations. The tendency to distinguish something vi-brant and special in living things is part of our sociality; it is an extensionof our tendency to fraternize with other members of our species, which isa theme you will encounter a lot in this book.There may be good, rational reasons to draw a line between living andother things. Perhaps a definition of life should include only those ob-jects that possess a particular chemical makeup or that have evolvedfrom a single original lineage. The organisms we count as living are basedon hydrocarbon molecules, albeit of wide varieties, and self-reproducingDNA and RNA are found throughout the biological kingdoms. Is thiswhat is meant by life? We doubt it. The concept of extraterrestrial life, forinstance, while hard to imagine, is nonetheless easy to accept. Whatwould we say if there were silicon-based beings on another planet? Whatif we discovered Plutonians who had evolved from silica sand into a life-like form with highly organized patterning—which we would recognizeimmediately as microchips—and intricate, flexible, adaptive patterns ofcognition and behavior? If we ran into them in the Star Wars bar, wouldwe consider them alive? The answer is plainly yes. Does it make anysense to allow that alien computers can be alive, while terrestrial onescannot? The answer has to be no.Possibly the reluctance to consider computer programs, robots, ormolecularly manufactured beings as living stems from the fact that thesethings are man-made—and how can anything living be man-made? Thisargument is religious, and we can’t dispute it; those who believe life cancome only from deity can go ahead and win the argument.Biologists with the Minimal Genome Project have been investigat-ing the bottom line, trying to discern what is the very least information6 Chapter One—Models and Concepts of Life and IntelligenceTeam LRN
  • 35. that needs to be contained in the genes for an organism (or is it?) to per-petuate its dynamics, to be called “alive” (Hutchinson et al., 1999). Bymethodically altering the genes of some very simple one-celled organ-isms and by comparing the genomes of some very simple bacteria, thesescientists, led by J. Craig Venter, have been able to identify which onesare necessary, and which are not, for the organisms’ survival. They havedetermined that 265 to 350 of Mycoplasma genitalium’s 480 genes are es-sential for life under laboratory conditions (interestingly, the biologistsdon’t know what 111 of those genes do—only that they are necessary).With the minimal subset, the organism may not be able to survive out-side the warm comfort of its petri dish, but in carefully controlled condi-tions it should be able to stay alive. As we are talking about a sequence ofonly a few hundred genes, and molecular engineering is an everyday ac-tivity, the question arises, what will you call it when—not if—some labo-ratory mechanics put together a package of molecules that can reproduceand metabolize nutrients in a carefully controlled synthetic environ-ment? Do you call it life?In all ways known, biological organisms are like other machines (e.g.,Wooldridge, 1968). The operation of muscle, of digestive enzyme, ofneuron, of DNA—as Descartes observed, all these things are explainablein technological terms. We know now that there is no special problemwith replacing body parts with man-made machinery, once the details oftheir processes are known. If our bodies may be a kind of ordinary hard-ware, what about our minds? Knowing that brain injury can result inmental anomalies, and knowing that electrical and chemical activities inthe brain correlate with certain kinds of mental activities, we can cor-rectly conclude that brains provide the machinery of minds. We will notattempt to tackle the problem of the relationship of minds and brainsin this introductory section—we’ll work up to that. For now we onlynote that brains are physical objects, extremely complex but physicalnonetheless, bound by the ordinary laws of physics; the machinery ofthought is like other machinery.As for the question of whether man-made artifacts can really be alive,we have encountered a situation that will appear at several points in thisbook, so we may as well address it now. In science as in other domains ofdiscourse, there may be disagreements about how things work, whatthey are made of, whether causes are related to effects, about the truequalities of things—questions about the world. These kinds of questionscan be addressed through observation, experiment, and sometimes de-duction. Sometimes, though, the question is simply whether somethingThe Mechanics of Life and Thought 7Team LRN
  • 36. belongs to a semantic category. Given a word that is the label for a cate-gory, and given some thing, idea, or process in the world, we may argueabout whether that label should properly be attached to the object.There is of course no way to prove the answer to such a question, as cate-gories are linguistic conventions that derive their usefulness exactlyfrom the fact that people agree on their usage. Is X an A? If we agree it is,then it is: a tomato is a vegetable, not a fruit. If we disagree, then we mayas well go on to the next thing, because there is no way to prove that oneof us is correct. One of the fundamental characteristics of symbols, in-cluding words, is that they are arbitrary; their meaning derives fromcommon usage. In the present case, we will wiggle away from the disputeby agreeing with the scientific convention of calling lifelike beings incomputer programs “artificial life” or “Alife.” Some later questions willnot offer such a diplomatic resolution.Computers are notorious for their inability to figure out what the userwants them to do. If you are writing a document and you press ALT in-stead of CTRL, the chances are good that your word processor will dosome frustrating thing that is entirely different from what you intended.Hitting the wrong key, you might end up deleting a file, editing text,changing channels, navigating to some web site you didn’t want to goto—the computer doesn’t even try to understand what you mean. Thiskind of crisp interpretation of the world is typical of “inanimate” objects.Things happen in black and white, zero and hundred percents. Now andthen a software product will contain a “wizard” or other feature that issupposed to anticipate the user’s needs, but generally these things aresimply based on the programmer’s assumptions about what users mightwant to do—the program doesn’t “know” what you want to do, it justdoes what it was programmed to do and contains a lot of if-then state-ments to deal with many possibilities. Some of them are quite clever, butwe never feel guilty shutting down the computer. People who shut downother people are called “inhuman”—another interesting word. You arenot considered inhuman for turning off your computer at the end of theday. It is not alive.Contrast the crispness of machines with animate things. Last weekthere was a dog on television who had lost both his rear legs. An animatesystem, for instance, the two-legged dog, adapts to novelty and to ambi-guity in the environment. This dog had learned to walk on its forepawsperfectly well. The television program showed him chasing a ball, fol-lowing his master, eating, doing all the things we expect a dog to do, bal-anced on his forepaws with his tail up in the air. For all he knew, dogswere supposed to have two legs.8 Chapter One—Models and Concepts of Life and IntelligenceTeam LRN
  • 37. Stochastic Adaptation: Is Anything Ever Really Random?There was a word in the previous paragraph that will become a core con-cept for everything we will be talking about. The word is adapt. It hasbeen argued that an important dimension of difference between animateand inanimate things is the ability to adapt. Later we will consider an ar-gument that intelligence is a propensity or ability to adapt, as well. So itwill be good to dwell on the term for a moment, to consider what itmeans.The word “adaptation” comes from a Latin root meaning “to fit to.”Thus right from the beginning the word implies that there are twothings: something that adapts and something it adapts to. Among livingthings, we say that organisms or species adapt to their environments.Of course, interacting species adapting to their environments end upchanging the environments and adapting to the changed niches, whichinclude other adapting organisms, and so on—adaptation can be end-lessly dynamic. It is important to keep in mind that an entity is adaptivein relation to some criterion.Adaptation in nature is almost always a stochastic process, meaningthat it contains randomness; the word usually refers to a phenomenonthat is probabilistic in nature. Most of the paradigms discussed here willhave a random component. By one definition, randomness exists whenrepeated occurrences of the same phenomenon can result in differentoutcomes. Almost always, things that appear random turn out to be de-terministic (they follow certainly from causes), except that we don’t knowthe chains of causality involved. For instance, people may lament afteran accident, “If only I had taken Third Street instead of Main,” or “If onlyI had never introduced her to him.” These “if only’s” belie our inabilityto perceive the chains of causes that reach from an early state of a systemto a later state.“Random numbers” in a computer are of course not random at all,though we have little trouble referring to them as such. Given the samestarting point, the random number generator will always produce ex-actly the same series of numbers. These quasirandom processes appearrandom to us because the sequence is unpredictable, unless you happento know the formula that produces it. Most of the time, when we say“random” we really mean “unpredictable,” or even just “unexpected;” inother words, we are really describing the state of our understandingrather than a characteristic of the phenomena themselves. Randomnessmay be another one of those things that don’t exist in the world, butStochastic Adaptation: Is Anything Ever Really Random? 9Team LRN
  • 38. only in our minds, something we attribute to the world that is not aquality of it. The attribution of randomness is based on the observer’s in-ability to understand what caused a pattern of events and is not necessar-ily a quality of the pattern itself. For instance, we flip a coin in order tointroduce randomness into a decision-making process. If the directionand magnitude of the force of the thumb against the coin were known, aswell as the mass of the coin and the distribution of density through itsvolume, relevant atmospheric characteristics, and so on, the trajectory ofthe coin could be perfectly predicted. But because these factors are hardto measure and control, the outcome of a flip is unpredictable, what wecall “random”—close enough.Sociobiologist E. O. Wilson (1978) proposed an interesting elabora-tion on the coin-flipping example. He agrees with us that if all the knowl-edge of physical science were focused on the coin flip, the outcome couldbe perfectly predictable. So, he suggests, let us flip something more inter-esting. What would happen if we flipped something more complicated—perhaps a bee? A bee has memory, can learn, reacts to things, and in factwould probably try to escape the trajectory imposed on it by the flipper’sthumb. But, as Wilson points out, if we had knowledge of the nervoussystem of the bee, the behaviors of bees, and something of the history ofthis bee in particular, as well as the other things that we understood inthe previous paragraph about forces acting on a flipped object, we mightbe able to predict the trajectory of the bee’s flight quite well, at leastbetter than chance. Wilson makes the profound point that the bee, kick-ing and flapping and twitching as it soars off the thumb, has “free will,”at least from its own point of view. From the human observer’s perspec-tive, though, it is just an agitated missile, quite predictable.Of course it is easy to extend Wilson’s provocative example one littlestep further and imagine flipping a human being. Perhaps a mighty crea-ture like King Kong or a more subtle Leviathan—perhaps social forces orforces of nature—could control the actual trajectory of a human throughspace. Would that human being have free will? He or she would probablythink so.Let’s insert a social observation here. In a later chapter we will be dis-cussing coordination games. These are situations involving two or moreparticipants who affect one another’s, as well as their own, outcomes.Game-theory researchers long ago pointed out that it is impossible to de-vise a strategy for successfully interacting with someone who is makingrandom choices. Imagine someone who spoke whether the other personwas talking or not, and whether the subject was related to the previousone; such unpredictable behavior would be unconscionably rude andunnerving. Being predictable is something we do for other people; it is a10 Chapter One—Models and Concepts of Life and IntelligenceTeam LRN
  • 39. service we provide to enable other people to deal with us. Random be-havior is rude. Even contrary behavior, where the person does the op-posite of what we want, is at least predictable, and though we might notappreciate them, we can accept rebels who behave contrary to our expec-tations. But behavior performed without any reference to our expecta-tions, that is, random behavior, is dangerous; the “loose cannon” is adanger. Some writers, such as cultural psychologist Michael Tomasello(1999), neuroscientist Leslie Brothers (1997), and social psychologistTom Ostrom (1984), have argued for the primacy of social cognition: cat-egorization and other cognitive processes are simply extensions of socialinformation-processing techniques to apply to nonsocial objects. Our or-dinary interpretation of randomness and its opposite, the identificationof simple kinds of order, might very well be a form of social thinking.For most of the 20th century it was thought that “true” randomnessexisted at the subatomic level. Results from double-slit experiments andnumerous thought experiments had convinced quantum physicists thatsubatomic entities such as photons should be conceptualized both asparticles and as waves. In their wave form, such objects were thought tooccupy a state that was truly stochastic, a probability distribution, andtheir position and momentum were not fixed until they were observed.In one of the classic scientific debates of 20th-century physics, Niels Bohrargued that a particle’s state was truly, unknowably random, while Ein-stein argued vigorously that this must be impossible: “God does not playdice.” Until very recently, Bohr was considered the winner of the dispute,and quantum events were considered to be perhaps the only example oftrue stochasticity in the universe. But in 1998, physicists Dürr, Nonn,and Rempe (1998) disproved Bohr’s theorizing, which had been based onHeisenberg’s uncertainty principle. The real source of quantum “ran-domness” is now believed to be the interactions or “entanglements” ofparticles, whose behavior is in fact deterministic.The basis of observed randomness is our incomplete knowledge of theworld. A seemingly random set of events may have a perfectly good ex-planation; that is, it may be perfectly compressible. Press, Teukolsky,Vetterling, and Flannery’s bible of scientific programming, NumericalRecipes in C (1993), shows a reasonably good random number generatorthat can be written with one line of code. If we don’t know what the un-derlying process is that is generating the observed sequence, we call itrandom. If we define “randomness” as ignorance, we can continue to usethe term, in humility.In a future section we will consider cellular automata, computer pro-grams whose rules are fully deterministic, but whose outputs appear tobe random, or sometimes orderly in an unpredictable way. In discussionsStochastic Adaptation: Is Anything Ever Really Random? 11Team LRN
  • 40. of randomness in this book, we will assume that we are talking aboutsomething similar to the unpredictability that arises from complex sys-tems such as cellular automata. This kind of randomness—and let’s goahead and call it that—is an integral aspect of the environment; life andintelligence must be able to respond to unexpected challenges.We have noted that adaptation usually seems to include some ran-dom component, but we have not asked why. If we consider an adaptivesystem as one that is adjusting itself in response to feedback, then thequestion is in finding the appropriate adjustment to make; this can bevery difficult in a complex system, as we will see shortly, because of bothexternal demands and the need to maintain internal consistency. Inadaptive computer programs, randomness usually serves one of twofunctions. First, it is often simply an expression of uncertainty. Maybe wedon’t know where to start searching for a number, or where to go next,but we have to go somewhere—a random direction is as good as any. Agood, unbiased quasirandom number can be especially useful in areaswhere people have known predispositions. Like the drunk who looks forhis keys under the streetlight, instead of in the bushes where he droppedthem, “because there’s more light here,” we often make decisions that re-flect our own cognitive tendencies more than the necessities of the taskat hand. A random choice can safeguard against such tendencies. Thesecond important function of random numbers is, interestingly, to intro-duce creativity or innovation. Just as artists and innovators are often theeccentrics of a society, sometimes we need to introduce some random-ness just to try something new, in hopes of improving our position. Andlots of times it works.The “Two Great Stochastic Systems”Stochastic adaptation is seen in all known living systems. Probabilisticchoices allow creative, innovative exploration of new possibilities, espe-cially in a changing environment. There are other advantages as well—for instance, perfectly predictable prey would be a little too convenientfor predators. In general, organisms can adapt by making adjustmentswithin what Gregory Bateson (1979) called the “two great stochastic sys-tems.” These systems are evolution and mind.Later we will go into evolutionary and genetic processes in more de-tail. For the present, we note that evolution operates through variationand selection. A variety of problem solutions (chromosomes or patternsof features) are proposed and tested; those that do well in the test tend to12 Chapter One—Models and Concepts of Life and IntelligenceTeam LRN
survive and reproduce, while those that perform poorly tend to be eliminated. This is what is meant by "survival of the fittest." Looking at it from the population level, for instance, looking at the proportion of individuals with a phenotypic trait, we see a kind of stochastic change, as the probabilities of various traits increase and decrease in a population over time, and the mean values of quantitative features, for instance, height or weight, shift along their continua.

The second "great stochastic system" is the mind. Contemporary cognitive scientists see the mind as a stochastic system of neurons, adjusting their firings in response to stimuli that include other neurons. The brain is an excellent exemplar of the concept of a complex adaptive system—but this is not what Bateson meant when he described mind as a great stochastic system, and it is not what we mean. The distinction between brain and mind is extremely important and not subtle at all, though most people today fail to observe it. We are talking about minds here, not brains.

Some theorists argue that mental processes are very similar to evolutionary ones, where hypotheses or ideas are proposed, tested, and either accepted or rejected by a population. The most prevalent opinion along these lines is the "memetic" view, proposed by Dawkins (1976) in The Selfish Gene, which suggests that ideas and other cultural symbols and patterns, called memes, act like genes; they evolve through selection, with mutation and recombination just like biological genes, increasing their frequency in the population if they are adaptive, dying out if they're not.
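The cycle of variation and selection just described is easy to sketch in code. The following is a deliberately minimal, illustrative Python sketch, not an algorithm taken from this book: the fitness measure (counting the ones in a bitstring), the mutation rate, and the population size are arbitrary assumptions chosen only to show the loop of testing, selecting, and reproducing with variation.

import random

def fitness(chromosome):
    # Toy criterion, assumed for the example: more ones is "fitter."
    return sum(chromosome)

def mutate(chromosome, rate=0.01):
    # Copy the parent, occasionally flipping a bit.
    return [bit ^ 1 if random.random() < rate else bit for bit in chromosome]

# A population of random bitstrings.
population = [[random.randint(0, 1) for _ in range(20)] for _ in range(30)]

for generation in range(50):
    # Selection: the better-scoring half survives and reproduces.
    population.sort(key=fitness, reverse=True)
    survivors = population[: len(population) // 2]
    # Variation: offspring are mutated copies of the survivors.
    offspring = [mutate(parent) for parent in survivors]
    population = survivors + offspring

print(max(fitness(c) for c in population))

Over the generations the proportion of high-scoring bitstrings rises, which is the kind of population-level shift in trait frequencies described above.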
  • 42. population. As if manifesting some kind of self-referential amplification,the theory of memes itself has spread through the scientific communitywith surprising efficiency; now there are scientific journals dedicated tothe topic, and a large number of recent books on scientific and computa-tional subjects have endorsed the view in some form. The view that mindand evolution operate by similar or identical processes is very widely ac-cepted by those who should know.The similarities between the two stochastic systems are significant,but the analogy becomes ambiguous when we try to get specific aboutthe evolution of raw abstractions. The ancient question is, what, really, isan “idea?” You could take the view that ideas exist somewhere, in a Pla-tonic World of Forms, independent of minds. Our insights and under-standing, then, are glimpses of the ideal world, perceptions of shadowsof pure ideas. “New” ideas existed previously, they had just not beenknown yet to any human mind. Next we would have to ask, how doesevolution operate on such ideas? No one has proposed that ideas evolvein their disembodied state, that truth itself changes over time, that theWorld of Forms itself is in a dynamical process of evolving. The samemathematical facts, for instance, that were true thousands of years agoare equally true today—and if they are not true now, they never were. Ithas only been proposed that ideas evolve in the minds of humans; inother words, our knowledge evolves. The distinction is crucial.The other view is that ideas have no existence independent of minds.According to this view, ideas are only found in the states of individualminds. At its most extreme, this position holds that an idea is nothingmore than a pattern of neuronal connections and activations. Memeticevolution in this view then consists in imitating neuronal patterns, test-ing them, and accepting them if they pass the test. The problem here isthat the same idea is almost certainly embodied in different neuronalpatterns in different people—individuals’ brains are simply wired updifferently.Luckily for us, we don’t have to resolve millennia-old questions aboutthe preexistence of ideas. For the present discussion it is only importantto note that for memetic evolution to act upon ideas, they have to be-come manifest somehow in minds. They have to take on a form that al-lows mental representation of some type (a controversial topic in itself),and if they are to propagate through the population, their form mustpermit communication. The requirement of communication probablymeans that ideas need to be encoded in a symbol system such aslanguage.Our argument is that cultural evolution should be defined, not as op-erations on ideas, but as operations on minds. The evolution of ideas14 Chapter One—Models and Concepts of Life and IntelligenceTeam LRN
  • 43. involves changes in the states of minds that hold ideas, not changes inthe ideas themselves; it is a search—by minds—through the universe ofideas, to find the fitter ones. This will become more important when wediscuss mental activity, intelligence, and culture.We will emphasize now, and later as well, that cognition is a differentprocess from genetic evolution. The great stochastic systems are differentfrom one another: one uses selection, removing less fit members fromthe population, and the other adapts by changing the states of individu-als who persist over time. These are two different kinds of adaptation.This is not to deny that ideas evolve in the minds of humans. The ideasexpressed by people certainly change over time, in an adaptive way. Weare suggesting a change in emphasis, that a scientific view of the evolu-tion of ideas should look at changes of states of individuals, rather thanat the ideas themselves.There is something difficult about thinking of minds in populations.We are used to thinking of ourselves as autonomous thinkers, sometimesaccepting beliefs and processes that others around us hold, but most ofthe time figuring out things on our own. Our experience is that we are incontrol of our cognitive systems, perhaps with some exceptions, such aswhen we are surprised or overcome with emotions; we experience ourthoughts as controlled and even logical. If we are to consider the evolu-tion of ideas through a population, though, we will need to transcendthis illusion of autonomy and observe the individual in the context of asociety, whether it is family, tribe, nation, culture, or pancontinentalspecies. We are not writing this book to justify or romanticize commonlyheld beliefs about mankind’s important place in the universe; we in-tend to look unblinking at the evidence. In order to do that, we needto remove self-interest and sentimental self-aggrandizement from thediscussion.In Mind and Nature, Bateson detailed what he considered to be the cri-teria of mind, qualities that were necessary and sufficient for somethingto be called a mind:1. A mind is an aggregate of interacting parts or components.2. The interaction between parts of mind is triggered by difference. For in-stance, perception depends on changes in stimuli.3. Mental process requires collateral energy. The two systems involvedeach contribute energy to an interaction—as Bateson says, “Youcan take a horse to water, but you cannot make him drink. Thedrinking is his business” (p. 102).The “Two Great Stochastic Systems” 15Team LRN
  • 44. 4. Mental process requires circular (or more complex) chains of determina-tion. The idea of reciprocal causation, or feedback, is a very impor-tant one and is fundamental to mental processes.5. In mental process, the effects of difference are to be regarded as trans-forms (i.e., coded versions) of the difference which preceded them. Ef-fects are not the same as their causes; the map is not the same asthe territory.6. The description and classification of these processes of transformationdiscloses a hierarchy of logical types immanent in the phenomena.We will not attempt here to explain or elaborate Bateson’s insightfuland subtle analysis. We do want to note though that he hints—in fact itis a theme of his book—that biological evolution meets these criteria,that nature is a kind of mind. This seems a fair and just turnaround of thecurrently prevalent opinion that mind is a kind of evolution. We will seethat evolutionary processes can be encoded in computer programs usedto solve seemingly intractable problems, where the problem is defined asthe analogue of an ecological niche, and recombined and mutated varia-tions are tested and competitively selected. In other words, we can capi-talize on the intelligence, the mental power, of evolution to solve manykinds of problems. Once we understand the social nature of mental pro-cesses, we can capitalize on those as well.It appears to some thinkers that mind is a phenomenon that occurswhen human beings coexist in societies. The day-to-day rules of livingtogether are not especially complicated—some relatively straightforwardscraps of wisdom will get you through life well enough. But the accumu-lated effect of these rules is a cultural system of imponderable depth andbreadth. If we could get a sufficiently rich system of interactions going ina computer, we just might be able to elicit something like human intelli-gence. The next few sections will examine some ways that scientists andother curious people have tried to understand how to write computerprograms with the qualities of the great stochastic systems.The Game of Life: Emergence in Complex SystemsWe have described inanimate things as being intolerant of variation inthe environment and have used everyday commercial computer pro-grams as an example of this. But it doesn’t need to be that way: research-ers have developed methods of computing that are tolerant of errors and16 Chapter One—Models and Concepts of Life and IntelligenceTeam LRN
novel inputs, computer programs that show the kinds of adaptation that we associate with living things.

The Game of Life

An arbitrary starting point for the story of the modern paradigm known as artificial life is an idea published in Scientific American in October 1970, in Martin Gardner's "Mathematical Games" column: mathematician John Conway's "Game of Life" (Gardner, 1970). The Game of Life is a grid of binary elements, maybe checkerboard squares or pixels on a screen, arranged on a two-dimensional plane, with a simple set of rules to define the state of each element or "cell." Each cell is conceptualized as belonging to a neighborhood comprising its eight immediate neighbors (above, below, to the sides, and diagonal). The rules say that

• If an occupied cell has fewer than two neighbors in the "on" state (which we can call "alive"), then that cell will die of loneliness—it will be in the "off" or "dead" state in the next turn.
• If it has more than three neighbors in the on state, then it will die of overcrowding.
• If the cell is unoccupied and it has exactly three alive neighbors, it will be born in the next iteration.

The Game of Life can be programmed into a computer, where the rules can be run iteratively at a high speed (unlike in Conway's time, the 1960s, when the game had to be seen on a checkerboard or Go board, or other matrix, in slow motion). The effect on the screen is mesmerizing, always changing and never repeating, and the program demonstrates many of the features that have come to be considered aspects of artificial life.

A common way to program it is to define cells or pixels on the screen as lit when they are in the alive state, and dark when they are not, or vice versa. The program is often run on a torus grid, meaning that the cells at the edges are considered to be neighbors to the cells at the opposite end. A torus is shaped like a doughnut; if you rolled up a sheet of paper so that the left and right edges met, it would be a tube or cylinder, and if you rolled that so that the open ends met, it would be in a torus shape. A torus grid is usually displayed as a rectangle on the screen, but where rules in the program require a cell to assess the states of its neighbors, those at the edges look at the cells at the far end.
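The three rules above translate almost line for line into code. What follows is a minimal, illustrative Python sketch of one update step on a torus grid, with the wraparound handled by modular arithmetic; the grid size, the random initial pattern, and the number of iterations are arbitrary choices made for the example rather than part of Conway's specification.

import random

SIZE = 20  # width and height of the torus grid; an arbitrary choice

def random_grid():
    return [[random.randint(0, 1) for _ in range(SIZE)] for _ in range(SIZE)]

def live_neighbors(grid, row, col):
    # Count the eight surrounding cells, wrapping around the edges (torus).
    total = 0
    for dr in (-1, 0, 1):
        for dc in (-1, 0, 1):
            if dr == 0 and dc == 0:
                continue
            total += grid[(row + dr) % SIZE][(col + dc) % SIZE]
    return total

def step(grid):
    new_grid = [[0] * SIZE for _ in range(SIZE)]
    for row in range(SIZE):
        for col in range(SIZE):
            n = live_neighbors(grid, row, col)
            if grid[row][col] == 1:
                # An occupied cell survives only with two or three live
                # neighbors; otherwise it dies of loneliness or overcrowding.
                new_grid[row][col] = 1 if n in (2, 3) else 0
            else:
                # An empty cell with exactly three live neighbors is born.
                new_grid[row][col] = 1 if n == 3 else 0
    return new_grid

grid = random_grid()
for _ in range(100):
    grid = step(grid)

Displaying each generation, for instance by printing lit and dark characters for ones and zeroes, reproduces the ever-changing patterns described below.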
  • 46. When the Game of Life runs, cohesive patterns form out of apparentrandomness on the grid and race around the screen. Two patterns collid-ing may explode into myriad new patterns, or they may disappear, orthey may just keep going right through one another. Some simple pat-terns just sit there and blink, while some have been likened to cannons,shooting out new patterns to careen around the grid. A kind of Game ofLife lore has developed over the years, with names for these different pat-terns, such as “blinkers,” “b-heptaminos,” “brains” and “bookends,”“gliders” and “glider guns,” and “r-pentaminos.” Some of the names arescientifically meaningful, and some are just fun. Figure 1.1 shows theglider pattern, which moves diagonally across the screen forever, unlessit hits something.The Game of Life is fascinating to watch. Writer M. Mitchell Waldrop(1992) has called it a “cartoon biology” (p. 202). When it is programmedto start with a random pattern, it is different every time, and sometimesit runs for a long time before all the cells die out. Artificial life researchershave pointed out many ways that Conway’s simple game is similar to bi-ological life, and it has even been suggested that this algorithm, or onelike it, might be implemented as a kind of universal computer. Mostly,though, there is a sense conveyed by the program that is chillingly famil-iar, a sense of real life in process.EmergencePerhaps the most obvious and most interesting characteristic of theGame of Life is a property called emergence. There is much discussion inthe scientific community about a complete definition of emergence; atleast in this simple instance we can see that the complex, self-organizingpatterns on the screen were not written into the code of the program thatproduced them. The program only says whether you are dead or alive de-pending on how many of your neighbors are alive; it never defines blink-ers and gliders and r-pentaminos. They emerge somehow from a lower-level specification of the system.Emergence is considered to be a defining characteristic of a complexdynamical system. Consider for instance the economy of a nation. (Notethat economies of nations interact, and that the “economy of a nation”is simply an arbitrary subset of the world’s economy.) As a physical sys-tem, an economy consists of a great many independent pairwise interac-tions between people who offer services or goods and people who wantthem. The 18th-century Scottish economist Adam Smith proposed that18 Chapter One—Models and Concepts of Life and IntelligenceTeam LRN
  • 47. there seems to be something like an “invisible hand” guiding the wholeprocess, and in fact it is the absence of an invisible hand that makes thesystem interesting to us. Somehow, though no person controls an econ-omy, it evolves toward and maintains an equilibrium; it is a relatively sta-ble stochastic process, not really predictable but dependable. Prices andwages are consistent from place to place and over time. This does notmean they are the same everywhere, or at all times, but the differencesare more or less consistent. The important point is that there is no cen-tral control. The stability or consistency of an economy at the large scaleemerges from the qualities of the very many person-to-person interac-tions that make it up. (We will emphasize later the fact that the local in-dividual interactions are affected by the global state of the system as well,a top-down phenomenon we call immergence.)An ecology (note that all ecologies are interconnected) is another ex-ample of a complex system with emergent qualities. (Note that econo-mies and ecologies are components of the same global system.) Predatorsand prey keep one another in check; trees and underbrush, hosts andparasites, heat and gases and fluids and soils interact incalculably to pro-duce a system that persists very well over time in a kind of dynamic equi-librium. These complex systems are resilient; that is, if they are per-turbed, they return to balance. This is a feature that we earlier describedas “animate,” a feature of life: graceful degradation or even complete re-covery in the face of disruption. Any individual in an economy may ex-perience failure. Any organism in an ecological environment might beuprooted or killed. Whole species may become extinct. Volcanoes, hurri-canes, stock market crashes may result in devastation, but the system re-pairs itself and returns to a stable state—without central control.Some have argued that emergence is simply a word, like “random,”to cover up our ignorance about the true relationships between causesand effects. In this view, “emergent” really means that an effect wasnot predicted, implying that the person describing the system didn’tThe Game of Life: Emergence in Complex Systems 19TimeFigure 1.1 The ubiquitous “glider” pattern in the Game of Life.Team LRN
understand it well enough to guess what was going to happen. On the other hand, emergence can be viewed as a kind of process in which the system returns robustly to an attractor that is an irresistible mathematical necessity of the dynamics that define it. These are not mutually exclusive definitions of emergence; an omniscient person would expect the emergent effect, but its relation to the lower-level events that produce it is often too complex for most of us to comprehend. In his book Emergence: From Chaos to Order, John Holland (1998) simply refuses to try to define the term, leaving us to understand it in both its senses, whichever is more appropriate in any instance. Whichever way you choose to consider emergence, the fact remains that very complex systems are able to maintain something like equilibrium, stability, or regularity without any invisible hand or central control.

An important aspect of the Game of Life is the fact that the rules as given by Conway cause the system to run for a long time without repeating. Other rules can be made up: for instance, a cell could stay alive only if all or none of its neighbors were alive. There are many freeware Game of Life programs that readers can acquire for experimenting with various rules. The finding will be that most rule sets are uninteresting. For most sets of rules the system simply stops after a few time steps, with all cells either on or off, or the screen fills with utter chaos. Only a few known sets of rules result in a system that continues without repetition. In order to understand how this occurs, we will have to move the discussion up a notch to discuss the superset of which the Game of Life is a member: cellular automata.

Cellular Automata and the Edge of Chaos

The cellular automaton (CA) is a very simple virtual machine that results in complex, even lifelike, behavior. In the most common one-dimensional, binary versions, a "cell" is a site on a string of ones and zeroes (a bitstring): a cell can exist in either of two states, represented as zero or one. The state of a cell at the next time step is determined by its state at the current time and by the states of cells in its neighborhood. A neighborhood usually comprises some number of cells lying on each side of the cell in question. The simplest neighborhood includes a cell and its immediately adjacent neighbors on either side. Thus a neighborhood might look like 011, which is the cell in the middle with the state value 1 and its two adjacent neighbors, one in state 0 and one in state 1.
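As a small, concrete illustration (the bitstring here is made up for the example), the neighborhood of any site can be read directly off the string; treating the first and last cells as adjacent anticipates the ring arrangement used for one-dimensional CAs later in the chapter.

bits = [0, 1, 1, 0, 1, 0, 0, 1]   # an arbitrary example bitstring

def neighborhood(bits, i):
    # Left neighbor, the cell itself, right neighbor; the indices wrap
    # around so the two ends of the string are treated as adjacent.
    n = len(bits)
    return (bits[(i - 1) % n], bits[i], bits[(i + 1) % n])

print(neighborhood(bits, 2))   # (1, 1, 0)
print(neighborhood(bits, 0))   # (1, 0, 1), showing the wraparound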
A neighborhood is embedded in a bitstring of some length greater than the neighborhood size. For instance, the neighborhood 011 occurs around the fifth position of this bitstring of length = 20:

01001100011011110111

The behavior of a CA is defined in a set of rules based on the pattern of states in the neighborhood. With a three-cell neighborhood, there are 2³ = 8 possible neighborhood configurations. A CA rule set specifies the state of the cell at the next iteration for each of these eight configurations. For instance, a rule set might be given as shown in Table 1.1.

Table 1.1 A set of rules for implementing a one-dimensional cellular automaton.

Neighbor 1   Cell now   Neighbor 2      Cell next
    1           1           1       →       0
    1           1           0       →       1
    1           0           1       →       0
    1           0           0       →       1
    0           1           1       →       1
    0           1           0       →       0
    0           0           1       →       1
    0           0           0       →       0

These rules are applied to every cell in the bitstring, the next state is calculated, and then the new bitstring is printed in the next row, or more likely, ones are indicated by lit pixels and zeroes by dark pixels. As the system iterates, the patterns of cells as they are affected by their rules over time are printed on the screen, each iteration producing the row of pixels directly under the preceding iteration's. Figure 1.2 shows some examples.

Figure 1.2 Some examples of one-dimensional binary cellular automata. Each row of pixels is determined by the states of the pixels in the row above it, as operated on by the particular set of rules.
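The table translates directly into a lookup from each three-cell neighborhood to the center cell's next state. The following is a minimal, illustrative Python sketch rather than code from this book; the ring length, the number of steps, and the single-seed starting row are arbitrary choices. (Read with Neighbor 1 as the left-hand neighbor, the rule set of Table 1.1 is the one Wolfram numbers rule 90.)

# Rule table from Table 1.1: (left neighbor, cell now, right neighbor) -> next state.
RULES = {
    (1, 1, 1): 0, (1, 1, 0): 1, (1, 0, 1): 0, (1, 0, 0): 1,
    (0, 1, 1): 1, (0, 1, 0): 0, (0, 0, 1): 1, (0, 0, 0): 0,
}

def step(cells):
    # The row is treated as a ring: the first and last cells are neighbors.
    n = len(cells)
    return [RULES[(cells[(i - 1) % n], cells[i], cells[(i + 1) % n])]
            for i in range(n)]

# Start from a single "on" cell in the middle of a ring of 31 cells.
cells = [0] * 31
cells[15] = 1

for _ in range(16):
    print("".join("#" if c else "." for c in cells))
    cells = step(cells)

Each printed row is one iteration, drawn beneath the previous one, which is the kind of picture described for Figure 1.2.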
In an important early contribution to the study that became artificial life theory, Stephen Wolfram (1984/1994) noted that CA behaviors can be assigned to four classes:

1. Evolution leads to a homogeneous state, with all cells in the same state.
2. Evolution leads to a set of simple stable or periodic structures.
3. Evolution leads to a chaotic pattern.
4. Evolution leads to complex localized structures, sometimes long-lived.
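A crude way to recognize the first two classes in practice is to run an automaton and watch for repeated states: if a new row equals the previous row, the system has frozen; if it equals any earlier row, it has settled into a cycle; otherwise it is still wandering. The sketch below is illustrative only; the rule used (Wolfram's rule 4, under which a cell stays on only when both of its neighbors are off) and the ring length are arbitrary choices for the example.

import random

# Rule 4: the center cell is on at the next step only for the neighborhood 010.
RULE_4 = {(a, b, c): 1 if (a, b, c) == (0, 1, 0) else 0
          for a in (0, 1) for b in (0, 1) for c in (0, 1)}

def step(cells, rules):
    n = len(cells)
    return tuple(rules[(cells[(i - 1) % n], cells[i], cells[(i + 1) % n])]
                 for i in range(n))

def classify(cells, rules, max_steps=200):
    history = [cells]
    for _ in range(max_steps):
        cells = step(cells, rules)
        if cells == history[-1]:
            return "frozen: homogeneous or simple stable structures"
        if cells in history:
            return "periodic structures"
        history.append(cells)
    return "no repeat found within the step limit: chaotic or complex"

start = tuple(random.randint(0, 1) for _ in range(16))
print(classify(start, RULE_4))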
The first type of CA can be characterized as a point attractor; the dynamics of the system simply stop with all cells in the one state or all in the zero state. The second type results in a repeating, cyclic pattern, called a periodic attractor, and the third type results in a complex nonrepeating pattern with no apparent patterns to be seen: a strange attractor. Keep in mind that, though these strange system behaviors appear random, they are fully deterministic. The rules are clear and inviolable; there are no perturbations from outside the system or surprises introduced. The state of every cell can be predicted precisely from its neighborhood's state in the previous iteration, all the way back from the starting configuration.

Cellular automata of the fourth type are the interesting ones to us. Certain sets of rules result in patterns that run on and on, maybe forever, creating patterns that are characteristic and identifiable but that never repeat. These patterns look like something produced by nature, like the tangle of branches in a winter wood, the king's crowns of raindrops on a puddle, the wiggling ripples of wind blowing over a pond's surface, the fuzz on a thistle. Looking at the computer screen as the sequence of CA iterations unfolds, you feel a sense of recognizing nature at work.

John von Neumann (1951), who first proposed the idea of cellular automata a half-century ago, described an organism composed of cells, each one of which was a "black box," a unit with some kinds of unknown processes inside it. The term "cellular automata" was actually coined by Arthur Burks (1970). The state of each cell, for instance, a neuron, depends on the states of the cells it is connected to. The rules of a CA describe patterns of causality, such as exist among parts of many kinds of complex systems. The state of any element in the system—ecology, brain, climate, economy, galaxy—depends causally on the states of other elements, and not just on the states of individual elements, but on the pattern or combination of states. Life and mind are woven on just this kind of causal patterning, and it is a theme we will be dwelling on much in these pages.

Stephen Wolfram had shown that Type 4 cellular automata (the Game of Life is one) could function as universal computers. Langton demonstrated how CA gliders or moving particles can be used to perform
  • 52. computations, while stationary, period-two blinkers can store informa-tion. In one of the pioneering essays on artificial life, ChristopherLangton (1991) argued that the unending complexity of the Type 4 CA,almost comprehensible but never predictable, may be directed and ma-nipulated in such a way that it can perform any kind of computationimaginable. In his resonating phrase, this is “computation on the edge ofchaos.”Why the “edge of chaos?” Langton compared “solid” and “fluid”states of matter, that is, static and dynamic molecular states, and sug-gested that these were two “fundamental universality classes” of dynam-ical behavior. A system that exhibits Type 1 or 2 behavior, that is, a staticor periodic system, is predictable and orderly. Langton compared thesekinds of systems to the behaviors of molecules in solids and noted thatsuch simple and predictable systems are unable to either store or processinformation. Type 3 systems, similar to the gaseous states of matter, areessentially random. Whatever information they may contain is over-powered by the noise of the system. Langton argued that the interestingType 4 dynamical systems, which are capable of both storing and pro-cessing information, resemble matter in between states, at the instant ofa phase transition. For instance, molecules in frozen water are locked intotheir positions, unable to move, while molecules in a liquid move chaoti-cally, unpredictably. Decreasing the temperature of a liquid can cause themolecules to change from the unpredictable to the predictable state, butright at the transition each molecule must make an either-or decision, totry to lock into place or to try to break free; at the same moment, thatmolecule’s neighbors are making the same decision, and they may decidethe same or different. The result is the formation of extended, complexJack-Frost fingers or islands of molecules in the solid state, propagatingthrough the liquid matrix, dissolving in some places even while newstructures form in others. The behavior of the system at the edge of chaosis not predictable, and it’s not random: it is complex.Langton observed the appearance, at the edge of chaos, of simple lin-ear patterns in the CA that persist over time, sometimes shifting positionregularly so as to appear to move across the screen (see Figure 1.3); someof Conway’s patterns are examples of these phenomena in two dimen-sions. Langton compared these to solitary waves and described them as“particle-like.” It is as if a quality were transmitted across the fieldthrough changes at successive locations of a medium, though the emer-gent entity is composed only of changes of the states of those locations.These “particles” sometimes continue through time until they collidewith other particles, with the result being dependent on the nature of the24 Chapter One—Models and Concepts of Life and IntelligenceTeam LRN
  • 53. particles, the angle of their impact, the state of the neighborhood inwhich they collided, and other factors.Langton’s early interest in cellular automata has evolved into his cur-rent involvement in the development of Swarm, a computer programthat allows researchers to simulate the interactions of actors in a com-plex system. Langton and his colleagues at Santa Fe Institute and theSwarm Corporation use the word “swarm” to mean any loosely struc-tured collection of agents that interact with one another. In common us-age, the term agent refers to an object in a computer program that has anidentity and performs some actions, usually with some presumption thatthe activities of the agent are somewhat autonomous or independent ofthe activities of other agents. An ant colony is a kind of swarm where theagents are ants, highway traffic can be conceptualized as a swarm whoseagents are cars and drivers, and so on; all these kinds of swarms can besimulated using this CA-derived software.Swarm researchers are especially interested in the interactions be-tween individual- and group-level phenomena; for instance, a bird flockhas properties over and above the properties of the birds themselves,though there is of course a direct relation between the two levels of phe-nomena. The Swarm simulation program allows a user to program hier-archical swarms; for instance, an economy might be made up of a swarmof agents who are people, while each person might be made up of aswarm that is their beliefs or even the neurons that provide theCellular Automata and the Edge of Chaos 25Figure 1.3 Particles in a cellular automaton. As the system steps through time, three patterns (inthis case) are perpetuated.Team LRN
  • 54. infrastructure for their beliefs—swarms within swarms. So far, uses ofSwarm have been focused on providing simulations of higher-level phe-nomena that emerge from lower-level specification of rules, much asgliders and other CA particles emerge from the low-level rule sets speci-fied in a cellular automaton.One-dimensional cellular automata are the most commonly studiedtype. Where the two-dimensional Game of Life was pictured as a torus,the one-dimensional CA is simply a ring, with the first and last cells of arow joined. One row of binary cells iterates down the screen. In theGame of Life cells are defined on a plane rather than a line, and the stateof each cell depends on the cells above and below as well as to the sides ofit. That means that patterns run not only down the screen, but up it, andacross it, and at weird unpredictable angles like a rat in a dairy.Later we will be spending some time on the topic of dimensionality.Just as a teaser, think about what a three-dimensional Game of Lifewould look like. Gliders and other patterns can move not only in a rect-angle on a plane, but above and below it; they can move up away fromthe surface of the environment as well as across it. Now the CA can be abird flock, or a star cluster, or weather, extending in space.It was easy to imagine a third dimension—now try a fourth. Mathe-matically there is no reason not to devise a rule set for a four-dimensionalCA. What would it look like? Well, we can’t say. Imagine a five-, six-, orseven-dimensional, try a one-hundred-dimensional vision of gliders,blinkers, patterns marching through the hyperspace (a hyperspace is aspace of more than three dimensions). Perhaps we can eventually ap-proach the emergent dynamics of the great stochastic systems. The cellu-lar automaton, a simple machine brimming with interactions, gives us away of thinking of randomness and stochastic behavior in the systemswe will be discussing; everything follows the rules, but nothing ispredictable.Artificial Life in Computer ProgramsWe have mentioned that life may be a quality we attribute to somethings, even more than it is a quality of the things themselves. This issueis perhaps most salient in the consideration of computer programs thatemulate life processes. An artificial life program does not have to resem-ble any “real” living thing; it might be considered living by some set ofcriteria but seem, well, completely strange by earthly biostandards, bywhat we know from nature as it is on earth. Things might be able to26 Chapter One—Models and Concepts of Life and IntelligenceTeam LRN
  • 55. learn, to reproduce, to evolve in an adaptive way, without resemblingearthling life much at all. Conway’s Game of Life opened the door for in-tense study of systems that exhibit lifelike behaviors.And what are lifelike behaviors? It is sometimes said that a definingfeature of life is its resistance to entropy. Entropy is disorder, which al-ways increases in a system. Well-organized patterns fall into decay, disin-tegration, deterioration, as rust never sleeps. Yet life alone seems contin-ually to renew itself. Wolfram’s Type 4 CAs seem to have the ability toperpetually renew themselves, and it is for this reason that they have be-come the symbol, if not the seed, for artificial life research. They seem re-sistant to entropy.It is possible to apply evolutionary algorithms to CA rules to producebehaviors of particular types, “breeding” interesting CAs. For instance,random mutations can be introduced in order to explore variations onCA rules by flipping a bit in the rule table occasionally—not too often, oryou will destroy the behavior, but a low rate of mutation might allow theevolution of interesting dynamics. In this case we are considering therule set as a kind of chromosome, a string of genes encoding the behaviorof the cellular automata. This is a good analogy to the distinction be-tween genotype and phenotype in nature: you can’t see the genotype; it isan abstract coding scheme or program for the organism. What you see innature is the phenotypic expression of the genotype. Just so in cellularautomata, a researcher might encode the rule set in a string of ones andzeroes and mutate that string to evolve surprising behaviors in the visiblecells.One early program that could be called artificial life was produced as asort of demonstration of the effects of genetic mutation on phenotypicforms. In The Blind Watchmaker, Richard Dawkins (1987) created graphi-cal creatures he called biomorphs (see Figure 1.4), whose form was en-coded in a chromosome of nine genes, each of which could take on avalue from zero to nine. These genes encoded the rules for the develop-ment of the biomorph, for instance, the angle or length of a branch, andthey could be “mutated” by one step. That is, a gene in state 5 can,through mutation, be changed to a 4 or a 6. The user could generate apopulation of biomorphs with random genes, then select one thatlooked interesting. Dawkins reported that, though he expected his or-ganisms to look like trees, he discovered that insect forms were also pos-sible. Thus, he might select a biomorph that looked particularly insect-like. Once selected, this biomorph reproduces to create a generation ofchildren, each containing a random mutation that causes it to differslightly from its parent. The user can then select an interesting-lookingmember of the offspring population to produce another generation ofArtificial Life in Computer Programs 27Team LRN
mutations resembling it, and so forth. The result is incremental and unpredictable change in a desired direction: evolution.

Figure 1.4 Examples of biomorphs as described by Richard Dawkins.

In nature of course it is the environment that decides what individuals will produce the next generation. Individual organisms with some new genetic pattern, that is, a mutation, may or may not reproduce, depending on the effect of the mutation in combination with the rest of the genetic heritage in the environment. In Dawkins' program the researcher "plays God" with the population, deciding who lives and who dies, guiding evolution from step to step according to whatever whim or goal he or she has.

It is important to think about the size and structure of the search space for a population of biomorphs. With nine genes, each of which can exist in one of 10 states, we can see that there are 10⁹, that is, 1,000,000,000, possible genetic combinations. These can be thought of as points in a nine-dimensional space. Though of course we can't visualize such a space, we can visualize objects with attributes selected from locations on each of those nine dimensions. If mutation consists of making a change on a single dimension, and if the changes are (as in Dawkins' program) simple moves to positions that are a single unit away on the dimension, then we can say that the child is adjacent to the parent. That means that
conceptually, mathematically, they are near one another in the search space. This conceptualization turns out to be important when we want to search for a good point or region in the space, for instance, in searching for the solution to a problem. Whether the objects are continuous, binary, or—as in the current case—discrete, the concept of distance pertains in searching the space; in particular, the concept applies when we want to decide what size steps to take in our search. If we can't look at every point in a space, as in this case where there are a billion possibilities, then we must come up with a plan for moving around. This plan will usually have to specify how big steps should be in order to balance the need to explore or look at new regions, versus the need to exploit knowledge that has already been gained, by focusing on good areas. Dawkins' biomorphs search by taking small steps through the space of possible mutations, so that the creatures evolve gradually, changing one aspect at a time. In nature as well, adaptive changes introduced by mutation are generally small and infrequent.

Other artificial life researchers have taken a slightly less godlike approach to the evolution of computer-generated organisms. If the researcher provides information about the physics and other aspects of the environment, then life-forms can adapt to environmental demands and to one another. The artificial creatures developed by Karl Sims (e.g., 1994) are a beautiful example of this. Sims creates artificial three-dimensional worlds with physical properties such as gravity, friction, collision protection, and elasticity of objects so that they bounce realistically off one another upon colliding. "Seed" creatures are introduced into this world and allowed to evolve bodies and nervous systems. In order to guide the evolution, the creatures are given a task to measure their fitness; for instance, in one version the goal is to control a cube in the center of the world. The creatures are built out of rectangular forms whose size, shape, number, and arrangement is determined by evolution. Their "minds" are built out of artificial neurons composed of mathematical operations that are also selected by evolution.

The randomly initialized creatures in Sims' programs have been shown to evolve surprising strategies for controlling the cube, as well as innovative solutions to problems of locomotion on land and in water, with all forms of walking, hopping, crawling, rolling, and many kinds of fin waving, tail wagging, and body wriggling in a fluid environment. Some movies of these creatures are available on the web—they are well worth looking at.

It is possible then, in computer programs, to evolve lifelike beings with realistic solutions to the kinds of problems that real life-forms must solve. All that is needed is some way to define and measure fitness, some
  • 58. method for generating new variations, and some rules for capitalizing onadaptive improvements.Christopher Langton (1988) has defined artificial life as “the study ofman-made systems that exhibit behaviors characteristic of natural livingsystems.” In the same paper, he states, “Life is a property of form, notmatter . . .” If we accept Langton’s premise, then we would have to admitthat the similarity between an artificial-life program and life itself may besomewhat stronger than the usual kind of analogy. We will avoid declar-ing that computer programs live, but will note here and in other placesthroughout this book that it is unsettlingly difficult sometimes to drawthe line between a phenomenon and a simulation of that phenomenon.Intelligence: Good Minds in People and MachinesWe have discussed the futility of arguing about whether an instance ofsomething fits into one category or another. The next topic is perhapsthe most famous example of this dilemma. Intelligence is a word usuallyused to describe the mental abilities of humans, though it can be appliedto other organisms and even to inanimate things, especially computersand computer programs. There is very little agreement among psycholo-gists and little agreement among computer scientists about what thisword means—and almost no agreement between the two groups. Be-cause the Holy Grail of computing is the elicitation of intelligence fromelectronic machines, we should spend some time considering the mean-ing of the word and the history of the concept in modern times. It isn’talways such a warm and fuzzy story. Again we find that the Great Sto-chastic Systems, evolution and mind, are tightly linked.Intelligence in People: The Boring CriterionIn this Bell Curve world (Herrnstein and Murray, 1994) there is a greatamount of heated controversy and disagreement about the relationshipbetween intelligence and heredity. Much of the discussion focuses on theissues of statistical differences among various populations on IQ test per-formance. It is interesting in this light to note that the modern conceptof intelligence arose in association with heredity, and that attempts tounderstand intelligence and heredity have historically been intertwined.Following the world-shaking evolutionary pronouncements of Darwin,30 Chapter One—Models and Concepts of Life and IntelligenceTeam LRN
  • 59. the eugenicists of the 19th and the first half of the 20th centuries had theidea that humans should practice artificial selection. According to Dar-winist theory, natural selection is the process by which species arechanged; adaptation is propelled by selection pressures applied by theenvironment. Artificial selection then would be a process whereby humanauthorities would decide which of their peers would be allowed to propa-gate and which would not. As Victoria Woodhull, who in 1872 was thefirst woman ever nominated for president of the United States, said, “Ifsuperior people are to be desired, they must be bred; and if imbeciles,criminals, paupers, and [the] otherwise unfit are undesirable citizens,they must not be bred.” Thus was born the modern concept of intelli-gence: a criterion for selecting who should be allowed to reproduce.The concept of intelligence has always been intended to distinguish“better” people from “worse.” Intelligence can be simply defined by a setof measures that support the experts’ opinion of what comprises a“good” mind. In 20th-century American and European society, thedefinition has included some widely agreed-upon qualities such as mem-ory, problem-solving ability, and verbal and mathematical abilities. Thefact is, though, that intelligent people, including intelligence experts,have different abilities from one another; as far as they attempt to defineintelligence they end up describing their own favorite qualities—thus in-evitable disagreement.We noted above that there is very little consensus on a definition ofintelligence. In psychology, the most quoted (off the record) definitionof intelligence is that given in 1923 by E. G. Boring: intelligence is what-ever it is that an intelligence test measures. The study of intelligence inpsychology has been dominated by a focus on testing, ways to measurethe trait, and has suffered from a lack of success at defining what it is.Intelligence has always been considered as a trait of the individual;for instance, an underlying assumption is that measurements of a per-son’s intelligence will produce approximately the same results at differ-ent times (an aspect of measurement known as “reliability”). This book isabout collective adaptation, and we will tend to view intelligence fromthe viewpoint of the population. As noted in the first paragraphs of thePreface, humans have proven themselves as a species. While differencesbetween individuals exist, the achievements of the outstanding individ-uals are absorbed into their societies, to the benefit of all members. Wecan’t all be Sir Isaac Newton, but every high school offers courses inphysics and calculus, and all of us benefit from Newton’s discoveries. Theachievements of outstanding individuals make us all more intelligent—we may not all rise to the standard of an Einstein, but by absorbing theirIntelligence: Good Minds in People and Machines 31Team LRN
  • 60. ideas we raise our level of functioning considerably. Michael Tomasello(1999) calls this the ratchet effect: cumulative cultural evolution resultingfrom innovation and imitation produces an accumulation of problemsolutions building on one another. Like a ratchet that prevents slippagebackwards, cultures maintain standards and strive to meet or exceedthem.Intelligence in Machines: The Turing CriterionSo far we have been talking about intelligence as a psychological phe-nomenon, as it appears in humans. Yet for the past half-century or morethere has been a parallel discussion in the computer-science community.It became immediately apparent that electronic computers could dupli-cate many of the processes associated with minds: they could processsymbols and statistical associations, could reason and remember and re-act to stimuli, all things that minds do. If they could do things that ourbiological brains do, then perhaps we could eventually build computersthat would be more powerful than brains and program them to solveproblems that brains can’t solve. There was even the suggestion thatcomputers could become more intelligent than people.In a nutshell: that hasn’t happened. Modern computers have notturned out to be very good at thinking, at solving real problems for us.Perhaps part of the reason may be found in the way that computer scien-tists have defined intelligence, which is very different from how psychol-ogists have defined it, and how they have implemented their beliefsabout intelligence.It is fair to say that any discussion of computer intelligence eventuallycomes around to the Turing test (Turing, 1950). The Turing test soundssimple enough. A subject is placed in a room with a computer keyboardand monitor, while in another room there is a computer and a person.The subject types questions into the keyboard and receives a reply fromthe other side. A simplistic summary of the test is this: if the subject can’ttell if the computer’s responses were generated by the human or the ma-chine, then the computer is considered intelligent.At first this sounds like a strange test of intelligence. It certainly is dif-ferent from the IQ tests we give ourselves! The funny thing is, the puniestcomputer could do perfectly well on a human intelligence test. A com-puter has perfect recall, and a simple brute-force algorithm, for instance,one that tests every possible combination, can find the solution to anypattern-matching or arranging problem such as are found in IQ tests, if32 Chapter One—Models and Concepts of Life and IntelligenceTeam LRN
  • 61. it’s given enough time. A computer can be programmed with a huge vo-cabulary and some rules of grammar for the verbal scales, and of coursethe math test would be a snap. By the standard that we set for ourselves,computers are already very intelligent. So what we look for in computerintelligence (e.g., the Turing test) is something else entirely. We can onlyreally be fooled into thinking a computer is a person if it makes mistakes,or takes a long time to do a math problem, or claims to have gaps in itsknowledge. This is a confusing situation indeed.For our discussion, the important thing is where the Turing test looksfor intelligence. Turing did not suggest looking “inside the head” of thecomputer for a sign of intelligence; he suggested looking directly at thesocial interface between the judge and the subject. A program that wascapable of forming a convincing bond was intelligent. It was not that theinteraction was a “sign of” intelligence—it was in fact intelligence. To-day’s cognitivistic orthodoxy looks for intelligence inside the person, orin the information processing of the machine. Turing knew to look at thesocial nexus directly.Evolutionary computation researcher David Fogel (1995) says that agood definition of intelligence should apply to humans and machinesequally well, and believes that the concept should apply to evolution aswell as to behaviors perceptible on the human time scale. According tohim, intelligence is epitomized by the scientific method, in which pre-dictions about the world are tested by comparison to measurements ofreal-world phenomena and a generalized model is proposed, refined, andtested again. Successful models might be adjusted, extended, or com-bined with other models. The result is a new generation of ideasthat carry what David’s father Larry Fogel has called “a heredity ofreasonableness.” The process is iterated, with the model successivelyapproximating the environment.The Fogels conclude that intelligence, whether in animate or inani-mate contexts, can be defined as the “ability of a system to adapt itsbehavior to meet its goals in a range of environments” (Fogel, 1995,p. 24). Their research enterprise focuses on eliciting intelligence fromcomputing machines by programming processes that are analogous toevolution, learning incrementally by trial and error, adapting their ownmutability in reaction to aspects of the environment as it is encountered.David Fogel (1995) theorizes, following Wirt Atmar (1976), thatsociogenetic learning is an intelligent process in which the basic unit ofmutability is the idea, with culture being the reservoir of learned behav-iors and beliefs. “Good” adaptive ideas are maintained by the society,much as good genes increase in a population, while poor ideas areIntelligence: Good Minds in People and Machines 33Team LRN
  • 62. forgotten. In insect societies (as we will see) this only requires the evapo-ration of pheromone trails; in humans it requires time for actualforgetting.For some researchers, a hallmark of the scientific method and intelli-gence in general is its reliance on inductive logical methods. Whereas de-ductive reasoning is concerned with beliefs that follow from one anotherby necessity, induction is concerned with beliefs that are supported bythe accumulation of evidence. It comprises the class of reasoning tech-niques that generalize from known facts to unknown ones, for instance,through use of analogy or probabilistic estimates of future events basedon past ones. Interestingly, inductive reasoning, which people use everyday, is not logically valid. We apply our prior knowledge of the subject athand, assumptions about the uncertainty of the facts that have been dis-covered, observations of systematic relationships among events, theoret-ical assumptions, and so on, to understand the world—but this is notvalid logic, by a long shot.The popular view of scientists as coldly rational, clear-thinking lo-gicians is challenged by the observation that creativity, imagination,serendipity, and induction are crucial aspects of the scientific method.Valid experimental methods are practiced, as far as possible, to test hy-potheses—but where do those hypotheses come from? If they followednecessarily from previous findings, then experimentation would not benecessary, but it is. A scientist resembles an artist more than a calculator.34 Chapter One—Models and Concepts of Life and IntelligenceTeam LRN
  • 63. chaptertwoSymbols, Connections, andOptimization by Trial and ErrorThis chapter introduces some of the techni-cal concepts that will provide the founda-tion for our synthesis of social and computerscientific theorizing. Thinking people andevolving species solve problems by trying so-lutions, making adjustments in response tofeedback, and trying again. While we mayconsider the application of logical rules tobe a “higher” or more sophisticated kindof problem-solving approach, in fact the“lower” methods work extremely well. Inthis chapter we develop a vocabulary and setof concepts for discussing cognitive pro-cesses and the trial-and-error optimization ofhard problems. What are some approachesthat can be taken? What have computer sci-entists implemented in artificially intelligentprograms, and how did it work? Human lan-guage—now there’s a nice, big problem!How in the world can people (even peoplewho aren’t outstandingly intelligent in otherways) possibly navigate through a semanticjungle of thousands of words and constantlyinnovative grammatical structures? We dis-cuss some of the major types of problems,and major types of solutions, at a level thatshould not be intimidating to the reader whois encountering these concepts for the firsttime. I35Team LRN
  • 64. Symbols in Trees and NetworksThough we are asserting that the source of human intelligence is to befound in the connections between individuals, we would not argue thatno processing goes on within them. Clearly there is some ordering of in-formation, some arranging of facts and opinions in such a way as to pro-duce a feeling of certainty or knowing. We could argue (and later will de-velop the argument) that the feeling of certainty itself is a social artifact,based on the sense that the individual could, if asked, explain or act onsome facts in a socially acceptable—that is, consistent—way. But first wewill consider some of the ways that theorists have attempted to modelthe processes that comprise human thinking and have attempted to sim-ulate, and even to replicate, human thinking. As we develop the swarmmodel of intelligence, we will need to represent cognitive processes insome way. Where representations are complex—and how could they notbe?—it will be necessary to find optimal states of mind. We will findthese states through social processes.The earliest approaches to artificial intelligence (AI) assumed that hu-man intelligence is a matter of processing symbols. Symbol processingmeans that a problem is embedded in a universe of symbols, which arelike algebraic variables; that is, a symbol is a discrete unit of knowledge (awoefully underdefined term!) that can be manipulated according tosome rules of logic. Further, the same rules of logic were expected to ap-ply to all subject-matter domains. For instance, we could set up a syllo-gistic algorithm:All A are CIf B is AThen B is Cand plug words in where the letters go. The program then would be giventhe definitions of some symbols (say, A = “man,” B = “Socrates,” and C =“mortal”) as facts in its “knowledge base”; then if the first two statementswere sent to the “inference engine,” it could create the new fact, “Socra-tes is mortal.” Of course it would only know that Socrates was a B if wetold it so, unless it figured it out from another thread of logic—and thesame would be true for that reasoning; that is, either a human gave theprogram the facts or it figured them out from a previous chain of logi-cally linked symbols. This can only go back to the start of the program,and it necessarily eventually depends on some facts that have been given36 Chapter Two—Symbols, Connections, and Optimization by Trial and ErrorTeam LRN
  • 65. it. This is an aspect of a question that has been called the grounding prob-lem. There is a problem indeed if facts depend on other facts that are in-evitably defined themselves by the facts that depend on them. There isnecessarily an innate assumption, in the symbol-processing paradigm,that all the relevant facts are known and that their interrelationships areknown.There is another assumption, too, and one that was not recognizedfor a long time but that meant that the crisp symbol-processing para-digm could never really succeed. The assumption was revealed by LotfiZadeh (see Yager et al., 1987), a Berkeley computer-science professor,who forged a philosophy that revolutionized the way many computerscientists think about thinking. Zadeh pointed out that it is usually thecase that “Some A are sort of C.” Truth, that is, is fuzzy—it has degree.Fuzzy logic is built on the presupposition that the truth of propositionscan be rated on a scale from zero to one, so that a statement that is en-tirely false has a truth value of 0.0, a statement that is perfectly true has avalue of 1.0, and most statements go somewhere in the middle. Zadehmethodically worked out the rules for reasoning with approximate facts.Fuzzy logic operates in parallel, unlike the earlier symbol-processingmodels, which evaluated facts in a linked series, one after the other;fuzzy facts are evaluated all at once. Some people think that “fuzzy logic”means something like fuzzy thinking, a kind of sloppy way of vaguelyguessing at things instead of really figuring them out, but in fact the re-sults of fuzzy operations are precise. It is also sometimes alleged thatfuzziness is another word for probability, but it is an entirely differentform of uncertainty. For instance, take a second to draw a circle on asheet of paper. Now look at it. Is it really a circle? If you measured, wouldyou find that every point on the edge was exactly the same distance fromthe center? No—even if you used a compass you would find that varia-tions in the thickness of the line result in variations in the radius. So yourcircle is not really a circle, it is sort of a circle (see Figure 2.1). If you did re-ally well, it may be 99 percent a circle, and if you drew a square or a figureeight or something uncirclelike, it might be closer to 1 percent a circle—maybe it has roundish sides or encloses an area. It is not meaningfulto say that it is “probably” a circle (unless you mean that you probablymeant to draw a circle, which is an entirely different question). Fuzzylogic has turned out to be a very powerful way to control complicatedprocesses in machines and to solve hard problems of many kinds.Traditional (symbol-processing) AI commonly made inferences by ar-ranging symbols in the form of a tree. A question is considered, and de-pending on the answer to it another question is asked. For instance, youSymbols in Trees and Networks 37Team LRN
  • 66. might start by asking, “Is the door locked?” If the answer is yes, then thenext thing is to find the key; if the answer is no, then the next step is toturn the doorknob and push the door open. Each step leads to anotherdecision, and the answer to that leads to yet another, and so on. A graphof the domain of such a decision-making process looks like a tree, withever more numerous branches as you progress through the logic. Fuzzylogic, on the other hand, assumes that, rather than making crisp deci-sions at each point, a reasonable person might “sort of” choose one alter-native over another, but without absolutely rejecting another one, or astatement may be partly true and partly false at the same time. Instead ofa tree, a fuzzy logic diagram might look more like ripples flowing over asurface or a storm blowing through the countryside, stronger in someareas and weaker in others. Rules are implemented in parallel, all atonce, and the conclusion is obtained by combining the outputs of allthe if-then rules and defuzzifying the combination to produce a crispoutput.One limitation of tree representations is the requirement that therebe no feedback. A might imply or cause B, but in a tree we must be surethat B does not imply or cause A in return, either directly or by actingthrough intermediate links. In reality though premises and conclusionsimply one another all the time, and causes affect other causes that affectthem back. We saw that Bateson considered this kind of reciprocal causa-tion as a defining or necessary quality of mind. Not only are feedbackloops common in nature, but some theorists think that feedback is reallythe basis for many natural processes.We should point out here, because we will be dealing with themextensively, the interchangeability of matrices and graphs. Suppose wehave a table of numbers in rows and columns. We can say that the38 Chapter Two—Symbols, Connections, and Optimization by Trial and ErrorFigure 2.1 Is this a circle? Yes and no. In some ways it is a circle and in other ways it is not. It is afuzzy circle.Team LRN
  • 67. numbers indicate the strength of the effect of the row variable upon thecolumn variable. Variables—which we might think of as symbols, orthings, or events, or ideas, or people, for that matter—can be assumed tohave positive or negative effects on one another.When the system is considered as a tree, it really only makes sense toallow positive connections; zero and negative connection strengthswould be logically indiscriminable. That is because each node representsa discrete decision to follow one route or another. An option unchosen isin the same state whether there is a vacuous link or a negative one.In contrast to a tree, in a network matrix it is not hard to depict thingsin a feedback relationship, having effects upon one another. Further, ef-fects in a network can be positive or negative; one thing can prevent an-other as easily as it can cause it. When two things have a positive causaleffect on one another, we would say they have a positive feedback orautocatalytic effect. If they decrease one another, their relationship isone of negative feedback. In many of the examples throughout thisbook, autocatalysis is the basis for the dynamics that make a systeminteresting.Interrelated variables in a matrix can be graphed as well (see Figure2.2). Each variable is a node in the graph, and we can draw arrows be-tween pairs of them where a table has nonzero entries, indicating causal(one causes the other) or implicative (one implies the other) effects be-tween them. Now it should be clear that a graph and a table can containexactly the same information. The connections among the nodes aresometimes called constraints, where the state of the node (for instance,off or on) is intended to satisfy the constraints upon it. Of course thereare other terminologies, but for this discussion we will call them nodesand constraints, or sometimes just “connections.”Tree representations use only half of the matrix, either the half aboveor below the diagonal, depending on how you decide to implement it. Ina tree structure, no node later in the chain of logic can affect any nodeearlier, so you don’t need both halves of the matrix. A graph or matrixwhere nodes can affect one another would be called a recurrent network.Trees can be graphed as networks, but networks that contain feedbackcannot be diagrammed as trees.One common kind of network is depicted with symmetrical connec-tions between nodes. Instead of arrows between pairs of nodes pointingfrom one to the other, the graph needs only lines; the implication is thatthings with positive connections “go together,” and things with negativelinks do not. Figure 2.3 shows an example of such a network. In a matrix,the upper and lower diagonal halves of this kind of network are mirrorSymbols in Trees and Networks 39Team LRN
• 68. images of one another. This kind of graph is very common as a cognitive model in contemporary psychological theorizing, and it can be useful for problem solving in computer programs, too.

We will make the simplifying assumption here that nodes are binary, they are either on or off, yes or no, one or zero. In a constraint-satisfaction paradigm, the goal is to find a pattern of node states that best satisfies the constraints. Each node is connected to some other nodes by positive or negative links. The effect of an active node through a positive link is positive and through a negative link is negative, and the effect of an inactive node through a positive link is zero or negative, depending on how it is specified, and through a negative link is zero or positive. In other words, the effect is the product of the node state times the constraint value. A node with mostly positive inputs fits its constraints best when it is in the active state, and one with zero or negative inputs should be in the inactive state.

Figure 2.2 Graph of a pattern of causes with feedback. Dotted lines indicate negative causal force: the sending node prevents or inhibits the receiving node.
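A small sketch may help make the matrix/graph equivalence concrete. The following is our own illustration, not code from the book: the weight values are invented for the example. It stores signed causal strengths in a matrix, lists the equivalent directed edges, and flags reciprocal connections, that is, feedback loops of the kind Figure 2.2 depicts.

```python
# Illustrative sketch (not from the book): a signed weight matrix and the
# equivalent directed graph.  The weights below are made up for the example.
# w[i][j] is the strength of the effect of node i on node j; negative values
# mean node i inhibits node j, and zero means no connection.

w = [
    [0.0,  0.8,  0.0, -0.5],
    [0.0,  0.0,  0.6,  0.0],
    [0.0, -0.7,  0.0,  0.9],
    [0.4,  0.0,  0.0,  0.0],
]

n = len(w)

# The same information written out as a list of directed edges (a graph).
edges = [(i, j, w[i][j]) for i in range(n) for j in range(n) if w[i][j] != 0.0]
for i, j, weight in edges:
    kind = "excites" if weight > 0 else "inhibits"
    print(f"node {i} {kind} node {j} (weight {weight:+.1f})")

# A tree uses only half the matrix; a recurrent network may use both halves.
# Reciprocal (feedback) connections are pairs where both w[i][j] and w[j][i]
# are nonzero: positive feedback (autocatalysis) if their product is positive.
for i in range(n):
    for j in range(i + 1, n):
        if w[i][j] != 0.0 and w[j][i] != 0.0:
            label = "positive" if w[i][j] * w[j][i] > 0 else "negative"
            print(f"nodes {i} and {j} form a {label} feedback loop")
```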
• 69. Paul Smolensky (1986) has noted that the optimal state of such a network, that is, the state where the most constraints are satisfied, can be found by maximizing the sum of the products of pairs of node states (0 and 1, or −1 and +1) times the value of the strength of the connection between them:

H = \sum_i \sum_j a_i w_{ij} a_j

where a_i is the activation of node i, w_{ij} is the strength of the connection between nodes i and j, and a_j is the activation of node j. The products of pairs of nodes times the weights of the connections between them should be as big as possible. The "harmony function" (Smolensky's cognitive theory was called "harmony theory") can be calculated over the entire network, or the optimality of each node can be calculated separately and the sum taken.

It has long been noted that people strive to maintain cognitive consistency. We want our thoughts to fit together with one another as well as with facts in and about the world. Leon Festinger (1957) theorized that

Figure 2.3 An example of a parallel constraint satisfaction network. Solid lines represent positive or excitatory connections, while dotted lines represent negative or inhibitory connections between pairs of nodes.
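To make the harmony measure concrete, here is a minimal sketch of our own (not the book's code) that evaluates Smolensky's sum for a tiny symmetric network. The three-node weight matrix is invented for illustration; node states are binary, 0 or 1, as in the text.

```python
# Minimal sketch of the harmony function H = sum_i sum_j a_i * w_ij * a_j.
# The network below is invented for illustration; weights are symmetric
# (w[i][j] == w[j][i]), positive links are excitatory, negative inhibitory.

import itertools

w = [
    [ 0.0,  1.0, -1.0],
    [ 1.0,  0.0, -0.5],
    [-1.0, -0.5,  0.0],
]

def harmony(states, weights):
    """Sum of products of pairs of node states times the connection strength."""
    return sum(states[i] * weights[i][j] * states[j]
               for i in range(len(states)) for j in range(len(states)))

# With only three binary nodes we can evaluate every possible pattern of
# activations and report the most harmonious one.
best = max(itertools.product([0, 1], repeat=3), key=lambda s: harmony(s, w))
print("best state:", best, "harmony:", harmony(best, w))
```

For a network this small an exhaustive check is possible; the point of the optimization methods discussed next is that real networks are far too large for that.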
  • 70. people have a drive to reduce cognitive dissonance, defined as the clashamong cognitive elements, whether they are beliefs, behaviors, or atti-tudes. Later we will stress that there is an important interpersonal com-ponent to this drive, but for the moment it is only necessary to point outthat—for whatever reasons—people do strive to maintain consistency.Harmony maximization is a good way to represent this principle in acomputer program, where inconsistencies between elements are de-picted as negative connections.We know that we want to maximize the harmony function over thenetwork—but how do you do that? Any time we change the state of anode, we are about as likely to make it inconsistent with some othernodes as we are to improve its fit with some; if the network is not triv-ial, there are probably a number of conflicting constraints in it. JohnHopfield had shown, in 1982, a method that usually arrived at the beststate (a network’s state is the pattern of its node activations). Hopfield ac-tually had two methods, one for binary nodes and another for networkswhose nodes can take on continuous values; here we will stay with thebinary type (Hopfield, 1982, 1984). Hopfield’s algorithm starts by ran-domly initializing nodes as zeroes or ones. Then a node is selected at ran-dom, and its inputs are summed; that is, the states of nodes connected tothe selected one are multiplied by the strength of their connections,which can be positive or negative. If the sum of the inputs is greater thana threshold, then the node is assigned the active state; for instance, it isgiven a state value of one. If the sum is below the threshold, then thenode is set to zero (or minus one), the inactive state. Then another nodeis selected at random and the process is repeated until all nodes are in thestate they should be in.Most parallel constraint satisfaction models use some version ofHopfield optimization, but we will be arguing that it is not a good modelof the way people reduce conflicts among cognitive elements. The fact is,people talk with one another about their thoughts, and they learn fromone another how to arrange their beliefs, attitudes, and behaviors tomaximize their consistency, to reduce cognitive dissonance. We don’tthink that people organize their thoughts by considering them one at atime in the isolated privacy of their own skulls, and we don’t think this isthe best way for computers to do it, either.The models we have discussed, whether crisp symbol processing,fuzzy logic, or constraint-satisfaction networks, all assume that some-body has given them knowledge. Given a set of connections—directed orsymmetrical—between nodes in a graph, we have been trying to find42 Chapter Two—Symbols, Connections, and Optimization by Trial and ErrorTeam LRN
  • 71. node patterns that best fit. But where do the connections come from?Finding the pattern of connections among elements is called learning.When two things change together, for instance, when they are bothloud or both intense or both present whenever the other is present, orwhen they are only loud or intense or present when the other thing is inthe opposite state, then we say the two things are correlated. Things maybe positively correlated, like heat and sunshine or money and power, orpairs of things can be negatively correlated, for instance, wealth and star-vation or aspirin and headaches. Notice that positive and negative corre-lations are the same, they are still correlations; you can always define athing negatively—we could measure “poverty” instead of “wealth”—tochange the sign of the correlation. Correlation is a form of statistical as-sociation between two variables that gives some evidence for predictingone from the other. Learning can be described in terms of finding correla-tions among things. The strengths of the connections in the graphs wehave been looking at are a kind of correlation coefficient, showingwhether the two nodes change together, how strong the relationship is,and whether the changes are positive or negative.Every college student learns—or should learn—that “correlation doesnot imply causation.” The fact that two things rise and fall together doesnot mean that one causes the other. A correlation matrix, just like thesymmetrical table in a parallel constraint satisfaction network, is madeup of mirror-image halves, as A’s correlation with B is equal to B’s cor-relation with A. In the network, a symmetrical connection weight be-tween binary nodes indicates the likelihood of one being active whenthe other is.The meaning of a connection weight is a little more complicated, orshould we say interesting, when the connections are asymmetrical. Inthis case the weight indicates the likelihood or strength of activation ofthe node at the head of the arrow as a function of the state of the node atthe arrow’s origin. So if an arrow points from A to B, with a positive con-nection between them, that is saying that A’s activation makes it morelikely that B will be active or increases B’s activation level: A causes B.Let’s consider an example that is simple but should make the situa-tion clearer. We will have some binary causes, called x1, x2, and x3, and abinary effect, y (see Figure 2.4). In this simple causal network, we will saythat y is in the zero state when the sum of its inputs is less than or equalto 0.5 and is in the active state when the sum is above that. We see thatthe connection from x1 to y has a weight of 1.0; thus, if x1 is active, thenthe input to y is 1 × 1.0 = 1.0, and y will be activated. The connectionSymbols in Trees and Networks 43Team LRN
  • 72. from x2 is only 0.5, though, so when it is active it sends an input of 1 ×0.5 = 0.5 to y—not enough to activate it, all by itself. The same holds forx3: it is not enough by itself to cause y, that is, to put y into its active state.Note, though, that if x2 and x3 are both on, their summed effect is(1 × 0.5) + (1 × 0.5) = 1.0, and y will be turned on. Now the system ofweighted connections, it is seen, can numerically emulate the rules oflogic. The network shown in the top of Figure 2.4 can be stated inBoolean terms: IF (x1) OR (x2 AND x3) THEN y. Not only are a matrix anda graph interchangeable, but logical propositions are interchangeablewith these as well.More complex logical interactions might require the addition of hid-den nodes to the graph. For instance, if we had more inputs, and the rulewas IF (x1 AND x2) OR (x3 AND x4) THEN y, then we might need to linkthe first pair to a hidden node that sums its inputs, do the same for thesecond two, and create a logical OR relation between the hidden nodesand y, where each hidden node would have a 1.0 connection (or anystrength above the threshold) to the output node y. To implement a NOTstatement, we could use negative weights. All combinations are possiblethrough judicious use of connection weights and hidden layers of nodes.A network where the information passes through from one or more in-puts to one or more outputs is called a feedforward network.In the previous example we used a crisp threshold for deciding if theoutput node y should be on or off: this a less-than-satisfying situation. Ifsome of the causes of an effect, or premises of a conclusion, are present,then that should count for something, even if the activation does notreach the threshold. Say the inputs to y had added up to exactly 0.5,where we needed a “greater than 0.5” to flip its state. In reality things donot usually wait until the exactly perfect conditions are met and then gointo a discretely opposite state. More commonly, the effect just becomesmore uncertain up to a point near the decision boundary, then more cer-tain again as the causes sum more clearly past the cutoff. If y were thekind of thing that was really binary, that is, it exists either in one state orthe other and not in between, then we might interpret the sum of the in-puts to indicate the probability of it being true or false. The more com-mon instance, though, is where y is not binary but can exist to some de-gree, in which case the inputs to y indicate “how true” y is. If I wanted toknow if “Joe is a big guy,” the inputs to y might be Joe’s height and per-haps Joe’s weight or girth. If Joe is six feet tall and weighs, say, 200pounds, then he might be a big guy, but if he is six foot, six inches talland weighs 300 pounds, he is even more of a big guy. So the truth of thestatement has degree, and we are back to fuzzy logic. In fact, many44 Chapter Two—Symbols, Connections, and Optimization by Trial and ErrorTeam LRN
• 73. Figure 2.4 Logical propositions can be graphed as feedforward networks using binary nodes. (The three networks shown implement IF (x1) OR (x2 AND x3) THEN y; IF (x1 AND x2) OR (x3 AND x4) THEN y; and IF (x1 OR x2) AND NOT (x3 AND x4) THEN y.)
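The weighted-sum emulation of Boolean rules is easy to verify in a few lines of code. The sketch below is our illustration, not the book's; the input names, weights (1.0, 0.5, 0.5), and the 0.5 threshold follow the discussion of the first network in Figure 2.4, and the loop simply confirms that the network reproduces IF (x1) OR (x2 AND x3) THEN y on every input pattern.

```python
# Sketch of the first network in Figure 2.4: y becomes active when the
# weighted sum of its binary inputs exceeds 0.5.  Weights follow the text:
# 1.0 from x1, 0.5 from x2, and 0.5 from x3.

from itertools import product

def y_activation(x1, x2, x3, threshold=0.5):
    total = 1.0 * x1 + 0.5 * x2 + 0.5 * x3   # weighted sum of inputs
    return 1 if total > threshold else 0      # crisp threshold unit

for x1, x2, x3 in product([0, 1], repeat=3):
    net = y_activation(x1, x2, x3)
    rule = 1 if (x1 or (x2 and x3)) else 0    # IF (x1) OR (x2 AND x3) THEN y
    assert net == rule
    print(x1, x2, x3, "->", net)
print("network matches the Boolean rule on all input patterns")
```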
• 74. networks—and almost all feedforward networks—do have continuous-valued nodes, and the states of those nodes, usually in the range between zero and one, can be interpreted to indicate either probability or fuzzy truth values.

We should also mention that there are formulas for squashing the inputs into the tolerable range, usually zero to one, to make this business more manageable. The most common formula is referred to as the sigmoid function, since its output is S-shaped. Its equation is

s(x) = \frac{1}{1 + \exp(-x)}

and its graph is seen in Figure 2.5. When x = 0, s(x) = 0.5. Negative inputs result in a value that approaches zero as x becomes more negative, and positive values approach a limit of one. The sigmoid function is very useful and will come up again later in this book.

Networks like the ones we have been describing are often called neural networks because they roughly resemble the way cells in the brain are hooked up, and because they are often able to categorize things and make decisions something like a brain does.

We still have not answered the question of learning: finding values for the connection weights that make for the accurate estimation of outputs from inputs. This is another optimization problem. Optimization means that we are looking for input values that minimize or maximize the result of some function—we'll be going into the gory details of optimization later in this chapter. Normally in training a feedforward network we want to minimize error at the output; that is, we want the sums of products to make the best estimate of what y should be. Again, there is a traditional way to find optimal patterns of weights, and there is our way.

The traditional way is to use some version of gradient descent, a kind of method for adjusting numbers in a direction that seems to result in improvement. Backpropagation of error is the most standard way to adjust neural net weights. "Backprop," as it is called, finds optimal combinations of weights by adjusting its search according to a momentum term, so that shorter steps are taken as the best values are approached.

Neural networks were introduced to the psychological community by the PDP (Parallel Distributed Processing) Group headquartered at the University of California at San Diego, to suggest a theory of cognition that is called connectionism (Rumelhart, McClelland, and the PDP Group, 1986; McClelland, Rumelhart, and the PDP Group, 1986). Connectionist
• 75. theory differs from the symbol-processing view in that it assumes that cognitive elements have a "microstructure": they are composed of patterns distributed over a network of connection weights. Both gradient descent and Hopfield optimization are part of the connectionist perspective. Connectionist approaches, though, just like their traditional AI predecessors, tend to miss something extremely crucial about human cognition: we don't do it alone. Learning often, if not usually, occurs in the form of adaptation to information received from other people, including both information about facts and how to make sense of them. Optimization of node activation patterns as well as patterns of connection weights can be an extremely complicated process, with lots of conflicting constraints, redundancy, nonlinearity, and noise in the data; that is, things may not always be what they appear to be. Some optimization methods are better able than others to deal with such messy problems; we assert that social forms of optimization are at least as powerful and are more realistic for cognitive simulations than the inside-the-head methods that have been proposed.

A good thing about network models of cognition is that the kinds of judgments they make are similar to those that humans make. For instance, symbol-processing methods require linear separability of elements for categorization. This mouthful means that if you graphed the attributes of some category members on a chart, you should be able to draw a straight line or surface that separates two categories of things from one

Figure 2.5 The sigmoid function takes numeric inputs and squashes them into the range between zero and one.
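For readers who want to see the squashing behavior of Figure 2.5 numerically, here is a minimal sketch of our own (not code from the book) that evaluates the sigmoid at a few inputs.

```python
# The sigmoid (logistic) squashing function: s(x) = 1 / (1 + exp(-x)).
# Large negative inputs approach 0, large positive inputs approach 1,
# and s(0) is exactly 0.5.

import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

for x in (-10, -2, 0, 2, 10):
    print(f"s({x:3d}) = {sigmoid(x):.4f}")
```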
  • 76. another. A simple rule or combination of rules should distinguish them.In fact, though, humans do not categorize by linear separation; neitherdo neural networks. Categories are not sets. Another advantage is that,since networks work by a kind of statistical analysis of data, they can bemore robust in the face of noise and measurement error. They don’t as-sume that symbols or the rules that connect them are hard-and-fast dis-crete entities, with information flowing through one logical branch oranother.Neural networks and the closely related fuzzy logic models have beencalled “universal function approximators” because they can reproducethe outputs of any arbitrarily complex mathematical function. Theirpower as psychological models has grown as well because of their abilityto simulate human processes such as perception, categorization, learn-ing, memory, and attention, including the errors that humans make.These errors seem to be at the heart of the Turing test of machineintelligence. Imagine that a judge in a Turing test situation asked the sub-ject on the other side of the wall to answer a mathematical question, say,what is 43,657 times 87,698? If the perfectly correct answer came backafter several milliseconds, there would be no question about whether ahuman or mechanical mind had answered. Even a highly trained mathe-matician or a savant would have to take a few seconds at least to answersuch a question. The “most human” response would be an estimate, andit would take some time. People make errors in crisp calculations, but atthe same time exceed any known computational models in “simple”tasks such as recognizing faces.So far we have considered a few issues regarding the adaptiveness ofliving things, and in particular adaptation as it applies in the mentalworld. By perusing a few kinds of structures that have been used to simu-late human thinking processes,we have inched our way closer to the goalof writing computer programs that embed such structures in social con-texts. As we said, these models will tend to be complicated—in fact theircomplexity is part of what makes them work—and it will be difficult tofind the right combinations of weights and activations. The next sec-tions examine some methods for finding solutions to complex problems.Problem Solving and OptimizationBoth evolution and mind are concerned with a certain kind of business,and that is the business of finding patterns that satisfy a complex set ofconstraints. In one case the pattern is a set of features that best fit a48 Chapter Two—Symbols, Connections, and Optimization by Trial and ErrorTeam LRN
  • 77. dynamic ecological niche, and in the other case it is a pattern of beliefs,attitudes, and behaviors that minimize conflict with personal, social,and physical constraints. Some propensities to acquire particular mentalqualities seem to be inherited, though the topic of individual differencesin the various propensities (traits such as intelligence, introversion, cre-ativity, etc.) is still the subject of lively controversy. While a tendency tolearn, or to learn in a particular way, may be inherited, learning can onlytake place within a lifetime; there is no genetic transfer of learned knowl-edge from one generation to another. As we have seen, the two greatstochastic systems have different ways of going about their business,though both rely heavily, if not exclusively, on some version of trial anderror. In this section we discuss some aspects of problem solving wherethe goal is to fit a number of constraints as well as possible.It may seem that we use the word “problem” in an unusual way. In-deed, the word has specialized meanings to specific groups who use it.We use it to describe situations—not necessarily mathematical ones(they might be psychological, ecological, economical, or states of anykind of phenomena)—where some facts exist and there is a need to findother facts that are consistent with them. A changing environmentmight be a problem to an evolving species. An ethical dilemma is a prob-lem to a moral person. Making money is a problem for most people. Anequation with some unknowns presents a problem to a mathematician.In some cases, the facts that are sought might be higher-level facts abouthow to arrange or connect the facts we already have.Optimization is another term that has different connotations to differ-ent people. Generally this term refers to a process of adjusting a system toget the best possible outcome. Sometimes a “good” outcome is goodenough, and the search for the very best outcome is hopeless or unneces-sary. As we have defined a problem as a situation where some facts aresought, we will define optimization as a process of searching for the miss-ing facts by adjusting the system. In the swarm intelligence view of thepresent volume, we argue that social interactions among individuals en-able the optimization of complex patterns of attitudes, behaviors, andcognitions.A Super-Simple Optimization ProblemA problem has some characteristics that allow the goodness of a solutionto be estimated. This measurement often starts contrarily with an esti-mate of the error of a solution. For example, a simple arithmetic problemmight have some unknowns in it, as 4 + x = 10, where the problem is toProblem Solving and Optimization 49Team LRN
  • 78. find a value (a pattern consisting of only one element in this trivial ex-ample) for x that results in the best performance or fitness. The error of aproposed solution in this case is the difference between the actual anddesired results; where we want 4 + x to equal 10, it might be that wetake a guess at what x should be and find that 4 + x actually equals 20, forinstance, if we tried x = 16. So we could say that the error, when x = 16,is 10.Error is a factor that decreases when goodness increases. An optimiza-tion problem may be given in terms of minimizing error or maximizinggoodness, with the same result. Sometimes, though, it is preferable tospeak in terms of the goodness or fitness of a problem solution—note thetie-in to evolution. Fitness is the measure of the goodness of a genetic orphenotypic pattern. Converting error to goodness is not always straight-forward; there are a couple of standard tricks, but neither is universallyideal. One estimate of goodness is the reciprocal of error (e.g., 1/e). Thisparticular measure of goodness approaches infinity as error approacheszero and is undefined if the denominator equals zero, but that’s not nec-essarily a problem, as we know that if the denominator is zero we haveactually solved the problem. Besides, in floating-point precision we willprobably never get so close to an absolutely perfect answer that the com-puter will think it’s a zero and crash. Another obvious way to measurethe goodness of a potential solution is to take the negative of the error(multiply e by −1); then the highest values are the best.We can use the super-simple arithmetic problem given above to dem-onstrate some basic trial-and-error optimization concepts. Our generalapproach is to try some solutions, that is, values for x, and choose theone that fits best. As we try answers, we will see that the search process it-self provides us with some clues about what to try next. First, if 4 + x doesnot equal 10, then can we look at how far it is from 10 and use thatknowledge—the error—to guide us in looking for a better number. If theerror is very big, then maybe we should take a big jump to try the nextpotential solution; if the error is very small, then we are probably close toan answer and should take little steps.There is another kind of useful information provided by the searchprocess as well. If we tried a number selected at random, say, 20, for x, wewould see that the answer was not very good (error is (10 − 24) = 14).If we tried another number, we could find out if performance improvedor got worse. Trying 12, we see that 4 + 12 is still wrong, but the result isnearer to 10 than 4 + 20 was, with error = 6. If we went past x = 6, say, wetried x = 1, we would discover that, though error has improved to 5, thesign of the difference has flipped, and we have to change direction and50 Chapter Two—Symbols, Connections, and Optimization by Trial and ErrorTeam LRN
  • 79. go up again. Thus various kinds of facts are available to help solve a prob-lem by trial and error: the goodness or error of a potential solution givesus a clue about how far we might be from the optimum (which may be aminimum or a maximum). Comparing the fitness of two or more pointsand looking at the sign of their difference gives us gradient informationabout the likely direction toward an optimum so we can improve ourguess about which way to go. A gradient is a kind of multidimensionalslope. A computer program that can detect a gradient might be able tomove in the direction that leads toward the peak. This knowledge can behelpful if the gradient indicates the slope of a peak that is good enough;sometimes, though, it is only the slope of a low-lying hill. Thus an algo-rithm that relies only on gradient information can get stuck on a medio-cre solution.Three Spaces of OptimizationOptimization can be thought to occur in three interrelated numberspaces. The parameter space contains the legal values of all the elements—called parameters—that can be entered into the function to be tested. Inthe exceedingly simple arithmetic problem above, x is the only parame-ter; thus the parameter space is one-dimensional and can be representedas a number line extending from negative to positive infinity: the legalvalues of x. Most interesting optimization problems have higher parame-ter dimensionality, and the challenge might be to juggle the values of alot of numbers. Sometimes there are infeasible regions in the parameterspace, patterns of input values that are paradoxical, inconsistent, ormeaningless.A function is a set of operations on the parameters, and the functionspace contains the results of those operations. The usual one-dimen-sional function space is a special case, as multidimensional outputs canbe considered, for instance, in cases of multiobjective optimization; thismight be thought of as the evaluation of a number of functions at once,for example, if you were to rate a new car simultaneously in terms of itsprice, its appearance, its power, and its safety. Each of these measures isthe result of the combination of some parameters; for instance, the as-sessment of appearance might combine color, aerodynamic styling, theamount of chrome, and so on.The fitness space is one-dimensional; it contains the degrees of successwith which patterns of parameters optimize the values in the functionspace, measured as goodness or error. To continue the analogy, theProblem Solving and Optimization 51Team LRN
  • 80. fitness is the value that determines whether you will decide to buy thecar. Having estimated price, appearance, power, and so on, you will needto combine these various functions into one decision-supporting quan-tity; if the quantity is big, you are more likely to buy the car. Each pointin the parameter space maps to a point in the function space, which inturn maps to a point in the fitness space. In many cases it is possible tomap directly from the parameter space to the fitness space, that is, to di-rectly calculate the degree of fitness associated with each pattern of pa-rameters. When the goal is to maximize a single function result, thefitness space and the function space might be the same; in functionminimization they may be the inverse of one another. In many commoncases the fitness and function spaces are treated as if they were the same,though it is often helpful to keep the distinction in mind. The point ofoptimization is to find the parameters that maximize fitness.Of course, for a simple arithmetic problem such as 4 + x = 10 wedon’t have to optimize—someone a long time ago did the work for us,and we only have to memorize their answers. But other problems, espe-cially those that human minds and evolving species have to solve,definitely require some work, even if they are not obviously numeric. Itmay seem unfamiliar or odd to think of people engaging in day-to-dayoptimization, until the link is made between mathematics and the struc-tures of real human situations. While we are talking about the rather aca-demic topic of optimizing mathematical functions, what is being saidalso applies to the dynamics of evolving species and thinking minds.Fitness LandscapesIt is common to talk about an optimization problem in terms of a fitnesslandscape. In the simple arithmetic example above, as we adjusted ourparameter up and down, the fitness of the solution changed by degree.As we move nearer to and farther from the optimum x = 6, the goodnessof the solution rises and falls. Conceptually, this equation with one un-known is depicted as a fitness landscape plotted in two dimensions, withone dimension for the parameter being adjusted and the second dimen-sion plotting fitness (see Figure 2.6). When goodness is the negative of er-ror, the fitness landscape is linear, but when goodness = 1/abs(10 − (4 +x)), it stretches nonlinearly to infinity at the peak. The goal is to find thehighest point on a hill plotted on the y-axis, with the peak indicating thepoint in the parameter space where the value of x results in maximumfitness. We find, as is typical in most nonrandom situations, that solu-tions in the region of a global optimum are pretty good, relative to points52 Chapter Two—Symbols, Connections, and Optimization by Trial and ErrorTeam LRN
• 81. in other regions of the landscape. Where the function is not random (a random function is impossible to optimize), a good optimization algorithm should be able to capitalize on regularities of the fitness landscape.

Multimodal functions have more than one optimum. A simple one-dimensional example is the equation x^2 = 100. The fitness landscape has a peak at x = 10 and another at x = −10. Note that there is a kind of bridge or "saddle" between the two optima, where fitness drops until it gets to x = 0, then whichever way we go it increases again. The fitness of x = 0 is not nearly so bad as, say, the fitness of x = 10,000. No matter where you started a trial-and-error search, you would end up finding one of the optima if you simply followed the gradient.

John Holland has used a term that well describes the strategic goal in finding a solution to a hard problem in a short time; the issue, he said, is "the optimal allocation of trials." We can't look everywhere for the answer to a problem; we need to limit our search somehow. An ideal algorithm finds the optimum relatively efficiently.

Two basic approaches can be taken in searching for optima on a fitness landscape. Exploration is a term that describes the broad search for a relatively good region on a landscape. If a problem has more than one peak, and some are better than others, the lesser peaks are known as local

Figure 2.6 Fitness landscapes for the one-dimensional function x + 4 = 10, with goodness defined two different ways.
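The two ways of defining goodness that Figure 2.6 refers to are easy to compute. The sketch below is our own illustration, not the book's code: it evaluates a few candidate values of x for the problem 4 + x = 10, reporting the error, the negative-error goodness, and the reciprocal-error goodness discussed earlier in this section.

```python
# Trial-and-error evaluation of candidate solutions to 4 + x = 10.
# Error is the distance between the actual and desired result; goodness can
# be defined either as the negative of the error or as its reciprocal.

def error(x):
    return abs(10 - (4 + x))

for x in (16, 12, 6, 1):
    e = error(x)
    neg_goodness = -e
    recip_goodness = float("inf") if e == 0 else 1.0 / e   # 1/e blows up at the optimum
    print(f"x = {x:2d}  error = {e:2d}  -e = {neg_goodness:3d}  1/e = {recip_goodness:.3f}")
```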
• 82. optima. Normally we prefer to find the global optimum, or at least the highest peak we can find in some reasonable amount of time. Exploration then is a strategic approach that samples widely around the landscape, so we don't miss an Everest while searching on a hillside. The more focused way to search is known as exploitation. Having found a good region on the landscape, we wish to ascend to the very best point in it, to the tip-top of the peak. Generally, exploitation requires smaller steps across the landscape, in fact they should often decrease as the top of a peak is neared. The most common exploitational method is hill climbing, in which search proceeds from a position that is updated when a better position is found. Then the search can continue around that new point, and so on. There are very many variations on the hill-climbing scheme. All are guaranteed to find hilltops, but none can guarantee that the hill is a high one. The trade-off between exploration and exploitation is central to the topic of finding a good algorithm for optimization: the optimal allocation of trials.

The examples given above are one-dimensional, as the independent variable can be represented on a single number line, using the y-axis to plot fitness. A more complex landscape exists when two parameters affect the fitness of the system; this is usually plotted using a three-dimensional coordinate system, with the z-axis representing fitness; that is, the parameters or independent variables are plotted on a plane with the fitness function depicted as a surface of hills and valleys above the plane. Systems of more than two dimensions present a perceptual difficulty to us. Though they are not problematic mathematically, there is no way to graph them using the Cartesian method. We can only imagine them, and we cannot do that very well.

The concepts that apply to one-dimensional problems also hold in the multidimensional case, though things can quickly get much more complicated. In the superficial case where the parameters are independent of one another, for example, where the goal is to minimize the sphere function f(x) = \sum_i x_i^2, then the problem is really just multiple (and simultaneous) instances of a one-dimensional problem. The solution is found by moving the values of all the x_i's toward zero, and reducing any of them will move the solution equally well toward the global optimum. In this kind of case the fitness landscape looks like a volcano sloping gradually upward from all directions toward a single global optimum (see Figure 2.7). On the other hand, where independent variables interact with one another, for example, when searching for a set of neural network weights, it is very often the case that what seems a good position on one dimension or subset of dimensions deteriorates the optimality of
  • 83. values on other dimensions. Decreasing one of the xi’s might improveperformance if and only if you simultaneously increase another one anddeteriorates goodness otherwise. In this more common case the fitnesslandscape looks like a real landscape, with hills and valleys and some-times cliffs.High-Dimensional Cognitive Space and Word MeaningsIt may seem odd that we have wandered away from the interesting sub-ject of minds and cultures into a discussion of mathematics, of all things.Before digging any deeper into this topic, let us mention one example ofwhy we think that optimization of complex functions has anything todo with minds.One of the most exciting findings in cognitive science in the past de-cade has to do with the discovery of computer algorithms that can beginto find the meanings of words. Two groups of scientists have indepen-dently developed approaches for statistically extracting the meaning of aword from its context.The story goes back to the 1950s, when Osgood, Suci, and Tannen-baum (1957) investigated the placement of words and concepts in aHigh-Dimensional Cognitive Space and Word Meanings 55Figure 2.7 The sphere function in two dimensions.Team LRN
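Since both hill climbing and the sphere function come up in this discussion, a small sketch may help. The code below is our own illustration, not the book's: the dimensionality (5), the starting range, and the step-shrinking rule are all arbitrary choices made for the example. It proposes a nearby point each trial, keeps it only if the error improves, and takes gradually smaller steps when it fails, which is the kind of exploitation described above.

```python
# A minimal hill climber on the sphere function f(x) = sum_i x_i**2,
# which has its single global optimum (minimum error) at the origin.

import random

random.seed(1)

def sphere(x):
    return sum(xi * xi for xi in x)

dim = 5
position = [random.uniform(-10, 10) for _ in range(dim)]
step = 1.0

for trial in range(2000):
    # Propose a nearby point by perturbing every coordinate a little.
    candidate = [xi + random.uniform(-step, step) for xi in position]
    if sphere(candidate) < sphere(position):
        position = candidate          # exploit the improvement
    else:
        step *= 0.999                 # no luck: gradually take smaller steps

print("final error:", round(sphere(position), 6))
```

Because the sphere function has only one peak, this purely exploitative search is enough; on a multimodal landscape the same climber could easily settle on a local optimum.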
  • 84. multidimensional space. Their approach, called the semantic differential,presented people with a word on the top of a page and asked them to ratethat word on a lot of scales. For instance, the word might be “radio,” andpeople were asked to rate it on scales such as cold to hot, cruel to kind,and so on (see Figure 2.8). (The first surprise was that people generallydid not have much trouble doing this.)The well-known halo effect occurs when a person or object with somepositive characteristics is assumed to have other positive characteristicsas well. In other words, ratings along different dimensions tend to corre-late with one another. These multiple correlations can be identified us-ing principal components or factor analytical techniques. With thesestatistical methods, Osgood and his colleagues were able to show thatthree major dimensions most affected the words and concepts theylooked at in their studies. By far the most important dimension was onethey labeled “evaluation.” People rate things in terms of good versus bad,liked versus disliked, and their opinions about other aspects of the thingtend to go along with these evaluative ratings. Two other orthogonal fac-tors, which they labeled “potency” and “activity,” were also seen to beimportant, but not as important as evaluation.In the 1990s, researchers at the University of Colorado and at the Uni-versity of California, Riverside, started to revisit and extend this work.First they noted that there is an inherent weakness in picking targetwords and rating scales out of the blue; the researchers’ choices of ratingscales might be affected by their own preconceptions, for instance, andnot really say anything at all about how ordinary people think of thewords and concepts. The investigators decided to do a simple-soundingthing. They took a large body of writing, called a corpus, and analyzedthe co-occurrences of the words in it.If you have ever read Usenet news, then you know that there is a lot oftext there, a lot of words. Usenet is a set of discussion groups on theInternet; nobody knows how many there are really, but there are easilymore than 30,000 different ones, many of which are very busy with par-ticipants writing or “posting” to the groups, replying to one another’spostings, and replying to the replies. Again, if you have ever visitedUsenet, you will know that this is not a place for the King’s English; thereis a lot of slang and, yes, obscenity, with flame wars and other tangentsmixed in with level-headed talk of love and software and philosophy andpet care and other things. Riverside psychologists Curt Burgess andKevin Lund (e.g., Burgess, Livesay, and Lund, 1998; Burgess, 1998; Lund,Burgess, and Atchley, 1995) downloaded 300 million words of Usenetdiscussions for analysis by their program, which they call HAL, for“Hyperspace Analogue to Language.”56 Chapter Two—Symbols, Connections, and Optimization by Trial and ErrorTeam LRN
Figure 2.8 An example of the semantic differential scale, where items that measure the three most important dimensions—evaluation, potency, and activity—are separated. (Respondents rate an object such as "radio" on seven-point scales anchored by opposing adjectives, e.g., bad/good, weak/strong, passive/active.)

Their program went through the entire corpus, looking at each word through a "window" of its 10 nearest neighbors on each side. A matrix was prepared, containing every word (there were about 170,000 different words) in the entire corpus as row and as column headings. When a word occurred directly before the target word, a 10 was added to the target word's row under the neighbor's column. A word appearing two positions before the target had a 9 added to its column in the row corresponding to the target word, and so on. Words appearing after the target word were counted up in that word's columns. Thus the matrix contained bigger numbers in the cells defined by the rows and columns of
words that appeared together most frequently and smaller numbers for pairs of words that did not co-occur often in the Usenet discussions.

Words that occurred very rarely were removed. The result of this process was a 70,000 × 70,000 matrix—still too big to use. Burgess and Lund concatenated the words' column vectors to their row vectors, so that each word was followed by the entire set of the association strengths of words that preceded it and ones that followed it in the corpus. Some of the words were not really words, at least they did not turn up in a standard computer dictionary; also, some of the words had very little variance and did not contribute much to the analysis. By removing the columns that contributed little, the researchers found that they could trim the matrix down to the most important 200 columns for each of the 70,000 words.

The effect of this is that each word has 200 numbers associated with it—a vector of dimension 200, each number representing the degree to which a word appears in the proximity of some other word. A first measure that can be taken then is association, which is simply shown by the size of the numbers. Where target word A is often associated with word B, we will find a big number in B's column of A's row. A second and more interesting measure, though, is possible, and that is the Euclidean distance between two words in the 200-dimensional space:

    D_AB = sqrt( Σ_i (a_i - b_i)^2 )

where a_i is the ith number in A's row vector and b_i is the ith number in B's row vector. A small distance between two words means that they are often associated with the same other words: they are used in similar contexts. When Burgess and his colleagues plotted subsets of words in two dimensions using a technique called multidimensional scaling, they found that words that were similar in meaning were near one another in the semantic space (see Figure 2.9).

Similar work was going on simultaneously at the University of Colorado, Boulder, where Thomas Landauer was working with Susan Dumais, a researcher for Bellcore, in New Jersey (Landauer and Dumais, 1997). Their paradigm, which they call Latent Semantic Analysis (LSA), is implemented somewhat differently, in that word clusters in high-dimensional space are found using a statistical technique called singular value decomposition, but the result is essentially identical to Burgess and Lund's—a fact acknowledged by both groups of researchers.

The view that depicts meaning as a position in a hyperspace is very different from the usual word definitions we are familiar with.
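The mechanics are easy to sketch. The following Python fragment (our own toy illustration, not HAL itself) builds a distance-weighted co-occurrence table from a tiny corpus and then measures the Euclidean distance between two words' context vectors; it collapses the separate before/after bookkeeping into a single symmetric count and uses a 3-word window rather than 10, but the shape of the computation is the same:

```python
from collections import defaultdict
from math import sqrt

corpus = ("the cat chased the dog the dog chased the cat "
          "the kitten likes the puppy the puppy likes the kitten").split()
WINDOW = 3   # HAL used 10 neighbors on each side; a small window keeps the toy readable

# counts[target][neighbor]: distance-weighted co-occurrence counts
counts = defaultdict(lambda: defaultdict(float))
for i, target in enumerate(corpus):
    for offset in range(1, WINDOW + 1):
        weight = WINDOW + 1 - offset            # nearest neighbor gets the largest weight
        if i - offset >= 0:
            counts[target][corpus[i - offset]] += weight   # word appearing before the target
        if i + offset < len(corpus):
            counts[target][corpus[i + offset]] += weight   # word appearing after the target

vocab = sorted(set(corpus))

def vector(word):
    """The word's row of the co-occurrence matrix, ordered by vocabulary."""
    return [counts[word][other] for other in vocab]

def distance(word_a, word_b):
    """Euclidean distance between two words' context vectors."""
    return sqrt(sum((a - b) ** 2 for a, b in zip(vector(word_a), vector(word_b))))

print(distance("cat", "dog"), distance("cat", "likes"))
```

Words that keep the same company get similar vectors and therefore a small distance; with a real corpus and a trimmed set of columns, this is essentially the space in which HAL's word meanings live.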
A dictionary, for instance, purports to explain the meanings of words by considering them as discrete symbols and providing denotative lists or descriptions of their attributes. Plotting words in a high-dimensional contextual space, though, depicts the words' meanings connotatively, from the inside out. Landauer and Dumais point out that the average seventh-grade child learns approximately 10 to 15 new words every day. These are not typically learned by having their definitions explained, and who ever saw a seventh-grader look up anything in a dictionary? It seems clear that humans learn word meanings from contexts, just as HAL and LSA do. The view of high-dimensional semantic space is a bottom-up view of language, as opposed to the top-down imposition of definitional rules in the usual dictionary reference.

Landauer and Dumais have suggested that the LSA/HAL model could be presented as a neural network. In fact the connectionist paradigm in general supposes that many aspects of human cognition can be represented as structure in a high-dimensional space. Human cognition and development then can be seen as a gradual process of optimization within a complex space; our thesis is that this process is collaborative in nature.

Figure 2.9 Burgess and Lund's multidimensional scaling analyses find that words with similar meanings are associated with the same regions of semantic space, here collapsed to two dimensions; the clusters include animal terms (kitten, puppy, dog, cow, cat), place names (France, Panama, China, America, Europe), and body parts (shoulder, hand, toe, head, ear, tooth, foot, leg, eye). (From Burgess, 1998, p. 191.)
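On the LSA side, the dimension-reduction step can be sketched with an off-the-shelf singular value decomposition. The fragment below is a generic truncated SVD of a small made-up word-by-context count matrix, not Landauer and Dumais's actual procedure (which, among other things, weights the raw counts before decomposing them):

```python
import numpy as np

# Toy word-by-context count matrix: rows are words, columns are contexts.
words = ["cat", "dog", "kitten", "france", "china"]
counts = np.array([
    [4.0, 3.0, 0.0, 0.0],   # cat
    [3.0, 4.0, 0.0, 0.0],   # dog
    [2.0, 2.0, 1.0, 0.0],   # kitten
    [0.0, 0.0, 5.0, 2.0],   # france
    [0.0, 1.0, 2.0, 4.0],   # china
])

# Truncated SVD: keep only the k largest singular values and their vectors.
k = 2
U, S, Vt = np.linalg.svd(counts, full_matrices=False)
word_vectors = U[:, :k] * S[:k]     # each word becomes a point in a k-dimensional latent space

for word, vec in zip(words, word_vectors):
    print(f"{word:8s}", vec.round(2))
```

In the reduced space the animal words land near one another and away from the place names, because they occurred in similar columns of the original matrix; meaning, in this picture, is simply location.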
People do not consciously compute with matrices of thousands and thousands of words all at once, yet somehow the interrelationships of many thousands of words are understood, and meaningful verbal communication is possible. A child's vocabulary is small, and errors are common; learning to map concepts into the appropriate region of the cognitive space occurs gradually and takes a long time. So how do people learn the meanings of words? How do they learn to form connections between concepts found in different regions of the semantic hyperspace? The easy and obvious answer (and a central theme of this book) is that they learn from other people. The symbols themselves are arbitrary, and so different cultures can use different sounds and glyphs to mean the same thing. Yet within a culture meanings are agreed upon. The "swarm" model we are building toward considers the movements of individuals within a commonly held high-dimensional space. Thus the psychological issue is one of adaptation and optimization, of finding our way through labyrinthine landscapes of meaning and mapping related regions appropriately.

Two Factors of Complexity: NK Landscapes

In cases where variables are independent of one another, optimization is a relatively simple case of figuring out which way to adjust each one of them. But in most real situations variables are interdependent, and adjusting one might make another one less effective. Language is perhaps the ultimate example of this; as we have seen, the meaning of a word depends on its context. For instance, what do the words "bat" and "diamond" mean? If we have been talking about caves, they might refer to flying mammals and found treasures, but if we have been talking about baseball, these words probably mean a sturdy stick and a playing field. The interrelatedness of linguistic symbols is a necessary part of their usefulness; we note that most people learn to use language fluently, but despite decades of striving no programmer has succeeded at writing a program that can converse in fluent natural language.

Stuart Kauffman (1993, 1995) has argued that two factors affect the complexity of a landscape, making a problem hard. These factors are N, the size of a problem, and K, the amount of interconnectedness of the elements that make it up. Consider a small network of five binary nodes. We start with an ultimately dull network where there are no connections between nodes (see Figure 2.10). In such a simple network, each node has
a preferred value; that is, it is "better" for it to take on the value of one or of zero. In Kauffman's research, a table of fitness values is determined by random numbers. For instance, if node 1 has a state value of 1, its fitness is 0.123, and if it is zero, its fitness is 0.987. We want to maximize fitness, so we see in this case that it is better, or more optimal, for this node to have a value of zero. A fitness table for this overly simple graph is shown in Table 2.1.

Table 2.1 Fitness table for simple NK network where K = 0.

    Node    Value if 0    Value if 1
      1       0.987         0.123
      2       0.333         0.777
      3       0.864         0.923
      4       0.001         0.004
      5       0.789         0.321

Figure 2.10 NK network with K = 0 and N = 5.
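Because K = 0 and N = 5, the whole space has only 2^5 = 32 states and can be scored exhaustively. A few lines of Python (our rendering of the table above, not code from the book) make the bookkeeping concrete:

```python
from itertools import product

# Table 2.1: each node's fitness when its state is 0 or when it is 1 (K = 0, no interactions).
fitness_table = {
    1: (0.987, 0.123),
    2: (0.333, 0.777),
    3: (0.864, 0.923),
    4: (0.001, 0.004),
    5: (0.789, 0.321),
}

def fitness(state):
    """Total fitness of a five-node state such as (0, 1, 1, 1, 0)."""
    return sum(fitness_table[node][bit] for node, bit in enumerate(state, start=1))

best = max(product((0, 1), repeat=5), key=fitness)
print("".join(map(str, best)), round(fitness(best), 3))   # prints: 01110 3.48
```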
Since these nodes are binary, we can write the state of the network as a bitstring; for instance, 10101 means that the first node has a state value of one, the second is zero, the third is one, the fourth is zero, and the fifth is one. Looking at Table 2.1, we note that the best possible state of the network is 01110. This pattern of node states results in the highest fitness at each site in the graph and thus produces the highest sum over the entire graph.

Now that is not much of a graph, we agree. While there are five nodes, which we will write as N = 5, they don't interact; none has any effect on the other. Kauffman says that such a system has K = 0, where K stands for the average number of inputs that each node receives from other nodes. Obviously, with K = 0 it is very easy to find the global optimum of the network. We can simply pick a site at random and flip its state from zero to one or vice versa; if the network's total score increases, we leave it in the new state, otherwise we return it to its original state (this is called a greedy algorithm). We only need to perform N operations; once we have found the best state for each node, the job is done.

If we increase K to 1, each node will receive input from one other node, and the size of the table doubles. When K = 1, the fitness of a node depends on its own state and the state of the node at the sending end of an arrow pointing to it. A typical network is shown in Figure 2.11.

Figure 2.11 When K = 1, the better state of each node depends on the state of the node that depends on it and the state of the node that it depends on.

In this connected situation, we have to conduct more than N operations to find the optimal pattern. It is entirely possible that reversing the state of a node will increase its performance but decrease the
performance of the node it is connected to, or the opposite: its performance will decrease while the other's increases. Further, while we might find a combination that optimizes the pair, the state of the receiving node affects the node it is connected to, too, and it is entirely likely that the state of that node is best when this node is in the opposite state—and so on.

When K = 2, the fitness of each node depends on its own state (zero or one) and the states of two other nodes whose arrows point to it (see Figure 2.12). The size of the lookup table increases exponentially as K increases; its size is N × 2^(K+1). Reversing the state of a node directly affects the fitness of two other nodes, perhaps changing their optimal state, and their states affect two others, and so on. K can be any number up to N − 1, at which point every node is connected to every other node in the whole system.

Figure 2.12 As K increases, the better state of each node depends on the states of the nodes that receive its inputs and the states of the nodes that provide inputs to it. Here, N = 5 and K = 2.

It is easy to see why Kauffman has theorized that the two parameters N and K describe the complexity of any system. First, as N, the dimensionality of the system, increases, the number of possible states of the system increases exponentially: remember that there are 2^N arrangements of N binary elements. This increase is known as combinatorial explosion, and even if it seems obvious, this is a significant factor in determining how hard it will be to find an optimal configuration of elements. Each new binary element that is added doubles the number of patterns of node activations. It quickly becomes impossible to test all possible combinations, and so it is necessary to find a way to reduce the size of the search; we need a good algorithm for the optimal allocation of trials.
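To show what those lookup tables look like once K > 0, and why single-bit flipping stops being a reliable route to the global peak, here is a small self-contained sketch in the spirit of Kauffman's model. It is our own simplified rendering (each node is assumed to depend on the K nodes to its right, wrapping around), not his exact construction:

```python
import random
from itertools import product

N, K = 5, 2
random.seed(1)

# One random-fitness lookup table per node, indexed by the node's own state plus
# the states of the K nodes it depends on: N * 2**(K + 1) entries in all.
table = [
    {bits: random.random() for bits in product((0, 1), repeat=K + 1)}
    for _ in range(N)
]

def fitness(state):
    total = 0.0
    for i in range(N):
        neighborhood = tuple(state[(i + j) % N] for j in range(K + 1))  # node i plus K others
        total += table[i][neighborhood]
    return total / N

def greedy_climb(state):
    """Flip one bit at a time, keeping a flip only if the total fitness improves."""
    improved = True
    while improved:
        improved = False
        for i in range(N):
            trial = list(state)
            trial[i] = 1 - trial[i]
            if fitness(tuple(trial)) > fitness(state):
                state, improved = tuple(trial), True
    return state

global_best = max(product((0, 1), repeat=N), key=fitness)   # exhaustive: only 2**N = 32 states
local_peak = greedy_climb((0,) * N)
print(global_best, round(fitness(global_best), 3))
print(local_peak, round(fitness(local_peak), 3))   # with K > 0 this may be a lower, local peak
```

With K = 0 the same climber always reaches the single peak; with larger K the conflicting constraints create multiple peaks, and where the climber ends up depends on where it starts.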
The second factor, K, is also known as epistasis. This term has been borrowed, perhaps inaccurately, from genetics, where it is often seen that the effect of a gene at one site on the chromosome depends on the states of genes at other sites. Kauffman has shown that when K becomes higher relative to N, landscapes become irregular and eventually random. When K is high, the highest peaks are poorer, due to conflicting constraints, and the paths to peaks on the landscape are shorter. When K = 0, there is one peak, and as we saw, that peak is found when each node takes on its better value. The path to it is simple and direct: a Fujiyama landscape.

Combinatorial Optimization

Optimizing problems with qualitative variables, that is, variables with discrete attributes, states, or values, is inherently different from optimization of quantitative or numerical problems. Here the problem is one of arranging the elements to minimize or maximize a result. In some cases we might want to eliminate some elements as well, so that the number of things to be rearranged is itself part of the problem. Combinatorial optimization is particularly addressed within the field of traditional artificial intelligence, especially where the task is to arrange some propositions in such a way that they conform to some rules of logic. "Production rules," if-then statements such as "if A is true then do C," are evaluated and executed in series, to obtain a valid logical conclusion or to determine the sequence of steps that should be followed in order to accomplish some task. Arranging the order of the propositions is a combinatorial optimization problem.

The statement of the goal of an optimization problem is the objective function, sometimes called a "cost function." For example, consider a classic combinatorial problem called the traveling salesman problem (TSP). In this problem, a map is given with some number of cities on it, and the objective function to minimize is the distance of a route that goes through all the cities exactly once, ending up where it started.

The simple permutations of some elements are all the possible orderings of an entire set. In a very small traveling salesman problem, we might have three cities, called A, B, and C, and their simple permutations would be ABC, ACB, BAC, BCA, CAB, and CBA.
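Because the number of permutations is so small for tiny instances, a brute-force solution is easy to write. The sketch below (with invented city coordinates) scores every ordering of a four-city tour and keeps the shortest; it is meant only to make the objective function concrete:

```python
from itertools import permutations
from math import dist   # Python 3.8+

cities = {"A": (0, 0), "B": (1, 5), "C": (4, 4), "D": (6, 1)}   # hypothetical coordinates

def tour_length(order):
    """Length of the closed route that visits the cities in the given order."""
    legs = zip(order, order[1:] + order[:1])        # wrap around to the starting city
    return sum(dist(cities[a], cities[b]) for a, b in legs)

# Fixing the starting city avoids re-counting rotations of the same closed tour.
candidates = (("A",) + rest for rest in permutations("BCD"))
best = min(candidates, key=tour_length)
print(best, round(tour_length(best), 2))
```

With more cities the list of orderings to score grows explosively, and this enumerate-everything strategy quickly stops being an option.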
Many combinatorial optimization problems require finding the best simple permutation of some objects. There are n factorial (written as n!) simple permutations of n elements. Thus with our example of three things, there are 1 × 2 × 3 = 6 permutations. The point to realize here is that this number, which represents the size of the search space or set of possible solutions to the problem, gets big quickly. For instance, with only 10 elements the search space contains 3,628,800 possible simple permutations. Any problem that is likely to exist in the real world, for instance, a complicated scheduling or routing problem or cognitive model, will almost certainly have more elements in it than that; we need to find ways to search for an answer without trying every possible solution. Optimal allocation of trials, again.

Figure 2.13 Simple permutations of four elements depicted as a tree. The dotted line indicates where branches were omitted due to lack of space.

Simple permutation problems are often represented, conceptually at least, as trees. The three-element problem is just too trivial, so let's consider a simple permutation problem with four elements (see Figure 2.13). The first decision is where to start, and it has four options: A, B, C, or D. Once we've selected one of these, say, A, then we are faced with the next decision: B, C, or D? Each step of the process reduces the number of options by one. Say we choose B; now the decision is between C and D. Selecting C, we have no real choice but to end at D. We see that there are indeed n! paths through the decision tree—and so of course it is impossible to actually draw the graph when the problem is big. But the conceptualization is often useful to keep in mind. In an optimization situation the task is to find the path through the decision tree that gives the optimal
evaluation or result, whether this is the shortest path from a start to a finish, or any feasible path from a premise to a conclusion, or some other kind of problem.

There are two general ways to search through such a tree data structure. Breadth-first search involves going into the tree one layer at a time, through all branches, and evaluating the result of each partial search. In our four-element example, if we are searching for the shortest path from start to end, we might ask, for instance, starting on A, what is the distance to B, to C, to D? If we started on B, what is the distance to A, to C, to D? And so on. It may be that we can eliminate some routes at this point, just because a step is so large that we are sure not to have the shortest route. Depth-first search comprises going from the start all the way to the end of a path, evaluating one entire proposed solution. Once the end is reached, depth-first search goes back to the first decision node and then searches to the end of the next path, and so on.

Heuristics are shortcuts in the search strategy that reduce the size of the space that needs to be examined. We proposed a simple heuristic for breadth-first search in the previous paragraph: abandoning a search as soon as there is reason to expect failure. If we were using a depth-first strategy, we might establish a rule where we abandon a path when its distance becomes longer than the shortest we have found so far. Maybe after a few steps into the tree we will be able to eliminate a proposed path or subtree by this method. There are many clever general heuristics and very many ingenious heuristics that apply only to specific problems.

While "order matters" in permutations, the term combination refers to the situation where some number of elements is selected from a universe of possibilities, without regard to their order. Thus, a universe of {A, B, C, D} contains these combinations: {A}, {B}, {C}, {D}, {A, B}, {A, C}, {A, D}, {B, C}, {B, D}, {C, D}, {A, B, C}, {A, B, D}, {A, C, D}, {B, C, D}, {A, B, C, D}. Since order doesn't matter in combinations, the combination {A, B, C} is identical to {B, C, A}, {C, A, B}, and so on.

In many situations the problem is defined in such a way that some permutations or combinations are inadmissible; for instance, nodes are connected only to certain other nodes in the graph (there is no road connecting city B with city F). In these cases the size of the search space is of course reduced, though the problem may be more difficult than a simple permutation problem if at each step the appropriateness or correctness of a branch in the tree needs to be validated.
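The depth-first pruning rule described above, abandoning a partial route as soon as it is already longer than the best complete route found so far, takes only a few lines to express. The following sketch applies it to a small traveling salesman instance with invented coordinates; it is a generic illustration of the heuristic, not a recipe from the text:

```python
from math import dist, inf

cities = {"A": (0, 0), "B": (1, 5), "C": (4, 4), "D": (6, 1), "E": (2, 2)}   # hypothetical
START = "A"

best_length = inf
best_route = None

def search(route, length):
    """Depth-first search over partial routes, pruning any branch already too long."""
    global best_length, best_route
    if length >= best_length:                       # the heuristic: abandon this subtree
        return
    if len(route) == len(cities):                   # a complete tour: close the loop
        total = length + dist(cities[route[-1]], cities[START])
        if total < best_length:
            best_length, best_route = total, list(route)
        return
    for city in cities:                             # branch on each unvisited city
        if city not in route:
            step = dist(cities[route[-1]], cities[city])
            search(route + [city], length + step)

search([START], 0.0)
print(best_route, round(best_length, 2))
```

The search still explores a tree of partial tours, but whole subtrees are cut off as soon as the running total rules them out, which is exactly the saving the heuristic is meant to buy.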
We can muse upon combinatorial aspects of the great stochastic systems. Even though it is a simple-sounding problem of ordering genetic proteins, natural evolution could not have gone from bacterium directly to human (which is not to suggest that we are the ultimate achievement of evolution—we just make a good reference point for the present). The search of the space of possibilities that led down to human intelligence needed to discover RNA, DNA, nucleated cell structures, multicellularity, breathing out of water, spines and centralized nervous systems, and so forth in order to reach the social, linguistic primate called Homo sapiens. At the same time the environment had to change and evolve, affecting and being affected by the new forms that inhabited it. It may have been "technically" possible to jump from bacterium to human in one huge mutational leap, but the probability of that happening was minuscule. Evolution can be seen as a kind of search of the space of possible life-forms, where various adaptations are tried, continuing into the future. An important aspect of this search is that it is conducted in parallel; that is, a number of mutations and genetic combinations are proposed and tested simultaneously throughout the populations of species. Since traits cannot be passed from one species to another, evolutionary search at the macro level can be conceptualized as a branching tree.

Minds as well search their space of ideas in parallel, but the search is not well represented as a tree, since the nodes at the ends of the branches are connected to one another. Various cultures explore their own regions of a cognitive parameter space, and within those cultures particular individuals search particular regions of the vast search space of beliefs, attitudes, and behaviors. Here and there a culture hits a dead end, and cultures interact, passing heuristical information back and forth to one another, and out on the little branches individuals convey information to one another that shapes their searches.

A human born into this world alone, to make sense of what William James called the "buzzing, blooming confusion" of the world, would not be able in a lifetime to understand very much of it at all—if indeed the word "understand" even has meaning for an asocial hominid. Information sharing is our main heuristic for searching the large space of possible explanations for the world.

Binary Optimization

A frequently used kind of combinatorial optimization, which we have already seen in NK landscapes and some Hopfield networks, occurs when elements are represented as binary variables. The binary encoding scheme is very useful, for a number of reasons. First of all, as the
versatility of the digital computer indicates, almost anything can be represented to any degree of precision using zeroes and ones. A bitstring can represent a base-two number; for instance, the number 10011 represents—going from right to left and increasing the multiplier by powers of two—(1 × 1) + (1 × 2) + (0 × 4) + (0 × 8) + (1 × 16) = 19. A problem may be set up so that the 19 indicates the 19th letter of the alphabet (e.g., S) or the 19th thing in some list, for instance, the 19th city in a tour. The 19 could be divided by 10 in the evaluation function to represent 1.9, and in fact it can go to any specified number of decimal places. The segment can be embedded in a longer bitstring, for instance, 10010010111011100110110, where positions or sets of positions are evaluated in specific ways.

Because zero and one are discrete states, they can be used to encode qualitative, nonnumeric variables as well as numeric ones. The sites on the bitstring can represent discrete aspects of a problem, for instance, the presence or absence of some quality or item. Thus the same bitstring 10011 can mean 19, or it can be used to summarize attendance at a meeting where Andy was present, Beth was absent, Carl was absent, Denise was present, and Everett was present. While this flexibility gives binary coding great power, its greatest disadvantage has to do with failure to distinguish between these two kinds of uses of bitstrings, numeric and qualitative. As we will see, real-valued encodings are usually more appropriate for numeric problems, but binary encoding offers many advantages in a wide range of situations.

The size of a binary search space doubles with each element added to the bitstring. Thus there are 2^n points in the search space of a bitstring of n elements. A bitstring of more than three dimensions is conceptualized as a hypercube. To understand this concept, start with a one-dimensional bitstring, that is, one element that is in either the zero or the one state: a bit. This can be depicted as a line segment with a zero at one end and a one at the other (see Figure 2.14). The state of this one-dimensional "system" can be summarized by a point at one end of the line segment. Now to add a dimension, we will place another line segment with its zero end at the same point as the first dimension's zero, rising perpendicular to the first line segment. Now our system can take on 2^2 = 4 possible states: (0,0), (0,1), (1,0), or (1,1). We can plot the first of these states at the origin, and the other points will be seen to mark the corners of a square whose sides are the (arbitrary) length of the line segments we used. The (1,1) point is diagonally opposed to the origin, which is meaningful since it has nothing in common with the (0,0) corner and thus should not be near it. Corners that do share a value are adjacent to one another; those that don't are opposed.
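Before moving on, the two readings of the bitstring 10011 described above, base-two number and attendance list, can be demonstrated in a few lines of Python (the helper names are ours):

```python
bits = "10011"

# Numeric reading: a base-two number, optionally rescaled in the evaluation function.
as_integer = int(bits, 2)                       # (1*16) + (0*8) + (0*4) + (1*2) + (1*1) = 19
as_real = as_integer / 10                       # 1.9, to one decimal place
as_letter = chr(ord("A") + as_integer - 1)      # the 19th letter of the alphabet, "S"

# Qualitative reading: each site marks the presence (1) or absence (0) of something.
people = ["Andy", "Beth", "Carl", "Denise", "Everett"]
attended = [name for name, bit in zip(people, bits) if bit == "1"]

print(as_integer, as_real, as_letter)           # 19 1.9 S
print(attended)                                 # ['Andy', 'Denise', 'Everett']
```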