Digital Immortality



DIGITAL IMMORTALITY: SELF OR 0010110?

LIZ STILLWAGGON SWAN* and JOSHUA HOWARD
Department of Philosophy, University of Colorado, Denver, Campus Box 179, P.O. Box 173364, Denver, CO 80217-3364, USA

In this paper, we explore from several angles the possibility, and practicality, of one of the major tenets of the transhumanist movement: the intention to upload human minds to computers. The first part of the paper assumes that mind-uploading is possible and will become quite commonplace in the near (21st century) future, à la Ray Kurzweil and cohorts. This assumption allows us to explore several of its problematic implications for personal identity, especially the effects it will have on questions of duty, responsibility, interpersonal relationships, and culpability in the case of crime. In the second part of the paper, we take a deeper and more critical look at whether mind-uploading is indeed metaphysically possible, and offer some neurobiologically-inspired arguments against its feasibility.

Keywords: Transhumanism; mind-uploading; Kurzweil.

Circa 2075

Janet, my wife, died three weeks ago today, and again three hours ago, which is why I sit in a jail cell awaiting sentence, pleading, "but she was already dead!" and "she's just a duplicate!" I know it was wrong, but I couldn't take another moment of meaninglessness. I'm sick of living the same life over and over again, pretending that duplicates are people and listening to others bemoan the fact that Elvis isn't recoverable. A year before her death, we had our minds stored in a containment facility. They have these giant machines at the facility that take a chunk of your soul and save it forever in a computer. Anytime one needs a reprogram or someone dies, they can take the stored memories and reestablish the soul. A few months after we had the process done our son, Jason, died fighting overseas in the war. He was a naturalist and, for moral reasons, hadn't had the process done before being deployed.
My wife grieved non-stop. The idea crossed her mind and she did it: she returned to the containment facility and had them return her soul to its place a few months back, that place where her son was still alive in a hostile environment. This was great except that Jason was dead and I had to deal with it alone.

International Journal of Machine Consciousness, Vol. 4, No. 1 (2012) 245-256. © World Scientific Publishing Company. DOI: 10.1142/S1793843012400148
We went on our second honeymoon to Paris on our 25th anniversary a month later, even though Janet swore it was three months early, and we had a wonderful time, the best vacation ever. I accepted Jason's death and Janet's denial. I fell in love with my wife again. Life was beautiful.

Then she died in a car wreck. I saw her dead blue eyes. I felt her cold, hardening skin. I felt a pain I had never before known. They told me not to worry: she had the process done, she wasn't really dead. Everything would be okay. Just go home, she'd be home late, like a bad day at the office. I was at my wits' end. A friend told me it's no different than before: cells come and go, they die and are replaced. Duplicates can do anything a human does, better in fact. That's why they were granted all the same rights humans have.

So her duplicate came home late that night. She walked like Janet, talked like Janet, felt and smelt just like her. She didn't remember Jason's death, wanted to buy winter coats, and talked about being able to use her French on the trip to Paris we already took, but she has not yet experienced. How can I explain to her that we went, but she doesn't remember; that the trip took place before she rewound herself three months? That really she died and has been replaced by a dupe? We had our minds duplicated for her benefit. I did this to ease her stress; I never wanted her to be like this. How long can I keep up the charade she chose for us? Our trip meant so much to me and to the real her, but for this her it hasn't happened yet. Her soul has and hasn't experienced it. Just like Jason's death and the last eight months of renewed love. I have all of it and she has none of it. She's partially the woman I love. But she doesn't kiss right. Having her dupe around isn't even as good as looking at a picture and remembering. I prayed for dreams that would make reality seem meaningful again. If I could just reach her soul we could be together.
This duplicate is not really Janet, but it thinks it is; it's a confused being. So I did what any sensible person would do: I beat her to death with a shovel.

1. Uploading: What are the Issues?

In the short vignette above, several complicated issues arise regarding new notions of personal identity that we have good reason to believe will be part and parcel of a future reality wherein "dupes", or artificially duplicated persons, live among and interact with organic persons. Though the details are still quite fuzzy (and necessarily so, since the core ideas of transhumanism remain for now theoretical), the human desire to achieve digital immortality has been explored in science fiction (e.g., Star Trek's "beam me up, Scotty"), Artificial Intelligence, the fringes of psychology, and contemporary philosophy. Derek Parfit's teletransporter thought experiment is a good example. In this story, people travel back and forth between Earth and Mars with ease, as the temporary storage and subsequent reproduction of one's complete cellular state has become feasible. The idea is rather akin to taking a snapshot of one's cellular information matrix on Earth and developing the snapshot, in a replicated physical host body, on Mars. Parfit draws out the storyline to some problematic implications that give one philosophical pause: what if the original host body is destroyed and all that remains is one's replica? Would this make any difference? Why? Are we just the sum total of the information in our cells, or is there something more to us, something that cannot be copied and uploaded into a computer? Does the mere conceivability of this scenario necessitate mind-body dualism?

Let's for a moment consider our story of the man who, in a futuristic kind of crime of passion, kills his 'wife' duplicate. What are the circumstances that lead to this morbid conclusion for Janet's duplicate? Which issues, whether legal, moral, personal, practical, or otherwise, come to the fore in this very brief glimpse into a hypothetical future? And how does all this differ from how a common homicide might go over today? In what follows, we examine a few issues that we believe are characteristic of this hypothetical scenario in a theoretical future wherein uploading minds to computers ('uploading' for short) is a reality.

1.1. Personal Identity: The "multiple me's" scenario

In our story about Janet, Jason, and the troubled narrator, it is difficult to determine how many people are actually involved, and indeed whether all of them count as "people". Janet we know has been duplicated at least once. After she died, a replica of her was created. But even before she dies, she decides to have the replica-facilitators return her to where she was, metaphysically speaking, three months prior, that is, before her son Jason was killed overseas. But does being restored to one's previous state count as being fully replicated, partially replicated, or does it represent another matter entirely, say, being transported through time but not replicated in any robust sense?

Then there's the matter of Jason, the son in the story, who was killed overseas in the war.
It is explained that he himself declined mind-storage, but when he was brought "back to life" so to speak, in other words when his mother restores herself to an earlier time period when he is still alive, is he in some abstract sense replicated? Presumably there may be people in the future who, even if the technology is available and affordable, will decline uploading. Will this decision make them more genuine people than those who have replicated themselves, perhaps multiple times? Replicas are supposed to be identical in every way that counts (excepting of course numerical identity, which is inconsistent with the notion of replication), so why does our intuition tell us they would be somehow sub-human? Is this just a naïve early 21st century bias that will go away in time?

Lastly, there is the narrator of the story, who has been "stored" for future replication purposes, but has not yet been replicated. Does this make him a real person plus potential person? Kind of like one and a half persons? If, for some reason, the stored "soul" of the person gets deleted, was it ever a potential person in the first place? How do we discover the answers to these riddles of personal identity? Can they indeed be discovered, or must they be decided arbitrarily?
Essentially, the moral of the story is that when and if uploading becomes a reality, personal storylines will undoubtedly get a lot more complicated. Already we have the technological oddities of someone's outgoing voicemail message, or profile on Facebook, surviving his or her death. The person is gone, but their persona, or at least their online profile, lives on. But the reality of uploading minds, when and if it becomes possible, will complexify the scenario in ways we cannot anticipate. One thing is clear, however: rather than there being a temporary asynchronous relationship between the person and his or her technological profile, uploading will create a radically new scenario wherein there are multiple (infinite?) contemporary replicas of the person, possibly living alongside the original. The possibilities alone make one's head spin!

1.2. Ethics: What is the value of uploaded life?

Along with the reality of uploading would come the reality of novel ethical dilemmas that are complex in new ways. Murder is very clearly seen as wrong in our society. But what about in a future society where replicas abound perhaps as prevalently, if not more so, than genuine people? Would replicas be granted the same legal, ethical, social, civil, and so forth, liberties and privileges that were granted to their progenitors? If so, why? After all, a copy of a Picasso is not a Picasso; it is a copy, with all that entails, including its poster price tag of $12 in the art museum gift shop. So why should people be any different? You may have an intuition as to whether a human life or an original Picasso painting has more worth, but it seems at least intuitively plausible that copies of either human beings or paintings are worth about the same, which is to say, not much. And if that is the case, that replicas, no matter what they are replicas of, are not worth very much, then how could eliminating one matter all that much?
Would it even be a case of murder at all? Or would it be more like "deletion" than "murder"? We do not typically feel any regret when deleting old files from our laptop, so why should deleting a digital replica of a human be any different?

If we entertain for a moment the opposite intuition, i.e., that uploaded minds are worth the same as genuine human beings, we quickly find that here too we face a tumult of problems in multiple facets of life. How would we be able to tell a lover from her duplicate? If they are two genuine instances of the same person, then don't they both deserve our attention, love, and time? If this is not feasible (since there's only one of you), will jealousy and resentment develop between them? Is it possible to be jealous of one's duplicate? Does that even make sense? What about duplicates in the workplace? If they are deserving of equal treatment, then an employer would be bound to hire one's duplicate after a job-related accident killed the original. But what if the mind-uploading was done before the employee held this job, and thus she knows nothing of her duties and has to relearn them all over again? Is the employer nevertheless bound to keep the employee on staff? A great uproar would likely ensue about who could and who could not upload, followed by a reactionary fight for civil rights that could trump that of the 20th century. Should we upload petty criminals? Pedophiles? Rapists? Murderers? The chronically impoverished? The mentally handicapped? Schizophrenics? Who decides, and why them?

1.3. The Self: Will I still be me in digital immortality?

At first glance it seems that if I were duplicated, then the original me would be the primary me. But regardless of which of us felt like the "real me", each of us would nevertheless be a "me" and thus a subject of desires, needs, and emotions. My duplicate would a priori know himself to be me, would have my memories with their particular dream-like qualities of haziness and misunderstandings, and worst of all, would be faced with the same existential choices and issues. Indeed, could we truthfully say that this replica who experiences himself as me, as my consciousness, is not my consciousness, or at least part-owner of my consciousness? Kurzweil, alluding to the ship of Theseus paradox, says about his duplicate: "... while he is not absolutely identical to Old Ray, neither is the current version of Old Ray, since the particles making up my biological brain and body are constantly changing. It is the patterns of matter and energy that are semipermanent (that is, changing only gradually), while the actual material content changes constantly and very quickly. Viewed in this way, my identity is rather like the pattern that water makes when rushing around a rock in a stream. The pattern remains relatively unchanged for hours, even years, while the actual material constituting the pattern (the water) is replaced in milliseconds" [Kurzweil, 2000].

If Kurzweil is right, then my consciousness is just the energetic workings of my brain. That means that the substrate really does not matter in determining my consciousness, but rather that the way the substrate functions is our consciousness.
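Kurzweil's stream-around-a-rock metaphor is, at bottom, an identity-as-pattern claim: the pattern persists while its material constituents are wholly replaced. The sketch below is a deliberately crude toy of our own construction (the `Particle` class and its `role` attribute are illustrative inventions, not anything from Kurzweil); it states the claim in its most charitable form, where a pattern survives total turnover of the numerically distinct objects realizing it.

```python
class Particle:
    """One 'material' constituent; a stand-in for an atom or water molecule."""
    def __init__(self, role):
        self.role = role  # this particle's contribution to the overall pattern

# A 'body' is a collection of particles; its 'pattern' is the sequence of roles.
old_body = [Particle(r) for r in "wave"]
pattern_before = [p.role for p in old_body]

# Replace every particle with a numerically distinct one playing the same role,
# as cells and molecules are replaced over a lifetime.
body = [Particle(p.role) for p in old_body]

# The pattern survives total material turnover...
assert [p.role for p in body] == pattern_before
# ...even though not one of the original particles remains.
assert all(new is not old for new, old in zip(body, old_body))
```

Note what the toy leaves out, which is precisely what Sec. 2 goes on to question: whether a mind's "pattern" is a static sequence of roles that can be snapshotted at all.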
So, if this functionality is equally reproduced in an artificial substrate, then there would be no difference between my duplicate and me, other than our divergent chemical make-up. But this seems to point out another problem: I will know I am me and the duplicate(s) will know that they are me, but how will I know that they are me? These new possibilities of Self comprise a new version of the "other minds problem". And these possibilities matter for the question of digital immortality. If I will continue on as me, just in my duplicate, then death (for me) should not be a big deal.[1] But if the duplicate were not the same as me, say there were even a subtle difference between us, then he would not be me and thus I would not attain digital immortality, which was the whole point in the first place. It seems that we cannot prove whether the duplicate is or is not me, because he would know that he is me and would have the same memories and personality as me and experience them in the same way. The concept of me would be fundamentally altered in the sense that me would have no meaning beyond the subjective experience of the individual me. And in this case I would cease to be me, because me would always be we. If there were multiple beings who "knew" themselves to be the same being, then we are talking about plural consciousnesses whose plural existence might, and probably would, entail divergent experiences. This possibility brings up many social and ethical issues.

[1] This is in fact Derek Parfit's position on the matter.

As a matter of fact, the troubled narrator from the opening story was not entirely truthful with you. He did have the replication process completed, and so the "he" who awaits sentencing as we speak is in fact the narrator's duplicate. The narrator's original, we are sorry to report, intended to murder Janet's duplicate; she was driving him mad. But his resolve to commit murder, to kill his own wife, even if it was only her "duplicate", was too much for him and he killed himself. But his dutiful duplicate, cognizant of his progenitor's intention (which had of course become his own intention), carried out his will: he killed the Janet duplicate, and now awaits his fate. He will probably be given a life sentence for murder (or would it be intent to murder?) and will probably die in jail. But wait, he is already dead... And will his punishment vindicate society? He did commit a crime, after all. Or did he?

2. Uploading: Is It Even Possible?

So far in the paper we have been concerned with the problematic fallout from mind-uploading, assuming a future wherein it is in fact a reality. But the very idea of uploading minds to computers deserves more attention. In what follows we examine the idea of uploading from the perspective of a scientifically-informed philosophy of mind. Specifically, we take a critical look at three presumptions underlying the uploading hypothesis: (1) mind-body dualism; (2) mind as a static entity; and (3) multiple realizability.

2.1. Does mind-uploading entail dualism?
Whether we are talking about the "Beam me up, Scotty" of science fiction, Derek Parfit's teletransporter machine in philosophy [Parfit, 1984], or mind-uploading of the sort transhumanists dream about, a fundamental presumption is at hand, namely that mind and body are separate, and separable, entities. In these scenarios, the mind always comes out on top as the entity that can be and is transferred into a new body, uploaded into a computer, or what have you, while the original host body is discarded. Philosophy of mind, particularly in the second half of the 20th century, has gotten a lot of mileage from thought experiments portraying one version or another of disembodied yet conscious experience. The fundamental assumption is that the mind is "abstractable" from the body; so long as the abstract (disembodied and atemporal) mind is stored somehow in some physical substrate, it lives on as the mind it always was.

What's wrong with this view? Quite a lot. The philosophical tides have begun to turn in the past decade or so as the naïveté of GOFAI (Good Old-Fashioned Artificial Intelligence[2]) has become overrun with a growing philosophical recognition of the importance of embodiment and embeddedness to understanding mind, thanks to projects in philosophy (e.g., Lakoff and Johnson [1999]), cognitive science (e.g., Clark [1997]), and robotics (e.g., Brooks [2000]). The biological facts that minds are not free-floating entities or abstractable software have slowly begun to leak into our contemporary discussions of mind. Philosophy will probably never completely let go of thought experiments depicting various versions of disembodied consciousness (after all, they are fun!) and there are those who will never let go of their Strong AI hopes and dreams. But there is nevertheless a growing consensus among mind scientists that it is time to face facts; specifically, mindedness is a biological phenomenon that happens in organisms with sufficiently complex nervous systems, which amounts to recognizing that mindedness requires embodiment and embeddedness in a world.

But just because mind-body dualism, in theory, is slowly falling out of favor with mind scientists, isn't it nevertheless possible that it might work in reality, given adequate technological means? Just because we do not like something philosophically or find it ethically questionable, this does not negate its conceivability. Perhaps transhumanists, via mind-uploading, will prove that dualists were on to something all along. All they have to do, really, is find a way to record the sum total of all cellular and other information occurring in one's body at a given time, and then find a way to store it for later download and use.

2.2. Dynamic vs. static views of mind

The very notion that we might be able to record all of the information in something at any given time, and store it for future use or reference, suggests that something is a static entity, which the mind is not.
If there is anything philosophers have learned about human mindedness from neuroscience over the last decade, it is the overwhelming complexity of the human brain, and the related fact that this complexity comes not just from sheer volume (approximately 100 billion neurons with 100+ trillion interconnections among them) but also from the incredibly dynamic nature of the human mind-brain. Recent neuroscientific research on memory suggests that memories are in fact not static records stored somewhere in our brain tissue (as held by the old filing-cabinet model); rather, memories are assembled by the brain each and every time that particular event or person or whatever is called to mind. This theory explains why memories fade and change over time; specifically, the memory content is continually altered over time by the experiences one has in the interim.

There are much more intuitive problems (independent of supporting research) that we face in mind-uploading. For instance, each of us knows that we are in essence the same person we were at age five; in other words, there is continuity throughout our lives that allows us to live by a narrative, a story of who we are. But we also admit that we change so much over time, especially, for example, from childhood to adolescence, and likewise from adolescence to mature adulthood. So the question arises: if we were to upload our selves for future use and reference, which self would it be? And would we feel confident that self would be the one we want to be in the future?

[2] John Haugeland's nickname for the Strong AI program, more confident in the 1970s-80s [Haugeland, 1997].

In addition to the mind changing over time, as described above, the brain is also a very dynamic phenomenon from one moment to the next. What we experience as ideas, beliefs, and emotions manifest in the brain as lightning-fast neural patterns that change subtly each time they are executed, sometimes utilizing this set of neurons, and sometimes that set (though the two sets will overlap considerably). So even if it were possible, as we explained early on in the paper, to take a snapshot of the sum total of one's cellular information at any given time in order to download it into some retrievable storage facility, would the biological facts about the brain get in the way? In other words, would we be destined to think about everything, which is to say, everything, in exactly the same way we have at some prior time thought about it before (i.e., at the time of recording)? Would duplicates then be unable to learn? Might they be capable of thinking, albeit forever stagnant thinking? And because the human ability to learn and adapt and grow is so central to who we are, would such duplicates even strike us as humanlike without such capabilities?

2.3. Multiple realizability

A great many people, including upload enthusiasts, feel they have a justified belief that the mind is the functioning of the brain. In fact, this is the fundamental assumption of the argument for uploading.
The enthusiasts say uploading will soon be possible because the mind is not an inherent facet of the biological brain substrate; rather, the mind equates to the brain's physical functionality, which can be reproduced in any substrate so long as the substrate can identically duplicate the physical functions of the brain. We do not have the technological capabilities yet, but the soon-approaching future when we should is known as the Singularity. Kurzweil writes that "this idea is consistent with the philosophical notion that we should not associate our fundamental identity with a set of particles, but rather with the pattern of matter and energy that we represent" [Kurzweil, 2005]. This means that in order for my duplicate to be (essentially) me, it would have to be an exact duplicate of my brain's physical functioning: every neuron, synapse, dendrite, etc., would have to be replicated in such a way that the artificial substrate worked in exactly the same way as my biological brain. But this is only possible because Kurzweil et al. assume that the mind is nothing above and beyond brain processes. This notion that our mind is just the way our brain physically functions is why the proposed methods of mind-uploading (nano-replacement, the Moravec transfer, freezing/scanning, X-ray, etc.) depend on precise duplication of the brain's physical functioning in a different substrate.
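The multiple-realizability assumption can be put concretely with a toy example (entirely our construction; the function names and the addition domain are arbitrary illustrations): two "substrates" that are input-output identical over a shared domain, which is all a purely functional criterion of identity can see, yet which differ in what lies beyond that domain.

```python
# Two 'substrates' realizing the same abstract function: here, addition over
# small integers. One computes; the other merely looks answers up in a table.
def computed_add(a, b):
    return a + b

TABLE = {(a, b): a + b for a in range(10) for b in range(10)}

def tabulated_add(a, b):
    return TABLE[(a, b)]

# Over their shared domain the two realizations are functionally identical,
# which is all a purely functional criterion of mind could ever detect.
assert all(computed_add(a, b) == tabulated_add(a, b)
           for a in range(10) for b in range(10))

# Yet the realizations are not interchangeable: only one generalizes.
computed_add(100, 100)      # fine
# tabulated_add(100, 100)   # raises KeyError: the table has no such entry
```

This coexistence of functional equivalence with divergent further capacities is, in miniature, the seed of the objection developed in the remainder of this section.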
But if we look at the literature about mind-uploading, we see something very curious:

"... it is important to note that once a computer achieves a human level of intelligence, it will necessarily soar past it. A key advantage of non-biological intelligence is that machines can easily share their knowledge... When one computer learns a skill or gains an insight, it can immediately share that wisdom with billions of other machines" [Kurzweil, 2001].

And Moravec writes about the abilities of biological humans vis-à-vis AI duplicates to function in completely artificial realities:

"We might then be tempted to replace some of our innermost mental processes with more cyberspace-appropriate programs purchased from the AIs, and so, bit by bit, transform ourselves into something much like them. Ultimately our thinking procedures could be totally liberated from any traces of our original body, indeed of any body. But the bodiless mind that results, wonderful though it may be in its clarity of thought and breadth of understanding, could in no sense be considered any longer human" [Moravec, 1992].

Kurzweil says:

"Ultimately software-based humans, albeit vastly extended beyond the severe limitations of humans as we know them today, will live out on the web, projecting bodies whenever they need or want them, including virtual bodies in diverse realms of virtual reality, holographically projected bodies, physical bodies comprised of nanobot swarms, and other forms of nanotechnology" [Kurzweil, 2001].

These quotations epitomize the transhumanist argument: once we reach the Singularity, artificial intelligence and mind uploads will greatly surpass our own thinking power, and biological humans will most likely continue the transhuman project of making ourselves more and more computerized to keep up with the duplicates and AI.
What this means, though, is that my upload will have different functionality: he can do things that I cannot, and potentially do things much better than I can. If Kurzweil et al. are correct, it seems that duplicates can permanently live in fully virtual worlds, and the duplicate will naturally be more adept than biological humans in such a virtual reality. Plus, if Kurzweil's theory of knowledge swapping is valid, then the duplicate will have nearly unlimited ability to be anything and to know anything. However, a representational theory of mind must hold for the duplicates, because unless information is stored as a file that can be accessed by others a priori, memory or skill swapping is impossible. In order for knowledge swapping to happen, two beings must have the ability to read the same file in the same way, which means either they must have exactly the same context or the knowledge files must be context-free and contain only the knowledge. If a duplicate were to get knowledge files that were attached to specific thoughts and memories of another being, then they would lack the context for them and be very confused (in the same way that inside jokes confuse outsiders), or would believe themselves to have had experiences that neither they, nor their biological counterpart, ever had. For this reason, mental file swapping entails that the files have only the specific memories or skills wanted and no other miscellaneous thoughts or experiences attached; but is this possibility even coherent? This is certainly not how we understand human memory. We do not learn things in a vacuum or divorce our thoughts from other thoughts.

As noted in Sec. 2.2, current neuroscience has shown that the brain does not store memories; rather, memories are re-fired and thus re-experienced every time. When memories are recalled, the experiencing of a memory is not like watching a movie in our heads; it will never be the same as the original experience, because of the constant development and change happening in the brain. This is also straightforwardly shown in our phenomenological experience: memories of sad things that become funny, old memories that are fuzzy, blank spots in memory, even why we are able to recall the ideas of a book we have read, but not the exact words in it. Rather than having packets of specific knowledge, we have integrated knowledge that uses previous knowledge to make sense of current and future information. Complex tasks like learning foreign languages and even simple tasks like understanding a movie involve integration of our previous knowledge and understanding.
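The dilemma just posed, identical context or context-free files, can be made concrete with a toy serializer. Everything below is our own invented illustration (the file format, the example entries, and the two "minds" are hypothetical): entries in a "knowledge file" that presuppose the sender's private experience read as confusion to any receiver lacking that context, while only the rare context-free entry transfers cleanly.

```python
# Toy 'knowledge file' transfer between two minds. Purely illustrative:
# the list-of-(fact, references) format is our assumption, not a proposal
# from the transhumanist literature.

def read_knowledge_file(knowledge_file, context):
    """Interpret each entry against the reader's private context; entries
    whose experiential references are missing come out as confusion."""
    interpreted = []
    for fact, refs in knowledge_file:
        if all(r in context for r in refs):
            interpreted.append(fact)
        else:
            interpreted.append(f"??? ({fact} presupposes experience the reader lacks)")
    return interpreted

# Alice's memory of learning German is entangled with her own experiences.
german_file = [
    ("der/die/das endings", {"semester in Vienna"}),
    ("word order in subclauses", {"semester in Vienna", "arguing with Karl"}),
    ("numbers 1-100", set()),   # the rare genuinely context-free entry
]

alice_context = {"semester in Vienna", "arguing with Karl"}
bob_context = {"childhood in Ohio"}

print(read_knowledge_file(german_file, alice_context))
print(read_knowledge_file(german_file, bob_context))
# Only the context-free entry survives the transfer intact for Bob.
```

The design choice mirrors the inside-joke point in the text: a transferred entry either carries its whole web of context (making the receiver falsely believe they had the sender's experiences) or carries none (leaving most entries unreadable).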
This is to say that learning is not something that simply happens in a vacuum; the things that I learn inform my future experience and allow me a greater level of understanding and freedom, but are also framed by my preexisting understanding. To learn German I have to become immersed in German language and practice, which connects German to all of my other thoughts; and if I stop using German, then I will forget parts of it. It is this cohesion of thought which allows us to think as we do, in an interconnected way.

The duplicate will be a being with all of my capabilities as well as the capabilities that are inherent facets of being fully computerized, like full integration into artificial reality, knowledge swapping, and faster and better hardware. This means that the upload will have different functionality from the biological human he/she mimics: the duplicate will have abilities that the biological human will never have access to, because those abilities arise from being embodied in a computerized substrate. However, if it is the case that there are innate differences between a biological human and his duplicate, then this directly contradicts the original uploading principle that the artificial brain would need to physically function exactly like a specific human's brain to be that specific human's mind. So, a duplicate will never be the same, simply because it is realized in a different substrate and is given different sets of limitations and freedoms by the repercussions of being realized in that particular substrate. And we can say that they are functionally similar to biological humans in the way that a plane is functionally similar to a car: both are means of transportation that are driven by an engine, have seats inside, are made of metal and have windows, and even have rubber wheels; but they have different repercussions, costs, and values based on their being what they are, and it just seems wrong to say that a plane is a car.

3. Concluding Thoughts

In this paper, we have taken a look at both the possibility of mind-uploading and its attendant hypothetical complications. Clearly, we are skeptical about mind-uploading's becoming a reality, for all of the reasons explored in the paper. However, we feel the deeper philosophical issue in need of critical attention is why this desire for digital immortality is so strong. Of course there are some who believe Kurzweil et al. are way behind the times, and that we are already living in a simulation [Barbalet, 2008]. Aren't we already uploaded in the sense of being immortalized, digitally, through our personal webpages, publications, public records, etc.? After all, your reading of this paper right now is a demonstration of fully virtual communication of ideas and identity.

Acknowledgments

Liz Swan would like to thank the Biota Live workgroup, in particular Tom Barbalet and Dick Gordon, for fruitful discussion in preparation of this manuscript, and for providing many helpful references.

References

Barbalet, T. S. [2008] "Welcome to the simulation," in Divine Action and Natural Selection: Science, Faith and Evolution, eds. Seckbach, J. and Gordon, R. (World Scientific, Singapore).
Barbalet, T. S., Trumbule, J. and VanNuys, D. [2007] "Shrink Rap Radio Live #3: Artificial Life and Artificial Intelligence," Shrink Rap Radio. Retrieved 21 July 2008, from http://
Brooks, R. [2000] "Intelligence without representation," in Mind Design II, ed. Haugeland, J. (MIT Press, Cambridge, MA), pp. 395-420.
Clark, A. [1997] Being There: Putting Brain, Body, and World Together Again (MIT Press, Cambridge, MA).
Haugeland, J. (ed.) [1997] "What is mind design?" in Mind Design II (MIT Press, Cambridge, MA).
Kurzweil, R. [2000] "Live forever: Uploading the human brain... closer than you think," Psychology Today. Available via DIALOG. http://www.psychologytoday.com/articles/200001/live-forever. Cited 7 January 2010.
Kurzweil, R. [2001] "The law of accelerating returns," in Psychology Today. Available via DIALOG. Cited 7 January 2010.
Kurzweil, R. [2005] The Singularity is Near: When Humans Transcend Biology (Penguin Books, New York).
Moravec, H. [1992] "Pigs in cyberspace," Hans Moravec's Carnegie Mellon home page. Available via DIALOG. ~hpm/project.archive/general.articles/1992/CyberPigs.html. Cited 9 January 2010.
Parfit, D. [1984] Reasons and Persons (Oxford University Press, New York).