Can being part machine make us more human?

If your body contains technology it might help you to hear better, it may reduce seizures or prevent heart attacks – but does it make you more human? What if the technology in your body was a computer? Does the integration of computer with the human body spell a threat to our humanity, or will it enable us to return to being more fully ourselves?

Click here for the video
http://www.metanomics.net/show/december_6_can_being_part_machine_make_us_more_human/


METANOMICS: CAN BEING PART MACHINE MAKE US MORE HUMAN?
DECEMBER 6, 2010

ANNOUNCER: Metanomics is owned and operated by Remedy and Dusan Writer's Metaverse.

ROBERT BLOOMFIELD: Hi. I'm Robert Bloomfield, professor at Cornell University's Johnson Graduate School of Management. Today we continue exploring Virtual Worlds in the larger sphere of social media, culture, enterprise and policy. Naturally, our discussion about Virtual Worlds takes place in a Virtual World. So join us. This is Metanomics.

ANNOUNCER: Metanomics is filmed today in front of a live audience at our studios in Second Life. We are pleased to broadcast weekly to our event partners and to welcome discussion. We use ChatBridge technology to allow viewers to comment during the show. Metanomics is sponsored by the Johnson Graduate School of Management at Cornell University. Welcome. This is Metanomics.

ROBERT BLOOMFIELD: Welcome, everyone, to Metanomics. We've got a fascinating show today. My working title for it is From Cyborg to Borg, for reasons that should soon become obvious. Our guest is Michael Chorost, a science writer, former visiting professor at Gallaudet University and author of Rebuilt: How Becoming Part Computer Made Me a Better Human. He is also the author of the upcoming book World Wide Mind: The Coming Integration of Humans and Machines. Mike, welcome to Metanomics.

MICHAEL CHOROST: Thank you, Rob, I'm glad to be here.

ROBERT BLOOMFIELD: I gave a very abbreviated biography for you just then because I felt like I couldn't do justice to your story. Tell us about how you came to write these books.

MICHAEL CHOROST: Sure. I was born deaf, in 1964, and I was not diagnosed as deaf until three and a half, which is when I got hearing aids. So even with hearing aids, I was a person who had to struggle to communicate. Hearing reshaped my world view. I wore hearing aids until I was 36, and then I abruptly lost all my remaining hearing, in one day, in July 2001. And on that day I had two thoughts. My first thought was, "Well, I'll get a cochlear implant." My second thought was, "I can probably write a book about the experience I'm about to have."

So Rebuilt became that book. I literally wrote my way through the process of losing my hearing and gaining it back again in an entirely new way. So Rebuilt is what I call a scientific memoir, very much a story of my life, but it's also the story of a life bound up with machines, specifically with computers. I was able to explore all sorts of fascinating topics in what happens when you put a computer inside a person, what that experience is like.

That book came out back in 2005, and it gave me the opportunity to have conversations with scientists and engineers all over the world about the future of technology, and that planted the seed of my next book, which is coming out in February: World Wide Mind. And World Wide Mind is, in some ways, a sequel to Rebuilt. It picks up the personal story that I told in Rebuilt and brings it to completion. And it also pushes forward the line of thought that I started in Rebuilt: What is the future of the connection between humans and machines? But I also ask the question: What is the future of the connection between humans and humans? How do we use technology to make us more human in an age where we are saturated by interactions with screens?
So I'll stop there, and I'll let you pick up some questions from that point on, but I think that's the basic story of how I came to write these books.

ROBERT BLOOMFIELD: That's great. I do want to spend most of our time on your new book World Wide Mind, but first let me cut to the chase on Rebuilt. How did becoming part computer make you a better human?

MICHAEL CHOROST: Well, that's a great question. First, I want to say that I wasn't saying that getting the cochlear implant, or rather not having a cochlear implant, makes you less human. So I just wanted to make that clear. The intention of the title was that for me going deaf was such a profound experience, and it caused me to reexamine a lot of my very basic assumptions: what did I want to do with my life, how did I want to interact with people. The process of learning how to hear all over again was kind of symbolic for me of a quest of learning how to connect with other people more richly and more deeply. In other words, I wanted not just to hear better, I also wanted to connect better.

The experience of learning to hear again in a way gave me permission to pursue that quest to learn how to communicate better. That's what the title means. It was about connecting better with people. And, in the book, I talk a lot about my dating life, which was the most painful part of my life up until then. I had always really struggled. My dating life abruptly got a lot better once I allowed myself to start that process of trying to get used to this new body that I lived in, to learn how to hear better, to rethink the way I communicate with people. And it was such a profound learning experience that that's what I decided to do in the book, to try to take the reader through that process with me.

ROBERT BLOOMFIELD: At the very beginning there, when you talked about cochlear implants, your first reaction seemed to be to make very clear that you weren't casting any aspersions on people who choose not to have them and so on. I know this is really a hot-button issue in the community. I came across a review of Rebuilt by Stephen Chaikind, professor of economics and finance at the Department of Business at Gallaudet University, where you were a visiting professor for one year. I'd just like to read briefly from his review and get your thoughts. He writes, "In reading this book, one can't help wonder if a world with vastly improving cochlear implant technology is providing adequate information about other choices available to deaf individuals, especially the options of learning ASL," that's American Sign Language, "and joining the deaf community. Chorost does not mention whether alternatives were discussed with him when he was investigating an implant. And one wonders whether they're discussed in detail with parents seeking implants for their children, an increasing percentage of whom are receiving implants at an early age and some of whom we meet in this book."

He goes on to say that you yourself have adapted well to it, and yet, and here I'm quoting his review, "There's still a note of sadness when he gives his impression of what the future holds." And here he quotes from your book Rebuilt, "My joy then was mixed with longing and sorrow. My own body is a battleground where competing visions of life are at war with each other. On the one hand, life dominated by the hyperrational structures of technology; on the other, by the warmth of human community.
The vision that I had increasingly grown to distrust was winning, for if a young, deaf child hears well enough with the cochlear implant to learn spoken language, why learn ASL?"

So could you elaborate a little bit on your views of the debate over cochlear implants?

MICHAEL CHOROST: Yes, certainly. That's an excellent question. And the first thing I want to say is that I've come a long way since I wrote that back in 2004. It's now six years later, and I've had the experience of living for a year at Gallaudet and being exposed much more richly to American Sign Language. To me the question is not English versus ASL, because I think that certainly people can and should use both. I come from this viewpoint now of linguistic diversity. Every language encodes a unique perspective on the world.

My concern is that, if ASL disappears, which I think is a possibility, what happens to that perspective on the world as encoded by a uniquely visual and uniquely physical kind of language? This concern is not just an issue with ASL. In fact, in the past, I don't know, the past 20 years, many hundreds of other languages have gone extinct. So we are rapidly converging on an almost monolingual kind of society, where just a few large languages dominate the planet. You have basically English, Mandarin and a few other languages. And niche languages all over the world are dying out. So this is not just a problem with ASL.

What I do see, as in the quote that you've read, is that ASL is uniquely about the body and is uniquely about communication, uniquely about relationships, and I think that's an incredibly important perspective that has to be retained. The question is how is that going to happen. You take a look at the example of other minority languages, like Welsh and Hebrew, which have been sustained and actually strengthened by determined efforts to sustain those languages, by providing educational supports, by having government supports for the language. Welsh and Hebrew are thriving languages nowadays.

One suggestion that I have made a number of times, as the book came out, is that the signing-deaf community needs to undertake an effort like that. It needs to figure out where the pressure points in our society are that respond to that desire of minority languages to continue, to figure out how to get funding, how to create educational institutions, how to figure out how to just supply something to the world in the way that Hebrew now does, for example, and I think that community has to figure that out. They can't just assume that the language will continue because it always has continued. It is, in fact, under assault.

It's not that anybody's trying to diminish it. It's just that every time parents decide to implant their kid, they are putting the kid on an English-only track, for the most part. So I do see a serious threat to the language. I think that the solution to that is ultimately political, to articulate a case for the language and to carry through that case.

ROBERT BLOOMFIELD: I guess everything I know of your activities is primarily science writing and education. Have you gotten politically involved in any of these issues?

MICHAEL CHOROST: Okay. You're getting so soft, Rob, I'm having some difficulty hearing you. I'm trying to click on your icon to increase your volume.

ROBERT BLOOMFIELD: Yes, that's exactly the way to go, and I can try to speak up as well.
Once you confirm that you can hear me well, I will ask that question again.

MICHAEL CHOROST: Okay. I can hear you, but you're quite soft, but it's okay. I can hear you.

ROBERT BLOOMFIELD: Okay. So the question was, I know of your activities in science writing and education. Are you also politically active on issues regarding the deaf?

MICHAEL CHOROST: Well, I would not say that I am. I call myself a sympathetic observer of the signing-deaf community rather than a participant in it. That was something that really came home to me during my year at Gallaudet because, firstly, I found sign language extremely difficult to learn, so it was a real struggle for me that year. I took ASL I. I took ASL II. I spent lots of time trying to communicate. It really came home to me that it would take many more than just one year of practice to become a member of that community. The linguistic barrier for me was very steep. So I remain a sympathetic outsider to the signing-deaf community.

As for the oral-deaf community, I'm not really a member of that community either. I wouldn't say that I'm politically active, but I certainly have given a number of speeches where I say things that are very much like what I'm saying now, that it's important to retain ASL. But I wouldn't say that I'm part of any kind of sustained political effort. No.

ROBERT BLOOMFIELD: Okay, thank you. That's very interesting. I'd like to move on to your new book World Wide Mind, and toward the beginning of that you talk about the push-pull dynamic of technological innovation, and you write, I'm quoting from the book, "This is the way evolution works. Increases in complexity and power are not accidental. They are automatic." What is the push-pull dynamic, and what is this automatic process that you're referring to?

MICHAEL CHOROST: Okay. This push-pull dynamic, let's give a very simple example, and that's computers themselves. Every time computers get faster, that creates pressure to design more capable software for them. Like Second Life, for example, which I can see is a very computation-intensive kind of environment, so that creates pressure to create even faster computers. Engineers then build even bigger applications. So you get this intense, mutually reinforcing drive to create better and better technologies. We see the dynamic all the time in the technological world, but it's also very much an evolutionary force, so it's the eternal battle between predators and prey. Every time predators develop better methods of predation, prey invent better defenses and better methods of evasion, which lead predators to become more capable, which lead prey to become more competent at evading.

And so again, you get this inexorable drive toward greater and greater complexity and greater sophistication. The argument, as you say, is that it's not accidental; it's built into the very nature of life itself. It's a fundamental, evolutionary aspect of life, and we see it both in nature and in technology, which is a really crucial parallel because it suggests that these are not distinct realms. We can't say there's nature on the one side and technology on the other; they are really part and parcel of the same evolutionary push toward greater and greater complexity.
ROBERT BLOOMFIELD: You write in World Wide Mind about the possibility of an electronic corpus callosum that would, I guess as a result of this push-pull dynamic and this inexorable improvement in technology, you say, "What if we built an electronic corpus callosum to bind us together? What if we eliminated the interface problem, the slow keyboards, the sore fingers, the tiny screen, the clumsiness of point and click, by directly linking the internet to the human brain? It would become seamlessly part of us, as natural and simple to use as our own hands." And so this is really pushing the notion of Rebuilt, the computer-human connection, into the realm of the human-human connection using computer technology as the medium.

And, to me, this really sounds a lot like Ray Kurzweil, who has famously written about the increasingly rapid development of technology and the singularity. Now I admit I've never quite understood what people mean when they talk about the singularity, other than that it's some awesomely cool world in which cyborg technology rules, brains are uploaded into computers, and people live forever. So my impression is, you're not quite the futurist that Kurzweil is, but you're definitely a futurist. So maybe to use academic speech, Mike, could you compare and contrast your views with Ray Kurzweil's?

MICHAEL CHOROST: Sure, I'd be happy to. You've picked the sentence in the book that articulates the thesis of the book: What if the internet was as natural and simple to use as our own hands? So let me just backtrack a little bit and relate this to my personal history. Like every other cochlear-implant user, I'm a guy who has several hundred thousand transistors in my head, and 32 electrodes. I have two cochlear implants, so there are 32 electrodes altogether.

I myself have direct experience of the physical integration of computers with the human body, so I know that it works. I also know that it's not perfect, that there are issues with it. So that is, I guess, what you might say authorizes me to write World Wide Mind, because I've got some of that bodily experience to begin with. I've read Kurzweil, and I've been very inspired by him in a number of ways, but I also disagree with him in a number of ways, so let me give you some of that compare and contrast.

Kurzweil, his whole idea of the singularity is basically that, at some point, computers are going to achieve self-awareness and then become able to redesign themselves. Because computers work so much faster than neurons, they'll be able to evolve far faster than humans and leave all of us poor, organic humans in the dust while they go off and form this elite, intensely intelligent society. So that's the idea of the singularity.

Now, to me, first of all, it's a fairly anti-human philosophy because it basically implies that computers are going to become our betters and that humans will no longer be important or necessary. I think there's a significant problem with that because it assumes that the body is not really important for interacting with the world and assumes that all you really need is silicon. But you look at how organic creatures evolved. They didn't do it just by thinking. They did it by interacting intensely with their surroundings. They did it by creating relationships with their fellow creatures. They evolved in an intense web and network of relationships.
Those relationships were mediated by sight, by vision, by hearing, by touch, by language, so the body is so crucial to the way we interact with each other, but it's also so crucial to the way species evolve. I think Kurzweil is making a mistake in essentially ignoring that process. He's basically saying that the body isn't important, that it can be dispensed with. So World Wide Mind tries to suggest an alternative kind of future where I say that, instead of leaving the body behind, we can actually integrate the body with the internet, so that when someone interacts with us, instead of just seeing something on a screen, kind of like when I'm seeing your avatar in Second Life or seeing text on a screen, I have some way of actually perceiving what you're saying or how you're acting in some way that feels to me as immediate and real as if I'm actually experiencing it with my body.

There's only one way to do that, as far as I can see, and that is to physically integrate the internet with the body in such a way that you can implant some kind of device into the human that directly alters brain activity when you do something and allows you to perceive my brain activity in your brain, in some way or another, when I interact with you. The advantage of that is, it will allow us to interact at a distance, just like we're interacting now in Second Life, but in a much more visceral way, in a much more bodily kind of way.

So to put this argument another way, there's all this angst being articulated now in books like Nicholas Carr's The Shallows, which I haven't read (I've read enough reviews about it to feel like I've read it), and in a whole bunch of other books, like Jaron Lanier's book You Are Not a Gadget, which all articulate this fear that the internet is alienating us from each other. I think there's a lot of truth in that. For example, whenever I pick up my iPhone and start dinking away at it, I'm not paying attention to other people. I'm not paying attention to the world around me. And it goes the other way around. When I communicate with someone face to face, I'm then isolated from the internet. I think it's interesting that a lot of people find it almost intolerable. You see people who are so intensely attached to their iPhones, to their BlackBerries, that they are riveted by these little things.

Right now these worlds are mutually exclusive, the world of computer communication and the human life world, the physical face-to-face interactions. I think, as a society, we are really struggling with that, and we're really suffering from all sorts of issues with that. This computer distraction that people talk about, it impacts relationships, it impacts people's experience of the world around them. People feel this hunger to look at their iPhones all the time. My wife will sometimes comment that I spend a lot of time looking at my iPhone.

So the suggestion that I try to articulate in World Wide Mind is, well, the solution to this is not to say we need to stop using our iPhones or our BlackBerries, because that's not going to happen. But rather, try to bring these two currently completely separate worlds together so that there is a seamless link between the electronic world and the physical world, and then that particular problem becomes less of an issue.

ROBERT BLOOMFIELD: You actually have a great example in your book of the benefits of that seamless link. Actually, let me pick up my pedestrian technology here.
I have three-dollar reading glasses from T.J. Maxx, and I'm going to put these on to read the hard copy of your book I have. So this is World Wide Mind, and you're describing meeting a professor at Gallaudet; you were taken to visit a math class, and you got her name from a handout. You say, "I unholstered my BlackBerry, held it under the desk at an angle, called up Google and stealthily typed her name into it. I scrolled down the results with a thumb wheel: Ph.D. in statistics from Stanford, postdoc at McGill on analyzing fMRI data, progressive hearing loss. And she was a science writer too. She had just done a story on hybrid cochlear implants." And then you go on, "Now I knew her background, her history, her interests. It gave her depth, dimension, a local habitation and a name. I looked at her, thinking, 'Wow! A deaf science writer just like me.'"

And then this is the part I really like, you write, "Nosy, invasive, perhaps just a little, but I was a visitor from the other side of the country. Knowing something about her would help me smooth my way into a conversation. Anyway," and I have to say this is my favorite line in the preface, "Anyway, I figured the day was coming when it would be considered rude not to Google someone upon meeting them. One could discover mutual interests so much more quickly that way." How far do you think we are from that day?

MICHAEL CHOROST: Well, I think we're getting a lot closer to it, because I think people do Google each other on their iPhones and BlackBerries all the time. But it was a really neat example of how the computer facilitated our meeting because, if I didn't know anything about her, class was over, she had to run off to another class, my guy was going to whisk me off somewhere else. It allowed us to start talking about what connected us much more quickly than would have been possible otherwise. It was a great gift to be able to do that at that time. So it was a really neat example of that kind of thing.

Now, I think when a lot of people think about, well, what could you do if you physically connect people with the internet, a lot of them would think, well, you could type back and forth in each other's minds. And I say, well, no, not really. You can do that just fine by talking or by using texting or whatever. So what I suggest is that the new forms of interaction would be fundamentally different from anything that we have now. This is how the history of technology has always worked.

If you think back, for example, to the invention of printing, printing didn't just allow people to do what scribes did, faster; it allowed them to create an entirely new method of communication, to create an author, to create something that speaks with an authority of voice. Ideas could then be rapidly broadcast through the entire population, which, all of a sudden, had an impetus to become literate where it didn't before.

You look at the telephone, which fundamentally changed the way people communicate. People use the telephone to talk in ways that they don't talk face to face. The same thing with email. We do things with email that we never did with letters or telegrams before. So the point I'm making is that technology doesn't just allow you to do existing kinds of communication better, it allows you to do new kinds of communication. So in the book World Wide Mind, I try to go beyond, well, what would physical brain-to-brain communication let you do. It wouldn't just let you text to each other.
And I say, "Well, what's missing today from electronic communication?" The big answer is feelings. You don't know what someone else is feeling because you can't see their face, you don't see their body language. You don't get that kind of information. So what I suggest is that it might be possible to get that kind of information directly from the person's brain, so that you know, from their brain state, whether they are happy, whether they're sad, whether they're attentive, whether they're not paying attention. You get that kind of information in a way that is visceral. That would allow not just an improvement on existing forms of communication, or really not even an improvement on existing forms of communication, but entirely new kinds of communication.

I've adopted a word; it's not my invention, it's the word telempathy, the idea being that you could use emerging technologies to feel what other people are feeling, not just read what they are typing or hear what they are saying. That would allow entirely new kinds of communication and new kinds of relationships to emerge. What I'm trying to do in that book is to imagine a new kind of communication. So that story I told about me and Regina [AUDIO GLITCH] at the beginning of the book was really just a launching point in saying, "Well, my BlackBerry allowed me to start a conversation with her in a way I really couldn't have before." I used it as a launching point to say, "How much further could this go?"

ROBERT BLOOMFIELD: As you set up in your book, as you explain to the reader what you're going to be talking about, you begin by comparing what you are trying to do to Jules Verne's From the Earth to the Moon, and you write, "Because it was grounded in real science, Verne's novel was conceptually plausible, in the same way recent advances in neuroscience and neurotechnology make it possible to write a conceptually plausible account of how brains could be read and linked [AUDIO GLITCH] experiment." And so I thought I'd ask you to walk us through that thought experiment, and I'd like to start with the first section of the book. As you describe it, you say, "It discusses existing technologies for detecting brain activity and the algorithms used to interpret the resulting data." So you're not just a futurist, you're grounding this in what's happening now. Tell us a little about that. What is available right now in technology?

MICHAEL CHOROST: Well, you've landed on a really important distinction that I make in that first chapter, that the book is a thought experiment. I've not made the claim in the book that mind-to-mind communication, or telempathy, or however you describe it, is plausible now. What I'm trying to do is show that it's become possible to talk about it in a concrete way that was not possible before. Really, if you look back on science fiction, people have been creating fantasies for decades, probably even hundreds of years, about this kind of mind-to-mind communication. You see it in Star Trek's Borg, for example, and, in Star Trek, it's just a pure fantasy. Star Trek has no idea how this kind of communication could actually happen, has no idea how you could actually make one human aware of what 10,000 other humans are doing at any particular instant. So it's just fantasy.
What I do in the book is, I say it is now becoming possible to take this out of the realm of fantasy and talk about it in terms of technologies that actually exist now, and which have just come into existence, actually, particularly optogenetics. I know you're planning to ask me about that later, so I won't get into that right now. But there are new neuroscience technologies that make it possible to observe brain activity with an unprecedented level of detail, and they make it thinkable to isolate an individual thought in a human mind. And they make it thinkable to evoke a similar thought in someone else's mind. Now, up until about five years ago, that was pure fantasy. There was no imaginable engineering path to making such a thing happen.

What I try to do in the book is, I say we are now beginning to be able to imagine that kind of engineering path, and so in Chapter Eight of the book, I lay out that path in some detail. I don't claim that I have answers to all the questions. I don't claim that I can tell you exactly how it would be done. The only claim that I make is that it's beginning to become possible to talk concretely about it. So that's why I brought up the Jules Verne reference.

Jules Verne wrote the book From the Earth to the Moon in 1865, which was just about one hundred years before it actually happened. And Verne imagined launching astronauts to the moon by firing a spaceship out of a giant cannon. Okay? This huge cannon called the Columbiad, which you would pack a ton of gunpowder in, stick the capsule in, light the cannon, and it would shoot the capsule out of the atmosphere toward the moon. Of course, if we actually did that, the astronauts would be pulp. Okay? So that wouldn't work, but Verne got the math right. He correctly showed how long it would take such a spaceship to get to the moon. He gave the actual figures for how fast such a spaceship would have to go to get out of the earth's gravitational field to reach the moon. He correctly calculated how long it would take.

So Verne didn't have all the technological tools at hand, but he got the basic idea. He said, "If you can impart enough velocity to a projectile, you can make it reach the moon, with people inside." So that's kind of what I'm trying to do in World Wide Mind. I'm trying to bring it into that realm of conceptual possibility, just as, in 1865, Jules Verne brought going to the moon into that realm of conceptual possibility. I found that really just enormously exciting, just the fact that I could begin to say we can talk about this using actual numbers, using actual technologies. It is really just such an exciting experience to be able to do that kind of thing.

ROBERT BLOOMFIELD: Let's jump into those technologies and start with, as you mentioned, the existing technologies for detecting brain activity and the algorithms used to interpret the resulting data. What are you referring to?

MICHAEL CHOROST: Okay. I start by talking about functional MRI. So MRI, Magnetic Resonance Imaging. To use it, you have to stick the person inside this enormous magnet that's the size of a walk-in closet. The machine costs a couple million dollars. Big, huge technology. It's in just about every hospital in the western world these days. You can use the MRI machine to watch which parts of the brain are sucking up oxygen in any particular instant. You can ask a person to engage in some mental activity, say, deciding whether to add or to subtract a particular set of numbers.
You can see that when the brain decides to add, you get a certain pattern of oxygen consumption in a part of the brain. And when the brain decides instead to subtract those numbers, you get a different pattern of oxygen consumption in a certain part of the brain.

So it is now actually possible to stick someone into a functional MRI machine and to be able to tell whether they've chosen to add or to subtract two numbers. You can do this without asking them what their decision is. You can do it without looking at their face. You can do it just by looking at imaging of what's going on inside their brain. So in a very real sense, we have mind-reading machines. Now I hasten to say that these technologies are very limited.

For one thing, you can only know what a brain is thinking if you have already seen that activity before. In other words, you only know whether a brain has decided to add or subtract if you have repeatedly asked it to make that thought over and over and figured out which pattern of activity corresponds to that mental intention. So you can't know if that person has decided to multiply, unless you've identified that particular pattern of oxygen consumption before.

I use that example to say today we actually do have mind-reading machines. They're just extremely crude. The technology at this point is very limited, but they do exist. If you have a couple million dollars and a lot of software and all sorts of technicians, you can tell whether someone is making that kind of decision. You can tell whether they're seeing George Clooney's face as opposed to Jennifer Aniston's face. You can actually take it further. You can, in some cases, know which face they're looking at out of a given set of faces. So we really do have the ability at this point to read minds, in some sense.

But I make it very clear, that's really limited. There's not a lot you can do with that. So I ask the question, "Well, are there emerging technologies that would let you go further?" And the answer seems to be, to some extent, yes. So I start with a technology called nanowires, which is a very experimental kind of technology. It's certainly not used in people. But it's possible, in principle, to thread very tiny wires into the brain's capillaries, going through the bloodstream, almost like you're doing an angiogram, except that you're going to the brain rather than the heart.

You can snake these tiny little wires so that each little wire ends up in a different part of the brain, and those wires will then relay information about what a few neurons in that part of the brain are doing, whether they are firing or not. So in principle, you could put several hundred or several thousand of these tiny wires into a brain, and that would give you much more finely grained information about what neurons in the brain are doing, and that gives you more predictive power if you have the right algorithms. Because functional MRI is a very crude technology. It only tells you what thousands or tens of thousands or millions of neurons are doing. It doesn't let you see what, say, ten or fifteen neurons are doing. It doesn't allow you to focus on the neural machinery that corresponds to one particular thought. It doesn't allow you to zoom in like that. So a technology like nanowires begins to allow us to think about zooming in on particular parts of specific neural circuits.
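(A minimal Python sketch, not from the book or the show, of the pattern-matching idea Chorost describes for functional MRI: a decoder is trained on repeated, labeled snapshots of brain activity and can then recognize only the mental states it has already seen. All data, labels and "regions" below are synthetic and hypothetical.)

import numpy as np

rng = np.random.default_rng(0)

def make_scan(label, n_voxels=500):
    # Simulate one fMRI snapshot: a noisy voxel-activation vector whose
    # underlying pattern depends on the mental task ("add" vs "subtract").
    base = np.zeros(n_voxels)
    if label == "add":
        base[:50] = 1.0     # pretend one region lights up for addition
    else:
        base[50:100] = 1.0  # and a different region for subtraction
    return base + rng.normal(scale=0.8, size=n_voxels)

# Training phase: repeatedly observe the brain while the task is already known,
# and store the average pattern (a template) for each mental state.
templates = {
    label: np.mean([make_scan(label) for _ in range(40)], axis=0)
    for label in ("add", "subtract")
}

def decode(scan):
    # Classify a new scan as whichever stored template it correlates with best.
    # A state never seen in training (e.g. "multiply") simply cannot be decoded.
    scores = {label: np.corrcoef(scan, template)[0, 1]
              for label, template in templates.items()}
    return max(scores, key=scores.get)

print(decode(make_scan("subtract")))  # usually prints: subtract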
I bring up nanowires just to say it is becoming thinkable to do this, but then I point out that nanowires, too, have major limitations. It's very difficult to figure out, just mechanically, how you would push several hundred or several thousand wires into a brain and get them all into different capillaries. So it's a scaffolding technology for me. I bring it up just to show the reader that you can begin to think about this kind of thing.

Then I move on to optogenetics, which is a much more advanced and much more exciting technology. I talk about it at length in Chapter Eight of the book, but I'll just give you the overview here. Basically, you can genetically modify neurons to make them responsive to light. In other words, you can put genes into them that will make them fire when you shine light of a certain wavelength at them. And you can put in other genes that will make them stop firing when you shine light of a different wavelength at them. Now, I'd been thinking, "Well, what do you mean, shine the light on the brain? It's inside this sealed container. How do you get light in there?"

Well, one way, which is actually done now, is that you just pipe it through the skull with fiber-optic cables, and this is being done in animal experiments now. So people are putting genes into the brains of mice and other animals, genetically modifying those neurons so they can control the firing or inhibit the firing of specific neurons with specific wavelengths of light. So this technology is allowing for a tremendous degree of control over what certain neurons do, and it can also be used to observe what certain neurons do. I go into some detail about the molecular biology of how to do that.

So I say, well, optogenetics allows us, in principle, to do these kinds of things. If you can get that much information out of the brain, it becomes possible to think about writing algorithms that will correlate brain activity with, I guess you might say, more sophisticated states of mind, rather than just choosing to add or subtract or just seeing one face versus another face, to allow us to get some insight into the conscious activity of a mind. Now, there are all sorts of issues that that statement opens up, but I'll just stop at that point, and let's go on to the next question and see where you want to take this conversation.

ROBERT BLOOMFIELD: What you've talked about is essentially accessing data from the brain, but I know in your book you move from there to what you call a communications protocol that allows that information to be transmitted eventually from brain to brain. I guess here now we are beyond existing technology, and you're sketching out a possible direction for this work. So what is it that you see that a communications protocol would do, and how would it have to be structured?

MICHAEL CHOROST: Okay. I'll be completely frank in saying that answering this question leads me to the weakest point in the book, and I'm frank about it in the book as well. I'm sure a lot of people out there in the audience know the very old cartoon by Sidney Harris, where a mathematician is writing an equation on a blackboard, and, in the middle of the equation is the step, "And then a miracle happens," and then the equation continues to the answer. Okay? So there is a point in the book where I basically do that, where I just say, "Okay. Well, at this point I think we have to assume some miracle happening, and then we can get to where we're going." I want to talk about what that miracle is.
We already know from functional MRI that we can know what a brain is doing if we have already seen that brain's activity before. But we also know that that's a very limited kind of information. In order to actually know what a brain is thinking, I think it's very difficult at this particular time to imagine an algorithm sophisticated enough to look at a brain's activity and know the conscious experience of that brain. I don't think there is any algorithm that exists that can do that.

So what I basically say is, you would have to develop some form of artificial intelligence, and this is a part of the book where I do come into closer alignment with Kurzweil. I say that, if at some point you could develop algorithms that have some sense of interiority, that are able to look at a brain and observe a person's behavior and start building up a set of correlations, say, where the algorithm learns to say, "When the brain does this, the person does that. When the brain does that, the person does this." And if it could build up a large enough database, eventually the algorithm learns to figure out that when this bunch of neurons fires, you're thinking about apples. Okay. And when that bunch of neurons fires, you're feeling unhappy. And when this bunch of neurons fires, you're thinking about your significant other.

This is the part of the book where I just say we can't do this now, but, if that kind of algorithm ever does become possible, then we would be able to build devices which would implement a communication protocol between brains. Let me talk about what the protocol would be. Think for a minute about the way the idea of an apple is represented in your neurons versus the way it's represented in my neurons. Now we know that, on a gross level, brains are roughly the same. You've got a hippocampus. I've got a hippocampus. You've got a medulla. I've got a medulla. You've got a forebrain. I've got a forebrain. So on a gross level, our brains are pretty much the same, but it's pretty clear that at the individual neuron level, the way your brain represents an apple is not going to be the same way my brain represents an apple. There is no mapping at that level between brains. There is no single set of neurons in your brain that corresponds to an identical set of neurons in my brain. The similarity between brains just doesn't go that far.

So the suggestion that I articulate in the book is that, if your onboard computer, or whatever you've got, knows when your brain is thinking of an apple, it can then just send one simple piece of information to my onboard computer: just the word apple. Then my computer knows which neurons in my brain, when fired, make me think about or experience an apple in some way, and can then fire those neurons to make me think, in some sense, of an apple. So that's what I mean by a communications protocol. This is the part where I just deliberately say, "Yeah, you've got to have algorithms that can do that kind of thing. If you don't, then this kind of thing will never be possible. But, if it is possible, if it ever becomes possible, this kind of protocol becomes thinkable."

Now, you might be asking yourselves, "Well, what good is that really? You don't really need to set up this elaborate technology to let me know that you're thinking about or seeing an apple. All you need to do is say to me the word apple." That's the point where I say, "Well, the point is not that you use the technology to do things that we do already, with our existing modes of communication."
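(To make the shape of that protocol concrete, here is a minimal Python sketch, purely illustrative and not from the book: each person's hypothetical onboard device keeps its own private mapping between concepts and that brain's neuron groups, and only the symbolic token, "apple", crosses the link, since neuron identities never match between brains.)

from dataclasses import dataclass, field

@dataclass
class OnboardDevice:
    # Stands in for the hypothetical implanted computer; every name here is illustrative.
    owner: str
    concept_to_neurons: dict = field(default_factory=dict)  # learned, private, per-brain mapping

    def detect_concept(self, firing_neurons):
        # Reverse lookup: which known concept does this brain's current firing pattern match?
        firing = set(firing_neurons)
        for concept, neurons in self.concept_to_neurons.items():
            if neurons <= firing:
                return concept
        return None

    def receive(self, concept):
        # Translate the incoming token into this brain's own neural code and "stimulate" it.
        neurons = self.concept_to_neurons.get(concept)
        if neurons is None:
            return f"{self.owner}: no local representation for '{concept}'"
        return f"{self.owner}: stimulating neurons {sorted(neurons)} -> experiences '{concept}'"

# Two brains represent "apple" with entirely different neuron sets; only the token is shared.
alice = OnboardDevice("alice", {"apple": {101, 102, 103}})
bob = OnboardDevice("bob", {"apple": {7, 8, 9, 10}})

token = alice.detect_concept({101, 102, 103, 400})  # Alice's device reads her firing pattern
print(bob.receive(token))                           # Bob's device maps "apple" to his own neurons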
You use it to allow people to do new things, to allow me to know, in some visceral sense, what you're actually seeing and what you're actually feeling, to, in some degree, read information off your visual cortex and make my visual cortex do something similar.

If you are seeing a particular person walking towards you, then it may be possible to evoke equivalent activity in my visual cortex to make me perceive something similar, assuming that we both know that person and that we both have a set of neurons in our heads that represents in some way the activity of something coming closer to us. And we do know that the visual cortex encodes light and shadow and motion and shape, in certain very distinctively identifiable ways, in various layers of the visual cortex. And those are, in principle, identifiable. And, with optogenetics, they are, in principle, sharable.

So the overall point that I drive at is not that this would be a perfect communications method by any means. But I say that language is not perfect either. When you tell me a story, it does not make me see the exact same scene that you see when you tell that story. So if you tell the story of seeing a dog in your neighborhood in Brooklyn, or wherever you live, I don't see the same neighborhood that you see. I don't see the same dog that you see, but I substitute my own memories of Brooklyn neighborhoods, of dogs, and I see an acceptable analog in my mind of what you're telling me about, of what I feel like I'm seeing when you are communicating to me. This would essentially allow a new kind of language, a way for you to tell me stories and to convey feelings and impressions to me in a new kind of way, which really is not possible now. So I try to imagine that kind of possibility in the book.

ROBERT BLOOMFIELD: You actually talk about specific examples of the kinds of collective communication, basically new activities people might engage in, as you said, beyond just doing what we already do just differently. These are words that I'm hoping you can explain to us: telempathy, which you mentioned earlier; synthetic perception; synthetic memory; and one that I thought sounded very tantalizing: dream brainstorming.

MICHAEL CHOROST: One of the hardest parts of the book was just trying to imagine what you could do with this kind of technology. It was like trying to imagine what you could do with email before email existed. Again, that's very hard to do because you just don't have the cultural, mental hardware to do it. But I try. I give it a shot. When you connect a computer to a network, you don't just connect it to one other computer, you connect it to all of the other computers in the network. You allow that computer to get aggregated information from many, many other computers in the network. So you allow for the collection of many bits of data to be brought together to bring useful information to one computer, in somewhat the same sense that I can create a Twitter feed that tells me what everybody is saying about one particular topic on a particular day, like the Twitter feed for Metanomics. Okay?

The thing I imagine is this: We already know that you can walk into a room and get a sense of the mood of that room. You can tell whether people are excited, are they bored, are they happy, are they busy, are they quiet, what's going on in that room. So we already have an advanced biological technology for gathering that kind of information.
Every teacher knows that classes have moods; every course is different because of the unique interactions of people with each other. But today we only get that information when we are face to face with a group of people. So the whole point of networking brains together would be to allow people to get that kind of information from people that they don't see, from people that they can't see, to be able to get a sense of the moods of a distributed group, an entire country, a town, a group of people.

I offer a number of scenarios in the book where I just make this concrete, like I imagine a military team working together to do a drug bust, basically, where people on the team need to know where everyone else is and what they are actually feeling, so that you know if anybody's been injured just by getting a sense of what kind of pain sensations their brain is getting. That is what I try to do, to imagine that kind of, to use these words, telempathy, synthetic perception, dream brainstorming.

So synthetic perception, I actually talked about it. I didn't use the phrase, but I talked about it. It is where you evoke activity in someone's visual cortex to give them a synthetic analog of what another brain is seeing. So by the same token, you can aggregate information together, to give people a synthesized piece of information about the moods or the actions of anything from a small work group to a nation of many millions of people, in some concrete kind of way.

ROBERT BLOOMFIELD: As we continue our progression toward more and more unusual outcomes here, the last one that you promise us in the book is an account of how a collective mind might emerge out of these interactions. You refer to it as possibly a hive mind. I'll just go back to my original reference at the top of the show, to Star Trek's Borg, this one hive mind all working together. One of the goals of writing a book like this and trying to ground it in, as you've called it, a thought experiment is to place some structure on these science fiction notions. I'm hoping that you can tell us what you mean by a hive mind and, in particular, how all of this analysis that you use to get us there draws a sharper picture of what it is that you're talking about.

MICHAEL CHOROST: Sure. Let me just check on something because I notice that it's 3:57.

ROBERT BLOOMFIELD: Yeah. We only have a few.

MICHAEL CHOROST: So for me to answer the question, this will take a lot longer than three minutes. What's our timeframe here?

ROBERT BLOOMFIELD: We have about three minutes, so you'll have to do what you can. I'm just giving you practice for when you get on broadcast TV, and they give you three minutes to talk about the whole book. So I'm being gentle.

MICHAEL CHOROST: Sure. Okay. Basically, there is an analogy that science fiction movies make all the time, where the internet becomes "intelligent." I'm making air quotes with my fingers as I say that so your avateer(?) can represent that. So like in Terminator, Skynet becomes intelligent and it decides to wipe out the inconvenient human species. So the question that I ask in the book is, "Could that happen?" And my answer is, if you're just looking at the internet, the answer is absolutely not, because the internet, in and of itself, doesn't have an evolutionary imperative to go in that direction. It's just a collection of resources. It's just a bunch of computers and a communications protocol. So I just say that's just a big mistake.
The internet itself has no reason to become intelligent.

But I do say that when you add human activity into the picture, when you connect humans and computers together, the entire picture changes, because human beings do have evolutionary needs, and they do respond to evolutionary imperatives. And, if you physically connect them, then all of a sudden you have an organism that is physically a single, continuous organism of the internet and human brains, one that will respond to evolutionary pressures and needs, to threats, that will mobilize and take advantage of opportunities. I try to suggest how a consciousness can emerge out of that.

Now, I don't go so far as to say that you would get a consciousness, because I think this is one of the big questions that nobody's really succeeded in answering. Kurzweil doesn't answer it. He never explains, so far as I can see, how you get a self-aware entity out of accelerating numbers of computers and increased numbers of [nodes?] on the internet. He keeps saying that it will happen. He never explains how it will happen. So the piece that I try to add to this discussion is, you can't just think about computers in and of themselves. You also need to think about the combination of humans and computers, in other words, the sociology of humans and the internet. And it's in looking at that that you begin to get some glimmers as to how a collective consciousness could arise which is more intelligent than any individual human.

ROBERT BLOOMFIELD: Okay. That's very interesting and brings us right up to the top of the hour. So I would like to thank you for joining us, Michael Chorost, who is the author, first, of Rebuilt: How Becoming Part Computer Made Me a Better Human and now the upcoming book World Wide Mind. I think I just said "part computer made me a better computer." So Part Computer Made Me a Better Human. And author of the upcoming book World Wide Mind: The Coming Integration of Humans and Machines. It's been a fascinating discussion, Mike, and I wish you the best of luck on your publication and sales.

MICHAEL CHOROST: Thank you so much. You've been a great host. I really appreciate it.

ROBERT BLOOMFIELD: Okay. We will be back next week for what is probably our season-closing episode, before the holidays, of Metanomics, metanomics.net. You can see us on Facebook and on the web and, of course, in Second Life. So thanks, everyone, and see you next week. Bye bye.

Document: cor1097.doc
Transcribed by: http://www.hiredhand.com
