Transhumanism and brain
According to the website Futurism, machines will be endowed with introspective awareness by 2029 and, by 2099, a robot will be able to dream, innovate and invent (https://futurism.com/images/the-dawn-of-the-singularity/). If these technological exploits come true, human beings will have serious reasons to worry about their safety, their dignity, and everything that makes them so specific, so different from the rest of creation. Indeed, for the first time in its history, humankind would meet creatures able to compete with it in every facet of intelligence. Are these fears well-founded? Are all the artificial powers described in this picture achievable? What belongs to science fiction, and what to reality? Our short presentation will try to answer these questions by showing to what extent an "after-singularity" period is possible.
First, we will present the current technologies that serve as an interface between the human brain and artificial intelligence. Then, we will examine which brain-computer interface technologies seem realizable tomorrow. Finally, we will suggest a more philosophical approach to the subject of transhumanism and the brain.
I/ Current technologies ​(Ambroise and Julien)
The first links between the nervous system and computing appeared with new treatments following spinal cord injuries. One example involves a patient with cervical damage who had lost the use of his hand. A microelectrode array was implanted in his cortex after an MRI had identified the area that became active when he tried to move his hand. The electrical impulses picked up by the electrodes were transmitted to a PC, where the signal was processed and sent directly to the forearm muscles. To interpret the brain messages, a correlation is drawn between neural activity and the desired output force of the hand. Software was used to clean the data while keeping over 80% of the original signal. In this way, the signal could be tuned to obtain the appropriate response from the hand. This shows how current BCI technology is already able to interpret some brain signals in a meaningful way.
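The correlation step described above can be sketched in miniature. This is only an illustration, not the study's actual decoder: real systems record many channels and use far richer models, and the firing rates and forces below are invented for the example. The idea is simply to fit a gain relating neural activity to intended force.

```python
# Hypothetical sketch of the "correlation" step: fit a single gain w so that
# predicted force = w * firing_rate, by least squares. The values below are
# invented for illustration; real decoders use many channels and richer models.
rates  = [12.0, 25.0, 40.0, 18.0, 33.0, 50.0]   # spikes/s on one channel
forces = [0.5,  1.1,  1.9,  0.8,  1.5,  2.4]    # measured hand force (N)

# Closed-form least squares through the origin: w = sum(x*y) / sum(x*x).
w = sum(r * f for r, f in zip(rates, forces)) / sum(r * r for r in rates)

def decode_force(firing_rate):
    """Predict intended force (in newtons) from a new firing rate."""
    return w * firing_rate

print(round(decode_force(30.0), 2))   # ~1.4 N for an in-range firing rate
```

Once the gain is fitted, every new burst of neural activity maps directly onto a force command for the forearm muscles, which is the essence of the pipeline the study describes.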
More generally, with simpler methods, a lost leg or arm can be replaced by a bionic limb. These limbs use electrical actuators as replacements for actual muscles, and such prostheses are now widely available commercially. Electrodes are placed on the residual limb, above the amputation, where they pick up the electrical signals from the remaining muscles. The data received can then be processed by software and sent to the bionic arm or leg.
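A minimal sketch of this kind of surface-electrode control, under simplifying assumptions: the raw muscle signal is rectified and smoothed into an "envelope", and a threshold on that envelope decides whether the hand closes or opens. The signal values and threshold are invented; commercial controllers are considerably more sophisticated.

```python
# Hypothetical sketch of the simplest surface-electrode control scheme:
# rectify the raw muscle signal, smooth it with a moving average, then
# compare the resulting envelope with a threshold to command the hand.
def moving_average(samples, window=5):
    out = []
    for i in range(len(samples)):
        chunk = samples[max(0, i - window + 1): i + 1]
        out.append(sum(chunk) / len(chunk))
    return out

def hand_commands(raw_signal, threshold=0.4):
    envelope = moving_average([abs(s) for s in raw_signal])
    return ["close" if level > threshold else "open" for level in envelope]

# A burst of muscle activity in the middle of the recording
signal = [0.05, -0.02, 0.1, 0.9, -0.8, 1.0, 0.95, -0.9, 0.1, 0.02, -0.03]
print(hand_commands(signal))   # "close" appears during the burst
```

Even this crude scheme shows why simple signal analysis caps out quickly: one threshold gives one binary command, nowhere near enough for individual finger movements.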
However, this methodology has not always proved to interpret the intended movement accurately. For example, the complex gestures made by a hand and its fingers may be hard to replicate with simple signal analysis. This is why machine learning has established itself as a viable solution to this issue. In this perspective, a study whose results were published at the beginning of last year shows how vision-based assistance technology can dramatically improve the handling of a bionic hand.
The research aimed at training a commercially available hand prosthesis to grasp and pinch objects in different ways. A camera mounted on the hand allows the system to better estimate the distance, size and shape of the objects to be grasped. After the system had been trained on a large set of trials and data samples, and was then used by an amputee, the bionic hand reached a very high rate of successful grasps, even when the amount of visual information available was significantly reduced. This technology illustrates how BCIs can be further enhanced with machine learning.
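The core mapping from visual features to a grasp command can be sketched as follows. This is not the study's method: published systems train deep networks on large image sets, whereas this toy uses a nearest-prototype rule with invented object dimensions, purely to show the shape of the decision the camera enables.

```python
# Hypothetical sketch of vision-assisted grasp selection: given rough object
# dimensions estimated by the prosthesis camera (width, height in cm), pick
# the grasp whose prototype object is nearest. Prototypes are invented.
GRASP_PROTOTYPES = {
    "pinch":       (1.0, 3.0),    # e.g. a pen
    "tripod":      (4.0, 4.0),    # e.g. a ball
    "power_grasp": (7.0, 15.0),   # e.g. a bottle
}

def choose_grasp(width_cm, height_cm):
    """Return the grasp type whose prototype is closest to the estimate."""
    def dist2(proto):
        w, h = proto
        return (w - width_cm) ** 2 + (h - height_cm) ** 2
    return min(GRASP_PROTOTYPES, key=lambda g: dist2(GRASP_PROTOTYPES[g]))

print(choose_grasp(6.5, 14.0))   # a bottle-like object selects power_grasp
```

The benefit the study reports follows from this structure: once the camera pre-selects the grasp, the user's own signal only needs to trigger it, so degraded visual input still leaves a small, robust decision to make.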
For several years now, research groups have been working with human test subjects to understand how memories form and to establish a computer link to the hippocampus. One such trial has been conducted by a medical team in California looking for a way to cure epileptic seizures. The team studied the brain to understand where the seizures originated, and this work can in turn be used to understand how brain signals are interpreted. Such an understanding would help to model an interaction between the human brain and an AI. Kernel, the company behind this research, aims at designing a neuroprosthesis using less invasive technology, such as genetically modified neurons or small injected devices. They try to encode images and memories captured by the sensors in a computer and then send them back, to check whether the algorithm they developed works. Next they will try to do the same with memories spanning a longer period of time. Understanding how the brain forms memories, and being able to interpret those memories digitally, is a key element of transhumanist ambitions.
A major obstacle to a better understanding of the brain is the difficulty of capturing electrical signals within it. Electrodes are inserted in the brains of patients or animals to understand how neurons interact with each other. However, a new way of observing neural networks inside rat brains has been designed; this breakthrough lays the foundation for extended, less invasive live study of brains. Earlier implants using flexible polymer probes would indeed most often lead to gliosis, an accumulation of cells caused by an immune reaction.

New syringe-injected implants with ultra-flexible open-mesh electronics trigger only a minimal immune response; moreover, the neural network is not damaged and neurons remain well distributed along the mesh. This offers better durability, a larger and more accurate contact surface, and an overall better tool for studying neural networks. With this technique, a mesh injected into the brain makes it possible to observe neural networks at higher resolution, even at the deepest points of the brain. This new technology opens new perspectives for less intrusive brain studies and helps in designing long-lasting, efficient BCIs.
II/ What technologies seem attainable in the future (Valentin and Julien)
1. Mini-antennae
(https://futurism.com/new-mini-antennae-could-pave-the-way-for-brain-computer-interfaces/)
A team of researchers from Northeastern University (Boston) has succeeded in developing mini-antennae one hundred times smaller than current antennae. This prototype gives ideas to scientists who want to develop implants that can read neural activity and send it over the mobile network. In other words, as the article says, this invention paves the way for brain-computer interfaces that need a wireless connection to work correctly. For example, we can imagine that neurological disorders will be signaled in real time by BCI machines through this technology.
2. Telepathy
Researchers at the University of Washington have experimented with prototypes that could enable us to communicate instantly through brain-to-brain "telepathy" via the Internet (http://www.kurzweilai.net/first-brain-to-brain-telepathy-communication-via-the-internet). Their experiment took the form of a question-and-answer game. An "inquirer" sent a yes-or-no question (like "Is it a cat?") to a "respondent". The respondent, who saw the question on a screen, had to focus on one of two lights placed in front of him. One light meant "yes", the other "no". By focusing his eyes and his mind on one light, the respondent, who wore an electroencephalogram headset, sent a signal corresponding to his answer through the Internet. This signal then activated a magnetic coil placed behind the inquirer's head. By stimulating the inquirer's cortex, the coil made him see a light through the phosphene phenomenon, which lets us perceive light even when no light physically exists: the stimulation leads the brain to build a mental representation of a light signal. Since this has already been achieved in a laboratory, we can imagine that telepathic communication is attainable in the near future (http://www.telegraph.co.uk/news/worldnews/northamerica/usa/11077094/Brain-to-brain-telepathic-communication-achieved-for-first-time.html).
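The respondent's side of such a setup can be sketched as follows. This is a simplified illustration, not the Washington team's actual pipeline: the sample rate, flicker frequencies and signal below are all invented, and real EEG decoding needs far more filtering. The principle is that staring at a light flickering at a given rate strengthens the EEG component at that frequency, so comparing the signal power at the two flicker frequencies suffices to read off "yes" or "no".

```python
import math

# Hypothetical sketch: the two lights flicker at different rates, and the
# power of the EEG at each flicker frequency (one DFT bin each) tells us
# which light the respondent is focusing on. All parameters are assumed.
SAMPLE_RATE = 250.0              # EEG samples per second (assumed)
FREQ_YES, FREQ_NO = 12.0, 17.0   # flicker frequencies in Hz (assumed)

def power_at(signal, freq):
    """Squared magnitude of the DFT of `signal` at one frequency."""
    re = sum(s * math.cos(2 * math.pi * freq * i / SAMPLE_RATE)
             for i, s in enumerate(signal))
    im = sum(s * math.sin(2 * math.pi * freq * i / SAMPLE_RATE)
             for i, s in enumerate(signal))
    return re * re + im * im

def decode_answer(eeg):
    return "yes" if power_at(eeg, FREQ_YES) > power_at(eeg, FREQ_NO) else "no"

# Simulated one-second EEG dominated by the 12 Hz ("yes") component.
eeg = [math.sin(2 * math.pi * 12.0 * i / SAMPLE_RATE)
       + 0.3 * math.sin(2 * math.pi * 17.0 * i / SAMPLE_RATE)
       for i in range(250)]
print(decode_answer(eeg))   # the 12 Hz component dominates, so "yes"
```

The decoded bit is then what gets sent over the Internet to drive the magnetic coil on the inquirer's side.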
3. Neuroreality
(https://futurism.com/neuroreality-the-new-reality-is-coming-and-its-a-brain-computer-interface/)
Neuroreality is a concept that would put an end to the era of virtual reality. During the virtual reality era, people could still distinguish the virtual world from reality; neuroreality aims to make us unable to distinguish the two. It would be enabled by BCI technologies that create virtual worlds for us, in which we could control elements by thought alone. There are two sorts of neuroreality technologies: invasive and non-invasive. Invasive neuroreality BCIs require implant surgery in the brain, which is not the case for non-invasive ones.
One example is EyeMynd's project, run by physicist Dan Cook, who says: "When you're in the virtual world—whether you're playing a game or something else—you don't want to have to keep thinking about what you're doing with your hands. It's much better to have pure brainwave control. It will be a much more satisfying experience and will allow for a much greater level of immersion. You can forget about your live human body, and just focus on what's going on in front of you." These virtual worlds are expected to simulate how we experience our own dreams: "In a dream, you can run around without moving your physical legs. That dreaming and imagining creates brain signals that we can read," Dan Cook says. EyeMynd's system is a non-invasive BCI technology, because it doesn't require brain implants: only a headset with an electroencephalograph is needed (video: https://www.youtube.com/watch?time_continue=101&v=7bROnoryZ_k).
III/ A philosophical approach (Aurélien)
1. The notion of transhumanism
Transhumanism is a concept covering a wide range of technological achievements, from the mere grafting of a limb, which is technically possible today, to a complete merger between man and robot, between biology and electronics. Elon Musk says that because we use smartphones and because electronics are increasingly used in medicine, we are already cyborgs, and to some extent he is right. But can we state that man is just a machine and that the only limit separating men from robots is computational power, the brain remaining the most elaborate kind of 'intelligence' that exists today?
2. A language issue
We commonly say that a machine is able to 'think', to 'learn' and to 'remember'. But is it really true, or is it just an analogy? Given what is known today about the way our brain works, what are the limits of this analogy? One way to see the limits is to contrast the operations of a processor (fetch, decode, execute) with those of a brain.
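To make the processor side of the analogy concrete, here is a minimal fetch-decode-execute loop. The toy machine (one accumulator, three invented instructions) is ours, not any real instruction set; the point is that a processor mechanically repeats the same three steps on every cycle, a discipline the brain does not appear to share.

```python
# A toy machine with one accumulator, to illustrate the fetch-decode-execute
# cycle: on every iteration the processor fetches the next instruction,
# decodes its opcode, and executes the corresponding operation.
def run(program):
    acc, pc = 0, 0
    while pc < len(program):
        opcode, operand = program[pc]   # fetch
        pc += 1
        if opcode == "LOAD":            # decode + execute
            acc = operand
        elif opcode == "ADD":
            acc += operand
        elif opcode == "MUL":
            acc *= operand
        else:
            raise ValueError(f"unknown opcode {opcode}")
    return acc

print(run([("LOAD", 2), ("ADD", 3), ("MUL", 4)]))   # (2 + 3) * 4 = 20
```

Everything the machine "does" is exhausted by this loop, whereas nothing in neuroscience suggests the brain runs a single central loop over a stored instruction list; that asymmetry is one concrete limit of the analogy.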
Is not the term AI itself an analogy? The expression "artificial intelligence" was coined in the United States, at the Dartmouth workshop of 1956, when computer science was at its very beginning. Those who first used it certainly had a philosophical conception of the world in mind, one that has influenced the way AI has evolved and is used nowadays.
3. Essential differences between nature and technique
See the attached notes from a philosophy lecture on Aristotle and St Thomas Aquinas. These need to be applied to our problem: the way our brain works compared with robots and computers. The key thing to understand is that a robot is always moved by the action of man, never by itself (and note that we say "itself", not "himself").
