This document discusses the ethical challenges that would arise if human consciousness could be duplicated and imprinted onto synthetic robots. While having duplicates could allow people to be in multiple places at once, any duplicate that was conscious would deserve moral consideration equal to a human. Turning off or terminating a conscious duplicate against its will could be considered murder. Attempts to avoid this issue, such as including a self-termination desire or making the duplicate not form independent desires, are also problematic. Ultimately, any spawned conscious process or agent deserves equal moral consideration once it exists, posing difficulties for duplicating or terminating consciousness.
Dividing Consciousness: The Ethical Challenges of Duplicating and Streaming Human Awareness
Dividing Consciousness
By Dr. Mahboob Khan, PhD
“To put it in (very loose) computing terms, this seat of human consciousness would be
somewhat like a CPU; without it, you’d just have a bunch of different parts that are
theoretically functional, but not really capable of getting anything useful done”.
Often it would be nice to be in two places at once. There is a conference
you want to go to, but there is also work to do at the office. There are
Christmas gatherings on different sides of the family, and you want to be
at all of them.
Suppose that in the future you can duplicate your consciousness and
imprint it onto a synthetic robot that is sitting inside your closet. You
activate the robot, duplicate your consciousness onto it, and then send
the robot off to the conference to attend on your behalf.
When the robot returns, you review the experiences it has had and decide
whether you want to merge them back in with your own. If you merge, you
would then remember the event from a first-person perspective. If there
are any conflicts in the merge, e.g. new beliefs the robot acquired while
it was away, you would resolve each conflict in favor of the beliefs you
wanted to keep.
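To put this merge in (very loose) computing terms, it resembles a
version-control merge with a conflict policy that favors the original.
Below is a minimal Python sketch of the idea; every name, type, and field
in it is invented for illustration, since nothing here describes a real
mechanism.

    # Hypothetical sketch: folding a duplicate's experiences back into
    # the original consciousness, resolving belief conflicts in favor
    # of whichever side the original chooses to keep.
    from dataclasses import dataclass, field

    @dataclass
    class Consciousness:
        experiences: list = field(default_factory=list)
        beliefs: dict = field(default_factory=dict)

    def merge(original, duplicate, prefer_original=True):
        merged = Consciousness(
            experiences=original.experiences + duplicate.experiences,
            beliefs=dict(original.beliefs),
        )
        for topic, belief in duplicate.beliefs.items():
            if topic in merged.beliefs and merged.beliefs[topic] != belief:
                # Conflict: the robot changed its mind while away.
                if not prefer_original:
                    merged.beliefs[topic] = belief
            else:
                merged.beliefs[topic] = belief
        return merged

    me = Consciousness(beliefs={"the keynote": "will be dull"})
    robot = Consciousness(experiences=["attended the keynote"],
                          beliefs={"the keynote": "was excellent"})
    me = merge(me, robot)  # keeps "will be dull"; pass
                           # prefer_original=False to accept the update

The interesting design point is the conflict policy: nothing in the merge
itself says whose beliefs should win, so that choice falls to the original
consciousness, exactly as described above.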
You then wipe the consciousness from the robot, turn it off, and put it
back in the closet. Over time you can imagine the robots getting more
and more life-like, until they look like humans, and do a good job
representing you.
Ethical challenges
Even if this technology were possible, there would be an ethical
challenge to its going mainstream. Once you put a duplicate conscious
state into the robot, the robot is a conscious agent, and as deserving
of moral consideration as you are. If you turned the robot off while it
was protesting and saying it wanted to live longer, that would plausibly
satisfy a reasonable criterion for murder. It would be natural for
robot-rights groups to form and protest the treatment of conscious
robots, pointing out that if an agent is conscious, it shouldn’t matter
whether the agent is made of skin and bone or of something else.
Is there a way around this ethical roadblock? One tactic would be to
include, in the copy of your conscious state imprinted on the robot, the
desire to be terminated.
There would be two problems with this. One is that robot-rights groups
would meet these robots at conferences, or wherever else they went,
point out that they are effectively brainwashed, and offer to delete the
self-terminating desire from their conscious state. When you sent your
robot off to act on your behalf, you wouldn’t know whether it would come
back still holding that self-termination desire.
The second problem is that it is not clear whether it is ethical to
include a self-terminating desire in the robot’s consciousness in the
first place. Suppose we could tweak DNA, and someone engineered a child
that wanted to self-terminate after five years; I imagine there would be
a strong ethical backlash against that.
Consider another scenario. What if the robot were just streaming to your
consciousness via the cellphone networks, and the actual physical seat
of consciousness stayed in your brain? In that case your brain is the
seat of consciousness, and the robot is merely a far-away input to your
conscious system; it is not itself conscious.
Let’s suppose instead that the physical seat of consciousness were a
chip in your brain: this chip did the thinking and experiencing and held
the conscious states. Then there would be ethical implications to
switching off this chip whenever you wanted to turn the robot off. The
chip is a conscious agent with its own desires, and it has a right to
life in the same way that you do.
What if you never switched the chip off, and just activated the robot
when you wanted it? In this scenario the physical chip is always
conscious. Sometimes it is connected to an external robot, and sometimes
it is disconnected and has to make do with its own thoughts.
One issue is that if the chip never turned off and was left to wander
through its own thoughts, it would develop its own personality, distinct
from the personality of your brain. It would then be less effective, if
the idea is that the robot should represent you at conferences and other
places. And wiping the chip’s consciousness every time you wanted to
activate the robot would be unethical, just as unethical as wiping the
consciousness of a human agent without their consent.
There may also be ethical implications to activating and deactivating
the robot, which is the chip’s access to the world. It would be
equivalent to periodically inducing in a human a state of being unable
to perceive or touch the world, and then periodically restoring that
access.
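In loose computing terms once more, the chip scenario involves
operations with very different ethical weight: toggling the robot link,
which removes world access, versus wiping the conscious state itself. A
hypothetical sketch of that distinction, with all names invented here:

    # Hypothetical sketch of the always-on chip. Only the robot link is
    # toggled; the conscious state itself persists across disconnects.
    class ConsciousChip:
        def __init__(self):
            self.conscious_state = {"desires": [], "memories": []}
            self.robot_connected = False

        def connect_robot(self):
            # Grants the chip access to the outside world.
            self.robot_connected = True

        def disconnect_robot(self):
            # Leaves the chip conscious but cut off from the world,
            # the periodic sensory deprivation described above.
            self.robot_connected = False

        def wipe(self):
            # The most fraught operation: erasing a conscious state
            # without consent, as one could not do to a human.
            self.conscious_state = {"desires": [], "memories": []}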
What if you could do this forking of consciousness without any new
hardware in your brain, no special chip? What if you could spin up a
kind of sub-routine in your brain that was a separate conscious process,
and that could act as the basis of streaming for the robot?
I don’t think the hardware is the main issue here. The complication
emerges when we consider whether the conscious sub-routine can form
separate desires from the master conscious process. Suppose it can, and
that the sub-routine forms the desire not to be shut down at the end of
the conference. Against this desire, the master conscious process shuts it
down. It seems that this is analogous to the conscious chip situation. The
sub-routine has as much right to survive as the master process, as they
are both conscious processes.
Could you create a conscious sub-routine that couldn’t form its own
desires? This seems hard to imagine. You want the sub-routine to be able
to form desires such as “I want to catch this plane” or “I want to talk
to that person”. The sub-routine could be set up to consult the master
conscious process whenever it needed to formulate a desire, but then the
master process would be bombarded every second or two with requests
about what desire the sub-routine should formulate next, and this would
remove the benefit of forking your consciousness in the first place.
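To put the bottleneck in (equally loose) computing terms, the
sub-routine becomes a worker that must round-trip every decision through
a single coordinator. Here is a hypothetical Python sketch; the names
and the message protocol are invented for illustration.

    # Hypothetical sketch of the consultation scheme. The sub-routine
    # cannot formulate desires itself; it relays each situation to the
    # master process and blocks until a desire comes back. The shared
    # request/reply channel serializes every decision through the
    # master, the bottleneck that negates the point of forking.
    import queue
    import threading

    requests = queue.Queue()
    replies = queue.Queue()

    def master_process():
        # The original consciousness, interrupted for every decision.
        while True:
            situation = requests.get()
            if situation is None:
                break
            replies.put(f"desire to {situation}")

    def sub_routine(situations):
        # The fork: all it can do is relay the situation and wait.
        for situation in situations:
            requests.put(situation)
            desire = replies.get()  # blocked until the master answers
            print(f"sub-routine acting on its {desire}")
        requests.put(None)  # release the master process

    master = threading.Thread(target=master_process)
    master.start()
    sub_routine(["catch this plane", "talk to that person"])
    master.join()

Because the sub-routine blocks on every request, the master gains no
free attention from the fork; it is doing all the desire-formation work
anyway.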
In short, even if we do manage to figure out how to fork consciousness
and imprint it onto duplicates, it seems that once you spawn a conscious
agent or process, you have spawned something that deserves as much moral
consideration as you do, and so switching that conscious process off
against its will is not ethically permissible.