Methodological Issues in the Ethics of Human-Robot Interactions
Rafael Capurro (Stuttgart Media University), Michael Nagenborg (University of Karlsruhe), Jutta Weber (Universität Duisburg-Essen), Christoph Pingel (Center for Art and Media)
For example: We should avoid abstract discussions of the agency or intentionality of agents and robots and reflect on whether such discussions help us work out the contested questions surrounding the future development and use of agents and robots.
The massive use of robots will probably change society in a way similar to how cars and airplanes (and, in former times, ships etc.) did, and it has already changed society: think of industrial robots in the workplace, which are an important factor with regard to growing unemployment in Europe.
This broad view of societal changes, and consequently of our view(s) of ourselves, including our (moral) values, is fundamental. There may be a re-definition of what it means to be human. For instance, the EU Charter of Human Rights is human-centered. The massive use of robots may challenge this anthropocentric perspective.
Why do we want to live with robots? What do we live with robots for? There are different levels of reflection when answering these questions, starting with the trivial one that robots can be very useful and indeed indispensable, for instance in today’s industrial production or when dealing with situations in which the dangers for humans are great.
2. Epistemological, ontological, and psychoanalytic implications
The relation between humans and robots can be conceived as an envy relation, in which humans either envy robots for what they are or envy other humans for having robots that they themselves do not have. In the first case, envy can be positive, when the robot is considered a model to be imitated, or negative, when the relationship degenerates into rivalry.
This last possibility is exemplified in many science fiction movies and novels in which robots and humans are supposed to compete. Robots are then often represented as emotion-free androids, lacking moral sense and therefore of less worth than humans. Counter-examples are, for instance, 2001: A Space Odyssey (Stanley Kubrick 1968) or Stanislaw Lem’s novel “Golem XIV” (Lem 1981).
The “mimetic conflict” (René Girard) arises not only from imitating what a robot can do but, more basically, from imitating what ‘it’ is supposed to desire. But a robot’s desires are paradoxically our own, since we are its creators. The positive and negative views of robots shine back on human self-understanding, leading to the idea of enhancing human capabilities, for instance by implanting artificial devices in the human body.
When robots are used by humans for different tasks, a situation arises in which the “mimetic desire” is articulated either as a question of justice (a future robot divide) or as a new kind of envy. The object of envy is then not the robot itself but the other human using/having it.
The foundational ethical dilemma with regard to robots is thus not just the question of their good or bad use but the question of our relation to our own desire, with all its creative and destructive mimetic dynamism. This includes not only strategies such as envy, rivalry and modelling but also the trivial use of robots as tools, which eventually turns out to be a question of social justice.
In a mythical sense, robots are experienced by our secularized and technological society as a scapegoat for what is conceived as the humanness of humanity, whose highest and most global expression is the Universal Declaration of Human Rights. From this mythical perspective, robots are both the bad and the good conscience of ourselves.
An ethical reflection on robots must be aware of these pitfalls, particularly when considering the dangers of mimetic desire with regard to human dignity, autonomy or data protection. It must reflect on the double-bind relationship between humans and robots.
How do we live in a technological environment? What is the impact of robots on society? How do we (as users) handle robots? What methods and means are used today to model the interface between man and machine?
In AI and robotics we often find a sloppy usage of language which supports the anthropomorphising of agents. This language often implies the intentionality and autonomy of agents, for example when researchers speak of the learning, experience, emotions, or decision making (and so on) of agents. How are we, in science and in our social practices, going to handle this problem?
Recent research on social robots focusses on the creation of interactive systems that are able to recognise others, interpret gestures and verbal expressions, recognise and express emotions, and engage in social learning. A central question concerning social robotics is how "building such technologies shapes our self-understanding, and how these technologies impact society" (Breazeal 2002, 5).
Some main questions are: What concepts of sociality are translated into action by social robotics? How is social behaviour conceptualised, shaped, or instantiated in software implementation processes? And what kind of social behaviours do we want to shape and implement into artefacts?
There is a tendency to develop robots that model some aspects of human behavior rather than to develop an android (Arnall 2003). Relative autonomy is a goal for physical robots as well as for softbots. What is the meaning of the concept of autonomy in robotics? What are the affinities and differences between the robotic discourse and the philosophical discourse?
Obviously, we can observe a strong bidirectional travel of the concept of autonomy (as well as those of sociality, emotion and intelligence) between very diverse discourses and disciplines. How do these transfers between disciplines, and especially the strong impact of robotics, change the traditional meanings of concepts like autonomy, sociality, emotion and intelligence?
But the importance of robot-human integration goes beyond the level of the single individual; it also raises the question of what a society or community in which bots are integrated could and should look like.
Interaction with bots may build new forms of communities. Close attention should be paid to which groups of individuals are likely to interact with certain kinds of bots in certain contexts, while at the same time keeping in view the impact of these specific interactions on the communities and societies in which they take place.
All three forms of human-bot integration may include aspects of both the violation and the fostering of human rights and dignity. It cannot even be ruled out that one and the same technology may have both positive and negative effects. Surveillance infrastructures may be considered harmful with regard to privacy, but they may also enable us to create new kinds of communities.
Such enhancements might be considered a benefit to an individual, but they also raise new questions, such as whether only an elite might be able to transform themselves into cyborgs or, in a worst-case scenario, whether the unemployed would be forced to have some sort of implant to enable them to do certain jobs.
Robots are less our slaves (a projection of the mimetic desire of societies in which slavery was permitted and/or promoted) than tools for human interaction. This raises questions of privacy and trust (Arnall 2003, 59), but also of the way we define ourselves as workers in industry, service and entertainment.
This concerns the different cultural approaches to robots in Europe and in other cultures, which may have different impacts in a globalized world. Different cultures have different views on autonomy and human dignity.