Symposium Ethics and Robotics



  1. 1. Symposium Ethics and Robotics University of Tsukuba Japan October 3, 2009
  2. 2. Ethics and Robotics An Intercultural Perspective Rafael Capurro Steinbeis Transfer Institute – Information Ethics Germany Last update: July 13, 2009
  3. 4. Content <ul><li>Introduction </li></ul><ul><li>EU Project ETHICBOTS (2005-2008) </li></ul><ul><li>Wallach & Allen on Moral Machines </li></ul><ul><li>Isaac Asimov's Three Laws of Robotics </li></ul><ul><li>Korean Robot Ethics Charter </li></ul><ul><li>European Robotics Research Network (EURON) </li></ul><ul><li>Roboethics </li></ul><ul><li>AAAI 2005 Symposium on Machine Ethics </li></ul><ul><li>Workshop on Roboethics, ICRA 2009, Kobe, May 2009 </li></ul><ul><li>ECAP (European Computing and Philosophy) 2007 </li></ul><ul><li>SPT (Society for Philosophy and Technology) 2009 </li></ul><ul><li>Machine Ethics Consortium </li></ul><ul><li>AP-CAP 2009 </li></ul><ul><li>Being-in-the-world with robots </li></ul>
  4. 5. Introduction <ul><li>“ Ethics and robotics are two academic disciplines, </li></ul><ul><li>one dealing with the moral norms and values underlying implicitly or explicitly human behaviour </li></ul><ul><li>and the other aiming at the production of artificial agents, mostly as physical devices, with some degree of autonomy based on rules and programmes set up by their creators.” </li></ul><ul><li>(Capurro/Nagenborg 2009) </li></ul>
  5. 6. Introduction <ul><li>“ Since the first robots arrived on the stage in the play by Karel Čapek (1921) visions of a world inhabited by humans and robots gave rise to countless utopian and dystopian stories, songs, movies, and video games.” </li></ul><ul><li>(Capurro/Nagenborg 2009) </li></ul>
  6. 7. Karel Čapek: R.U.R. (Rossum's Universal Robots) (1922)
  7. 8. Introduction <ul><li>“ Robots are and will remain in the foreseeable future dependent on human ethical scrutiny as well as on the moral and legal responsibility of humans.” </li></ul><ul><li>(Capurro/Nagenborg 2009) </li></ul>
  8. 9. Introduction <ul><li>“ Human-robot interaction raises serious ethical questions right now that are theoretically less ambitious but practically more important than the possibility of the creation of moral machines that would be more than machines with an ethical code.” </li></ul><ul><li>(Capurro/Nagenborg 2009) </li></ul>
  9. 10. ETHICBOTS <ul><li>EU Project ETHICBOTS on “Emerging Technoethics of Human Interaction with Communication, Bionic and Robotic Systems” (2005-2008). </li></ul><ul><li>The project aimed at identifying crucial ethical issues in these areas, such as </li></ul><ul><li>the preservation of human identity and integrity; </li></ul><ul><li>applications of precautionary principles; </li></ul><ul><li>economic and social discrimination; </li></ul><ul><li>artificial system autonomy and accountability; </li></ul><ul><li>responsibilities for (possibly unintended) warfare applications; </li></ul><ul><li>nature and impact of human-machine cognitive and affective bonds on individuals and society. </li></ul>
  10. 12. ETHICBOTS <ul><li>The following issues were analyzed: </li></ul><ul><li>(a) Human-softbot integration, as achieved by AI research on information and communication technologies; </li></ul><ul><li>(b) Human-robot, non-invasive integration, as achieved by robotic research on autonomous systems inhabiting human environments; </li></ul><ul><li>(c) Physical, invasive integration, as achieved by bionic research. </li></ul>
  11. 13. ETHICBOTS
  12. 14. Ethics and Robotics <ul><li>R. Capurro and M. Nagenborg (Eds.): Ethics and Robotics. Heidelberg: </li></ul><ul><li>Akad. Verlagsgesellschaft 2009 (ISBN 978-3-89838-087-4 (AKA) and 978-1-60750-008-7 (IOS Press)) </li></ul><ul><li>P. M. Asaro: What Should We Want from a Robot Ethic? </li></ul><ul><li>G. Tamburrini: Robot Ethics: A View from the Philosophy of Science </li></ul><ul><li>B. Becker: Social Robots - Emotional Agents: Some Remarks on Naturalizing Man-machine Interaction </li></ul><ul><li>E. Datteri, G. Tamburrini: Ethical Reflections on Health Care Robotics </li></ul><ul><li>P. Lin, G. Bekey, K. Abney: Robots in War: Issues of Risk and Ethics </li></ul><ul><li>J. Altmann: Preventive Arms Control for Uninhabited Military Vehicles </li></ul><ul><li>J. Weber: Robotic Warfare, Human Rights & The Rhetorics of Ethical Machines </li></ul><ul><li>T. Nishida: Towards Robots with Good Will </li></ul><ul><li>R. Capurro: Ethics and Robotics </li></ul>
  13. 15. Wallach & Allen on Moral Machines / (Oxford Univ. Press 2009)
  14. 16. Wallach & Allen / (Oxford Univ. Press 2009) <ul><li>„ Three questions emerge naturally from the discussion so far. Does the world need AMAs? Do people want computers making moral decisions? And if people believe that computers making moral decisions are necessary or inevitable, how should engineers and philosophers proceed to design AMAs?“ (Introd.) </li></ul>
  15. 17. Wallach & Allen / (Oxford Univ. Press 2009) <ul><li>„ We take the instrumental approach that while full-blown moral agency may be beyond the current or future technology, there is nevertheless much space between operational morality and “genuine” moral agency. This is the niche we identified as functional morality in chapter 2.“(Introd.) </li></ul>
  16. 18. Wallach & Allen / (Oxford Univ. Press 2009) <ul><li>„ The top-down and bottom-up approaches emphasize the importance in ethics of the ability to reason. However, much of the recent empirical literature on moral psychology emphasizes faculties besides rationality. </li></ul>
  17. 19. Wallach & Allen / (Oxford Univ. Press 2009) <ul><li>Emotions, sociability, semantic understanding, and consciousness are all important to human moral decision making, but it remains an open question whether these will be essential to AMAs, and if so, whether they can be implemented in machines.“ (Introd.) </li></ul>
  18. 20. Wallach & Allen / (Oxford Univ. Press 2009) <ul><li>„ The field of machine morality extends the field of computer ethics beyond concern for what people do with their computers to questions about what the machines do by themselves. (In this book we will use the terms ethics and morality interchangeably.) We are discussing the technological issues involved in making computers themselves into explicit moral reasoners.“ (Introd.) </li></ul>
  19. 21. Isaac Asimov: Three Laws of Robotics (1940) <ul><li>A robot may not injure a human being or, through inaction, allow a human being to come to harm </li></ul><ul><li>A robot must obey orders given it by human beings except where such orders would conflict with the First Law </li></ul><ul><li>A robot must protect its own existence as long as such protection does not conflict with the First or Second Law </li></ul>
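The strict precedence among the three Laws can be read as an ordered veto list: a lower-ranked law only applies when no higher-ranked law is at stake. The sketch below is purely illustrative (the `Action` fields and the `violated_law` helper are assumptions of this example, not part of Asimov's formulation):

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Action:
    harms_human: bool = False      # would the action injure a human, or let one come to harm?
    disobeys_order: bool = False   # does it ignore an order given by a human?
    endangers_robot: bool = False  # does it put the robot's own existence at risk?

def violated_law(action: Action, order_conflicts_with_first_law: bool = False) -> Optional[int]:
    """Return the number of the highest-priority Law the action violates, or None."""
    if action.harms_human:
        return 1                   # First Law always takes precedence
    if action.disobeys_order and not order_conflicts_with_first_law:
        return 2                   # Second Law yields only to the First
    if action.endangers_robot:
        return 3                   # Third Law yields to the First and Second
    return None
```

For instance, refusing an order to harm a human violates nothing: `violated_law(Action(disobeys_order=True), order_conflicts_with_first_law=True)` returns `None`, which is exactly the exception clause of the Second Law.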
  20. 22. Superman: The Mechanical Monsters (1941)
  21. 23. Korean Robot Ethics Charter See: Shim (2007)
  22. 24. European Robotics Research Network (EURON)
  23. 25. EURON: Roboethics Atelier
  24. 26. ROBOETHICS
  25. 27. EURON Roboethics Roadmap <ul><li>Roboethics (a term coined by G. Veruggio in 2002) taxonomy: </li></ul><ul><li>Humanoids </li></ul><ul><li>Advanced production systems </li></ul><ul><li>Adaptive robot servants and intelligent homes </li></ul><ul><li>Network Robotics </li></ul><ul><li>Outdoor Robotics </li></ul><ul><li>Health Care and Life Quality </li></ul><ul><li>Military Robotics </li></ul><ul><li>Edutainment </li></ul>
  26. 28. EURON Roboethics Roadmap <ul><li>Ethical issues shared by Roboethics and Information Ethics: </li></ul><ul><li>Dual-use technology </li></ul><ul><li>Anthropomorphization of the machines </li></ul><ul><li>Humanisation of the human/machine relationship </li></ul><ul><li>Technology addiction </li></ul><ul><li>Digital divide </li></ul><ul><li>Fair access to technological resources </li></ul><ul><li>Effects of technology on the global distribution of wealth and power </li></ul><ul><li>Environmental impact of technology </li></ul>
  27. 29. EURON Roboethics Roadmap <ul><li>Principles to be followed in roboethics: </li></ul><ul><li>Human dignity and human rights </li></ul><ul><li>Equality, justice and equity </li></ul><ul><li>Benefit and harm </li></ul><ul><li>Respect for cultural diversity and pluralism </li></ul><ul><li>Non-discrimination and non-stigmatization </li></ul><ul><li>Autonomy and individual responsibility </li></ul><ul><li>Informed consent </li></ul><ul><li>Privacy </li></ul><ul><li>Confidentiality </li></ul><ul><li>Solidarity and cooperation </li></ul><ul><li>Social responsibility </li></ul><ul><li>Sharing of benefits </li></ul><ul><li>Responsibility towards the biosphere </li></ul>
  28. 30. AAAI 2005 Symposium on Machine Ethics <ul><li>AAAI Fall 2005 Symposium on Machine Ethics November 3-6, 2005 Hyatt Regency Crystal City Arlington, Virginia </li></ul>
  29. 31. AAAI 2005 Symposium on Machine Ethics <ul><li>Past research concerning the relationship between technology and ethics has largely focused on responsible and irresponsible use of technology by human beings, with a few people being interested in how human beings ought to treat machines. In all cases, only human beings have engaged in ethical reasoning. The time has come for adding an ethical dimension to at least some machines. </li></ul>
  30. 32. AAAI 2005 Symposium on Machine Ethics <ul><li>Recognition of the ethical ramifications of behavior involving machines, as well as recent and potential developments in machine autonomy, necessitates this. In contrast to computer hacking, software property issues, privacy issues and other topics normally ascribed to computer ethics, machine ethics is concerned with the behavior of machines towards human users and other machines. </li></ul>
  31. 33. AAAI 2005 Symposium on Machine Ethics <ul><li>We contend that research in machine ethics is key to alleviating concerns with autonomous systems—it could be argued that the notion of autonomous machines without such a dimension is at the root of all fear concerning machine intelligence. </li></ul>
  32. 34. AAAI 2005 Symposium on Machine Ethics <ul><li>Further, investigation of machine ethics could enable the discovery of problems with current ethical theories, advancing our thinking about ethics. We intend to bring together interested participants from a wide variety of disciplines to the end of forging a set of common goals for machine ethics investigation and the research agendas required to accomplish them. </li></ul>
  33. 35. AAAI 2005 Symposium on Machine Ethics <ul><li>Topics of interest include, but are not restricted to the following: </li></ul><ul><li>Improvement of interaction between artificially and naturally intelligent systems through the addition of an ethical dimension to artificially intelligent systems </li></ul><ul><li>Enhancement of machine-machine communication and cooperation through an ethical dimension </li></ul>
  34. 36. AAAI 2005 Symposium on Machine Ethics <ul><li>Design of systems that provide expert guidance in ethical matters </li></ul><ul><li>Deeper understanding of ethical theories through computational simulation </li></ul><ul><li>Development of decision procedures for ethical theories that have multiple prima facie duties </li></ul><ul><li>Computability of ethics </li></ul><ul><li>Theoretical and practical objections to machine ethics </li></ul><ul><li>Impact of machine ethics on society </li></ul>
  35. 37. Workshop on Roboethics ICRA 2009, Kobe, May 2009
  36. 38. Workshop on Roboethics ICRA 2009, Kobe, May 2009 <ul><li>Topics (CfP): </li></ul><ul><li>Social (Robotics and job market; Cost-benefit analysis etc.) </li></ul><ul><li>Psychological (Robots and kids; Robots and elderly, etc.) </li></ul><ul><li>Legal (Robots and liability, Identification of autonomously acting robots etc.) </li></ul><ul><li>Medical (Robots in health care and prosthesis etc.) </li></ul><ul><li>Warfare application of robotics (Responsibility, International Conventions and Laws etc.) </li></ul><ul><li>Environment (Cleaning nuclear and toxic waste, Using renewable energies, etc.) </li></ul>
  37. 39. ECAP 07 <ul><li>European Computing and Philosophy Conference, Enschede, The Netherlands, 2007: Philosophy and Ethics of Robotics </li></ul><ul><li>G. Veruggio: Roboethics: an interdisciplinary approach to the social implications of Robotics </li></ul><ul><li>Ishii Kayoko: Can a Robot Intentionally Conduct Mutual Interactions with Human Being? </li></ul><ul><li>Ronald C. Arkin: On the Ethical Quandaries of a Practicing Roboticist: A First Hand Look </li></ul>
  38. 40. ECAP 07 <ul><li>Jutta Weber: Analysing Material, Semiotic and Socio-Political Dimensions of Artificial Agents </li></ul><ul><li>Daniel Persson: Ethics of Intelligent Systems – Artefacts, Producers and Users </li></ul><ul><li>Merel Noorman: Exploring the Limits to the Autonomy of Artificial Agents </li></ul><ul><li>Susana Nascimento: Autonomous Anthropomorphisms: Robot Narratives and Critical Social Theories </li></ul><ul><li>Peter Asaro: How Just Could A Robot War Be? </li></ul><ul><li>Edward H. Spence: Robot Rights: The Moral Life of Androids </li></ul>
  39. 41. Society for Philosophy and Technology (SPT), 2009 (Track 10) <ul><li>Mark Coeckelbergh: Living with Robots </li></ul><ul><li>Aimee van Wynsberghe: What Care Robots say about Care </li></ul><ul><li>Susana Nascimento: Self-operating Machines and (Dis)engagement in Human Technical Actions </li></ul><ul><li>Allan Hanson: Beyond the Skin Bag: On the Moral Responsibility of Extended Agencies </li></ul><ul><li>Scott Sehon: Robots and Free will </li></ul>
  40. 42. Society for Philosophy and Technology (SPT), 2009 <ul><li>Peter Asaro: The Convergence of Video Games & Military Robotics </li></ul><ul><li>Martijntje Smits: Social Robots: How to Bridge the Gap Between Fantasies and Practices? </li></ul><ul><li>Helena De Preester: The (Im)possibilities of Reembodiment </li></ul><ul><li>Guido Nicolosi: Restless Creatures </li></ul><ul><li>Gianmarco Veruggio: Ethical, Legal and Societal Issues in the Strategic Agenda for Robotics in Europe </li></ul>
  41. 43. Machine Ethics Consortium
  42. 44. Machine Ethics Consortium <ul><li>About Machine Ethics Consortium Machine Ethics is concerned with the behavior of machines towards human users and other machines. Allowing machine intelligence to effect change in the world can be dangerous without some restraint. Machine Ethics involves adding an ethical dimension to machines to achieve this restraint. Further, machine intelligence can be harnessed to develop and test the very theory needed to build machines that will be ethically sensitive. Thus, machine ethics has the additional benefits of assisting human beings in ethical decision-making and, more generally, advancing the development of ethical theory. </li></ul>
  43. 45. Machine Ethics Consortium <ul><li>Projects </li></ul><ul><li>EthEl: An Ethical Eldercare System Eldercare is a domain where we believe that, with proper ethical considerations incorporated, machine intelligence can be harnessed to aid an increasingly aging human population, with an expectation of a shortage of human caretakers in the future. </li></ul>
  44. 46. Machine Ethics Consortium <ul><li>We believe, further, that this domain is rich enough in which to explore most issues involved in general ethical decision-making for both machines and human beings.  EthEl (ETHical ELdercare system) is a prototype system in the domain of eldercare that takes ethical concerns into consideration when reminding a patient to take his/her medication. </li></ul>
  45. 47. Machine Ethics Consortium <ul><li>EthEl must decide when to accept a patient’s refusal to take a medication that might prevent harm and/or provide benefit to the patient and when to notify the overseer.  There is a further ethical dimension that is implicitly addressed by the system: In not notifying the overseer – most likely a doctor – until absolutely necessary, the doctor will be able to spend more time with other patients who could be benefited, or avoid harm, as a result of the doctor’s attending to their medical needs. </li></ul>
  46. 48. Machine Ethics Consortium <ul><li>We believe that EthEl is the first system to use an explicit ethical principle to guide its actions. </li></ul><ul><li>Dr. Michael Anderson Department of Computer Science University of Hartford West Hartford, CT 06117 </li></ul><ul><li>Dr. Susan Leigh Anderson Department of Philosophy University of Connecticut Stamford, CT 06901 </li></ul>
  47. 49. Machine Ethics Consortium <ul><li>Implementing Ethical Advisors </li></ul><ul><li>In order to add an ethical dimension to machines, we need to have an ethical theory that can be implemented. Looking to Philosophy for guidance, we find that ethical decision-making is not an easy task. It requires finding a single principle or set of principles to guide our behavior with which experts in Ethics are satisfied and will likely involve generalizing from intuitions about particular cases, testing those generalizations on other cases and, above all, making sure that principles generated are consistent with one another. </li></ul>
  48. 50. Machine Ethics Consortium <ul><li>We are developing prototype systems based upon action-based ethical theories that provide guidance in ethical decision-making according to the precepts of their respective theories— Jeremy , based upon Bentham's Hedonistic Act Utilitarianism, W.D ., based upon Ross' Theory of Prima Facie Duties, and MedEthEx , based upon Beauchamp's and Childress' Principles of Biomedical Ethics.  MedEthEx (see online demo ) uses an ethical principle discovered via machine learning techniques to give advice in a particular type of ethical dilemma in medical ethics. </li></ul><ul><li>Dr. Michael Anderson Peter Larson Department of Computer Science University of Hartford West Hartford, CT 06117 </li></ul>
  49. 51. Machine Ethics Consortium <ul><li>Machine Ethics Research Group </li></ul><ul><li>We are working on advancing Ethical Theory by making ethics precise enough to be programmed. We are, also, working on the problem of developing a decision procedure for determining the correct action in a multiple duty ethical theory such as W.D. Ross' Theory of Prima Facie Duties. Since we believe that such a decision procedure will come from abstracting from intuitions about particular cases, we are developing a database of ethical dilemmas and analyzing them according to Ross' theory. </li></ul><ul><li>Dr. Susan Leigh Anderson Rachel Brody Viktoriya Gelfand Ayelet Saul Department of Philosophy University of Connecticut Stamford, CT 06901 </li></ul><ul><li>ISP Machine Ethics Project </li></ul>
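One simple way to turn a multiple-duty theory into a decision procedure is to score each candidate action by a weighted balance of how strongly it satisfies or violates each prima facie duty, then choose the highest-scoring action. The sketch below only illustrates that general idea: the duty names echo Ross and Beauchamp/Childress, but the weights and numbers are invented for this example and are not the principle the consortium's systems actually learned.

```python
# Hypothetical duty weights: a higher weight marks a more stringent duty.
WEIGHTS = {
    "nonmaleficence": 3.0,  # avoid harming the patient
    "beneficence":    2.0,  # benefit the patient
    "autonomy":       1.0,  # respect the patient's choices
}

def score(action: dict) -> float:
    """Weighted sum of duty-satisfaction levels, each in [-1.0, +1.0]."""
    return sum(WEIGHTS[duty] * level for duty, level in action["duties"].items())

def best_action(actions: list) -> str:
    """Choose the action whose overall duty balance is highest."""
    return max(actions, key=score)["name"]

# Invented rendering of the eldercare dilemma: accept the patient's
# refusal once more, or notify the overseer.
actions = [
    {"name": "accept_refusal",
     "duties": {"nonmaleficence": -0.5, "beneficence": 0.2, "autonomy": 1.0}},
    {"name": "notify_overseer",
     "duties": {"nonmaleficence": 1.0, "beneficence": 0.5, "autonomy": -1.0}},
]
```

With these invented numbers the harm being risked outweighs the patient's autonomy, so the procedure favors notifying the overseer; shifting the weights shifts the verdict, which is exactly why the choice of weighing principle carries the ethical load.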
  50. 52. Machine Ethics Consortium <ul><li>This work involves elements of algorithms, AI and philosophy. We are exploring the implementation of various ethical theories, with dual purposes: (1) To shed new light on these theories, which is of particular interest to philosophers, and (2) To begin to address the need for an ethical dimension in software that is becoming increasingly autonomous. </li></ul>
  51. 53. Machine Ethics Consortium <ul><li>The project at hand for an ISP student is to research existing and proposed software systems, particularly in the biomedical field, in order to identify the degree of autonomy achieved and hence the potential ethical component. </li></ul><ul><li>Dr. Chris Armen Nick Bazin Jonathan Boreyko Department of Computer Science Trinity College, Hartford, CT </li></ul>
  52. 54. AP-CAP 2009 <ul><li>Hiroshi Ishiguro: &quot;Developing androids and understanding humans&quot; </li></ul><ul><li>Carl Shulman, Nick Tarleton, and Henrik Jonsson: Which Consequentialism? Machine Ethics and Moral Divergence </li></ul><ul><li>Kimura Takeshi: Introducing Roboethics to Japanese Society: A Proposal </li></ul><ul><li>Carl Shulman, Henrik Jonsson, and Nick Tarleton: Machine Ethics and Superintelligence </li></ul><ul><li>Soraj Hongladarom: An Ethical Theory for Autonomous and Conscious Robots </li></ul><ul><li>Keith Miller, Frances Grodzinsky, Marty Wolf: Why Turing Shouldn't Have to Guess </li></ul><ul><li>Gene Rohrbaugh: On the Design of Moral and Amoral Agents </li></ul>
  53. 55. Being-in-the-world with robots <ul><li>We should analyze </li></ul><ul><li>how robots are “in” the world in comparison to humans as well as to other living and non-living beings; </li></ul><ul><li>what it means for us to be “with” robots in contrast to our being “with” other human beings as well as with other living and non-living beings. </li></ul>
  54. 56. Being-in-the-world with robots <ul><li>„ One major difference between a „program“ and an „agent“ is, that programs are designed as tools to be used by human beings, while „agents“ are designed to interact as partners with human beings.“ (Nagenborg 2007, 2) </li></ul>
  55. 57. Being-in-the-world with robots <ul><li>„ An AMA [artificial moral agent, RC] is an AA [artificial agent, RC] guided by norms which we as human beings consider to have a moral content.“ (Nagenborg 2007, 3) </li></ul><ul><li>„ Agents may be guided by a set of moral norms, which the agent itself may not change, or they are capable of creating and modifying rules by themselves.“ (Nagenborg 2007, 3) </li></ul>
  56. 58. Being-in-the-world with robots <ul><li>„ Thus, there must be questioning about what kind of „morality“ will be fostered by AMAs, especially since now norms and values are to be embedded consciously into the „ethical subroutines“. Will they be guided by „universal values“, or will they be guided by specific Western or African concepts?“ (Nagenborg 2007, 3) </li></ul>
  57. 59. Being-in-the-world with robots <ul><li>„ The concepts of autonomy, learning, decision etc. are analogies of the human agent deprived of its historical, political, societal, bodily and existential dimensions.“ (Capurro 2009) </li></ul>
  58. 60. Being-in-the-world with robots <ul><li>„ An ‘implanted’ morality in the form of a moral code programmed in a microprocessor has nothing in common with the capacity of practical reflexion, even in case there is a feed-back that mimics (human) theoretical and/or practical reason. The evaluation and ‘decisions’ coming out of such programmes remain ultimately dependent on the programmer himself.“ (Capurro 2009) </li></ul>
  59. 61. Being-in-the-world with robots <ul><li>„ It is cynical to speculate, and to spend public funds, on the supposed creation of artificial agents towards whom we would be morally (and legally) responsible (and vice versa!) given the present situation of some six billion human beings on this planet and the lack of such responsibility towards them. We might say that artificial agents are only prima facie agents. They are basically patients of human moral (and technical) agency.“ (Capurro 2009) </li></ul>
  60. 62. Being-in-the-world with robots <ul><li>„ In contrast, the question of what kind of transformation is being operated in human societies when millions (and soon also billions) of human beings interact in digital networks that are interwoven with their bodies is highly relevant today and in the future.“ (Capurro 2009) </li></ul>
  61. 63. Being-in-the-world with robots <ul><li>„ There is a common ground or a common life, so to speak, a basic interrelationship between all living beings, not dissimilar to what Kant writes that we are originally owners of the common earth. This original ownership can be reversed: natural and/or artificial beings are ‘owned’ originally by nature. Nature owns us.“ (Capurro 2009) </li></ul>
  62. 64. Being-in-the-world with robots <ul><li>„ ICT and biotechnology invite us to re-invent ourselves “practically” as moral agents and patients, not only “poietically” as technical ones, in an interplay with nature and technology becoming more and more aware of the interrelationship of all things, living and non living ones which is a key insight of Buddhist thinking. This kind of practical thinking on what can be good for our lives “as a whole” was called by Aristotle “practical philosophy” and by Kant “practical reason.” (Capurro 2009) </li></ul>
  63. 65. Being-in-the-world with robots <ul><li>“ What is it like to be a robot? Wittgenstein’s famous dictum that “if a lion could speak, we would not understand him” (Wittgenstein, 1984, p. 568) points to the issue, that human language is rooted in what he calls “forms of life.” Humans and lions have orthogonal forms of life, i.e., they construct their reality based on systemic differences. What is it like to be a human?” (Capurro & Nagenborg 2009) </li></ul>
  64. 66. Being-in-the-world with robots <ul><li>“ Intercultural roboethics is still in its infancy no less than intercultural robotics .“ (Capurro & Nagenborg 2009) </li></ul>
  65. 67. Ethics and Robots: East and West <ul><li>Roughly speaking: </li></ul><ul><li>Europe: Deontology (Autonomy, Human Dignity, Privacy, Anthropocentrism): Scepticism with regard to robots </li></ul><ul><li>USA (and Anglo-Saxon tradition): Utilitarian Ethics: will robots make „us“ happier? </li></ul><ul><li>Eastern Tradition (Buddhism): Robots as one more partner in the global interaction of things </li></ul>
  66. 68. Ethics and Robots: East and West <ul><li>Morality and Ethics: </li></ul><ul><ul><li>Ethics as critical reflection (or problematization) of morality </li></ul></ul><ul><ul><li>Ethics is the science of morals as robotics is the science of robots </li></ul></ul>
  67. 69. Ethics and Robots: East and West <ul><li>Different ontic or concrete historical moral traditions, for instance </li></ul><ul><ul><li>in Japan: Seken (trad. Japanese morality), Shakai (imported Western morality) and Ikai (old animistic tradition) </li></ul></ul><ul><ul><li>In the „Far West“: Ethics of the Good (Plato, Aristotle), Christian Ethics, Utilitarian Ethics, Deontological Ethics (Kant) </li></ul></ul>
  68. 70. Ethics and Robots: East and West <ul><li>Ontological dimension: Being or (Buddhist) Nothingness as the space of open possibilities that allows us to criticise ontic moralities </li></ul><ul><li>Always related to basic moods (like sadness, happiness, astonishment, …) through which the uniqueness of the world and human existence is experienced (differently in different cultures) </li></ul>
  69. 71. Ethics and Robots: East and West <ul><li>A future intercultural ethics of robots (IER) should reflect on the ontic and ontological dimensions of creating and using robots in different cultural contexts and with regard to different goals. </li></ul>
  70. 72. Bibliography <ul><li>AP-CAP 2009: 5th Asia-Pacific Computing & Philosophy Conference </li></ul><ul><li>Anderson, Michael & Anderson, Susan Leigh (2007): Machine ethics: Creating an ethical intelligent agent. AI Magazine, December 22, 2007. </li></ul><ul><li>Capurro, Rafael (2009). Towards a Comparative Theory of Agents </li></ul><ul><li>Capurro, Rafael and Nagenborg, Michael (Eds.) (2009), Introduction. In: ibid.: Ethics and Robotics. Berlin: Akademische Verlagsgesellschaft (in print). </li></ul><ul><li>Capurro, Rafael (2007): Ethics and Robotics </li></ul><ul><li>Cerqui, Daniela; Weber, Jutta; Weber, Karsten (Guest Editors) (2006): Ethics in Robotics. International Review of Information Ethics </li></ul>
  71. 73. Bibliography <ul><li>ETHICBOTS (2009). </li></ul><ul><li>Gates, Bill (2007). Roboter für jedermann [A Robot in Every Home] </li></ul><ul><li>Floridi, L. and Sanders, J.W. (2004). On the Morality of Artificial Agents. In: Minds and Machines, 14, 3, 349-379. </li></ul><ul><li>Nagenborg, Michael (2007). Artificial moral agents: an intercultural perspective. In: International Review of Information Ethics. </li></ul><ul><li>Shim, H.B. (2007). Establishing a Korean Robot Ethics Charter. </li></ul><ul><li>Veruggio, Gianmarco & Operto, Fiorella: Roboethics: Social and Ethical Implications. In: Bruno Siciliano & Oussama Khatib (Eds.): Handbook of Robotics. Springer 2008, Part G, pp. 1499-1524. </li></ul>
  72. 74. Bibliography <ul><li>Veruggio, Gianmarco (2006): EURON Roboethics Roadmap. </li></ul><ul><li>Wallach, Wendell & Allen, Colin (2009). Moral Machines: Teaching Robots Right from Wrong. Oxford University Press. </li></ul>