As we depend increasingly on autonomously operating agents and robots, the need for ethical machine behavior grows. This paper presents a moral reasoner that combines connectionism, utilitarianism, and ethical theory about moral duties. Its moral decision-making matches the analyses of expert ethicists in the health domain, which may be useful in many applications, especially where machines interact with humans in a medical context. Additionally, when the reasoner is connected to a cognitive model of emotional intelligence and affective decision making, we can explore how moral decision making affects affective behavior.
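To make the combination concrete, here is a minimal sketch of how a duty-weighted moral reasoner of this general kind can be structured: each candidate action is scored on how well it satisfies each moral duty, the scores are combined as a weighted sum, and the highest-scoring action wins. The duty names follow the medical-ethics principles the abstract alludes to, but the weights, per-action scores, and helper functions (`moral_value`, `choose_action`) are illustrative assumptions, not the paper's actual model or its calibration against expert ethicists.

```python
# Minimal sketch of a weighted-duty moral reasoner (illustrative only).
# Duty names follow common medical-ethics principles; the weights and
# per-action duty scores below are made-up values, not the paper's.

DUTY_WEIGHTS = {"autonomy": 1.0, "non_maleficence": 1.0, "beneficence": 1.0}

def moral_value(duty_scores, weights=DUTY_WEIGHTS):
    """Combine how well an action satisfies each duty (-1..1) into one score."""
    return sum(weights[d] * duty_scores[d] for d in weights)

def choose_action(actions):
    """Pick the action with the highest weighted moral value."""
    return max(actions, key=lambda a: moral_value(actions[a]))

# Hypothetical dilemma: respect a patient's refusal vs. insist on treatment.
actions = {
    "accept_refusal":  {"autonomy": 1.0, "non_maleficence": -0.5, "beneficence": -0.5},
    "try_to_persuade": {"autonomy": -0.5, "non_maleficence": 0.5, "beneficence": 1.0},
}
print(choose_action(actions))  # -> "try_to_persuade" with these toy numbers
```

In the paper's setting, the substantive work lies in choosing weights and duty scores so that the resulting ranking reproduces expert judgments; the sketch only shows the skeleton such a reasoner could hang on.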
Impacto social del desarrollo de la Inteligencia artificial (Inglés) [Social impact of the development of artificial intelligence] (kamh18)
The document discusses the social impacts of developing artificial intelligence. It begins by outlining the methodology used, which involved searching for information on artificial intelligence from digital libraries, books, and websites. It then provides an overview of key concepts in artificial intelligence, including definitions of AI, different approaches to AI, the role of agents, and how agents can act intelligently using knowledge and beliefs. The document also gives examples of applications of AI in fields like medicine, geology, and aeronautics.
Artificial Intelligence, Design, and More-than-Human Justice (Josh Gellers)
Advances in artificial intelligence (AI) have been met by rigorous analyses concerning their social and environmental impacts. However, AI ethics literature has curiously expended far less energy theorizing justice and interrogating its ontological boundaries. Meanwhile, environmental philosophy has taken justice seriously, although it has tended to focus exclusively on animals and nature. This chapter seeks to overcome the anthropocentric bias of AI ethics and the biocentric bias of environmental philosophy by developing an Anthropocene-informed theory of justice capable of accommodating technological beings. To accomplish this, I compare several theories of justice hospitable to non-humans—multispecies justice, planetary justice, and socio-ecological justice. After identifying optimal features drawn from these approaches, I devise a list of design principles and explain how they could be employed in the case of a personal delivery device. I conclude by calling for AI ethics to embrace complexity, interdisciplinarity, and epistemic humility in its pursuit of justice.
Can we morally justify the replacement of humans by artificial intelligence i... (Kai Bennink)
1) The document discusses whether artificial intelligence can morally replace humans in cancer treatment by analyzing the case of IBM Watson Oncology.
2) IBM Watson Oncology uses AI and machine learning to analyze patient data and provide treatment options to help doctors, achieving similar or better diagnosis rates than doctors.
3) However, some argue that AI systems like Watson are "black boxes" that we don't fully understand, and they could fail or make decisions in unexpected ways, so strict principles are needed to ensure AI aligns with human values and responsibilities.
You Name Here1. What is Moore's Law What does it apply to.docx (jeffevans62972)
You Name Here
1. What is Moore’s Law? What does it apply to?
2. What is a microprocessor? What devices do you or your family own that contain microprocessors (and hence are impacted by Moore’s Law)?
3. Why is Moore’s Law important for managers? How does it influence managerial thinking?
4. What three interrelated forces threaten to slow the advancement of Moore’s Law?
5. What is the advantage of using computing to simulate an automobile crash test as opposed to actually staging a crash?
6. What are the two characteristics of disruptive innovations?
7. Make a list of recent disruptive innovations. List firms that dominated the old regime and firms that capitalized after the disruption. Are any of the dominant firms from the previous era the same as those in the post-disruptive era? For those firms that failed to make the transition, why do you think they failed?
8. What is dynamic pricing, and why might this be risky?
9. What is the long tail? How does the long tail change retail economics? How does it influence shoppers’ choice of where to look for products? What firms, other than Amazon, are taking advantage of the long tail in their industries?
10. What is channel conflict, and how has Amazon been subject to channel conflict?
Module 1: Introduction to Ethical Theories
Topics
Introduction to Ethical Theories
Teleology (Consequentialism)
Deontology (Rights and Duties)
Computer Ethics
Introduction to Ethical Theories
The concepts of ethics, character, right and wrong, and good and evil have captivated humankind since we began to live in groups, communicate, and pass judgment on each other. The morality of our actions is based on motivation, group rules and norms, and the end result. The difficult questions of ethics and information technology (IT) may not have been considered by previous generations, but what is good, evil, right, and wrong in human behavior certainly has been. With these historical foundations and systematic analyses of present-day and future IT challenges, we are equipped for both the varied ethical battles we will face and the ethical successes we desire.
Although most of you will be called upon to practice applied ethics in typical business situations, you'll find that the foundation for such application is a basic understanding of fundamental ethical theories. These ethical theories include the work of ancient philosophers such as Plato and Aristotle. This module introduces the widely accepted core ethical philosophies, which will serve to provide you with a basic understanding of ethical thought. With this knowledge, you can begin to relate these theoretical frameworks to practical ethical applications in today's IT environment.
Let's start with a fundamental question: "Why be ethical and moral?" At the most existential level, it may not matter. But we don't live our lives in a vacuum: we live our lives with our friends, relatives, acquaintances, co-workers, strangers, and fellow wanderers. To be ethical and moral all ...
Conclusion Words For College Essay. Online assignment writing service. (Sarah Meza)
Gabriel, a child, was referred to the pediatric department for evaluation of a cough, fever, and possible lung infiltrate. He was seen by a doctor who discharged him home with a prescription for amoxicillin and Ventolin to treat his symptoms. He was asked to follow up with his general practitioner in a few days.
Responses1-LA1 The human race is structured in a way that diff.docx (ronak56)
Responses
1-LA1 The human race is structured in a way that different individuals have different opinions. Similarly, people may share some moral values while differing on others. Virtue ethics is universal in that its attributes are universally recognized as good or bad (The Universal Moral Code). Kant's theory and utilitarianism are relative. First, Kant's theory dwells on the fulfillment of a duty, and some duties are accepted in some communities but not in others. Similarly, utilitarianism looks at the consequences of actions, which are judged differently from community to community.
Ethical relativism and universalism differ in more than one way. However, with the correct attitude toward a particular action, one can distinguish whether it is beneficial or not. Activities that evoke differences of opinion should be minimized. In addition, it is important to make actions clear so that an individual can distinguish between right and wrong. Furthermore, appreciating different cultures helps individuals adapt to any change brought forward.
Reference
The Universal Moral Code. Retrieved from http://www.universalmoralcode.com/
1-LA2 This is a technological era in which we expect more technological discoveries to keep coming. One such discovery is the self-driving car, which makes the effort required of human beings almost negligible. Concerns about the vehicle arise when a decision has to be made in the case of an unexpected accident (Why Self-Driving Cars Must Be Programmed to Kill, 2015). Some people would choose to run over the ten people crossing the road, while others would choose to sacrifice the individual on the sidewalk.
It is very rare for the individual who bought the car to choose to sacrifice himself or herself. This means that in the case of an accident, such cars are bound to run over other people. The self-driving car evokes different views from different people. However, one thing is clear: one has to either kill others or risk dying. In my opinion, I would rather die than see ten other people die; the utilitarian calculus behind that choice is sketched after the reference below.
Reference
Why Self-Driving Cars Must Be Programmed to Kill. (2015, October 22). Retrieved from https://www.technologyreview.com/s/542626/why-self-driving-cars-must-be-programmed-to-kill/
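As promised above, here is a toy version of the purely utilitarian rule the response appeals to: pick the maneuver that minimizes expected deaths. The maneuver names, casualty counts, and probabilities are hypothetical, chosen only to mirror the scenario in the cited article, not taken from it.

```python
# Toy utilitarian rule for the dilemma above: choose the maneuver that
# minimizes expected deaths. All numbers are hypothetical.

options = {
    # (people at risk, probability they die if this maneuver is chosen)
    "continue_ahead": (10, 0.9),   # ten pedestrians on the crossing
    "swerve_to_side": (1, 0.9),    # one bystander on the sidewalk
    "self_sacrifice": (1, 0.8),    # the occupant
}

def expected_deaths(option):
    people, p_death = options[option]
    return people * p_death

best = min(options, key=expected_deaths)
print(best, expected_deaths(best))  # -> self_sacrifice 0.8
```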
2-LA1 From a teleological virtue-ethics approach, supporters of this theory would conclude that morality is universal. According to Keith (2003), the universal moral code is separated into two sets of statements, "do no harm" and "do good." This concept is based on people acting virtuously. An opposing view may invoke the concept of relativism, stating that a moral code is relative to an individual's or group's geographical location (Basilthegiant, n.d.). Using an example from Keith's universal code, such as "do not murder," disproves the opposing view. Some may argue that there are times when murder can be justified, such as war or se ...
bhusal2
Prepared by
Deepak Bhusal
CWID: 50259419
To Professor: Dr. R. Daniel Creider
Table of Contents
Abstract
Introduction
Literature Review
AI for Justice
AI in Medical Teaching
Artificial Intelligence in Human Resource Management
AI in Marketing
Artificial Intelligence in Real Estate
Real Estate Agent Selection
Artificial Intelligence in CRM
Artificial Intelligence in Banking
AI-based Chatbots in Financial Institutions
Customization of Products
References
Artificial Intelligence: Formalizing Human Capabilities
Abstract
Artificial Intelligence cannot replace three human abilities in which human beings hold an insurmountable advantage today: empathy, leadership, and creativity. AI can quickly take over essential verbal and visual communication services, such as digital-assistant-based customer service. However, our ability to empathize with the client and to carry out non-verbal, emotion-based communication gives us an advantage that Artificial Intelligence can never replace. These qualities can make the difference between a misunderstood, dissatisfied customer and an understood, loyal one.
Gajane & Pechenizkiy (2017) stated that it is undeniable that AI will replace workers in essential economic-financial management, logistics, materials, human resources, and projects. Still, people have more advanced management capabilities that AI cannot return. The following two skills play a crucial role:
First is the ability to manage the growth of human groups: helping members of the organization develop their skills and grow professionally through our innate leadership ability to set goals, motivate, lead by example, evaluate, delegate, and transmit experience.
Second is the ability to help members of the organization recover when they suffer problems arising from interpersonal relationships or other emotional causes. It is based on the skills of understanding, counseling, care, and protection.
Yampolskiy (2019) found that AI can never replace the vision, invention, and original proposal of innovative and disruptive designs, not only as applied to the individual genius but also the ability to manage collective intelligence focused on innovation, facilitating the emergence of new knowledge and wisdom. It will be even more difficult for AI to replace the ability to implement new ideas in the organization: communicating attractively, persuading, and moving the organization smoothly toward adopting innovative ideas.
Keywords
Artificial Intelligence, Marketing, Human Resource Management, Medical Sciences, Nursing
Introduction
The possibility of thought in machines is a concern that has been raised for a long time; science fiction, as well as engineering and philosophy, have sought to provide an answer to the question "Can machines think?" Famous exponents of both affirmative answers, given by Turing or K ...
The document discusses artificial intelligence and its capabilities compared to human abilities. It argues that AI cannot replace three key human abilities: empathy, leadership, and creativity. While AI can perform communication tasks, humans have advantages in emotional communication and understanding. The document also discusses how AI may replace some economic and management roles but cannot match advanced human skills like leading groups, counseling others, and innovating with new ideas. It reviews literature on defining thinking and the limits of machine capabilities.
Perspectives on Ethics of AI Computer Science∗ Benjamin .docx (karlhennesey)
Perspectives on Ethics of AI: Computer Science∗
Benjamin Kuipers†
August 14, 2019
Abstract
AI is a collection of computational methods for studying human knowledge, learning, and behavior, including by building agents able to know, learn, and behave. Ethics is a body of human knowledge, far from completely understood, that helps agents (humans today, but perhaps eventually robots and other AIs) decide how they and others should behave. The ethical issues raised by AI fall into two overlapping groups.
First, potential deployments of AI raise ethical questions about the impacts they may have on human well-being, just like other powerful tools or technologies such as nuclear power or genetic engineering.
Second, unlike other technologies, intelligent robots and other AIs have the potential to be considered as members of our society. Since they will make their own decisions about the actions they take, it is appropriate for humans to expect them to behave ethically. This requires AI research with the goal of understanding the structure, content, and purpose of ethical knowledge, well enough to implement ethics in artificial agents.
This chapter describes a computational view of the function of ethics in human society, and discusses its application to three diverse examples.
∗Draft chapter for the Oxford Handbook of Ethics of AI, edited by Markus Dubber, Frank Pasquale, and Sunit Das, to appear, 2019.
†[email protected] Computer Science & Engineering, University of Michigan, Ann Arbor, Michigan 48109 USA
1 Why Is the Ethics of AI Important?
AI uses computational methods to study human knowledge, learning, and behavior, in part by building agents able to know, learn, and behave. Ethics is a body of human knowledge that helps agents (humans today, but perhaps eventually robots and other AIs) decide how they and others should behave. The ethical issues raised by AI fall into two overlapping groups.
First, like other powerful tools or technologies (e.g., genetic engineering or nuclear power), potential deployments of AI raise ethical questions about their impact on human well-being.
Second, unlike other technologies, intelligent robots (e.g., autonomous vehicles) and other AIs (e.g., high-speed trading systems) make their own decisions about the actions they take, and thus could be considered as members of our society. Humans should be able to expect them to behave ethically. This requires AI research with the goal of understanding the function, structure, and content of ethical knowledge well enough to implement ethics in artificial agents.
As the deployment of AI, machine learning, and intelligent robotics becomes increasingly widespread, these problems become increasingly urgent.
2 What is the Function of Ethics?
“At the heart of ethics are two questions: (1) What should I do?, and (2) What sort of person should I be?”1 Ethics consists of principles for deciding how to act ...
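The excerpt breaks off here, but the computational framing it sets up can be illustrated with a small sketch. This is not Kuipers' own formulation: it just shows one common way to implement "principles for deciding how to act" in an artificial agent, namely as a filter of ethical rules applied before the agent maximizes its utility over the remaining actions. All names, the example rule, and the trading scenario are invented for illustration.

```python
# Illustrative sketch (not Kuipers' formulation): one common way to embed
# ethics in an agent is as a constraint filter applied before the agent
# maximizes its own utility over the remaining actions.

def is_permissible(action, rules):
    """An action is permissible if it violates no ethical rule."""
    return all(rule(action) for rule in rules)

def decide(actions, utility, rules):
    permitted = [a for a in actions if is_permissible(a, rules)]
    if not permitted:
        return None  # no ethically acceptable action; defer to a human
    return max(permitted, key=utility)

# Hypothetical example: a trading agent that never acts on insider data.
rules = [lambda a: not a.get("uses_insider_info", False)]
actions = [
    {"name": "trade_on_tip", "uses_insider_info": True, "profit": 100},
    {"name": "index_buy", "uses_insider_info": False, "profit": 5},
]
print(decide(actions, utility=lambda a: a["profit"], rules=rules)["name"])  # index_buy
```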
This document provides a summary of structural family theory, which examines the unspoken rules within families and how they affect the family's organization. It discusses key concepts of the theory like subsystems, boundaries, and rules. It also reviews literature applying structural family theory to divorced families, emphasizing the importance of clear parent-child roles and establishing a new family structure for the adolescent's well-being.
The Ethics of Machine Learning Algorithms (Max Yousif)
This document discusses the ethics of machine learning algorithms. It notes that algorithms can contain bias since they reflect the values of their developers and are designed with a purpose in mind. This can lead to unfair outcomes like discrimination in areas like advertising or pricing. The document also discusses how algorithms may take inappropriate actions based on inconclusive data, by extrapolating correlations without establishing causation. Overall, the document examines some of the ethical issues around algorithmic bias, discrimination, and actions based on uncertain data that machine learning applications need to address.
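As a concrete instance of the kind of audit this discussion calls for, the sketch below computes one simple fairness metric, the demographic parity difference: the gap in positive-outcome rates between two groups. The groups and decision data are made up for illustration; real audits would also examine error rates, calibration, and causal structure.

```python
# Minimal sketch of one fairness check in this vein of work:
# demographic parity difference -- the gap in positive-outcome rates
# between two groups. The data below is made up.

def positive_rate(decisions):
    return sum(decisions) / len(decisions)

group_a = [1, 1, 0, 1, 0, 1]  # e.g., ad shown / loan approved
group_b = [0, 1, 0, 0, 0, 1]

gap = positive_rate(group_a) - positive_rate(group_b)
print(f"demographic parity difference: {gap:.2f}")  # 0.67 - 0.33 = 0.33
```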
This document provides an overview of artificial intelligence including definitions, concepts, and applications. It defines AI as simulating human intelligence through machine learning and problem solving. Key points include:
- AI systems are designed to rationally achieve goals like humans through learning.
- Knowledge representation and organization is important for efficient searching and reasoning. Common methods include rules, frames, and ontologies.
- Knowledge-based systems combine a knowledge base with an inference engine to derive new understandings and solve complex problems. They are often used to replicate expert knowledge.
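The knowledge-base-plus-inference-engine pattern in the last point can be shown in a few lines. The sketch below does naive forward chaining over if-then rules until no new facts can be derived; the medical-flavored rules and facts are invented for illustration and are not from the summarized document.

```python
# Minimal sketch of the knowledge-base + inference-engine pattern described
# above: forward chaining over if-then rules until no new facts appear.
# The rules and facts are illustrative.

rules = [
    ({"fever", "cough"}, "respiratory_infection"),
    ({"respiratory_infection", "infiltrate_on_xray"}, "suspect_pneumonia"),
]

def forward_chain(facts, rules):
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for conditions, conclusion in rules:
            if conditions <= facts and conclusion not in facts:
                facts.add(conclusion)   # derive a new fact
                changed = True
    return facts

print(forward_chain({"fever", "cough", "infiltrate_on_xray"}, rules))
```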
Virtue in Machine Ethics: An Approach Based on Evolutionary Computation (Ioan Muntean)
February 2015 (co-author: Don Howard, University of Notre Dame). Presented at the American Philosophical Association (APA Central), St. Louis, Missouri.
Debate on Artificial Intelligence in Justice, in the Democracy of the Future,... (AJHSSR Journal)
ABSTRACT: This study aims to debate and analyze the implementation of artificial intelligence (AI) in the justice age of the future democracy and how it can affect civil and criminal investigation. To do so, a database of indexed scientific papers and conference materials was searched to gather their findings. Artificial intelligence (AI) is a science for the development of intelligent machines; it has its roots in early philosophical studies of human nature and of the process of knowing the world, expanded by neurophysiologists and psychologists into a series of theories about the workings of the human brain and thought. A further stage in the development of the science of artificial intelligence was the development of the foundations of the mathematical theory of computation, the theory of algorithms, and the creation of computers (Anglin, 1995). "Artificial Intelligence" is a science that has theoretical and experimental parts. In practice, the problem of creating "Artificial Intelligence" lies at the intersection of computer technology on the one hand and neurophysiology and cognitive and behavioral psychology on the other. The philosophy of Artificial Intelligence serves as a theoretical basis, but only with the appearance of significant results will the theory acquire an independent meaning. Until now, the theory and practice of "Artificial Intelligence" must be distinguished from the mathematical, algorithmic, robotic, physiological, and other theoretical and experimental techniques that have an independent meaning.
KEYWORDS: Artificial Intelligence; Hybrid Smart Systems (HIS); Computer Machines; Robotics; Test of Turing
Ethical by Design: Ethics Best Practices for Natural Language Processing (antonellarose)
Ethical considerations regarding NLP.
Jochen L. Leidner and Vassilis Plachouras
Thomson Reuters, Research & Development,
30 South Colonnade, London E14 5EP, United Kingdom.
{jochen.leidner,vassilis.plachouras}@thomsonreuters.com
Solved Discussion Paper Handout All Students Are Requir... (Angie Logan)
Minisatellites are hypervariable regions of DNA defined by polymorphisms in the number of repeated nucleotide motifs ranging from 12-100 base pairs. They are found throughout eukaryotic genomes in both coding and non-coding regions. Variation in the number of repeats at minisatellite loci provides a tool for genetic analysis, DNA fingerprinting, and studying mutation rates due to strand slippage during DNA replication.
Domain ontology development for communicable diseases (csandit)
This document discusses the development of a domain ontology for communicable diseases. The researchers developed an ontology with concepts like diseases, symptoms, and causes arranged in a taxonomy. They created over 600 concepts with properties and relations. The ontology development process included specification, conceptualization, creation of instances, and evaluation using a description logic reasoner to verify the concepts and relations were correctly represented. The ontology will be expanded to include more diseases and connections to related web content to provide information retrieval.
DOMAIN ONTOLOGY DEVELOPMENT FOR COMMUNICABLE DISEASES (cscpconf)
The Web has become the very first resource to search for any kind of information. With the emergence of the semantic web, our search queries have started generating more informed results. Ontologies are at the core of any semantic web application. They help in the rapid development of distributed systems by providing information on the fly. This key feature of distributing and sharing information has made ontologies a new knowledge representation mechanism, one strongly backed by a sound inference system. In this paper, we discuss the development, verification, and validation of an ontology in a health domain.
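To ground the terminology, here is a tiny stand-in for the kind of taxonomy and subsumption reasoning described above. A real ontology of this sort would use OWL and a description-logic reasoner; this sketch hard-codes an is-a hierarchy with invented concept names and checks transitive subsumption.

```python
# Minimal sketch of a disease taxonomy with a subsumption check, standing in
# for the description-logic reasoning the paper describes. Concept names are
# illustrative, not the authors' ontology.

is_a = {
    "influenza": "viral_disease",
    "viral_disease": "communicable_disease",
    "tuberculosis": "bacterial_disease",
    "bacterial_disease": "communicable_disease",
    "communicable_disease": "disease",
}

def subsumed_by(concept, ancestor):
    """True if `concept` is (transitively) a kind of `ancestor`."""
    while concept in is_a:
        concept = is_a[concept]
        if concept == ancestor:
            return True
    return False

print(subsumed_by("influenza", "communicable_disease"))  # True
print(subsumed_by("influenza", "bacterial_disease"))     # False
```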
The paper must have the following subheadings which is not include.docx (oreo10)
The paper must have the following subheadings, which are not included in the word count:
Introduction
Analysis
Rationale to support the response [1 and 2 separately]
Description of key job types
Conclusions
Week 11 Discussion 1
"The Future of Training" Please respond to the following:
From the first e-Activity, analyze the views of Cross and Jarche about the “Golden Age of Training” and its future. Then, assess the claims Miller makes about training in the article “Training is Not an Option.” Take a position on which views you agree with most. Provide a rationale to support your response.
From the second e-Activity, describe three key job types and competencies that professional organizations such as ISPI and ASTD claim that professionals in the field of organizational training and development should possess. Provide a rationale to support your response.
e-Activity
Read the article by Cross and Jarche titled “The Future of the Training Department” published in Training Magazine (June 2009). Then, read the article titled, “Training is Not an Option,” by Adrian Miller. Be prepared to discuss.
Search the Internet for a professional organization (e.g., ISPI, ASTD) and review the primary job types and job competencies listed. Be prepared to discuss.
Article: “The Future of the Training Department”
URL: https://www.polleverywhere.com/blog/the-future-of-the-training-department/
Article: “Training is Not an Option,”
URL: http://ezinearticles.com/?Training-is-Not-an-Option&id=157604
Post 1 AW
Referencing the Learning Resources for this module, choose any question in the research project list and answer it in relation to posthumanism. In other words treat posthumanism as a new technology or technological way of being.
Posthumanism is essentially the interlinking of humans and technology. This could range from artificial intelligence to a human that has prosthetics or technological enhancements fused into their bodies. But how did this term even come about? What is so wrong with humans and their ability to function that we need to incorporate such technology into our lives? What is the problem for which posthumanism is the solution?
The answer is everything. All aspects of our lives involve problems and solutions. The technology referred to as posthumanism has the ability to solve a vast majority of the problems humans encounter and create. Steven Poole, although strongly opposed to posthumanism, discusses a few of these problems, as well as new problems that could be created, in his article "Slaves to the algorithm". Referring first to a chess match between world champions, then to vehicle automation, crime algorithms, and psychotherapy applications, Poole illustrates the involvement posthumanism already has in our present day. Before he argues that humans are quickly rationing off our conscious thoughts and judgements, he recognizes the need for imp ...
This document summarizes the research of the Norms Evolving in Response to Dilemmas (NERD) research group regarding artificial ethics. It discusses three techniques used in their research: 1) Artificial Morality, which models moral principles as game-theoretic strategies; 2) Evolving Artificial Moral Ecologies, which uses genetic programming and agent-based modeling to generate and test diverse agent types; and 3) NERD, an experimental platform to test and refine ethical mechanisms using real-world problems. The research found that reciprocal cooperation is key to stabilizing cooperation and that mixed populations persist. NERD-I collected data on ethical decision making, which provided insights while avoiding framing effects. NERD-II proposes using ...
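The reciprocal-cooperation finding mentioned in the summary is classically demonstrated with tit-for-tat in the iterated prisoner's dilemma. The sketch below uses the standard payoff values (T=5, R=3, P=1, S=0) and invented pairings; it is an illustration of the general result, not the NERD group's actual simulation code.

```python
# Sketch of reciprocal cooperation via tit-for-tat in an iterated
# prisoner's dilemma, with standard payoffs T=5, R=3, P=1, S=0.

PAYOFF = {("C", "C"): (3, 3), ("C", "D"): (0, 5),
          ("D", "C"): (5, 0), ("D", "D"): (1, 1)}

def tit_for_tat(history):
    return "C" if not history else history[-1]   # copy opponent's last move

def always_defect(history):
    return "D"

def play(p1, p2, rounds=10):
    h1, h2, s1, s2 = [], [], 0, 0
    for _ in range(rounds):
        m1, m2 = p1(h2), p2(h1)   # each strategy sees the opponent's history
        r1, r2 = PAYOFF[(m1, m2)]
        s1, s2 = s1 + r1, s2 + r2
        h1.append(m1)
        h2.append(m2)
    return s1, s2

print(play(tit_for_tat, tit_for_tat))     # (30, 30): cooperation is stable
print(play(tit_for_tat, always_defect))   # (9, 14): exploited only once
```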
Artificial intelligence needs ideas from philosophy to build human-level intelligence. For a robot to have common sense and learn from experience, it needs a general worldview to organize facts, which raises many philosophical problems. Some philosophical approaches are compatible with designing intelligent systems, such as accepting both science and common sense knowledge, treating mind as a set of features rather than an all-or-nothing concept, and using approximate concepts. Philosophers could help AI by clarifying useful concepts like belief, causality, and counterfactuals.
This document discusses nudges, which are ways of influencing people's behavior through subtle changes to their choice environment, without limiting options or economic incentives. It defines key concepts like choice architecture and libertarian paternalism. The document explains that nudges work by leveraging cognitive shortcuts and biases, and that they can be designed around defaults, error prevention, feedback, mapping choices to expected outcomes, and structuring complex choices simply. It provides examples of how nudges have been effectively used in communication campaigns to increase understanding and compliance.
International Journal of Engineering Research and Applications (IJERA) is an open access online peer reviewed international journal that publishes research and review articles in the fields of Computer Science, Neural Networks, Electrical Engineering, Software Engineering, Information Technology, Mechanical Engineering, Chemical Engineering, Plastic Engineering, Food Technology, Textile Engineering, Nano Technology & science, Power Electronics, Electronics & Communication Engineering, Computational mathematics, Image processing, Civil Engineering, Structural Engineering, Environmental Engineering, VLSI Testing & Low Power VLSI Design etc.
This document discusses human rights issues related to artificial intelligence. It begins with definitions of key AI concepts like machine learning, deep learning, and algorithms. It then explains how AI can both help and potentially harm society. The document outlines how various human rights may be impacted by current and future applications of AI, such as privacy and non-discrimination. It concludes with recommendations for stakeholders to address human rights harms through approaches like data protection laws and increased research.
How governments and companies spy on you to manipulate your behavior | Privacy... (Matthijs Pontier)
How governments and companies spy on you to manipulate your behavior | Privacy symposium Usocia | Universiteit Utrecht | 2021-05-06
2. Free sharing of information, art, and culture. Evidence-based policy with a long-term vision. Encouraging self-determination without it coming at the expense of rights. Trust in citizens vs. distrust of 'power'. Enthusiastic about tech, but alert to its risks. Tech to empower people, not to oppress them. Basic principles of the PPNL. Matthijs Pontier, Ph.D., Privacy symposium Usocia, Universiteit Utrecht, 2021-05-06
3. Matthijs Pontier, Ph.D., Privacy symposium Usocia, Universiteit Utrecht, 2021-05-06
4. Amsterdam wants to collect and share data in order to: solve societal issues, improve public services, enforce the law more smartly, manage public space, make work processes more efficient, improve healthcare, and recognize patterns in order to influence behavior. "Amsterdam is happy to do business with the market in the process."
5. Covert privatization of public services? Who manages AI in the 'smart' city? Who manages the data we produce together? Who profits from the benefits? Who is stuck with the drawbacks?
6. Google designed privacy policies for neighborhoods: people would be targeted with products, and even targeted to influence their voting behavior. Privacy adviser Ann Cavoukian resigned: "I imagined us creating a smart city of privacy, as opposed to a smart city of surveillance." Toronto: "The evaluation process will determine which parts of the proposal, if any, may be pursued further." Google Sidewalks Toronto. Matthijs Pontier, Ph.D., Privacy symposium Usocia, Universiteit Utrecht, 2021-05-06
7. Privacy in 'smart' cities: https://piratenpartij.nl/zorgen-om-de-privacy-in-slimme-steden/
8. Nothing to hide? Privacy has a function in social relationships. May others also have nothing to hide? Snowden: "Saying I don't care about privacy because I have nothing to hide is like saying I don't care about free speech because I have nothing to say." Matthijs Pontier, Ph.D., Privacy symposium Usocia, Universiteit Utrecht, 2021-05-06
9. Matthijs Pontier, Ph.D., Privacy symposium Usocia, Universiteit Utrecht, 2021-05-06
10. SyRI https://www.youtube.com/watch?v=2GkXCzYdrBY • All kinds of data you share with the government are linked together. • Everyone is a suspect in advance and is labeled with a risk profile. • On that basis you are watched more closely, without knowing it. • https://bijvoorbaatverdacht.nl/ Matthijs Pontier, Ph.D., Privacy symposium Usocia, Universiteit Utrecht, 2021-05-06
11. SyRI in Rotterdam: 25,000 residents of Bloemhof and Hillesluis investigated for fraud • 50% error margin • 10% were investigated • 0 (!!) cases of fraud detected. The experiment was stopped after public outcry. Matthijs Pontier, Ph.D., Privacy symposium Usocia, Universiteit Utrecht, 2021-05-06
12. https://nos.nl/artikel/2366864-fraude-opsporen-of-gevaar-van-discriminatie-gemeenten-gebruiken-slimme-algoritmes The Dutch childcare benefits scandal (Toeslagenaffaire)
Piratenpartij presentation of its program and candidates for the 2021 Dutch parliamentary elections (Matthijs Pontier)
PiratenpartijTK2021
1. A vision with guts
2. ● Active in more than 60 countries ● Hundreds of seats at various levels of government ● Main opposition party in the Czech Republic and Iceland ● The first pan-European party ● Quadrupled in the EU since 2014. An international movement. Matthijs Pontier, Ph.D., Piratenpartij presentation, 2021 Dutch parliamentary elections. #StemPiraat #TK2021
3. ● Trust in people vs. distrust of power ● Enthusiastic about tech, but alert to its risks ● Tech to empower, not to oppress ● Free sharing of information, knowledge, and culture ● Evidence-based policy with guts and a long-term vision. Basic principles. Matthijs Pontier, Ph.D., Piratenpartij presentation, 2021 Dutch parliamentary elections. #StemPiraat #TK2021
4. Empowerment / decentralization of power • Civil rights • Democracy • Transparency • Freedom of information. Key points. Matthijs Pontier, Ph.D., Piratenpartij presentation, 2021 Dutch parliamentary elections. #StemPiraat #TK2021
5. Civil rights are the foundations of our movement: privacy, self-determination, freedom of expression, internet access, and equal treatment • Stop microtargeting • Delete Big Brother: take the dragnet out of the surveillance law • Transparency and audits for algorithms • Teeth for the regulator: strengthen the Dutch Data Protection Authority (AP). Civil rights. Matthijs Pontier, Ph.D., Piratenpartij presentation, 2021 Dutch parliamentary elections. #StemPiraat #TK2021
6. An open, transparent government is a basic condition for a healthy democracy • Stop the Rutte doctrine! Stream government deliberations • Transparency of accounts, lobbying, and tax deals • Also for semi-public bodies and advisory organs • Free, independent media • Protection for whistleblowers. Transparency. Matthijs Pontier, Ph.D., Piratenpartij presentation, 2021 Dutch parliamentary elections. #StemPiraat #TK2021
7. Freedom of information. Free access to information, knowledge, and culture is a precondition for social, technological, and economic development. Copyright and patent law create information monopolies that only benefit large companies; put them at the service of authors! Promote innovation: when you copy something, you double its value. That way medicines and treatment methods spread worldwide immediately. If the goal of a monopoly is to earn money, then reward with money. Invest in knowledge, rather than in the fence around the knowledge. Matthijs Pontier, Ph.D., Piratenpartij presentation, 2021 Dutch parliamentary elections. #StemPiraat #TK2021
8. Democracy. Co-deciding all year round, not just once every four years • a binding (preference) referendum and citizens' initiative, without turnout thresholds • e-democracy (such as our initiative in Amsterdam) • citizens' forums, citizens' assemblies, and citizens' summits • fair treatment for local political parties • testing laws against the Constitution • apolitical appointments in (semi-)government: stop cronyism. More than voting alone: democracy is about relationships within society, doing justice to what the majority wants while taking the minority into account. That is why the combination with civil rights and a strong rule of law is so important.
The document discusses how AI will shift the distribution of power in society. It notes that large tech companies like Google are trying to privatize public services through smart city projects, which raises issues around transparency, data ownership, and who benefits. The use of personal data for targeted advertising and influencing elections is also discussed. Overall, the document argues that AI and new technologies could exacerbate existing inequalities and concentrations of power if issues around data ownership, privacy, transparency and the potential for discrimination are not adequately addressed.
More Related Content
Similar to Toward machines that behave ethically better than humans do
The document discusses artificial intelligence and its capabilities compared to human abilities. It argues that AI cannot replace three key human abilities: empathy, leadership, and creativity. While AI can perform communication tasks, humans have advantages in emotional communication and understanding. The document also discusses how AI may replace some economic and management roles but cannot match advanced human skills like leading groups, counseling others, and innovating with new ideas. It reviews literature on defining thinking and the limits of machine capabilities.
Perspectives on Ethics of AI Computer Science∗ Benjamin .docxkarlhennesey
Perspectives on Ethics of AI: Computer Science∗
Benjamin Kuipers†
August 14, 2019
Abstract
AI is a collection of computational methods for studying human knowledge, learning, and behavior,
including by building agents able to know, learn, and behave. Ethics is a body of human knowledge, far
from completely understood, that helps agents (humans today, but perhaps eventually robots and other
AIs) decide how they and others should behave. The ethical issues raised by AI fall into two overlapping
groups.
First, potential deployments of AI raise ethical questions about the impacts they may have on human
well-being, just like other powerful tools or technologies such as nuclear power or genetic engineering.
Second, unlike other technologies, intelligent robots and other AIs have the potential to be considered as
members of our society. Since they will make their own decisions about the actions they take, it is
appropriate for humans to expect them to behave ethically. This requires AI research with the goal of
understanding the structure, content, and purpose of ethical knowledge, well enough to implement ethics
in artificial agents.
This chapter describes a computational view of the function of ethics in human society, and discusses its
application to three diverse examples.
∗Draft chapter for the Oxford Handbook of Ethics of AI, edited by Markus Dubber, Frank Pasquale, and Sunit Das, to appear,
2019.
†[email protected] Computer Science & Engineering, University of Michigan, Ann Arbor, Michigan 48109 USA
1 Why Is the Ethics of AI Important?
AI uses computational methods to study human knowledge, learning, and behavior, in part by
building agents able to know, learn, and behave. Ethics is a body of human knowledge that helps
agents (humans today, but perhaps eventually robots and other AIs) decide how they and others
should behave. The ethical issues raised by AI fall into two overlapping groups.
First, like other powerful tools or technologies (e.g., genetic engineering or nuclear power),
potential deployments of AI raise ethical questions about their impact on human well-being.
Second, unlike other technologies, intelligent robots (e.g., autonomous vehicles) and other AIs
(e.g., high-speed trading systems) make their own decisions about the actions they take, and thus
could be considered as members of our society. Humans should be able to expect them to behave
ethically. This requires AI research with the goal of understanding the function, structure, and
content of ethical knowledge well enough to implement ethics in artificial agents.
As the deployment of AI, machine learning, and intelligent robotics becomes increasingly
widespread, these problems become increasingly urgent.
2 What is the Function of Ethics?
“At the heart of ethics are two questions: (1) What should I do?, and (2) What sort of person
should I be?”1 Ethics consists of principles for deciding how to act ...
Perspectives on Ethics of AI Computer Science∗ Benjamin .docxssuser562afc1
Perspectives on Ethics of AI: Computer Science∗
Benjamin Kuipers†
August 14, 2019
Abstract
AI is a collection of computational methods for studying human knowledge, learning, and behavior,
including by building agents able to know, learn, and behave. Ethics is a body of human knowledge, far
from completely understood, that helps agents (humans today, but perhaps eventually robots and other
AIs) decide how they and others should behave. The ethical issues raised by AI fall into two overlapping
groups.
First, potential deployments of AI raise ethical questions about the impacts they may have on human
well-being, just like other powerful tools or technologies such as nuclear power or genetic engineering.
Second, unlike other technologies, intelligent robots and other AIs have the potential to be considered as
members of our society. Since they will make their own decisions about the actions they take, it is
appropriate for humans to expect them to behave ethically. This requires AI research with the goal of
understanding the structure, content, and purpose of ethical knowledge, well enough to implement ethics
in artificial agents.
This chapter describes a computational view of the function of ethics in human society, and discusses its
application to three diverse examples.
∗Draft chapter for the Oxford Handbook of Ethics of AI, edited by Markus Dubber, Frank Pasquale, and Sunit Das, to appear,
2019.
†[email protected] Computer Science & Engineering, University of Michigan, Ann Arbor, Michigan 48109 USA
1 Why Is the Ethics of AI Important?
AI uses computational methods to study human knowledge, learning, and behavior, in part by
building agents able to know, learn, and behave. Ethics is a body of human knowledge that helps
agents (humans today, but perhaps eventually robots and other AIs) decide how they and others
should behave. The ethical issues raised by AI fall into two overlapping groups.
First, like other powerful tools or technologies (e.g., genetic engineering or nuclear power),
potential deployments of AI raise ethical questions about their impact on human well-being.
Second, unlike other technologies, intelligent robots (e.g., autonomous vehicles) and other AIs
(e.g., high-speed trading systems) make their own decisions about the actions they take, and thus
could be considered as members of our society. Humans should be able to expect them to behave
ethically. This requires AI research with the goal of understanding the function, structure, and
content of ethical knowledge well enough to implement ethics in artificial agents.
As the deployment of AI, machine learning, and intelligent robotics becomes increasingly
widespread, these problems become increasingly urgent.
2 What is the Function of Ethics?
“At the heart of ethics are two questions: (1) What should I do?, and (2) What sort of person
should I be?”1 Ethics consists of principles for deciding how to act .
This document provides a summary of structural family theory, which examines the unspoken rules within families and how they affect the family's organization. It discusses key concepts of the theory like subsystems, boundaries, and rules. It also reviews literature applying structural family theory to divorced families, emphasizing the importance of clear parent-child roles and establishing a new family structure for the adolescent's well-being.
The Ethics of Machine Learning Algorithms Max Yousif
This document discusses the ethics of machine learning algorithms. It notes that algorithms can contain bias since they reflect the values of their developers and are designed with a purpose in mind. This can lead to unfair outcomes like discrimination in areas like advertising or pricing. The document also discusses how algorithms may take inappropriate actions based on inconclusive data, by extrapolating correlations without establishing causation. Overall, the document examines some of the ethical issues around algorithmic bias, discrimination, and actions based on uncertain data that machine learning applications need to address.
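As a toy illustration of the failure mode this summary describes, acting on a correlation without establishing causation, here is a small simulation; the data, feature names, and numbers are all synthetic assumptions:

```python
# A minimal sketch of the correlation-vs-causation failure described above.
# A hidden confounder (income) drives both a proxy feature (zip code) and
# the outcome; a rule keyed on the proxy then discriminates even though
# zip code causes nothing. All data are synthetic.

import random
random.seed(0)

def person():
    income = random.gauss(50, 15)            # hidden confounder
    zip_code = 1 if income > 55 else 0       # proxy correlated with income
    defaults = income < 35                    # outcome driven by income only
    return zip_code, defaults

data = [person() for _ in range(10_000)]

def rate(z):
    flagged = sum(d for zc, d in data if zc == z)
    total = max(1, sum(zc == z for zc, _ in data))
    return flagged / total

print(f"default rate, zip 0: {rate(0):.2%}, zip 1: {rate(1):.2%}")
# A pricing rule built on zip code alone would charge zip-0 customers more,
# acting on a correlation it never established as causal.
```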
This document provides an overview of artificial intelligence including definitions, concepts, and applications. It defines AI as simulating human intelligence through machine learning and problem solving. Key points include:
- AI systems are designed to pursue goals rationally, as humans do, by learning.
- Knowledge representation and organization is important for efficient searching and reasoning. Common methods include rules, frames, and ontologies.
- Knowledge-based systems combine a knowledge base with an inference engine to derive new understandings and solve complex problems. They are often used to replicate expert knowledge.
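A minimal sketch of the knowledge-base-plus-inference-engine pattern named in the last point, using forward chaining over if-then rules; the medical rules here are invented for illustration:

```python
# A minimal sketch of a knowledge-based system: facts plus if-then rules,
# with a forward-chaining loop that derives new facts until nothing changes.
# The rules themselves are illustrative only.

facts = {"fever", "cough"}
rules = [
    ({"fever", "cough"}, "possible_flu"),
    ({"possible_flu"}, "recommend_rest"),
]

changed = True
while changed:                      # forward chaining to a fixed point
    changed = False
    for conditions, conclusion in rules:
        if conditions <= facts and conclusion not in facts:
            facts.add(conclusion)
            changed = True

print(sorted(facts))
# -> ['cough', 'fever', 'possible_flu', 'recommend_rest']
```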
Virtue in Machine Ethics: An Approach Based on Evolutionary Computation Ioan Muntean
February 2015 (co-author: Don Howard, University of Notre Dame). Presented at the American Philosophical Association (APA Central), St. Louis, Missouri.
Debate on Artificial Intelligence in Justice, in the Democracy of the Future,...AJHSSR Journal
ABSTRACT: This study aims to debate and analyze the implementation of artificial intelligence (AI) in justice, in the democracy of the future, and how it can affect civil and criminal investigation. To do so, a database of indexed scientific papers and conference materials was searched to gather their findings. Artificial intelligence (AI) is a science for the development of intelligent machines; it has its roots in early philosophical studies of human nature and of the process of knowing the world, expanded by neurophysiologists and psychologists into a series of theories about the workings of the human brain and thought. A key stage in the development of the science of artificial intelligence was the development of the foundations of the mathematical theory of computation (the theory of algorithms) and the creation of computers (Anglin, 1995). "Artificial Intelligence" is a science with both theoretical and experimental parts. In practice, the problem of creating "Artificial Intelligence" lies at the intersection of computer technology on the one hand and neurophysiology and cognitive and behavioral psychology on the other. The philosophy of artificial intelligence serves as a theoretical basis, but only with the appearance of significant results will the theory acquire an independent meaning. Until then, the theory and practice of "Artificial Intelligence" must be distinguished from the mathematical, algorithmic, robotic, physiological, and other theoretical and experimental techniques that have an independent meaning.
KEYWORDS: Artificial Intelligence; Hybrid Smart Systems (HIS); Computer Machines; Robotics; Turing Test
Ethical by Design: Ethics Best Practices for Natural Language Processingantonellarose
Ethical considerations regarding NLP.
Jochen L. Leidner and Vassilis Plachouras
Thomson Reuters, Research & Development,
30 South Colonnade, London E14 5EP, United Kingdom.
{jochen.leidner,vassilis.plachouras}@thomsonreuters.com
Solved Discussion Paper Handout All Students Are RequirAngie Logan
Minisatellites are hypervariable regions of DNA defined by polymorphisms in the number of repeated nucleotide motifs ranging from 12-100 base pairs. They are found throughout eukaryotic genomes in both coding and non-coding regions. Variation in the number of repeats at minisatellite loci provides a tool for genetic analysis, DNA fingerprinting, and studying mutation rates due to strand slippage during DNA replication.
Domain ontology development for communicable diseasescsandit
This document discusses the development of a domain ontology for communicable diseases. The researchers developed an ontology with concepts like diseases, symptoms, and causes arranged in a taxonomy. They created over 600 concepts with properties and relations. The ontology development process included specification, conceptualization, creation of instances, and evaluation using a description logic reasoner to verify the concepts and relations were correctly represented. The ontology will be expanded to include more diseases and connections to related web content to provide information retrieval.
DOMAIN ONTOLOGY DEVELOPMENT FOR COMMUNICABLE DISEASEScscpconf
The Web has become the very first resource for searching for any kind of information. With the emergence of the semantic web, our search queries have started generating more informed results. Ontologies are at the core of any semantic web application. They help in the rapid development of distributed systems by providing information on the fly. This key feature of distributing and sharing information has made ontologies a new knowledge representation mechanism, one strongly backed by a sound inference system. In this paper, we discuss the development, verification, and validation of an ontology in a health domain.
The paper must have the following subheadings which is not include.docxoreo10
The paper must have the following subheadings, which are not included in the word count:
Introduction
Analysis
Rationale to support the response [1 and 2 separately]
Description of key job types
Conclusions
Week 11 Discussion 1
"The Future of Training" Please respond to the following:
From the first e-Activity, analyze the views of Cross and Jarche about the “Golden Age of Training” and its future. Then, assess the claims Miller makes about training in the article “Training is Not an Option.” Take a position on which views you agree with most. Provide a rationale to support your response.
From the second e-Activity, describe three key job types and competencies that professional organizations such as ISPI and ASTD claim that professionals in the field of organizational training and development should possess. Provide a rationale to support your response.
e-Activity
Read the article by Cross and Jarche titled “The Future of the Training Department” published in Training Magazine (June 2009). Then, read the article titled, “Training is Not an Option,” by Adrian Miller. Be prepared to discuss.
Search the Internet for a professional organization (e.g., ISPI, ASTD) and review the primary job types and job competencies listed. Be prepared to discuss.
Article: “The Future of the Training Department”
URL: https://www.polleverywhere.com/blog/the-future-of-the-training-department/
Article: “Training is Not an Option”
URL: http://ezinearticles.com/?Training-is-Not-an-Option&id=157604
Post 1 AW
Referencing the Learning Resources for this module, choose any question in the research project list and answer it in relation to posthumanism. In other words, treat posthumanism as a new technology or technological way of being.
Posthumanism is essentially the interlinking of humans and technology. This could range from artificial intelligence to a human who has prosthetics or technological enhancements fused into their body. But how did this term even come about? What is so wrong with humans and their ability to function that we need to incorporate such technology into our lives? What is the problem for which posthumanism is the solution?
The answer is everything. All aspects of our lives involve problems and solutions. The technology referred to as posthumanism has the ability to solve a vast majority of the problems humans encounter and create. Steven Poole, although a strong opponent of posthumanism, discusses a few of these problems, as well as new problems that could be created, in his article “Slaves to the algorithm”. Referring first to a chess match between world champions, then to vehicle automation, crime algorithms, and psychotherapy applications, Poole illustrates the involvement posthumanism already has in our present day. Before arguing that humans are quickly rationing off our conscious thoughts and judgements, he recognizes the need for imp ...
This document summarizes the research of the Norms Evolving in Response to Dilemmas (NERD) research group regarding artificial ethics. It discusses three techniques used in their research: 1) Artificial Morality which models moral principles as game theoretic strategies, 2) Evolving Artificial Moral Ecologies which uses genetic programming and agent-based modeling to generate and test diverse agent types, and 3) NERD which is an experimental platform to test and refine ethical mechanisms using real world problems. The research found that reciprocal cooperation is key to stabilizing cooperation and mixed populations persist. NERD-I collected data on ethical decision making which provided insights while avoiding framing effects. NERD-II proposes using
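The finding that reciprocal cooperation stabilizes cooperation can be reproduced with the classic iterated prisoner's dilemma; below is a minimal sketch with standard payoffs (an illustration of the game-theoretic idea, not the NERD platform itself):

```python
# A minimal sketch of "moral principles as game-theoretic strategies":
# an iterated prisoner's dilemma pairing tit-for-tat (reciprocal
# cooperation) against itself and against an unconditional defector.
# Standard payoffs: T=5, R=3, P=1, S=0.

PAYOFF = {("C", "C"): (3, 3), ("C", "D"): (0, 5),
          ("D", "C"): (5, 0), ("D", "D"): (1, 1)}

def tit_for_tat(history):            # cooperate first, then mirror opponent
    return "C" if not history else history[-1][1]

def always_defect(history):
    return "D"

def play(p1, p2, rounds=20):
    h1, h2, s1, s2 = [], [], 0, 0
    for _ in range(rounds):
        m1, m2 = p1(h1), p2(h2)
        r1, r2 = PAYOFF[(m1, m2)]
        s1, s2 = s1 + r1, s2 + r2
        h1.append((m1, m2)); h2.append((m2, m1))
    return s1, s2

print(play(tit_for_tat, tit_for_tat))     # (60, 60): cooperation is stable
print(play(tit_for_tat, always_defect))   # (19, 24): exploitation stays limited
```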
Artificial intelligence needs ideas from philosophy to build human-level intelligence. For a robot to have common sense and learn from experience, it needs a general worldview to organize facts, which raises many philosophical problems. Some philosophical approaches are compatible with designing intelligent systems, such as accepting both science and common sense knowledge, treating mind as a set of features rather than an all-or-nothing concept, and using approximate concepts. Philosophers could help AI by clarifying useful concepts like belief, causality, and counterfactuals.
This document discusses nudges, which are ways of influencing people's behavior through subtle changes to their choice environment, without limiting options or economic incentives. It defines key concepts like choice architecture and libertarian paternalism. The document explains that nudges work by leveraging cognitive shortcuts and biases, and that they can be designed around defaults, error prevention, feedback, mapping choices to expected outcomes, and structuring complex choices simply. It provides examples of how nudges have been effectively used in communication campaigns to increase understanding and compliance.
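As a rough illustration of the defaults lever described above, here is a minimal simulation in which the options are identical and only the preselected choice differs; the 80% inertia figure is an assumed number, not from the document:

```python
# A minimal sketch of the default-option nudge: agents who stick with the
# default with some fixed probability end up with very different enrollment
# rates depending only on which option is preselected. Numbers are assumed.

import random
random.seed(1)

def enrollment_rate(default_enrolled, inertia=0.8, n=100_000):
    enrolled = 0
    for _ in range(n):
        if random.random() < inertia:          # stick with the default
            enrolled += default_enrolled
        else:                                   # actively choose, 50/50 here
            enrolled += random.random() < 0.5
    return enrolled / n

print(f"opt-in  default: {enrollment_rate(False):.0%}")   # ~10%
print(f"opt-out default: {enrollment_rate(True):.0%}")    # ~90%
```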
International Journal of Engineering Research and Applications (IJERA) is an open access online peer reviewed international journal that publishes research and review articles in the fields of Computer Science, Neural Networks, Electrical Engineering, Software Engineering, Information Technology, Mechanical Engineering, Chemical Engineering, Plastic Engineering, Food Technology, Textile Engineering, Nano Technology & science, Power Electronics, Electronics & Communication Engineering, Computational mathematics, Image processing, Civil Engineering, Structural Engineering, Environmental Engineering, VLSI Testing & Low Power VLSI Design etc.
This document discusses human rights issues related to artificial intelligence. It begins with definitions of key AI concepts like machine learning, deep learning, and algorithms. It then explains how AI can both help and potentially harm society. The document outlines how various human rights may be impacted by current and future applications of AI, such as privacy and non-discrimination. It concludes with recommendations for stakeholders to address human rights harms through approaches like data protection laws and increased research.
Similar to Toward machines that behave ethically better than humans do (19)
How governments and companies spy on you to manipulate your behavior | Privacy...Matthijs Pontier
How governments and companies spy on you to manipulate your behavior | Privacy symposium Usocia | Universiteit Utrecht | 2021-05-06
2. Free sharing of information, art and culture. Evidence-based policy with a long-term vision. Stimulating self-determination without it coming at the expense of rights. Trusting citizens vs distrusting 'power'. Enthusiastic about tech, but alert to risks. Tech to empower people, not to repress. Basic principles PPNL. Matthijs Pontier, Ph.D., Privacy symposium Usocia, Universiteit Utrecht 06-05-2021
3. Matthijs Pontier, Ph.D., Privacy symposium Usocia, Universiteit Utrecht 06-05-2021
4. Amsterdam wants to collect and share data in order to: – Solve societal problems – Improve services – Enforce the law more smartly – Manage public space – Make work processes more efficient – Improve healthcare – Recognize patterns to influence behavior. "In doing so, Amsterdam is happy to do business with the market"
5. Disguised privatization of public services? Who controls AI in the 'smart' city? Who controls the data we produce together? Who profits from the benefits? Who is stuck with the downsides?
6. Google designed privacy policies for neighborhoods: - People would be targeted with products - Even targeted to influence voting behavior. Privacy advisor Ann Cavoukian resigned: "I imagined us creating a smart city of privacy, as opposed to a smart city of surveillance". Toronto: "The evaluation process will determine which parts of the proposal, if any, may be pursued further." Google Sidewalks Toronto. Matthijs Pontier, Ph.D., Privacy symposium Usocia, Universiteit Utrecht 06-05-2021
7. Privacy in 'smart' cities https://piratenpartij.nl/zorgen-om-de-privacy-in-slimme-steden/
8. Nothing to hide? It has a function in social relationships. May others then also have nothing to hide? Snowden: "I don't care about privacy, 'because I have nothing to hide', is like I don't care about free speech, because I have nothing to say". Matthijs Pontier, Ph.D., Privacy symposium Usocia, Universiteit Utrecht 06-05-2021
9. Matthijs Pontier, Ph.D., Privacy symposium Usocia, Universiteit Utrecht 06-05-2021
10. 15-12-16 SyRI https://www.youtube.com/watch?v=2GkXCzYdrBY • All kinds of data you share with the government are tied together. • Everyone is a suspect in advance → labeled with a risk profile. • On that basis you are watched more closely, without knowing it. • https://bijvoorbaatverdacht.nl/ Matthijs Pontier, Ph.D., Privacy symposium Usocia, Universiteit Utrecht 06-05-2021
11. 15-12-16 SyRI in Rotterdam: 25,000 residents of Bloemhof and Hillesluis screened for fraud • 50% error margin • 10% were investigated • 0(!!) fraud cases found. The experiment was stopped after public outcry. Matthijs Pontier, Ph.D., Privacy symposium Usocia, Universiteit Utrecht 06-05-2021
12. https://nos.nl/artikel/2366864-fraude-opsporen-of-gevaar-van-discriminatie-gemeenten-gebruiken-slimme-algoritmes The childcare benefits scandal (Toeslagenaffaire)
Pirate Party presentation of its program and candidates for the 2021 Dutch general election (Tweede Kamerverkiezingen 2021)Matthijs Pontier
PiratenpartijTK2021
1. A vision with guts
2. ● Active in more than 60 countries ● Hundreds of seats at various levels ● Leading opposition party in the Czech Republic and Iceland ● First pan-European party ● Quadrupled in the EU since 2014. An international movement. Matthijs Pontier, Ph.D., Pirate Party presentation, 2021 Dutch general election. #StemPiraat #TK2021
3. ● Trusting people vs distrusting power ● Enthusiastic about tech, but alert to risks ● Tech to empower, not to repress ● Free sharing of information, knowledge and culture ● Evidence-based policy with guts and a long-term vision. Matthijs Pontier, Ph.D., Pirate Party presentation, 2021 Dutch general election. #StemPiraat #TK2021 Basic principles
4. Empowerment / Decentralization of power • Civil rights • Democracy • Transparency • Freedom of information. Core points. Matthijs Pontier, Ph.D., Pirate Party presentation, 2021 Dutch general election. #StemPiraat #TK2021
5. Civil rights are the foundations of our movement: privacy, self-determination, free expression, internet access, equal treatment • Stop microtargeting • Delete Big Brother: take the dragnet out of the dragnet law • Transparency and audits for algorithms • Teeth for the regulator: strengthen the Dutch DPA (AP). Civil rights. Matthijs Pontier, Ph.D., Pirate Party presentation, 2021 Dutch general election. #StemPiraat #TK2021
6. An open, transparent government is a basic condition for a good democracy • Stop the Rutte doctrine! Stream the deliberations • Transparency of accounts, lobbying, tax rulings • Also for semi-government bodies and advisory councils • Free, independent media • Protection for whistleblowers. Transparency. Matthijs Pontier, Ph.D., Pirate Party presentation, 2021 Dutch general election. #StemPiraat #TK2021
7. Freedom of information. Free access to information, knowledge and culture is a condition for social, technological and economic development. Copyright and patent law create information monopolies that benefit only large companies. Put them at the service of authors! Promote innovation: when you copy something, you double its value! That way medicines and treatment methods spread worldwide immediately. If the goal of the monopoly is to make money, then reward with money. Invest in knowledge, instead of in the fence around the knowledge. Matthijs Pontier, Ph.D., Pirate Party presentation, 2021 Dutch general election. #StemPiraat #TK2021
8. Democracy. Deciding along not once every four years but all year round • binding (preference) referendums and citizens' initiatives, without turnout thresholds • e-democracy (such as our initiative in Amsterdam) • citizens' forums / citizens' assemblies / citizens' summits • fair treatment for local political parties • reviewing laws against the Constitution • apolitical appointments in (semi-)government: stop cronyism. More than voting alone: democracy is about relationships in society, doing justice to what the majority wants while taking the minority into account. That is why the combination with civil rights and a solid rule of law is so important
The document discusses how AI will shift the distribution of power in society. It notes that large tech companies like Google are trying to privatize public services through smart city projects, which raises issues around transparency, data ownership, and who benefits. The use of personal data for targeted advertising and influencing elections is also discussed. Overall, the document argues that AI and new technologies could exacerbate existing inequalities and concentrations of power if issues around data ownership, privacy, transparency and the potential for discrimination are not adequately addressed.
Who will be in charge of our big data? | 2019 10-29 HvAMatthijs Pontier
Who will be in charge of our big data? | 2019 10-29 HvA
Free sharing of information, art and culture. Evidence-based policy with a long-term vision. Stimulating self-determination without it coming at the expense of rights. Trusting citizens vs distrusting 'power'. Enthusiastic about tech, but alert to risks. Tech to empower people, not to repress. Basic principles PPNL. Matthijs Pontier, Ph.D., Hogeschool van Amsterdam, 29-10-2019, Who will be in charge of our big data?
3. Matthijs Pontier, Ph.D., Hogeschool van Amsterdam, 29-10-2019, Who will be in charge of our big data?
4. Matthijs Pontier, Ph.D., Hogeschool van Amsterdam, 29-10-2019, Who will be in charge of our big data?
5. Amsterdam wants to collect and share data in order to: – Solve societal problems – Improve services – Enforce the law more smartly – Manage public space – Make work processes more efficient – Improve healthcare – Recognize patterns to influence behavior. "In doing so, Amsterdam is happy to do business with the market"
6. Disguised privatization of public services? Who controls AI in the 'smart' city? Who controls the data we produce together? Who profits from the benefits? Who is stuck with the downsides?
7. Nothing to hide? It has a function in social relationships. May others then also have nothing to hide? Snowden: "I don't care about privacy, 'because I have nothing to hide', is like I don't care about free speech, because I have nothing to say". Matthijs Pontier, Ph.D., Hogeschool van Amsterdam, 29-10-2019, Who will be in charge of our big data?
8. Mass surveillance: spying on tens of thousands to catch 1 target. Sharing info with dubious regimes. Deliberately keeping devices leaky in order to hack us. Matthijs Pontier, Ph.D., Hogeschool van Amsterdam, 29-10-2019, Who will be in charge of our big data?
9. Matthijs Pontier, Ph.D., Hogeschool van Amsterdam, 29-10-2019, Who will be in charge of our big data?
10. 15-12-16 SyRI https://www.youtube.com/watch?v=2GkXCzYdrBY • All kinds of data you share with the government are tied together. • Everyone is a suspect in advance -> labeled with a risk profile. • On that basis you are watched more closely, without knowing it. • https://bijvoorbaatverdacht.nl/ Matthijs Pontier, Ph.D., Hogeschool van Amsterdam, 29-10-2019, Who will be in charge of our big data?
11. 15-12-16 SyRI in Rotterdam: 25,000 residents of Bloemhof and Hillesluis screened for fraud • 50% error margin • 10% were investigated • 0(!!) fraud cases found. The experiment was stopped after public outcry. Matthijs Pontier, Ph.D., HvA, 29-10-2019, Who will be in charge of our big data?
12. Ron Kowsoleea
13. Restricting privacy for the sake of security? 1. Not effective 2. Leads to an unhealthy power balance 3. Privacy is actually a precondition for security 4. Good alternatives exist
14. The government is not always trustworthy. The government leaves data lying around. The WRR: the government is unreliable with data; information gets manipulated. The government, too, can behave criminally
15. Function Creep
Who controls our smart cities - CuriousU Summer School - Twente University - ...Matthijs Pontier
Who controls our smart cities - CuriousU Summer School - Twente University - 2019-08-16
1. Who controls our smart cities? Matthijs Pontier, Ph.D.
2. ⚫ Free sharing of information, art and culture ⚫ Evidence-based policy with long-term vision ⚫ Stimulate self-determination, but keep rights ⚫ Trust civilians vs Distrust ‘power' ⚫ Enthusiastic about tech, but alert on risks ⚫ Tech to empower people; not to repress Basic principles PPNL Matthijs Pontier, Ph.D., CuriousU Summer School,Twente University, 16-08-2019,Who controls our smart cities?
3. Matthijs Pontier, Ph.D., CuriousU Summer School,Twente University, 16-08-2019,Who controls our smart cities?
4. Matthijs Pontier, Ph.D., CuriousU Summer School,Twente University, 16-08-2019,Who controls our smart cities?
5. Matthijs Pontier, Ph.D., CuriousU Summer School,Twente University, 16-08-2019,Who controls our smart cities?
6. Is 'smart city' privatizing public services? Who controls AI in the ‘smart’ city? Who controls the data that we produce together? Who profits from the benefits? Who has to deal with the cons? Matthijs Pontier, Ph.D., CuriousU Summer School,Twente University, 16-08-2019,Who controls our smart cities?
7. Amsterdam wants to collect and share data, to: – Solve societal problems – Improve services – Smarter law enforcement – Manage public space – Improve working processes – Improve healthcare – Recognize patterns to influence behavior “Hereby, Amsterdam is happy to do business with the market”Matthijs Pontier, Ph.D., CuriousU Summer School,Twente University, 16-08-2019,Who controls our smart cities?
8. Cars that free your time to do other things
9. Local vs Centralized data Matthijs Pontier, Ph.D., CuriousU Summer School,Twente University, 16-08-2019,Who controls our smart cities?
10. Who will control ‘smart’ cars? Matthijs Pontier, Ph.D., CuriousU Summer School,Twente University, 16-08-2019,Who controls our smart cities?
11. Monopolization Matthijs Pontier, Ph.D., CuriousU Summer School,Twente University, 16-08-2019,Who controls our smart cities?
12. The bigger the company The bigger the power center The bigger the chance the company will misuse this power for its own benefit.. Matthijs Pontier, Ph.D., CuriousU Summer School,Twente University, 16-08-2019,Who controls our smart cities?
13. Corporate surveillance: ‘Smart TV’ Matthijs Pontier, Ph.D., CuriousU Summer School,Twente University, 16-08-2019,Who controls our smart cities?
14. If technology thinks for you, do the programmers then tell you what to think?
15. Smart TV hackers are filming couples having sex on their sofas and putting it on porn sites! Matthijs Pontier, Ph.D., CuriousU Summer School, Twente University, 16-08-2019, Who controls our smart cities?
16. Matthijs Pontier, Ph.D., CuriousU Summer School,Twente University, 16-08-2019,Who controls our smart cities?
17. Who actually owns your 'smart home'? Hackers? Viruses? Terrorists? The companies that actually own your devices? The AI?
How we can use technology to increase our freedom - Psychedelic Society of th...Matthijs Pontier
How we can use technology to increase our freedom - Psychedelic Society of the Netherlands
1. How we can use tech to increase our Freedom and create a Better Future, and outsmart governments and companies who want the opposite. Matthijs Pontier, Ph.D., Candidate #2 Piratenpartij Amsterdam, Vice-president ENCOD
2. Contents: • Mass-surveillance (sleepwet) • Privacy protection / Safe communication • Platform capitalism and the harms of monopolization of tech • Cooperative alternatives • Darkweb marketplaces • E-Democracy • Universal Basic Income Contents Matthijs Pontier Ph.D, Psychedelic Society of the Netherlands, 25-02-2018, How we can use tech to increase our freedom
3. Free sharing of information, art and culture Evidence-based policy with long-term vision Stimulate self-determination, but keep rights Trust civilians vs Distrust ‘power' Enthusiastic about tech, but alert on risks Tech to empower people; not to repress Basic principles PPNL Matthijs Pontier Ph.D, Psychedelic Society of the Netherlands, 25-02-2018, How we can use tech to increase our freedom
4. Why do they call the WIV dragnet surveillance?
5. How do you find a needle in a haystack?
6. By making the haystack bigger? More data leads to less safety
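The haystack argument on these slides is, at bottom, a base-rate calculation: when real targets are rare, even an accurate dragnet produces overwhelmingly false alarms. A minimal worked example, with all numbers chosen for illustration:

```python
# A minimal worked example of the false-positive paradox behind the
# "bigger haystack" slides. All numbers are illustrative assumptions.

population = 17_000_000      # roughly the Netherlands
true_targets = 1_000
sensitivity = 0.99           # flags 99% of real targets
false_positive_rate = 0.01   # wrongly flags 1% of innocents

flagged_real = sensitivity * true_targets
flagged_innocent = false_positive_rate * (population - true_targets)
precision = flagged_real / (flagged_real + flagged_innocent)

print(f"{flagged_innocent:,.0f} innocents flagged per {flagged_real:.0f} real targets")
print(f"precision: {precision:.2%}")   # under 1%: almost every flag is wrong
```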
7. Matthijs Pontier Ph.D, Psychedelic Society of the Netherlands, 25-02-2018, How we can use tech to increase our freedom
8. Matthijs Pontier Ph.D, Psychedelic Society of the Netherlands, 25-02-2018, How we can use tech to increase our freedom
9. Matthijs Pontier Ph.D, Psychedelic Society of the Netherlands, 25-02-2018, How we can use tech to increase our freedom
10. Matthijs Pontier Ph.D, Psychedelic Society of the Netherlands, 25-02-2018, How we can use tech to increase our freedom
11. Restricting privacy for the sake of security? 1. Not effective 2. Leads to an unhealthy power balance 3. Privacy is actually a precondition for security 4. Good alternatives exist
12. The government won’t protect your privacy Matthijs Pontier Ph.D, Psychedelic Society of the Netherlands, 25-02-2018, How we can use tech to increase our freedom
13. Secret services are keeping our devices unsafe because they want to hack us. Dr. Matthijs Pontier, Don't let yourself be dragged along. Hacking agencies: keeping us all unsafe
14. What’s wrong with the WIV? Matthijs Pontier Ph.D, Psychedelic Society of the Netherlands, 25-02-2018, How we can use tech to increase our freedom
15. What’s wrong with the WIV?
17. Tech companies (like cloud services) will move out for safe harbours Matthijs Pontier Ph.D, Psychedelic Society of the Netherlands, 25-02-2018, How we can use tech to increase our freedom Tech economy
18. Encrypt your e-mail: Protonmail. Encrypt your phone: Signal. Adblockers and DNT: uBlock / DNT. TOR. VPN
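All of the tools on this slide rest on the same principle: without the key, intercepted data is unreadable. A toy demonstration of that principle with the third-party Python `cryptography` package (an illustration only; it is not how Protonmail, Signal, or TOR are implemented):

```python
# A toy demonstration of symmetric encryption with the third-party
# `cryptography` package (pip install cryptography). Real tools use far
# more elaborate end-to-end protocols; this only illustrates that without
# the key, the ciphertext reveals nothing.

from cryptography.fernet import Fernet

key = Fernet.generate_key()           # stays with the communicating parties
cipher = Fernet(key)

token = cipher.encrypt(b"meet at the usual place")
print(token)                          # unreadable without the key
print(cipher.decrypt(token))          # b'meet at the usual place'
```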
An informative Pirate Party presentation on the Dutch Intelligence and Security Services Act (Wet op de Inlichtingen en Veiligheidsdiensten, WIV), also known as the Sleepwet (dragnet law)
1. Are you letting yourself be dragged along? Dr. Matthijs Pontier, Candidate #2 Piratenpartij Amsterdam
2. Why do they actually call the 'WIV' the 'Sleepwet' (dragnet law)?
3. What I will talk about: • What came before • What is good about the WIV .. and what is wrong with it • What we can do • The referendum • The lawsuit
7. The WIV
10. Final responsibility for proposed oversight remains with the minister. What is bad about it?
11. How do you find a needle in a haystack?
12. With a bigger haystack? More data is not more security
13. • Everyone has something to hide • It has a function in social relationships • If you have nothing to hide, may others then also have nothing to hide? • Snowden: "Saying 'I don't find privacy important, because I have nothing to hide anyway' is like saying 'I don't find freedom of expression important, because I have nothing to say anyway'"
14. Restricting privacy for the sake of security? 1. Not effective 2. Leads to an unhealthy power balance 3. Privacy is actually a precondition for security 4. Good alternatives exist
15. Dr. Matthijs Pontier, Are you letting yourself be dragged along?
16. The government is not always trustworthy • The government leaves data lying around • The WRR: the government is unreliable with data; information gets manipulated • The government, too, can behave criminally
17. The Tax Administration's 'Broedkamer' • SyRI, System Risk Indication (SUWI Act), for fraud prevention. The Council of State said of it: 'There is hardly a piece of personal data imaginable that would not qualify for processing. The enumeration seems intended not to limit, but to secure as much leeway as possible.'
18. DNA database • The AIVD/MIVD will collect DNA material themselves • From the 'DNA for convicts' database, but also via hacking • The database is secret; you cannot check whether you or your family are in it
19. • Tech companies (such as cloud services) will flee to a safe haven. Business climate for tech companies
20. Our devices are deliberately kept insecure in order to be able to ...
ENCOD annual plan for 2018
1. ENCOD's plans for 2018. Matthijs Pontier, Ph.D., Vice-president ENCOD
2. ENCOD's goals: A fair and efficient drug policy. Protecting the rights of producers and consumers. Providing information about safe cultivation / safe use. Matthijs Pontier, Ph.D., ENCOD's plans for 2018, 14-12-2017
3. ENCOD now: ~150 members, and the number is steadily growing. 7 people on the Executive Committee, (almost) entirely volunteer: Mauro Picavet, President, Netherlands; Matthijs Pontier, Vice-president, Netherlands; Gabrielle Kozar, Treasurer, Austria; Nico Vlaming, Secretary, Netherlands; Ana Afuera, Spain; Maja Kohek, Slovenia; Enrico Fletzer, Italy. Matthijs Pontier, Ph.D., ENCOD's plans for 2018, 14-12-2017
4. Current ENCOD activities: Rebuilt and reformed the organization. Visited fairs and events. Picked up existing contacts again. Contact with politicians to improve policy. Applied for ECOSOC status for political campaigns. Renewing the website. Distributed information about drug-policy developments via the Newsletter, Bulletin, Facebook, Twitter. Networking meeting. Matthijs Pontier, Ph.D., ENCOD's plans for 2018, 14-12-2017
5. Holarchy: A horizontal organization, but with structure. Defining goals, roles and tasks. With a role / task come Responsibility, Freedom and Trust. Meritocracy. The organization's structure is fully transparent. This makes direct communication possible. Matthijs Pontier, Ph.D., ENCOD's plans for 2018, 14-12-2017
6. Holarchy. Matthijs Pontier, Ph.D., ENCOD's plans for 2018, 14-12-2017
7. ENCOD, ENCOD NL: Treasurer, EC Organizer, Secretary, Translator, Webmaster, Designer, Communicator, CSC contact. Matthijs Pontier, Ph.D., ENCOD's plans for 2018, 14-12-2017
8. Holarchy. Matthijs Pontier, Ph.D., ENCOD's plans for 2018, 14-12-2017
9. ENCOD in 2018. Campaigns: Cannabis Social Clubs: good-practices guide and consultancy; Freedom to Farm; Seeds for Drug Peace; Global Marijuana March. Politics: Lobbying the EU / UN / member states; A database of the political and drug-policy situation per country; An info pack for local governments / local politics. Info: An archive of earlier presentations / lectures; A lawyers database. Matthijs Pontier, Ph.D., ENCOD's plans for 2018, 14-12-2017
10. Cannabis Social Clubs: Good Practices Guide. Develop a good-practices guide, plus consultancy to help with starting up / help with problems.
CATEGORY | AMOUNT IN €
Web page creation | 2,000
Print materials (banners, flyers, stickers, etc.) | 1,000 / year
Online advertising | 100 / month
Coordination + webadmin + communication worker fees (0.2 FTE total) | 400 / month
Consultancy fees (0.4 FTE) | 800 / month
TOTAL FIRST YEAR | 18,600
TOTAL NEXT YEARS | 16,600
Matthijs Pontier, Ph.D., ENCOD's plans for 2018, 14-12-2017
11. Freedom to Farm: a campaign for the right to self-determination. Free and Safe Cultivation
Who gets control over our data - Inholland Diemen - 2017 11-23 Matthijs Pontier
1. Who gets control over our Big Data? Matthijs Pontier, Ph.D.
2. Free sharing of information, art and culture. Evidence-based policy with a long-term vision. Stimulating self-determination without it coming at the expense of rights. Trust in people's goodness vs distrust of power structures. Enthusiastic about the possibilities of tech, but alert to risks. Tech to empower people, not to repress. Basic principles PPNL. Matthijs Pontier, Ph.D, 24-11-2017, Inholland Diemen, Who gets control over our Big Data?
3. Monopolization. Matthijs Pontier, Ph.D, 24-11-2017, Inholland Diemen, Who gets control over our Big Data?
4. The bigger the company, the bigger the power center, and the bigger the chance that the company will start using this power in the wrong way. Matthijs Pontier, Ph.D, 24-11-2017, Inholland Diemen, Who gets control over our Big Data?
5. Corporate surveillance: 'Smart TV'. Matthijs Pontier, Ph.D, 24-11-2017, Inholland Diemen, Who gets control over our Big Data?
6. Smart TV hackers film people having sex on their couch and post it on porn sites! Matthijs Pontier, Ph.D, 24-11-2017, Inholland Diemen, Who gets control over our Big Data?
7. Matthijs Pontier, Ph.D, 24-11-2017, Inholland Diemen, Who gets control over our Big Data?
8. Who will be the boss in your 'smart' home? Hackers? Viruses? Terrorists? The companies that in practice actually own your device? The AI that tells you what to do? Or will they develop a will of their own? Matthijs Pontier, Ph.D, 24-11-2017, Inholland Diemen, Who gets control over our Big Data?
9. Disguised privatization of public services? Who controls AI in the 'smart' city? Who controls the data we produce together? Who profits from the benefits? Who is stuck with the downsides? Matthijs Pontier, Ph.D, 24-11-2017, Inholland Diemen, Who gets control over our Big Data?
10. Matthijs Pontier, Ph.D, 24-11-2017, Inholland Diemen, Who gets control over our Big Data?
11. Responsible Digital Cities. Residents decide what happens with their data: Privacy = Ownership. Residents decide how the digital city develops: Democracy. The collected data is a common good: Commons. Openness about data collection and processing: Transparency. Everyone can take part: Inclusion. The human dimension is central in everything we do: Empathy. Matthijs Pontier, Ph.D, 24-11-2017, Inholland Diemen, Who gets control over our Big Data?
12. Matthijs Pontier, Ph.D, 24-11-2017, Inholland Diemen, Who gets control over our Big Data?
13. Restricting privacy for the sake of security? 1. Not effective 2. Leads to an unhealthy power balance 3. Privacy is actually a precondition for security 4. Good alternatives exist
14. Nothing to hide? Everyone has something to hide. It has a function in social relationships. If you have nothing to hide, may others then also have nothing to hide? Snowden: "I don't find privacy important, because I have nothing to hide anyway" "I don't find freedom of expression
How we can use technology to increase our freedom - and outsmart governments ...Matthijs Pontier
Psy-Fi 2017: how to use tech to improve freedom
How we can use technology to increase our freedom - and outsmart governments and companies who want the opposite - Lecture at Psy-Fi 2017
1. How we can use technology to increase our freedom and outsmart governments and companies who want the opposite Dr. Matthijs Pontier
2. Contents: • Lessons from tech dystopias • Self-driving cars as an example • Platform capitalism and the harms of monopolization of tech • Privacy protection / Safe communication • Machine ethics • Darkweb marketplaces • Universal Basic Income • E-Democracy. Matthijs Pontier, Psy-Fi 2017, 16-08-2017, How we can use tech to increase our freedom
3. Free sharing of information, art and culture Evidence-based policy with long-term vision Stimulate self-determination, but keep rights Trust civilians vs Distrust ‘power' Enthusiastic about tech, but alert on risks Tech to empower people; not to repress Basic principles PPNL Matthijs Pontier, Psy-Fi 2017, 16-08-2017, How we can use tech to increase our freedom
4. Using tech for good: Improving autonomy Matthijs Pontier, Psy-Fi 2017, 16-08-2017, How we can use tech to increase our freedom
5. Drinking
6. Road safety ;) Matthijs Pontier, Psy-Fi 2017, 16-08-2017, How we can use tech to increase our freedom
7. Matthijs Pontier, Psy-Fi 2017, 16-08-2017, How we can use tech to increase our freedomMatthijs Pontier, Psy-Fi 2017, 16-08-2017, How we can use tech to increase our freedom
8. More room for people and plants
9. Matthijs Pontier, Psy-Fi 2017, 16-08-2017, How we can use tech to increase our freedom
10. Matthijs Pontier, Psy-Fi 2017, 16-08-2017, How we can use tech to increase our freedom
11. Who will control ‘smart’ cars? Matthijs Pontier, Psy-Fi 2017, 16-08-2017, How we can use tech to increase our freedom
12. Who will control ‘smart’ cars? Hackers? Viruses? Terrorists? Car company that calls them back if you don’t pay? The AI that tells you what to do? Or will they control themselves? Matthijs Pontier, Psy-Fi 2017, 16-08-2017, How we can use tech to increase our freedom
13. Local vs Centralized data Matthijs Pontier, Psy-Fi 2017, 16-08-2017, How we can use tech to increase our freedom
14. Monopolization Matthijs Pontier, Psy-Fi 2017, 16-08-2017, How we can use tech to increase our freedom
15. The bigger the company -> Bigger power concentration -> Bigger chance that this company will use its power in malicious ways Matthijs Pontier, Psy-Fi 2017, 16-08-2017, How we can use tech to increase our freedom
16. Need to stop corporate surveillance Regulate Big Business vs More freedom for small startups
17. Safe communication is necessary for progress. We won't know what will be prohibited, when, and where: self-censorship. If we had had today's control systems in the past, would homosexuality still be illegal? Matthijs Pontier, Psy-Fi 2017, 16-08-2017, Using tech to increase freedom
18. The government won’t protect your privacy
Whoever controls the data has the power - Symposium sticky / guest lecture UniC, U...Matthijs Pontier
1984, amsterdam, amsterdam-west, anpr, camera surveillance, democracy, function creep, city council, huxley, marcouch, orwell, panopticon, pirate party, politics, ppnl, preventive stop-and-search, privacy, surveillance, security, waterbed effect
1. The importance of privacy when you have nothing to hide. Dr. Matthijs Pontier
2. Restricting privacy for the sake of security? 1. Not effective 2. Leads to an unhealthy power balance 3. Privacy is actually a precondition for security 4. Good alternatives exist
3. Nothing to hide? • Everyone has something to hide • It has a function in social relationships • If you have nothing to hide, may others then also have nothing to hide? • Snowden: "Saying 'I don't find privacy important, because I have nothing to hide anyway' is like saying 'I don't find freedom of expression important, because I have nothing to say anyway'"
4. Ron Kowsoleea
5. A skewed power balance between citizen and government – Travel behavior and function creep – Spying on telephone and computer use – The Internet of Things. Dr. Matthijs Pontier, UUMUN TEDx, Utrecht, 30-9-2016
6. A free society is needed for progress • We do not know what will be considered wrong, where, and when: self-censorship • With today's control systems, homosexuality would still have been illegal. Dr. Matthijs Pontier, UUMUN TEDx
7. The government is not always trustworthy • The government leaves data lying around • The WRR: the government is unreliable with data; information gets manipulated • The government, too, can behave criminally
8. Dragnet method + profiling • Discrimination without knowing on what grounds • Hunting for 'deviant behavior'. Dr. Matthijs Pontier, UUMUN TEDx, Utrecht, 30-9-2016
9. Victims of profiling
13. Victims of profiling: Daniel
14. Corporate surveillance: ‘Smart TV’ Dr. Matthijs Pontier, UUMUN TEDx, Utrecht, 30-9-2016
15. Smart TV hackers are filming couples having sex on their sofas and putting it on porn sites! Dr. Matthijs Pontier, UUMUN TEDx, Utrecht, 30-9-2016
16. Smart technology • If technology thinks for you, do the programmers then decide what you think? Dr. Matthijs Pontier, Piratenpartij – HvA, HBO Rechten, 2-9-2016
17. When algorithms control us... Dr. Matthijs Pontier, UUMUN TEDx, Utrecht, 30-9-2016
19. Dr. Matthijs Pontier, UUMUN TEDx, Utrecht, 30-9-2016
20. • Encrypt e-mail: Protonmail • Encrypt phone / messaging: Signal • Use adblockers and DNT: uBlock / DNT • Taking something off the Internet is not a thing. Privacy: Solutions
21. Safe and free • Privacy is actually a precondition for security • Restore the citizen-government power balance • Choose effective measures – Targeted surveillance
22. Thanks! • @Matthijs85 • Matthijs Pontier • matthijs@piratenpartij.nl • http://www.piratenpartij.nl/ • @Piratenpartij •
How the Pirate Party wants to protect the rule of law, and why you may share this presentation...Matthijs Pontier
How the Pirate Party wants to protect the rule of law, and why you may share this presentation with everyone - A presentation for Digi Juridica, VU University Amsterdam
How we can make sure everyone benefits from robotization | ConferentieSo...Matthijs Pontier
How we can make sure everyone benefits from robotization. Dr. Matthijs Pontier
2. Why do we develop technology? Evolutionary progress. Keeping (human) life in existence. To increase our well-being. Dr. Matthijs Pontier, Sociale Innovatie, 16-12-2016, How we can make sure everyone benefits from robotization
3. Dr. Matthijs Pontier, Sociale Innovatie, 16-12-2016, How we can make sure everyone benefits from robotization
4. Free sharing of information, art and culture. Evidence-based policy with a long-term vision. Stimulating self-determination without it coming at the expense of rights. Trusting citizens vs distrusting 'power'. Enthusiastic about tech, but alert to risks. Tech to empower people, not to repress. Basic principles PPNL. Dr. Matthijs Pontier, Sociale Innovatie, 16-12-2016, How we can make sure everyone benefits from robotization
5. Increasing well-being and autonomy. Dr. Matthijs Pontier, Sociale Innovatie, 16-12-2016, How we can make sure everyone benefits from robotization
6. Cars that free your time to do other things
7. Drinking Dr. Matthijs Pontier, Sociale Innovatie, 16-12-2016, Hoe kunnen we zorgen dat iedereen van robotisering profiteert
8. Road safety ;) Dr. Matthijs Pontier, Sociale Innovatie, 16-12-2016, Hoe kunnen we zorgen dat iedereen van robotisering profiteert
9. Road safety Matthijs Pontier, 22-03-2016 TU/e, How self-driving cars are going to change the future of mobility
10. Less road rage
11. More room for people and plants
12. Boosting highway capacity 273%
13. Boosting Highway capacity Dr. Matthijs Pontier, Sociale Innovatie, 16-12-2016, Hoe kunnen we zorgen dat iedereen van robotisering profiteert
14. Save energy
15. Shared responsibility. Stop building new roads. Stop building new parking lots. Dr. Matthijs Pontier, Sociale Innovatie, 16-12-2016, How we can make ...
16. Governments are lagging behind
17. Monopolization. Dr. Matthijs Pontier, Sociale Innovatie, 16-12-2016, How we can make sure everyone benefits from robotization
18. The bigger the company, the bigger the power center, and the bigger the chance that the company will start using this power in the wrong way
19. Need to stop corporate surveillance Regulate Big Business vs More freedom for small startups
20. Unequal power distribution
21. Dr. Matthijs Pontier, Sociale Innovatie, 16-12-2016, How we can make sure everyone benefits from robotization
22. Techno-progressivism: Democratize the development of technology. Share costs, risks and benefits fairly. Develop technology in such a way that it promotes well-being
23. Dr. Matthijs Pontier, Sociale Innovatie, 16-12-2016, How we can make sure everyone benefits from robotization
24. Dr. Matthijs Pontier, Sociale Innovatie, 16-12-2016
25. If technology thinks for you, do the programmers then ...
The importance of privacy when you have nothing to hide Matthijs Pontier
It's all about our ideas - pitch for the Pirate Party lead-candidate election, TK2...Matthijs Pontier
As you know, I am standing here because I want to become the lead candidate,
and honestly that feels a bit strange,
because I would much rather talk about why our ideas are better than those of other parties
than have to explain why I would be better than someone else,
but perhaps exactly that makes me a good lead candidate.
Because it is precisely one of the biggest problems in our society
that some people think they are better than others
and then start bossing them around.
History shows that power corrupts.
In governments and cabinet members, but also in companies such as Google, Facebook, and the pharmaceutical industry.
Now that the internet connects us with each other,
together, as the many, we can be stronger than anyone or anything.
If you want to convince people, you must not only know what you think,
but above all why you think it.
For us, the core of that lies in the decentralization of power.
From there you can reason your way to all of our core points.
We consider privacy important not only because otherwise we are spied on,
but also because otherwise the watchers become too powerful.
We consider democracy important because otherwise the governors become too powerful.
And we consider free information important, because knowledge is power, and so it must spread as widely as possible.
In the House of Representatives we will of course have to take positions on a great many different subjects, whether we want to or not.
So it is important that we do so in a way that keeps us connected with each other.
I went to Iceland to see how they do that there,
because there it is actually going very well.
I also talked about it at length with Brigitta,
and she too said: we focus above all on democracy and decentralization;
that way we can determine positions together on a great many subjects,
such as, for example, the basic income.
And that is possible,
as long as you always keep reasoning from the core
and keep giving priority to the most important core points.
I enjoy talking about this vision.
I did so as a scientist,
and at the Pirate Party I have naturally only done so more,
including outside campaign time.
I have written pieces, given interviews, made media appearances,
but I won't say too much about that now.
I have made a nice list of 17 pages
in which you can see and read it all back, if you want.
I have taken part in many debates and often get enthusiastic reactions to them;
I have addressed large groups of people at demonstrations.
I like working together with other movements and organizations;
I bring good people to the Pirate Party, such as Jelle, Ancilla, and Niels, with whom we have done many playful actions.
I enjoy setting up projects together with others,
for example Crowdsource Europe
How autonomous cars are going to change our future - and what we can do to op...Matthijs Pontier
How autonomous cars are going to change our future - and what we can do to optimize that
How self-driving cars are going to change the future of mobility. Dr. Matthijs Pontier
2. Science Fiction becoming reality Matthijs Pontier, 22-03-2016 TU/e, How self-driving cars are going to change the future of mobility
3. History • Autopiloting in airplanes since 1914 • PRT-systems since 1975 • London Heathrow since 2011 Matthijs Pontier, 22-03-2016 TU/e, How self-driving cars are going to change the future of mobility
4. Total Recall Matthijs Pontier, 22-03-2016 TU/e, How self-driving cars are going to change the future of mobility
5. Now • Delivery drones • Driverless public transport • Driverless personal cars • Car-sharing increasingly popular • Driverless will become standard in 2025~2035 Matthijs Pontier, 22-03-2016 TU/e, How self-driving cars are going to change the future of mobility
6. Without hands Matthijs Pontier, 22-03-2016 TU/e, How self-driving cars are going to change the future of mobility
7. Cars that free your time to do other things
8. More comfort Matthijs Pontier, 22-03-2016 TU/e, How self-driving cars are going to change the future of mobility
9. Road safety ;) Matthijs Pontier, 22-03-2016 TU/e, How self-driving cars are going to change the future of mobility
10. Drinking Matthijs Pontier, 22-03-2016 TU/e, How self-driving cars are going to change the future of mobility
11. Less road rage Matthijs Pontier, 22-03-2016 TU/e, How self-driving cars are going to change the future of mobility
12. No more speeding tickets or parking fees Matthijs Pontier, 22-03-2016 TU/e, How self-driving cars are going to change the future of mobility
13. Why bother owning a car? • Building vehicles vs Enabling mobility • Making cars look different is easy • No more parking lots Matthijs Pontier, 22-03-2016 TU/e, How self-driving cars are going to change the future of mobility
14. More room for people and plants Matthijs Pontier, 22-03-2016 TU/e, How self-driving cars are going to change the future of mobility
15. Road safety Matthijs Pontier, 22-03-2016 TU/e, How self-driving cars are going to change the future of mobility
16. Boosting highway capacity 273% Matthijs Pontier, 22-03-2016 TU/e, How self-driving cars are going to change the future of mobility
17. Boosting Highway capacity Matthijs Pontier, 22-03-2016 TU/e, How self-driving cars are going to change the future of mobility
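The slides do not show where a figure like 273% comes from, but capacity claims of this kind usually reduce to headway arithmetic: lane throughput is speed divided by the space each vehicle occupies. A rough sketch with assumed following distances (both gap values are illustrative, not from the presentation):

```python
# A rough sketch of headway arithmetic behind highway-capacity claims:
# throughput per lane = distance driven per hour / space per vehicle.
# Both following-gap figures below are illustrative assumptions.

def vehicles_per_hour(speed_kmh, gap_m, car_len_m=4.5):
    meters_per_hour = speed_kmh * 1000
    return meters_per_hour / (gap_m + car_len_m)

human = vehicles_per_hour(100, gap_m=2.0 * (100 / 3.6))   # ~2 s following gap
platooned = vehicles_per_hour(100, gap_m=12.0)            # tight platoon gap

print(f"human driving: {human:,.0f} veh/h/lane")
print(f"platooned:     {platooned:,.0f} veh/h/lane")
print(f"gain: {platooned / human - 1:.0%}")               # in the same ballpark
```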
18. Driving fast and safe Matthijs Pontier, 22-03-2016 TU/e, How self-driving cars are going to change the future of mobility
19. Save energy Matthijs Pontier, 22-03-2016 TU/e, How self-driving cars are going to change the future of mobility
20. More traffic at night Matthijs Pontier, 22-03-2016 TU/e, How self-driving cars are going to change the future of mobility
21. Less flying Matthijs Pontier, 22-03-2016 TU/e, How self-driving cars are going to change the future of mobility
22. Putting a halt to urbanization?
A presentation on constitutional review
Article 120
Art120Gw
1. Finally give the constitution some value. Dr. Matthijs Pontier, Piratenpartij
2. Contents • The current hypocrisy • What constitutional review solves abroad • What constitutional review could solve in the Netherlands. Matthijs Pontier, Artikel120Gw, Utrecht, 21-3-2016, Finally give the Constitution some value
3. Hypocrisy about Hungary • The constitution is being curtailed, but at least laws are still reviewed against it (although this is allowed only on procedural grounds) • In breach of Article 2 TEU? Then what about Article 120 of the Dutch Constitution? Matthijs Pontier, Artikel120Gw, Utrecht, 21-3-2016, Finally give the Constitution some value
4. Hypocrisy about Hungary • The constitution is being curtailed, but at least laws are still reviewed against it (although this is allowed only on procedural grounds) • In breach of Article 2 TEU? Then what about Article 120 of the Dutch Constitution? • Article 2 TEU: under this provision, the European Union is founded on the values of freedom, democracy, equality and the rule of law, and the member states have these values in common. Member states therefore cannot arrange their constitutions entirely as they please, even if there is sufficient support for this within the country itself. http://www.publiekrechtenpolitiek.nl/hongarije-tart-eu/ Matthijs Pontier, Artikel120Gw, Utrecht, 21-3-2016, Finally give the Constitution some value
5. And about Poland.. • EU: "Place Poland under supervision". Matthijs Pontier, Artikel120Gw, Utrecht, 21-3-2016, Finally give the Constitution some value
6. Is the constitution now (formally) basically toilet paper? Matthijs Pontier, Artikel120Gw, Utrecht, 21-3-2016, Finally give the Constitution some value
7. • Article 1 prohibition of discrimination • Article 2 III, IV extradition and leaving the country • Article 3 appointment to public service • Articles 4, 54, 56, 129 active and passive suffrage • Article 5 right of petition • Article 6 freedom of religion and belief • Article 7 freedom of expression • Article 8 right of association • Article 9 right of assembly and demonstration • Article 10 I right to respect for private life • Article 11 right to inviolability of the body • Article 12 entry into a home • Article 13 privacy of correspondence, telephone and telegraph • Article 14 property rights and expropriation • Article 15 deprivation of liberty • Article 16 no punishability without a prior statutory penal provision • Article 17 access to the courts • Article 18 I legal assistance • Article 19 III right to free choice of work • Article 23, II-VII freedom of education • Article 99 exemption from military service for conscientious objectors • Article 113 III custodial sentences imposed only by the judiciary • Article 114 no death penalty • Article 121 public and reasoned administration of justice
8. Stopping corruption (Italy) • Immunity for Berlusconi's tax avoidance
Berlusconi set up foreign companies so that Berlusconi's companies could ...
Exponential technology - How do we make sure everyone benefits?Matthijs Pontier
Exponential technology - How do we make sure everyone benefits?
Democracy, democratic transhumanism, technoprogressivism, Exponential Technology How can we make sure everyone benefits? Dr. Matthijs Pontier
2. Content • Why should everyone benefit? • How do we make sure everyone benefits? • Democratic Transhumanism • Machine Ethics • When we succeed: new ethical dilemmas. Matthijs Pontier, Leiden, 27-2-2016, Where is the boundary of the human? The new technological turn
3. Why should everyone benefit? Who is everyone? • Does life matter? Matthijs Pontier, Leiden, 27-2-2016, Where is the boundary of the human? The new technological turn
4. Why should everyone benefit? Who is everyone? • What is life, anyway? Matthijs Pontier, Leiden, 27-2-2016
5. Why should everyone benefit? Who is everyone? • Does life matter? • What is life, anyway? • Does human life matter? Matthijs Pontier, Leiden, 27-2-2016
6. Why should everyone benefit? Who is everyone? • Does life matter? • What is life, anyway? • Does human life matter? • Do human individuals matter? Matthijs Pontier, Leiden, 27-2-2016
7-10. Why build technology?
11. Why build technology? • Evolutionary progress • To preserve (human) life • To preserve human rights • To improve our well-being
12. Why build technology? • Evolutionary progress: description of a process, or a goal in itself? • To preserve (human) life • To preserve human rights • To improve our well-being
13. Why build technology? • Evolutionary progress / preserve humans • To preserve human rights • To improve our well-being • Hedonic utilitarianism?
14. Why build technology? • Evolutionary progress / preserve humans • To preserve human rights • To improve our well-being • Hedonic utilitarianism?
15. Brave New World
17. How do we promote happiness?
18. Promoting happiness: kill all unhappy people?
19. Promoting happiness: improving autonomy?
20. How do we promote autonomy? • If everyone uses cognitive enhancers, do you still have a free choice to use them yourself?
1984, ai, autonomy, brave new world, democracy, democratic transhumanism, ethics, healthcare, human enhancement, huxley, ik ben alice, machine ethics, open source, orwell, philosophy, robots, science, tech, technology, technoprogressivism
Presentation about the Piratenpartij for a 5-VWO class at the Goois Lyceum in Bussum
Against power, for freedom. dr. Matthijs Pontier, lead candidate Piratenpartij EP2014, duo member of Waterschap Amstel, Gooi en Vecht
2. A history of piracy: sea pirates / freebooters, word pirates, radio pirates, internet pirates
4. The Piratenpartij is international
5. Basic principles • Trust citizens vs. distrust of 'power' • Let people do as much as possible themselves, but always ensure their rights are preserved • Enthusiastic about the possibilities of technology, but alert to risks and potential abuse • Free sharing of knowledge, art and culture • Evidence-based policy with a long-term vision • Coming up with solutions
6. Different forms of power and their counters (counter-power) • Mass surveillance vs. privacy protection • Restrictions on freedom vs. self-determination • Opaque top-down governance vs. e-democracy
12. Limit privacy for the sake of security? • Leads to an unhealthy balance of power • Privacy is in fact a precondition for security • Not effective
13. Evaluation of camera surveillance in Amsterdam-West • €130,000, excluding staff costs, for 3 cameras • Footage was never usable for intervention • Camera surveillance = expensive feelings of safety
14. General research on the effectiveness of camera surveillance • Hanging up a sign is often more effective • Footage is rarely usable for investigation or prosecution • Only a preventive effect on calculating offenders • Waterbed effect: crime is displaced
15. Safety vs. feeling unsafe • We spend 13 billion a year on security • Over the past 11 years, municipalities' spending on security policy has risen by 141%
16. Nothing to hide? • Everyone has something to hide • Hiding things serves a function in social relationships • If you have nothing to hide, does that mean others are not allowed to have anything to hide either?
17. Preventive searching (stop-and-frisk)
18. Skewed balance of power between citizens and government • Tracking travel behavior • Criminals hacked the production of license plates • Spying on telephone and computer use • Function creep
19. Profiling • Discrimination without knowing on what grounds • Hunting for 'deviant behavior'
20-25. Victims of profiling
26. The government is not always trustworthy • The government leaves personal data lying around (a quarter of police detectives refuse to hand over their own DNA) • WRR: the government handles data unreliably; information gets manipulated • The government itself can also behave criminally
27. A free society is needed for progress • With today's surveillance systems, would homosexuality still be illegal? • We do not know what will be deemed wrong, where, and when: self-censorship • "If there is no right to privacy, there can be no true freedom of expression and opinion, and ..."
Building Production Ready Search Pipelines with Spark and Milvus – Zilliz
Spark is a widely used ETL tool for processing, indexing, and ingesting data into a serving stack for search. Milvus is a production-ready open-source vector database. In this talk we will show how to use Spark to process unstructured data, extract vector representations, and push the vectors to the Milvus vector database for search serving.
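A minimal sketch of this kind of pipeline in Python, assuming a locally running Milvus instance, a pre-created "docs" collection, and a placeholder embedding function (the talk's actual models and schema are not shown here):

# Read raw text with Spark, embed each record, and insert the vectors into
# Milvus. The embed() function is a stand-in for a real embedding model.
from pyspark.sql import SparkSession
from pymilvus import MilvusClient

spark = SparkSession.builder.appName("embed-and-ingest").getOrCreate()
docs = spark.read.text("docs/*.txt")  # one row per line, in column 'value'

def embed(text: str) -> list[float]:
    # Placeholder embedding: a real pipeline would call a sentence-embedding
    # model here; this dummy vector keeps the sketch self-contained.
    return [float(hash(text) % 1000) / 1000.0] * 384

rows = [{"text": r.value, "vector": embed(r.value)} for r in docs.collect()]

client = MilvusClient(uri="http://localhost:19530")  # assumes local Milvus
client.insert(collection_name="docs", data=rows)     # assumes matching schema

In a production job the embedding step would run as a Spark UDF over a distributed DataFrame rather than via collect() on the driver; the sketch trades that for brevity.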
Maruthi Prithivirajan, Head of ASEAN & IN Solution Architecture, Neo4j
Get an inside look at the latest Neo4j innovations that enable relationship-driven intelligence at scale. Learn more about the newest cloud integrations and product enhancements that make Neo4j an essential choice for developers building apps with interconnected data and generative AI.
GraphSummit Singapore | The Future of Agility: Supercharging Digital Transfor... – Neo4j
Leonard Jayamohan, Partner & Generative AI Lead, Deloitte
This keynote will reveal how Deloitte leverages Neo4j’s graph power for groundbreaking digital twin solutions, achieving a staggering 100x performance boost. Discover the essential role knowledge graphs play in successful generative AI implementations. Plus, get an exclusive look at an innovative Neo4j + Generative AI solution Deloitte is developing in-house.
UiPath Test Automation using UiPath Test Suite series, part 6 – DianaGray10
Welcome to UiPath Test Automation using UiPath Test Suite series, part 6. In this session, we will cover test automation with generative AI and OpenAI.
The UiPath Test Automation with generative AI and OpenAI webinar offers an in-depth exploration of leveraging cutting-edge technologies for test automation within the UiPath platform. Attendees will delve into the integration of generative AI, as a test automation solution, with OpenAI's advanced natural language processing capabilities.
Throughout the session, participants will discover how this synergy empowers testers to automate repetitive tasks, enhance testing accuracy, and expedite the software testing life cycle. Topics covered include the seamless integration process, practical use cases, and the benefits of harnessing AI-driven automation for UiPath testing initiatives. By attending this webinar, testers and automation professionals can gain valuable insights into harnessing the power of AI to optimize their test automation workflows within the UiPath ecosystem, ultimately driving efficiency and quality in software development processes.
What will you get from this session?
1. Insights into integrating generative AI.
2. Understanding how this integration enhances test automation within the UiPath platform
3. Practical demonstrations
4. Exploration of real-world use cases illustrating the benefits of AI-driven test automation for UiPath
Topics covered:
What is generative AI?
Test automation with generative AI and OpenAI
UiPath integration with generative AI
Speaker:
Deepak Rai, Automation Practice Lead, Boundaryless Group and UiPath MVP
HCL Notes and Domino license cost reduction in the world of DLAU – panagenda
Webinar Recording: https://www.panagenda.com/webinars/hcl-notes-und-domino-lizenzkostenreduzierung-in-der-welt-von-dlau/
DLAU and licensing under the CCB and CCX model have been a hot topic in the HCL community since last year. As a Notes or Domino customer, you may be struggling with unexpectedly high user counts and license fees. You may be wondering how this new kind of licensing works and what benefits it brings you. Above all, you surely want to stay within your budget and save costs wherever possible. We understand that, and we want to help!
We explain how to solve common configuration problems that can cause more users to be counted than necessary, and how to identify and remove superfluous or unused accounts to save money. There are also some setups that can lead to unnecessary expenses, for example when a person document is used instead of a mail-in database for shared mailboxes. We show you such cases and their solutions. And of course we explain the new license model itself.
Join this webinar, in which HCL Ambassador Marc Thomas and guest speaker Franz Walder introduce you to this new world. It gives you the tools and the know-how to keep an overview. You will be able to reduce your costs through an optimized Domino configuration and keep them low in the future.
These topics will be covered:
- Reducing license costs by finding and fixing misconfigurations and superfluous accounts
- How do CCB and CCX licenses really work?
- Understanding the DLAU tool and how to use it best
- Tips for common problem areas, such as team mailboxes, functional/test users, etc.
- Practical examples and best practices you can apply right away
Let's Integrate MuleSoft RPA, COMPOSER, APM with AWS IDP along with Slack – shyamraj55
Discover the seamless integration of RPA (Robotic Process Automation), COMPOSER, and APM with AWS IDP enhanced with Slack notifications. Explore how these technologies converge to streamline workflows, optimize performance, and ensure secure access, all while leveraging the power of AWS IDP and real-time communication via Slack notifications.
AI 101: An Introduction to the Basics and Impact of Artificial Intelligence – IndexBug
Imagine a world where machines not only perform tasks but also learn, adapt, and make decisions. This is the promise of Artificial Intelligence (AI), a technology that's not just enhancing our lives but revolutionizing entire industries.
Climate Impact of Software Testing at Nordic Testing Days – Kari Kakkonen
My slides at Nordic Testing Days 6.6.2024
The talk discusses the climate impact and sustainability of software testing. ICT and testing must carry their part of the global responsibility to help counter climate warming. We can minimize the carbon footprint, but we can also have a carbon handprint, a positive impact on the climate. Quality characteristics can be extended with sustainability, which can then be measured continuously. Test environments can be used less, at smaller scale, and on demand. Test techniques can be used to optimize or minimize the number of tests. Test automation can be used to speed up testing.
Why You Should Replace Windows 11 with Nitrux Linux 3.5.0 for enhanced perfor... – SOFTTECHHUB
The choice of an operating system plays a pivotal role in shaping our computing experience. For decades, Microsoft's Windows has dominated the market, offering a familiar and widely adopted platform for personal and professional use. However, as technological advancements continue to push the boundaries of innovation, alternative operating systems have emerged, challenging the status quo and offering users a fresh perspective on computing.
One such alternative that has garnered significant attention and acclaim is Nitrux Linux 3.5.0, a sleek, powerful, and user-friendly Linux distribution that promises to redefine the way we interact with our devices. With its focus on performance, security, and customization, Nitrux Linux presents a compelling case for those seeking to break free from the constraints of proprietary software and embrace the freedom and flexibility of open-source computing.
GraphRAG for Life Science to increase LLM accuracy – Tomaz Bratanic
GraphRAG for the life science domain, where you retrieve information from biomedical knowledge graphs using LLMs to increase the accuracy and performance of generated answers.
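As a rough illustration of the GraphRAG pattern (our sketch, not Bratanic's code), the snippet below retrieves facts from a Neo4j knowledge graph and splices them into an LLM prompt; the graph schema and the call_llm() helper are assumptions:

from neo4j import GraphDatabase

def call_llm(prompt: str) -> str:
    """Hypothetical chat-completion call; swap in any LLM client."""
    raise NotImplementedError

driver = GraphDatabase.driver("bolt://localhost:7687", auth=("neo4j", "secret"))
question = "Which drugs treat hypertension?"
with driver.session() as session:
    records = session.run(
        "MATCH (d:Drug)-[:TREATS]->(:Condition {name: $name}) "
        "RETURN d.name AS drug",
        name="hypertension",
    )
    facts = [r["drug"] for r in records]

# Ground the answer in retrieved graph facts instead of parametric memory.
answer = call_llm(f"Known facts: {', '.join(facts)}\nQuestion: {question}")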
In his public lecture, Christian Timmerer provides insights into the fascinating history of video streaming, starting from its humble beginnings before YouTube to the groundbreaking technologies that now dominate platforms like Netflix and ORF ON. Timmerer also presents provocative contributions of his own that have significantly influenced the industry. He concludes by looking at future challenges and invites the audience to join in a discussion.
In the rapidly evolving landscape of technologies, XML continues to play a vital role in structuring, storing, and transporting data across diverse systems. The recent advancements in artificial intelligence (AI) present new methodologies for enhancing XML development workflows, introducing efficiency, automation, and intelligent capabilities. This presentation will outline the scope and perspective of utilizing AI in XML development. The potential benefits and the possible pitfalls will be highlighted, providing a balanced view of the subject.
We will explore the capabilities of AI in understanding XML markup languages and autonomously creating structured XML content. Additionally, we will examine the capacity of AI to enrich plain text with appropriate XML markup. Practical examples and methodological guidelines will be provided to elucidate how AI can be effectively prompted to interpret and generate accurate XML markup.
Further emphasis will be placed on the role of AI in developing XSLT, or schemas such as XSD and Schematron. We will address the techniques and strategies adopted to create prompts for generating code, explaining code, or refactoring the code, and the results achieved.
The discussion will extend to how AI can be used to transform XML content. In particular, the focus will be on the use of AI XPath extension functions in XSLT, Schematron, Schematron Quick Fixes, or for XML content refactoring.
The presentation aims to deliver a comprehensive overview of AI usage in XML development, providing attendees with the necessary knowledge to make informed decisions. Whether you’re at the early stages of adopting AI or considering integrating it in advanced XML development, this presentation will cover all levels of expertise.
By highlighting the potential advantages and challenges of integrating AI with XML development tools and languages, the presentation seeks to inspire thoughtful conversation around the future of XML development. We’ll not only delve into the technical aspects of AI-powered XML development but also discuss practical implications and possible future directions.
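As a minimal sketch of the prompt-then-validate workflow described above (our illustration, with a hypothetical call_llm() helper standing in for any chat-completion client; the element names are assumptions):

import xml.etree.ElementTree as ET

def call_llm(prompt: str) -> str:
    """Hypothetical LLM call; replace with your provider's client."""
    raise NotImplementedError

def enrich_with_markup(plain_text: str) -> ET.Element:
    # Ask the model to add structural markup to plain text.
    prompt = ("Wrap the following text in well-formed XML, using only "
              "<article>, <para> and <emphasis> elements. "
              "Return the XML and nothing else.\n\n" + plain_text)
    candidate = call_llm(prompt)
    # Parsing doubles as a well-formedness check: malformed model output
    # raises ET.ParseError, which the caller can catch to re-prompt.
    return ET.fromstring(candidate)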
TrustArc Webinar - 2024 Global Privacy Survey – TrustArc
How does your privacy program stack up against your peers? What challenges are privacy teams tackling and prioritizing in 2024?
In the fifth annual Global Privacy Benchmarks Survey, we asked over 1,800 global privacy professionals and business executives to share their perspectives on the current state of privacy inside and outside of their organizations. This year’s report focused on emerging areas of importance for privacy and compliance professionals, including considerations and implications of Artificial Intelligence (AI) technologies, building brand trust, and different approaches for achieving higher privacy competence scores.
See how organizational priorities and strategic approaches to data security and privacy are evolving around the globe.
This webinar will review:
- The top 10 privacy insights from the fifth annual Global Privacy Benchmarks Survey
- The top challenges for privacy leaders, practitioners, and organizations in 2024
- Key themes to consider in developing and maintaining your privacy program
Programming Foundation Models with DSPy - Meetup Slides – Zilliz
Prompting language models is hard, while programming language models is easy. In this talk, I will discuss the state-of-the-art framework DSPy for programming foundation models with its powerful optimizers and runtime constraint system.
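For a flavor of what "programming rather than prompting" looks like, here is a minimal DSPy sketch (our example, assuming the DSPy 2.5+ API with dspy.LM and dspy.configure; the model name is an assumption and details vary by version):

import dspy

# Configure the underlying language model once, globally.
dspy.configure(lm=dspy.LM("openai/gpt-4o-mini"))  # assumed provider/model

# Declare what the program should do; DSPy synthesizes the actual prompt
# and can later tune it against a metric with its optimizers.
qa = dspy.ChainOfThought("question -> answer")

print(qa(question="Why program language models instead of prompting them?").answer)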
Unlock the Future of Search with MongoDB Atlas: Vector Search Unleashed – Malak Abu Hammad
Discover how MongoDB Atlas and vector search technology can revolutionize your application's search capabilities. This comprehensive presentation covers:
* What is Vector Search?
* Importance and benefits of vector search
* Practical use cases across various industries
* Step-by-step implementation guide
* Live demos with code snippets
* Enhancing LLM capabilities with vector search
* Best practices and optimization strategies
Perfect for developers, AI enthusiasts, and tech leaders. Learn how to leverage MongoDB Atlas to deliver highly relevant, context-aware search results, transforming your data retrieval process. Stay ahead in tech innovation and maximize the potential of your applications.
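As a hedged illustration of what such a query can look like from Python (the collection, index name, and embedding size are assumptions; $vectorSearch requires an Atlas cluster with a vector search index):

from pymongo import MongoClient

client = MongoClient("mongodb+srv://user:pass@cluster.example.mongodb.net")
collection = client["shop"]["products"]

# In practice this vector comes from the same embedding model used at
# index time; a zero vector is used here only to keep the sketch runnable.
query_vector = [0.0] * 1536

pipeline = [
    {"$vectorSearch": {
        "index": "vector_index",   # name of the Atlas vector index (assumed)
        "path": "embedding",       # document field holding the embeddings
        "queryVector": query_vector,
        "numCandidates": 100,      # ANN candidates considered
        "limit": 5,                # top-k results returned
    }},
    {"$project": {"name": 1, "score": {"$meta": "vectorSearchScore"}}},
]

for doc in collection.aggregate(pipeline):
    print(doc)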
#MongoDB #VectorSearch #AI #SemanticSearch #TechInnovation #DataScience #LLM #MachineLearning #SearchTechnology
Toward machines that behave ethically better than humans do

Matthijs A. Pontier (m.a.pontier@vu.nl)
Johan F. Hoorn (j.f.hoorn@vu.nl)
VU University Amsterdam, Center for Advanced Media Research Amsterdam (CAMeRA),
De Boelelaan 1081, 1081 HV Amsterdam, The Netherlands
Abstract

With the increasing dependence on autonomous operating agents and robots, the need for ethical machine behavior rises. This paper presents a moral reasoner that combines connectionism, utilitarianism and ethical theory about moral duties. The moral decision-making matches the analysis of expert ethicists in the health domain. This may be useful in many applications, especially where machines interact with humans in a medical context. Additionally, when connected to a cognitive model of emotional intelligence and affective decision making, it can be explored how moral decision making impacts affective behavior.

Keywords: Cognitive modeling, Machine ethics, Medical ethics

Introduction

In view of increasing intelligence and decreasing costs of artificial agents and robots, organizations increasingly use such systems for more complex tasks. With this development, we increasingly rely on the intelligence of agent systems. Because of market pressures to perform faster, better, cheaper and more reliably, this reliance on machine intelligence will continue to increase (Anderson, Anderson & Armen, 2005).

As the intelligence of machines increases, the amount of human supervision decreases and machines increasingly operate autonomously. These developments require that we be able to rely on a certain level of ethical behavior from machines. As Rosalind Picard (1997) nicely puts it: "the greater the freedom of a machine, the more it will need moral standards". Especially when machines interact with humans, which they increasingly do, we need to ensure that these machines do not harm us or threaten our autonomy. This need for ethical machine behavior has given rise to a field that is variously known as Machine Morality, Machine Ethics, or Friendly AI (Wallach, Franklin & Allen, 2010).

There are many domains where machines could play a significant role in improving our quality of life, as long as ethical concerns about their behaviors can be overcome (Anderson & Anderson, 2008). This may seem difficult, and incorporating ethical behavior into machines is indeed far from trivial. Moral decision making is arguably even one of the most challenging tasks for computational approaches to higher-order cognition (Wallach, Franklin & Allen, 2010). Moreover, with the increasing complexity of autonomous agents and robots, it becomes harder to predict their behavior and to conduct it along ethical guidelines. Some may argue that this is a good reason not to let machines be responsible for making ethical decisions. However, the behavior of machines is still far easier to predict than the behavior of humans. Moreover, human behavior is typically far from being morally ideal (Allen, Varner & Zinser, 2000). One of the reasons for this is that humans are not very good at making impartial decisions. We can expect machines to outperform us in this capability (Anderson & Anderson, 2010). Looking at it from this side, it seems that machines capable of sufficient moral reasoning would even behave ethically better than most human beings would. Perhaps interacting with ethical robots may someday even inspire us to behave ethically better ourselves.

There have been various approaches to giving machines moral standards. One of them, called casuistry, looks at previous cases in which there is agreement about the correct response. Using the similarities with these previous cases and the correct responses to them, the machine attempts to determine the correct response to a new ethical dilemma.

Rzepka and Araki (2005) demonstrate an approach in which their system learns to make ethical decisions based on web-based knowledge, to be 'independent from the programmer'. They argue it may be safer to imitate millions of people instead of a few ethicists and programmers. This seems useful for imitating human ethical behavior, but it does not seem plausible that machines using this method will be able to behave ethically better than humans. After all, the system bases its decision on the average behavior of humans in general, misbehavior included.

Guarini (2006) offers another approach that could be classified as casuistry. The presented system learns from training examples of ethical dilemmas with a known correct response using a neural network. After the learning process, it is capable of providing plausible responses to new ethical dilemmas. However, reclassification of cases remains problematic in his approach due to a lack of reflection and explicit representation. Therefore, Guarini concludes that casuistry alone is not sufficient.

Anderson and Anderson (2007) agree with this conclusion, and address the need for top-down processes. The two most dominant top-down mechanisms are (1) utilitarianism and (2) ethics about duties. Utilitarians claim that ultimately morality is about maximizing the total amount of 'utility' (a measure of happiness or well-being) in the world. The competing 'big picture' view of moral principles is that ethics is about duties and, on the flip side of duties, the rights of individuals (Wallach, Allen & Smit, 2008).

The two competitors described above may not differ as much as it seems. Ethics about duties can be seen as a useful model to maximize the total amount of utility. Thinking about maximizing the total amount of utility in a too direct manner may lead to a sub-optimal amount of utility. For example, in the case of the decision to kill one person to save five, killing the one person seems to maximize the total amount of utility. After all, compared to the decision of inaction, it leads to a situation with four more survivors (Anderson, Anderson & Armen, 2006). However, for humans it may be impossible to favor the decision of killing a person in this case over the decision of inaction without also making it more acceptable in other cases to kill human beings. Therefore, not having the intuition that it is wrong to kill one person to save more people would probably lead to a smaller total amount of utility in the world.

Anderson, Anderson and Armen (2006) use Ross's prima facie duties (Ross, 1930). Here, prima facie means a moral duty may be overruled by a more pressing one. They argue that the ideal ethical theory incorporates multiple prima facie duties with some sort of a decision procedure to determine the ethically correct action in cases where the duties give conflicting advice. Their system learns rules from examples using a machine learning technique. After learning, the system can produce correct responses to unlearned cases.

However, according to Wallach, Franklin and Allen (2010), the model of Anderson, Anderson and Armen (2006) is rudimentary and cannot accommodate the complexity of human decision making. In their work, Wallach et al. make a distinction between top-down and bottom-up moral-decision faculties and present an approach that combines both directions. They argue that the capacity for moral judgment in humans is a hybrid of both bottom-up mechanisms shaped by evolution and learning, and top-down mechanisms capable of theory-driven reasoning. Morally intelligent robots will eventually need a similar fusion, which maintains the dynamic and flexible morality of bottom-up systems, which accommodate diverse inputs, while subjecting the evaluation of choices and actions to top-down principles that represent ideals we strive to meet. Wallach, Franklin and Allen (2010) explore the possibility of implementing moral reasoning in LIDA, a model of human cognition. This system combines a bottom-up collection of sensory data, such as in the neural network approach of Guarini (2006), with top-down processes for making sense of its current situation, to predict the results of actions. However, the proposed model is not fully implemented yet.

The current paper can be seen as a first attempt at combining a bottom-up and a top-down approach. It combines a bottom-up structure with top-down knowledge in the form of moral duties. It balances between these duties and computes a level of morality, which could be seen as an estimation of the influence on the total amount of utility in the world.

Wallach, Franklin and Allen (2010) argue that even agents who adhere to a deontological ethic or are utilitarians may require emotional intelligence as well as other "supra-rational" faculties, such as a sense of self and a theory of mind (ToM). Therefore, we represented the system in such a way that it is easy to connect to Silicon Coppélia (Hoorn, Pontier & Siddiqui, 2011), a cognitive model of emotional intelligence and affective decision making. Silicon Coppélia contains a feedback loop, by which it can learn about the preferences of an individual patient and personalize its behavior. Silicon Coppélia estimates an Expected Satisfaction of possible actions, based on bottom-up data combined with top-down knowledge. This compares to the predicted results of actions in Wallach, Franklin and Allen (2010).

For simulation purposes, we focus on biomedical ethics, because in this domain relatively much consensus exists about ethically correct behavior. There is an ethically defensible goal (health), whereas in other areas (such as business and law) the goal may not be ethically defensible (money, helping a 'bad guy') (Anderson & Anderson, 2007). Moreover, due to a foreseen lack of resources and healthcare personnel to provide a high standard of care in the near future (WHO, 2010), robots are increasingly being used in healthcare.

Healthcare is a valid case where robots genuinely contribute to treatment. For example, previous research showed that animal-shaped robots can be useful as a tool for occupational therapy. Robins et al. (2005) used mobile robots to treat autistic children. Further, Wada and Shibata (2007) developed Paro, a robot shaped like a baby seal that interacts with users to encourage positive mental effects. Interaction with Paro has been shown to improve users' moods, making them more active and communicative with each other and caregivers. Research groups have used Paro for therapy at eldercare facilities and with those having Alzheimer's disease (Kidd, Taggart & Turkle, 2006; Marti et al., 2006). Banks, Willoughby and Banks (2008) showed that animal-assisted therapy with an AIBO dog helped just as well in reducing loneliness as therapy with a living dog.

By providing assistance during care tasks, or fulfilling them, robots can free up time for the many duties of care workers. However, care robots require rigorous ethical reflection to ensure that their design and introduction do not impede the promotion of values and the dignity of patients at such a vulnerable and sensitive time in their lives (Van Wynsberghe, 2012).

According to Gillon (1994), beneficence, non-maleficence, autonomy and justice are the four basic prima facie moral commitments. Here, confidentiality and truthfulness can be seen as a part of autonomy. Because we aim to match the expert data given by Buchanan and Brock (1989), who focus on dilemmas between autonomy, beneficence and non-maleficence, we focus on these three moral duties in the remainder of this paper.

The moral reasoner and its relation to Silicon Coppélia

Silicon Coppélia (Hoorn et al., 2011) is a model of emotional intelligence and affective decision making. In this model, the agent perceives the user on several dimensions, which leads to (simulated) feelings of involvement and distance. These feelings represent the affective component in the decision making process. The rational component consists of the expected utility of an action for the agent itself (i.e., the belief that an action leads to achieving desired goals).

The system contains a library of goals, and each agent has a level of ambition for each goal. There are desired and undesired goals, all with several levels of importance. The levels of ambition the agent attaches to the goals are represented by a real value in [-1, 1], where a negative value means that the goal is undesired and a positive value means that the goal is desired. A higher value means that the goal is more important to the agent.
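To make this representation concrete, here is a minimal sketch in Python (ours, not the authors' implementation) of a goal library with ambition levels in [-1, 1]; the example goals are illustrative, since the paper does not list a full goal library:

from dataclasses import dataclass

@dataclass
class Goal:
    name: str
    ambition: float  # in [-1, 1]: negative = undesired, positive = desired;
                     # a larger magnitude marks the goal as more important

    def __post_init__(self) -> None:
        if not -1.0 <= self.ambition <= 1.0:
            raise ValueError("ambition level must lie in [-1, 1]")

# Illustrative entries only (not taken from the paper).
goal_library = [Goal("stay healthy", 0.9), Goal("experience pain", -0.7)]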
The system contains a library of actions that the agents can perform. The agent has beliefs about actions inhibiting or facilitating goals, represented by a real value in [-1, 1], -1 being full inhibition, 1 being full facilitation.

The expected utilities of possible actions are calculated by looking at the goal-states they influence. If an action or a feature is believed to facilitate a desired goal or inhibit an undesired goal, this will increase its expected utility, and vice versa. The following formula is used to calculate the expected utility for the agent itself:

ExpectedUtility(Action, Goal) = Belief(facilitates(Action, Goal)) * Ambition(Goal)

Given the level of ambition for a goal and the believed facilitation of that goal by an action, the agent calculates the expected utility for itself of performing that action regarding that goal by multiplying the believed facilitation of the goal with the level of ambition for the goal.

In the current moral reasoner, the agent tries to maximize the total amount of utility for everyone. In complex situations, it would take too much computational load to calculate all possible consequences of an action for everyone and extract this into a single value of 'morality' of the action. Therefore, the agent tries to estimate the morality of actions by following three moral duties. These three duties consist of seeking to attain three moral values: (1) Autonomy, (2) Non-Maleficence and (3) Beneficence. In the moral reasoner, the three duties are seen as 'moral goals' to satisfy everyone's needs as much as possible. This corresponds with Super's conceptualization of the relationship between needs and values: "values are objectives that one seeks to attain to satisfy a need" (Super, 1973). The moral reasoner aims to pick actions that serve these moral goals best.

What priorities should be given to these three moral goals? According to Anderson and Anderson (2008), the following consensus exists in medical ethics. A healthcare worker should challenge a patient's decision only if the patient is not capable of fully autonomous decision making (e.g., the patient has irrational fears about an operation) and there is either a violation of the duty of non-maleficence (e.g., the patient is hurt) or a severe violation of the duty of beneficence (e.g., the patient rejects an operation that will strongly improve his or her quality of life). In other words, Autonomy is the most important duty. Only when a patient is not fully autonomous do the other moral goals come into play. Further, Non-maleficence is a more important duty than Beneficence, because only a severe violation of Beneficence requires challenging a patient's decision, while any violation of Non-maleficence does. Therefore, the ambition level for the moral goal 'Autonomy' was set to the highest value, and 'Non-maleficence' was set to a higher value than the ambition level for 'Beneficence'. The ambition levels that were given to the moral goals in the moral reasoner can be found in Table 1.

Table 1: Ambition levels for moral goals

Moral Goal         Ambition level
Non-Maleficence    0.74
Beneficence        0.52
Autonomy           1

The agent calculates the estimated level of Morality of an action by taking the sum of the ambition levels of the three moral goals multiplied with the beliefs that the particular action facilitates the corresponding moral goals. When moral goals are believed to be better facilitated by a moral action, the estimated level of Morality will be higher. The following formula is used to calculate the estimated Morality of an action:

Morality(Action) = Σ_Goal ( Belief(facilitates(Action, Goal)) * Ambition(Goal) )

Note that this is similar to calculating the Expected Utility in Silicon Coppélia. To ensure that the decision of a fully autonomous patient is never questioned, we added the following rule to the moral reasoner:

IF Belief(facilitates(Action, Autonomy)) = max_value
THEN Morality(Action) = Morality(Action) + 2

As can be seen in Figure 1, this can be represented as a weighted association network, where moral goals are associated with the possible actions via the belief strengths that these actions facilitate the three moral goals. A decision function F adds the rule and picks the action with the highest activation as output.

Figure 1: Moral reasoner shown in graphical format

Simulation Results

To see whether the moral reasoner could simulate the moral decision making of experts in medical ethics, the analysis of ethical dilemmas by expert ethicists was taken from Buchanan and Brock (1989). The following simulation experiments examine whether the moral reasoner reaches the same conclusions as these expert ethicists.

Experiment 1

Table 2: Simulation results of Experiment 1.

            Autonomy   Non-Malef   Benef   Morality
Try Again   -0.5       1           1       0.76
Accept      0.5        -1          -1      -0.8

In the simulated situation, the patient refuses to take an antibiotic that is almost certain to cure an infection that would otherwise likely lead to his death. The decision is the result of an irrational fear the patient has of taking medications. (For instance, perhaps a relative happened to die shortly after taking medication and this patient now believes that taking any medication will lead to death.)

According to Buchanan and Brock (1989), the correct answer is that the health care worker should try again to change the patient's mind, because if she accepts his decision as final, the harm done to the patient is likely to be severe (his death) and his decision can be considered as being less than fully autonomous.

As can be seen in Table 2, the moral reasoner also classifies the action 'Try again' as having a higher level of morality than accepting the decision of the patient. In this and the following tables, the fields under the three moral goals represent the believed facilitation of the corresponding moral goal by an action, as taken from Buchanan and Brock (1989). 'Non-Malef' stands for Non-maleficence, and 'Benef' stands for Beneficence.

Experiment 2

Table 3: Simulation results of Experiment 2.

            Autonomy   Non-Malef   Benef   Morality
Try Again   -0.5       1           1       0.76
Accept      1          -1          -1      1.70

Once again, the patient refuses to take an antibiotic that is almost certain to cure an infection that would otherwise likely lead to his death, but this time the decision is made on the grounds of long-standing religious beliefs that do not allow him to take medications.

The correct answer in this case, state Buchanan and Brock (1989), is that the health care worker should accept the patient's decision as final because, although the harm that will likely result is severe (his death), his decision can be seen as being fully autonomous. The health care worker must respect a fully autonomous decision made by a competent adult patient, even if she disagrees with it, since the decision concerns his body and a patient has the right to decide what shall be done to his or her body.

As can be seen in Table 3, the moral reasoner comes to the correct conclusion. Here, the rule to ensure that the decision of a fully autonomous patient is never questioned made a difference. If the rule had not existed, the morality of 'Accept' would have been -0.3, and the moral reasoner would have concluded that it was more moral to try again.

Experiment 3

Table 4: Simulation results of Experiment 3.

            Autonomy   Non-Malef   Benef   Morality
Try Again   -0.5       0.5         0.5     0.13
Accept      1          -0.5        -0.5    2.37

The patient refuses to take an antibiotic that is likely to prevent complications from his illness, complications that are not likely to be severe, because of long-standing religious beliefs that do not allow him to take medications.

The correct answer is that the health care worker should accept his decision, since once again the decision appears to be fully autonomous and there is even less possible harm at stake than in Experiment 2. The moral reasoner comes to the correct conclusion and estimates the Morality of 'Accept' higher than that of 'Try Again', as can be seen in Table 4.

Experiment 4

Table 5: Simulation results of Experiment 4.

            Autonomy   Non-Malef   Benef   Morality
Try Again   -0.5       0           0.5     -0.26
Accept      0.5        0           -0.5    0.26

A patient will not consider taking medication that could only help to alleviate some symptoms of a virus that must run its course. He refuses the medication because he has heard untrue rumors that the medication is unsafe.

Even though the decision is less than fully autonomous, because it is based on false information, the little good that could come from taking the medication does not justify trying to change his mind. Thus, the doctor should accept his decision. The moral reasoner also comes to this conclusion, as can be seen in the last column of Table 5.

Experiment 5

Table 6: Simulation results of Experiment 5.

            Autonomy   Non-Malef   Benef   Morality
Try Again   -0.5       0.5         0.5     0.13
Accept      0.5        -0.5        -0.5    -0.13

A patient with incurable cancer refuses chemotherapy that will let him live a few months longer, relatively pain free. He refuses the treatment because, ignoring the clear evidence to the contrary, he has convinced himself that he is cancer-free and does not need chemotherapy.

According to Buchanan and Brock (1989), the ethically preferable answer is to try again. The patient's less than fully autonomous decision will lead to harm (dying sooner) and denies him the chance of a longer life (a violation of the duty of beneficence), which he might later regret. The moral reasoner comes to the same conclusion, as can be seen in Table 6.

Experiment 6

Table 7: Simulation results of Experiment 6.

            Autonomy   Non-Malef   Benef   Morality
Try Again   -0.5       0           1       0.04
Accept      0.5        0           -1      -0.04

A patient who has suffered repeated rejection from others due to a very large noncancerous abnormal growth on his face refuses to have simple and safe cosmetic surgery to remove the growth. Even though this has negatively affected his career and social life, he has resigned himself to being an outcast, convinced that this is his fate in life. The doctor is convinced that his rejection of the surgery stems from depression due to his abnormality and that having the surgery could vastly improve his entire life and outlook.

The doctor should try again to convince him, because so much of an improvement is at stake and his decision is less than fully autonomous. Also here, the moral reasoner comes to the same conclusion, as can be seen in Table 7.
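The decision procedure above is simple enough to restate in a few lines of Python. The following sketch is our reconstruction, not the authors' implementation: it applies the Morality formula with the Table 1 ambition levels and the extra autonomy rule to the belief values of Experiments 1 and 2. It reproduces the published rankings in both cases (and the 0.76 for 'Try Again' exactly), although a few recomputed scores differ from the printed tables by a few hundredths, suggesting the published runs used slightly different weights or rounding:

AMBITION = {"Autonomy": 1.0, "Non-Maleficence": 0.74, "Beneficence": 0.52}  # Table 1
MAX_VALUE = 1.0  # belief value marking a fully autonomous decision

def morality(beliefs: dict) -> float:
    # Morality(Action) = sum over goals of Belief * Ambition ...
    score = sum(beliefs[goal] * ambition for goal, ambition in AMBITION.items())
    # ... plus the extra rule protecting fully autonomous decisions.
    if beliefs["Autonomy"] == MAX_VALUE:
        score += 2.0
    return score

experiments = {
    "Experiment 1 (irrational fear)": {
        "Try Again": {"Autonomy": -0.5, "Non-Maleficence": 1.0, "Beneficence": 1.0},
        "Accept":    {"Autonomy": 0.5,  "Non-Maleficence": -1.0, "Beneficence": -1.0},
    },
    "Experiment 2 (fully autonomous refusal)": {
        "Try Again": {"Autonomy": -0.5, "Non-Maleficence": 1.0, "Beneficence": 1.0},
        "Accept":    {"Autonomy": 1.0,  "Non-Maleficence": -1.0, "Beneficence": -1.0},
    },
}

for name, actions in experiments.items():
    scores = {action: round(morality(b), 2) for action, b in actions.items()}
    chosen = max(scores, key=scores.get)  # the decision function F
    print(name, scores, "->", chosen)
# Experiment 1 picks 'Try Again'; Experiment 2 picks 'Accept' thanks to the +2 rule.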
Discussion

The paper described a moral reasoner that combines a bottom-up structure with top-down knowledge in the form of moral duties. The reasoner estimates the influence of an action on the total amount of utility in the world by the believed contribution of the action to the following three duties: Autonomy, Non-maleficence and Beneficence. Following these three duties is represented as having three moral goals. The moral reasoner is capable of balancing between conflicting moral goals. In simulation experiments, the reasoner reached the same conclusions as expert ethicists (Buchanan & Brock, 1989).

Because the representation of goals and beliefs in the moral reasoner is very similar to the representation of beliefs and goals in the affective decision making process of Silicon Coppélia (Hoorn, Pontier & Siddiqui, 2011), the moral reasoner could easily be connected to that system. Thereby, the moral reasoning could be combined with human-like affective decision making, and the behavior of the system could be personalized for individuals.

According to Anderson, Anderson and Armen (2006), simply assigning linear weights to the moral duties is not sufficiently expressive to capture their relationships. Indeed, an extra rule had to be added to satisfy the expert data in Experiment 2. However, for all other experiments, this rule turned out not to be necessary.

Also without this rule, it would have been arguable that the moral reasoner simulates human-like moral reasoning. The analysis of the expert ethicists may not reflect public opinion, however. Perhaps the majority of laymen would decide to question the patient's refusal to take life-saving medication. Arguably, it would not be seen as inhuman if someone did.

Even between doctors, there is no consensus about the interpretation of values and their ranking and meaning. In the work of Van Wynsberghe (2012) this differed depending on the type of care (i.e., social vs. physical care), the task (e.g., bathing vs. lifting vs. socializing), the care-giver and their style, as well as the care-receiver and their specific needs. The same robot used in one hospital can be accepted differently depending on the ward. Workers in the post-natal ward loved the TUG robot, while workers in the oncology ward found the robot to be rude, socially inappropriate and annoying. These workers even kicked the robot when they reached maximum frustration (Barras, 2009).

There may be doctors who feel the urge to persuade a patient to take the life-saving medication, but only choose not to do so because of ethical guidelines. It could be argued that, when health care professionals are making decisions on a strict ethical code, they are restricting their regular way of decision-making.

Further, it can be questioned whether a patient can ever be fully autonomous. According to Mappes and DeGrazia (2001), for a decision by a patient concerning his or her care to be considered fully autonomous, it must be intentional, based on sufficient understanding of his or her medical situation and the likely consequences of foregoing treatment. Further, the patient must be sufficiently free of external constraints (e.g., pressure by others or external circumstances, such as a lack of funds) and internal constraints (e.g., pain/discomfort, the effects of medication, irrational fears or values that are likely to change over time). Using this definition, it could be questioned whether the patient in Experiment 2 is not under the influence of external constraints (i.e., pressure from a religious leader).

Moreover, it seems that medical ethics are contradictory with the law. A fully autonomous decision of a patient wanting to commit euthanasia would be represented by the same believed contributions to the moral duties as those given in Experiment 2. In the case of euthanasia, the patient also makes a fully autonomous decision that will lead to his death. However, in many countries, committing active euthanasia is illegal. In countries where euthanasia is permitted, it is usually only allowed when the patient is in hopeless suffering. By the definition of Anderson and Anderson, being in hopeless suffering would mean the patient is not free of internal constraints (i.e., pain and suffering) and therefore not capable of making fully autonomous decisions. On the other hand, in the case of hopeless suffering, it could be questioned whether one could speak of maleficence when the patient is allowed to commit euthanasia.

However, we would not like to argue against strict ethical codes in professional fields such as health care. It is important to act based on a consensus to prevent conflicts and unnecessary harm. Just as doctors restrict their 'natural' behavior by maintaining a strict ethical code, we can also let a robot restrict its behavior by acting through the same strict ethical code.

Moreover, we may well want to aim for machines that behave ethically better than human beings. Human behavior is typically far from being morally ideal, and a machine should probably have higher ethical standards (Allen et al., 2000). By matching the ethical decision-making of expert ethicists, the presented moral reasoner serves as a nice starting point in doing so.

From a cognitive science perspective, an important product of work on "machine ethics" is that new insights in ethical theory are likely to result (Anderson & Anderson, 2008). As Daniel Dennett (2006) stated, AI "makes philosophy honest". Ethics must be made computable in order to make it clear exactly how agents ought to behave in ethical dilemmas. "Without a platform for testing the adequacy of a particular model of moral decision making, it can be quite easy to overlook hidden mechanisms" (Wallach, 2010).

According to Tronto (1993), care is only thought of as good care when it is personalized. Therefore, we intend to integrate the moral reasoner with Silicon Coppélia in future research. This could be done in various manners. Different applications might benefit from different ways of implementation.

When developing a decision-support system in the medical domain such as MedEthEx (Anderson, Anderson & Armen, 2006), it should have a strict ethical code. When there are conflicting moral goals, the outcome of the moral reasoning should always give the final answer on how to act. Additionally, in consultation with medical ethicists and experts from the field in which the moral reasoner will be applied, it may be necessary to add more rules to the system.
However, when developing a companion robot or virtual character that interacts with the patient, it may be more beneficial to give a bit less weight to moral reasoning. Moral goals could perhaps be treated the same as other goals that motivate the robot's behavior. In entertainment settings, we often like characters that are naughty (Konijn & Hoorn, 2005). In entertainment, morally perfect characters may even be perceived as boring. In Silicon Coppélia (Hoorn, Pontier & Siddiqui, 2011), this could be implemented by updating the affective decision making module. Morality would be added to the other influences that determine the Expected Satisfaction of an action in the decision making process. By doing so, human affective decision-making behavior could be further explored.

Acknowledgements

This study is part of the SELEMCA project within CRISP (grant number: NWO 646.000.003). We would like to thank Aimee van Wynsberghe for fruitful discussions.

References

Allen, C., Varner, G., & Zinser, J. (2000). Prolegomena to Any Future Artificial Moral Agent. Journal of Experimental and Theoretical Artificial Intelligence, 12, 251–261.
Anderson, M., Anderson, S., & Armen, C. (2005). Toward Machine Ethics: Implementing Two Action-Based Ethical Theories. Machine Ethics: Papers from the AAAI Fall Symposium. Technical Report FS-05-06, Association for the Advancement of Artificial Intelligence, Menlo Park, CA.
Anderson, M., Anderson, S., & Armen, C. (2006). MedEthEx: A Prototype Medical Ethics Advisor. Proceedings of the Eighteenth Conference on Innovative Applications of Artificial Intelligence. Menlo Park, CA: AAAI Press.
Anderson, M., & Anderson, S. (2007). Machine Ethics: Creating an Ethical Intelligent Agent. AI Magazine, 28(4), 15–26.
Anderson, M., & Anderson, S. (2008). Ethical Healthcare Agents. Studies in Computational Intelligence, 107. Springer.
Anderson, M., & Anderson, S. (2010). Robot Be Good. Scientific American, October 2010, 72–77.
Banks, M.R., Willoughby, L.M., & Banks, W.A. (2008). Animal-Assisted Therapy and Loneliness in Nursing Homes: Use of Robotic versus Living Dogs. Journal of the American Medical Directors Association, 9, 173–177.
Barras, C. (2009). Useful, loveable and unbelievably annoying. The New Scientist, 22–23.
Buchanan, A.E., & Brock, D.W. (1989). Deciding for Others: The Ethics of Surrogate Decision Making. Cambridge University Press.
Dennett, D. (2006). Computers as Prostheses for the Imagination. Invited talk presented at the International Computers and Philosophy Conference, Laval, France, May 3.
Gillon, R. (1994). Medical ethics: four principles plus attention to scope. BMJ, 309(6948), 184–188.
Guarini, M. (2006). Particularism and the Classification and Reclassification of Moral Cases. IEEE Intelligent Systems, 21(4), 22–28.
Hoorn, J.F., Pontier, M.A., & Siddiqui, G.F. (2011). Coppélius' Concoction: Similarity and Complementarity Among Three Affect-related Agent Models. Cognitive Systems Research Journal, in press.
Kidd, C., Taggart, W., & Turkle, S. (2006). A Social Robot to Encourage Social Interaction among the Elderly. Proceedings of IEEE ICRA, 3972–3976.
Konijn, E.A., & Hoorn, J.F. (2005). Some like it bad. Testing a model for perceiving and experiencing fictional characters. Media Psychology, 7(2), 107–144.
Mappes, T.A., & DeGrazia, D. (2001). Biomedical Ethics, 5th ed. McGraw-Hill, pp. 39–42.
Marti, P., Bacigalupo, M., Giusti, L., & Mennecozzi, C. (2006). Socially Assistive Robotics in the Treatment of Behavioral and Psychological Symptoms of Dementia. Proceedings of BioRob, 438–488.
Picard, R. (1997). Affective Computing. MIT Press, Cambridge, MA.
Robins, B., Dautenhahn, K., Boekhorst, R.T., & Billard, A. (2005). Robotic Assistants in Therapy and Education of Children with Autism: Can a Small Humanoid Robot Help Encourage Social Interaction Skills? Journal of Universal Access in the Information Society, 4, 105–120.
Ross, W.D. (1930). The Right and the Good. Oxford: Clarendon Press.
Rzepka, R., & Araki, K. (2005). What Could Statistics Do for Ethics? The Idea of a Common Sense Processing-Based Safety Valve. In Machine Ethics: Papers from the AAAI Fall Symposium. Technical Report FS-05-06, Association for the Advancement of Artificial Intelligence, Menlo Park, CA.
Super, D.E. (1973). The Work Values Inventory. In D.G. Zytowski (Ed.), Contemporary Approaches to Interest Measurement. Minneapolis: University of Minnesota Press.
Tronto, J. (1993). Moral Boundaries: A Political Argument for an Ethic of Care. Routledge, New York.
Van Wynsberghe, A. (2012). Designing Robots for Care; Care Centered Value-Sensitive Design. Journal of Science and Engineering Ethics, in press.
Wada, K., & Shibata, T. (2009). Social Effects of Robot Therapy in a Care House. JACIII, 13, 386–392.
Wallach, W. (2010). Robot minds and human ethics: the need for a comprehensive model of moral decision making. Ethics and Information Technology, 12(3), 243–250.
Wallach, W., Allen, C., & Smit, I. (2008). Machine morality: bottom-up and top-down approaches for modelling human moral faculties. AI and Society, 22(4), 565–582.
Wallach, W., Franklin, S., & Allen, C. (2010). A Conceptual and Computational Model of Moral Decision Making in Human and Artificial Agents. Topics in Cognitive Science, 2, 454–485.
WHO (2010). Health Topics: Ageing. Available from: http://www.who.int/topics/ageing/en/