Rationalizations, Egoism, Population Ethics, and the Problem-Solving Problem
Transcript

  • 1. From Rationalizations to the Problem-Solving Problem...
  • 2. How to spot a rationalization
      • Example situation: Suppose you spot a nice person and ask yourself whether you should approach him/her.
      • Reason against: I've got homework to do!
      • Question: Is that your true rejection? Is it consistent with other things you believe? Does it make sense?
  • 3. How to spot a rationalization
      • General technique: VARY THE SITUATION! More generally: THINK THE OPPOSITE!
      • Situation: Suppose you spot a nice person and ask yourself whether you should approach him/her.
      • Variation: Suppose the nice person approached you.
      • Reason against: I've got homework to do?
  • 4. How to spot a rationalization
      • Stated reason: homework. Suspected real reason: your approach (anxiety).
      • More specific techniques to spot a rationalization:
        (1) Change the stated reason.
        (2) Change the suspected real reason.
        (1): If the conclusion stands, the stated reason was fake.
        (2): If the conclusion falls, the stated reason was fake.
  • 5. How to spot a rationalization
      • Example: animal ethics.
        Claim: Cows don't have a right to life.
        Stated reason: Cows are not intelligent enough.
        Suspected real reason: "Wrong" species, not human.
        (2): By the stated reason, "not intelligent enough" humans don't have a right to life either. => Rationalization.
  • 6. How to spot a rationalization
      • Example: general ethics. WHY CARE?
        Claim: I know it's terrible. I just don't care.
        Stated reason: I'm an egoist.
      • VARY THE SITUATION and see what else follows:
  • 7. How to spot a rationalization
      • VARY THE SITUATION and see what else follows if you're a true egoist:
        (i) Would you push a button that gave you $100 and inflicted a painful disease on a child?
        Follow-up question (action/omission bias):
        (ii) If you wouldn't take those $100, then why do you keep $100 instead of preventing a painful disease in a child?
  • 9. The Practical Master Argument for a Caring Life-Career for All Non-100%-Assholes
      (1) Your degree of caring is not 0, i.e. some goals you in fact pursue (however low-priority) are not egoistic.
      (2) For almost all egoistic preferences there is a caring life-career that's as good as the best non-caring careers.
      (3) => All non-100%-assholes win by going for a caring life-career.
  • 10. A theoretical argument against partial altruism/egoism
      (1) Model for partial altruism/egoism: starting out as a 100% egoist, you take a pill (deal!) that turns you into a 90% egoist and 10% altruist.
      (2) If the (-10% egoism, +10% altruism) deal is good once, why not twice? Why not take the pill a second time?
      (3) => Either 100% egoism or 100% altruism.
      (Reversal Test: 100% altruism, taking 10% egoism pills?)
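The iteration step in slide 10's argument can be made concrete with a minimal sketch. This is not from the original deck; it simply runs the same (-10% egoism, +10% altruism) trade repeatedly to show where the induction terminates.

```python
# Illustrative sketch of the pill-iteration argument: each pill trades
# 10 percentage points of egoism for altruism. If the trade is good once,
# nothing in the model stops it from being good again.

egoism, pills = 1.0, 0
while egoism > 0:
    egoism = round(egoism - 0.1, 1)  # round() avoids floating-point drift
    pills += 1

print(pills, egoism)  # after 10 pills the agent is a 100% altruist
```

The point of the sketch is only that the model has no stable intermediate state: any reason to accept the first pill applies equally to the next, so the process halts only at 0% egoism (or, under the Reversal Test, at 0% altruism).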
  • 11. Populations in good worlds
      World_1: 1 billion hungry, 3 billion happy
      World_2: 1 billion hungry, 6 billion happy
      What's your metric? Where do you even want to go? What's your goal (as an ethically caring agent)?
      Disagreement: catastrophe!
  • 12. Populations in good worlds
      World_1: 1 billion hungry, 3 billion happy
      World_2: 1 billion hungry, 6 billion happy

      World_2: 1 billion hungry, 6 billion happy
      World_3: 1 billion hungry, 20 billion happy

      World_2: 1 billion hungry, 6 billion happy
      World_4: 2 billion hungry, 20 billion happy

      World_5: 0
      World_6: 1 billion happy

      World_5: 0
      World_7: 1 hungry (or tortured), 1 billion happy
  • 13. Populations in good worlds
      If we are not going to postulate a moral duty to turn rocks into happiness...
      ...we'll have to go for some prior-existence view:
      (1) sufficiently happy being = non-existence
      (2) very, very happy being = non-existence
      (3) => sufficiently happy being = very, very happy being?
  • 14. Populations in good worlds
      Prior-existence world-comparisons (personal identity?):

                 Individual_1   Individual_2
      World_1:        2              0
      World_2:        3              1
      World_3:        0              2
      World_4:        1              3
      World_5:        2              0
  • 15. Populations in good worlds
      World_1: 1 billion hungry, 3 billion happy
      World_2: 1 billion hungry, 6 billion happy
      Steven Pinker, "The Better Angels of Our Nature: Why Violence Has Declined": obvious progress, right?
      (1) Relative/Average Metric
      (2) Absolute (Happiness – Suffering) Metric
  • 16. Populations in good worlds
      World_1: 1 billion hungry, 3 billion happy
      World_2: 1 billion hungry, 6 billion happy
      (1) Relative/Average Metric:
      World_a: 1 billion tortured
      World_b: 1 billion tortured + 100 hungry
  • 17. Populations in good worlds
      World_1: 1 billion hungry, 3 billion happy
      World_2: 1 billion hungry, 6 billion happy
      (2) Absolute (Happiness – Suffering) Metric
      World_a: 1 billion tortured
      World_b: 1 billion tortured + 10 (or 100) billion happy
      World_c: 0.1 billion tortured
  • 18. Populations in good worlds
      World_1: 1 billion hungry, 3 billion happy
      World_2: 1 billion hungry, 6 billion happy
      (3) Negative Metric: minimize problematic lives
      World_a: 1000 people, 100 crime victims
      World_b: 1000 people, 50 crime victims
      World_c: 333 people, 33 crime victims
  • 19. Populations in good worlds
      World_1: 1 billion hungry, 3 billion happy
      World_2: 1 billion hungry, 6 billion happy
      (3) Negative Metric: minimize problematic lives
      World_a: 999 good lives, 1 terrible
      World_b: 0
      World_a: 1 child with P(good life) = 99.9%, P(terrible life) = 0.1%
      World_b: 0 children
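The three candidate metrics from the slides above can be applied mechanically to the running World_1/World_2 example. The sketch below is not from the original deck; the scoring functions are illustrative assumptions (using head counts as a crude welfare proxy), meant only to show how the metrics rank the same pair of worlds differently.

```python
# Illustrative sketch: scoring the slides' example worlds under the three
# population metrics. The exact function definitions are assumptions.

def average_metric(hungry, happy):
    """(1) Relative/average metric: fraction of lives that are happy."""
    return happy / (hungry + happy)

def absolute_metric(hungry, happy):
    """(2) Absolute metric: happiness minus suffering (head counts as proxy)."""
    return happy - hungry

def negative_metric(hungry, happy):
    """(3) Negative metric: number of problematic lives (lower is better)."""
    return hungry

# The slides' running example, in billions of (hungry, happy) lives.
worlds = {"World_1": (1, 3), "World_2": (1, 6)}

for name, (hungry, happy) in worlds.items():
    print(f"{name}: avg={average_metric(hungry, happy):.2f} "
          f"abs={absolute_metric(hungry, happy)} "
          f"neg={negative_metric(hungry, happy)}")
# World_1: avg=0.75 abs=2 neg=1
# World_2: avg=0.86 abs=5 neg=1
```

Note how the metrics disagree in character: World_2 beats World_1 under (1) and (2), but the two worlds tie under (3), since the number of problematic lives is unchanged. That disagreement is exactly the "catastrophe" the slides warn about.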
  • 21. Populations in good worlds
      Examples:
      • Animal Ethics
      • X-Risks
  • 22. How to create a better world?
      (i) In case of ethical uncertainty: safe bets
      (ii) Promising: don't solve problems directly, but solve the problem-solving problem
  • 23. The Problem-Solving Problem
      "The world has an abundance of serious ethical problems, causing human and animal suffering, and delays or risks to our future. Wild animals suffer gruesome fates, farmed animals are tortured, humans endure diseases, war, poverty, torture, slavery... These problems could be called villains to be defeated. The biggest villain is a sort of all-powerful meta-villain, called insufficient intelligence to solve our problems instantly. Imagine that an advanced extraterrestrial group of cyborgs, having evolved for millions of years with superintelligence, reached Earth and contacted our world leaders in order to help us solve our problems. Does anybody honestly think that they would follow the same inefficient strategies that we do to solve our problems, such as distributing nets to prevent malaria in Africa, or encouraging people to donate to it?
      Their solutions would be much faster: they might rapidly develop a gene therapy suited to our needs, that would spread in a highly contagious virus or some other method of delivery and turn us into more evolved and ethically efficient beings. They might develop cultured animal products such as meat, eggs, milk, and leather that would be very cheap and instantly substitute abusive animal farming. Their solutions would be extremely different and more efficient.
      Why are we not as efficient as these aliens? The only thing preventing us from being like them is not being intelligent enough. Therefore, intelligence enhancement, or defeating the villain of insufficient intelligence, is very important, perhaps the most important thing of all. It is the chief of all the other villains." – Jonatas Müller
  • 24. The Problem-Solving Problem
      • Meta-problems:
        (i) not enough intelligence/rationality
        (ii) not enough empathy/altruism
      • Cultural engineering:
        - rationality skills
        - good values, caring people
      • Transhumanism
      • Artificial Intelligence (AI)