The ethical dilemma of self-driving cars
One of my personal favourite movies of recent times is 'The Imitation Game.' Among many other things, it raises a question that can, in a way, be mapped to our current discussion of the ethical dilemma autonomous systems face.

After the historic breaking of the Enigma, Alan Turing and his team decide to design an algorithm that determines which vessels to save and which not to, based on a preset priority list. This was to ensure that the Nazis would not learn that their supposedly unbreakable code had failed, and hence would not try to change it. Whatever the historical discrepancies in this retelling of the event that changed the course of the war, the question remains: who, if anyone at all, decides whom to save and whom not to?
Maybe we've rushed into putting all our points across a tad too quickly. Let's go at a more measured pace.
Enigma messages carried coded information about which ships would be attacked during the war. Believing the cipher unbreakable, the Nazis openly conducted all military communication with it. Once the code was broken, the Allied military took to saving only a few select ships from potential harm after intercepting and decoding Enigma traffic, in order to keep the decoding a secret. The ethical question here can be justified by saying that wartime requires sacrifices.
But the same question poses one of the major hurdles autonomous cars have to clear before becoming a common sight on roads. A TED-Ed video fast-forwards you a few decades to a thought experiment designed to illustrate it.
Suppose you are cruising the highway in a self-driving car when suddenly the contents of a truck in front of you fall onto the road, leaving the car no time to come to a halt. On one side of the car there's a motorcyclist, and on the other an SUV. Now what does the car do? Hit the motorcyclist to ensure your safety? Plough straight into the truck, minimizing damage to others at the cost of your life? Or take the middle ground by hitting the SUV, which carries a low probability of loss of life?
Now, if such a situation arose on today's roads, any reaction would be considered just that: an impulsive response to a situation. A reaction. But in the case of an autonomous car it is no longer a reaction; it becomes a decision. A programmer who gives the car instructions to follow if such a condition arises is, in a way, dictating the response based on his own reasoning, or on instructions given to him. There lies the problem. Who decides for the car? Governments? Companies? IT firms?
Granted, there are numerous advantages to self-driving vehicles, the most important of which is the removal of human error from the equation, thus reducing accidents, traffic jams, and road congestion, among other things. But that doesn't mean accidents won't happen, and when (or if) they do, the outcome is not always instinctive. The response may be pre-decided when the emergency-handling algorithm is coded in, in all probability months before the accident actually happens.
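To make the distinction between a reaction and a pre-coded decision concrete, here is a purely hypothetical toy sketch. Nothing in it reflects any manufacturer's actual logic; the obstacle types, the `occupant_risk` numbers, and the policy of swerving toward the lowest-risk obstacle are all illustrative assumptions.

```python
from dataclasses import dataclass

@dataclass
class Obstacle:
    kind: str            # e.g. "motorcyclist", "suv" (illustrative labels)
    occupant_risk: float # assumed probability of serious harm to others

def emergency_response(obstacles, braking_feasible):
    """Hypothetical pre-coded policy: the 'decision' the essay describes,
    fixed at coding time, long before any real emergency occurs."""
    if braking_feasible:
        return "brake"
    # Swerve toward whichever obstacle carries the lowest assumed risk.
    target = min(obstacles, key=lambda o: o.occupant_risk)
    return f"swerve:{target.kind}"

scene = [Obstacle("motorcyclist", 0.9), Obstacle("suv", 0.2)]
print(emergency_response(scene, braking_feasible=False))  # swerve:suv
```

The point is not the specific numbers but that the choice of outcome is baked into the code: whoever picks the risk estimates and the `min` criterion has already decided, months in advance, who gets hit.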
Let's go further and suppose that the top priority in such a situation is 'minimize harm.' Even then you might be faced with a new set of problems. To illustrate this, in a similar setup we consider two motorcyclists, one with a helmet and one without. If the car decides to 'minimize harm' and crashes into the guy with a helmet, it is in a way punishing the responsible, law-abiding citizen. If it chooses to crash into the other guy, deeming him irresponsible, it is exerting a kind of totalitarian justice and goes against the very principle of 'minimize harm' it is built on.
Consumers and manufacturers face further ethical dilemmas with these cars. On one hand you have a car that will minimize harm, even if it means getting you killed. On the other you have one that will save you no matter what, even if it means getting others killed. Which would you choose?
This, as previously stated, is to be governed by standard protocols, and drivers of manual vehicles, who are in no way party to these protocols, may sometimes turn out to be the victims. Is it somehow better to have a random, instinctive reaction rather than a predetermined one?
This is just one of the ethical dilemmas we face as this innovation moves into everyday use. Since the car is basically a computer on wheels, put a hacker in there to change some part of the code and you end up with a disaster. Who is to be held accountable if such a thing happens? The company? The coders? The passengers? The government?
Even if we take 'minimize harm' as the ultimate deciding protocol, the minimal harm that can be done may be to NOT avoid the impending accident and let the passenger die. In that case, who will buy these driverless cars? And what is minimal harm, exactly? Three old men, or one kid? If the car HAS to crash into one of the two, what is the factor that decides 'minimal harm'?
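The "three old men or one kid" question can be sharpened with a toy harm score. This is a minimal sketch under openly made-up assumptions: the weighting functions and the "years of life lost" figures are invented for illustration, not drawn from any real system or actuarial data.

```python
def total_harm(group, weight):
    """Sum a per-person harm weight over a group.
    Which group is the 'lesser' harm depends entirely on the weight chosen."""
    return sum(weight(person) for person in group)

elderly = ["old man"] * 3
child = ["kid"]

def count_lives(person):
    return 1.0  # every life weighted equally

def years_lost(person):
    return 70.0 if person == "kid" else 10.0  # illustrative numbers only

# Counting lives, hitting the single child is the 'lesser' harm...
print(total_harm(child, count_lives) < total_harm(elderly, count_lives))  # True
# ...but counting years of life lost, hitting the three elderly men is.
print(total_harm(elderly, years_lost) < total_harm(child, years_lost))    # True
```

The same scene yields opposite verdicts under two defensible weightings, which is exactly the essay's point: 'minimal harm' is not a fact the car can compute until someone first decides what harm means.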
In reality we may never face exactly such a problem. Nevertheless, as in every experiment, we test the limits of our theory, and only when it is completely foolproof do we use it to solve our problems, thus making quality of life better.
