1. The ethical dilemma of self-driving cars
One of my personal favourite movies of recent times is 'The Imitation Game.' Among many other things, it raises a question that can, in a way, be mapped to our current discussion of the ethical dilemma autonomous systems face.
After the historic breaking of the Enigma, Alan Turing and his team decide to design an algorithm to determine which vessels to save and which not to, based on a preset priority list. This was to ensure that the Nazis did not come to know that the code had failed to stay unbreakable, and hence would not try to change it. Whatever the historical discrepancies in this retelling of the event that changed the course of the war, the question remains: who, if anyone at all, decides whom to save and whom not to?
Maybe we’ve rushedintoputtingall pointsacrossatad too quickly.Let’sgoat a more revisedpace.
Enigma messages basically carried coded information as to which ships would be attacked during the war. The code was designed to be unbreakable, so the Nazis openly conducted all military communication using it. Once it was broken, the military took to saving only a few select ships from potential harm after intercepting and decoding the Enigma traffic, in order to keep the decoding a secret. The ethical question here can be justified by saying that wartime requires sacrifices.
But the same question poses one of the major hurdles autonomous cars have to clear before becoming a common sight on roads. A TED-Ed video fast-forwards you a few decades to a thought experiment designed to illustrate it.
Suppose you are cruising down the highway in a self-driving car when suddenly the contents of a truck in front of you fall onto the road, leaving the car no time to come to a halt. On one side of the car there's a motorcyclist, and on the other an SUV. Now what does the car do? Hit the motorcyclist to ensure your safety? Or keep going straight and hit the truck, minimizing damage to others at the cost of your life? Or take the middle ground by hitting the SUV, which carries a low probability of damage to life?
Now, if such a situation arose on today's roads, any reaction would be considered just that: an impulsive response to a situation. A reaction. But in the case of an autonomous car it no longer remains a reaction; it turns into a decision. A programmer who gives the car instructions to follow if such a condition arises is, in a way, dictating the response based on his own reasoning or the instructions given to him. There lies the problem. Who decides for the car? Governments? Companies? IT firms?
Granted, there are numerous advantages to self-driving vehicles, the most important of which is the removal of human error from the equation, thus minimizing accidents, traffic jams and road congestion, among many other things. But that doesn't mean accidents won't happen, and when they do (if they do), the outcome is not always instinctive. It may mean that the response was pre-decided when the emergency-handling algorithm was coded in, in all probability months before the accident actually happens.
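To make that point concrete, here is a minimal sketch of what such a pre-coded emergency rule could look like. Everything in it is hypothetical (no real vehicle exposes such a table), but it shows how the "decision" is fixed in a priority list long before any accident occurs:

```python
# Hypothetical sketch: a pre-programmed emergency rule for an autonomous car.
# The outcome of a future crash is fixed here, months before it happens.

# Assumed, hand-picked priority order -- this list IS the contested choice.
TARGET_PRIORITY = ["truck", "suv", "motorcyclist"]  # prefer earlier entries

def choose_collision_target(unavoidable_obstacles):
    """Given obstacles the car cannot avoid, return which one to hit.

    The answer is not a reaction; it is a lookup in a table a
    programmer wrote in advance.
    """
    for target in TARGET_PRIORITY:
        if target in unavoidable_obstacles:
            return target
    return unavoidable_obstacles[0]  # fallback: no preference encoded

# The scenario from the thought experiment:
choice = choose_collision_target(["motorcyclist", "suv", "truck"])
# -> "truck": the car sacrifices its passenger, because the list says so
```

Whoever writes `TARGET_PRIORITY`, whether government, company or programmer, is the one who "decides for the car."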
Let’sgo furtherto considerthatthe top priorityinsucha situationis‘Minimize Harm.’Eventhenyou
mightbe facedwitha newset of problems.Toillustrate thisinasimilarsetupwe considertwo
motorcyclists.One withahelmetandone without.If the cardecidedto‘minimizeharm’ andcrash into
the guy witha helmetitisina way castigatingthe responsible,law abidingcitizen.If itchoosestocrash
intothe otherguy deeminghimirresponsible,itisexertingtotalitarianjustice andwill goagainstthe
veryprinciple of ‘minimize harm’itisbuilton.
2. Consumers and manufacturers are faced with further ethical dilemmas around these cars. On one hand you have a car that will minimize harm, even if it means getting you killed. On the other you have one that will save you no matter what, even if it means getting others killed. Which will you choose?
This, as previously stated, is to be governed by standard protocols, and drivers of manual vehicles, who are in no way party to these protocols, sometimes turn out to be the victims. Is it somehow better to have a random, instinctive reaction rather than a predetermined one?
This is just one of the ethical dilemmas we face as this innovation comes into everyday use. Since the car is basically a computer on wheels, put a hacker in the mix who changes some part of the code, and you end up with a disaster. Who is to be held accountable if such a thing happens? The company? The coders? The passengers? The government?
Even if we take 'minimize harm' as the ultimate deciding protocol, the minimal harm might turn out to be NOT avoiding the impending accident and letting the passenger die. In that case, who'll buy these driverless cars? And what is minimal harm, exactly? Three old men or one kid? If the car HAS to crash into one of the two, what is the factor that decides 'minimal harm'?
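The difficulty is that 'minimize harm' only becomes computable once someone attaches a number to each life, and those numbers are themselves the value judgment. A minimal sketch, with entirely made-up weights, shows how the choice of weights silently decides the question:

```python
# Hypothetical sketch: "minimize harm" requires an explicit cost function.
# The weights below are arbitrary illustrations -- changing them changes
# the car's decision, which is exactly the ethical problem.
HARM_WEIGHTS = {
    "elderly_pedestrian": 1.0,
    "child_pedestrian": 1.5,  # is a child's life "worth" 1.5x? who decided?
}

def total_harm(outcome):
    """Sum the assumed harm weights over everyone hurt in an outcome."""
    return sum(HARM_WEIGHTS[person] for person in outcome)

def least_harm(outcomes):
    """Pick the outcome with the smallest total assumed harm."""
    return min(outcomes, key=total_harm)

# Three old men vs. one kid, under these particular (contestable) weights:
choice = least_harm([
    ["elderly_pedestrian"] * 3,  # total harm = 3.0
    ["child_pedestrian"],        # total harm = 1.5
])
# -> the single child is chosen, purely because 1.5 < 3.0
```

Flip the weight on `child_pedestrian` above 3.0 and the same 'minimize harm' protocol sacrifices the three old men instead; the algorithm never resolves the dilemma, it only hides it inside a constant.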
In reality we may not always face exactly such a problem. Nevertheless, as in every experiment, we test the limits of our theory, and only if it is a hundred percent foolproof do we use it to solve our problems, thus making the quality of life better.