Artificial Blue and Deep Intelligence

Abdul Paliwala
CTI, Law Technology Centre, School of Law, University of Warwick
This paper explores the implications of the Kasparov v Deep Blue chess match for work on legal knowledge based systems. It suggests that it is necessary to accept the specificity of computer and human legal cultures and not simply adopt legal theories which appear most easily palatable for AI workers. A second suggestion is that AI work is better conceived in terms of a cybernetic partnership between computers and human programmers, domain experts and users. A third suggestion, that of the tendency for humans to make mistakes, suggests a proper role for AI precisely in those areas where human weakness is a significant factor. The final observation is that there may already be in train a process of changing the rules, or at least the conditions, of law to suit AI in law. This superficially attractive idea raises the deepest political issues.
"Should we try to make artificial intelligence by duplicating how humans do it or instead try to exploit the particular strength of machines?" David Stork (1997)

"(I)n most knowledge based systems ... it is not intended to imitate the problem-solving approach and reasoning methods of the expert. Most LKBSs are based on artificial models only meant to produce reliable results." (De Vries et al 1991.)
The victory of Deep Blue against Kasparov could, paradoxically, give comfort to those who believe in AI as well as those who believe in the uniqueness of human intelligence. For the AI community, there is tangible proof of success.¹ Deep Blue came close to fulfilling the promise of the original human v computer chess game with HAL in 2001. For believers in the "uniqueness of human intelligence", there is satisfaction that Deep Blue can do well in chess only because it is ultimately a quantifiable game, and that even the Japanese game of Go is as yet not amenable to the number crunching power of current machines.
Thus it is suggested on the Kasparov vs Deep Blue website (1997a):

Deep Blue, as it stands today, is not a "learning system". It is therefore not capable of utilizing artificial intelligence to either learn from its opponent or "think" about the current position of the chessboard.
Stork (1997) provides an apparently comfortable compromise, reasoning that computer intelligence and human intelligence have their own strengths and domains. According to him, current computers have their strengths in quantification and research, whereas humans excel at those tasks which call for "sophisticated pattern recognition" and rapid adaptability.
1) There is a considerable developing literature on Deep Blue vs Kasparov. For the purpose of this article it may be sufficient to browse the IBM site guide on the match at http://www.chess.ibm.com/guide/html/j.1.html.
JURIX '97: Abdul Paliwala
Computers are, of course, ever improving. An analysis offered of the victory of Deep Blue over Kasparov was that between 1996 and 1997, Deep Blue's program was made adaptable between games. The success of Deep Blue is not entirely the success of some abstract device, but that of a co-operative cybernetic partnership between computers and humans, with Deep Blue being assisted by expert programmers and an international grand master (Kasparov v Deep Blue website 1997a). Another explanation, of course, is the weakness of humans. Kasparov had a bad day in the last game. He was too tense, tried to play like Karpov and made some silly mistakes (Anand 1997). Deep Blue wouldn't make silly mistakes, but neither would it be able to adapt rapidly within the game. It does not operate by recognition of Kasparov's patterns of play. In fact, unlike human opponents of Kasparov, it is not playing against Kasparov. Finally, and more significantly, Kasparov has suggested that a key reason for Deep Blue's victory was that the game was designed in a manner which suited the computer. He thought that Deep Blue would not stand a chance were the game to be played under "human" conditions.

Each of the above issues has interesting implications for AI and law.
Different Cultures
The Deep Blue programmers' strength was not to ask the question in terms of "How does Kasparov play?" but to define the game of chess in terms amenable to the computer and to determine the goals in terms of computer techniques. In contradistinction, while much of AI in law research has been about emulating legal reasoning, a considerable amount has attempted to fit legal reasoning into forms amenable to contemporary AI methodology.² Both approaches have their problems. In particular, the problems have arisen from the deliberate attempt to focus on analytical positivism as the basis for theoretical understanding of the legal system, perhaps because of its deceptive similarity to logics accessible to computers. This is the case even for the exploration of "deep structures" and deontic logic.
An alternative perspective would be to use the differences between computer reasoning and human reasoning as the starting point for analysis, to see them as two separate cultures attempting to solve similar problems. A wide range of alternative theoretical perspectives provide "cultural" understanding of legal systems, whether in the tradition of legal anthropology and legal pluralism which appreciate legal systems within their own cultural contexts (Arthurs 1985, Abel 1982, Moore 1973), of US critical legal theory which emphasises the culture of law (Fish 1980), of autopoietic theory which sees law as self-referential cognitive systems (Luhmann 1985 and Teubner 1993), or the post-modernist approaches of Foucault 1979 and Derrida 1978 which emphasise and deconstruct the difference between discourses. None of these have specifically explored computer cultures, but cybernetic theory, for example that of Deleuze and Guattari (1994), or the work of Baudrillard (1983), is concerned precisely with the specificity of computer cultures. Such a cultural approach would free the AI community of the albatross of copying the specificities of human legal reasoning. To put it crudely, it does not, à la Jerome Frank, have to deal with human quirks such as what the judge had to eat for breakfast that morning, or whether she quarrelled with her husband, as an intrinsic part of the computer system. The issue would be whether the results achieved computationally were commensurate with the human goals of the legal system. More significantly, neither does it have to pretend that in some way crude logic programming systems are capable of providing answers to the mysteries of human reasoning. Instead,
2) Bench-Capon and Visser (1996) review the issues in their consideration of ontological techniques. It is interesting that their work is self-confessedly derived from computational issues and does not refer to legal theoretical ones.
research can take place on legal cultures as cultural systems and determine the separate ways in which computer cultures can develop practical solutions to some legal tasks. In the language of Deep Blue, work can concentrate on those bits of law which can be chessified. There is, of course, already evidence of such systems, which are called "legal practice systems". Very few are AI systems in the full sense, but they work. Genuine AI systems could, within their own cultural constraints, perform specific tasks without imitating human reasoning. More significantly, research on the development of legal KBS, for example in relation to neural networks, would continue without the artificiality of an attempt to copy human reasoning.
A cybernetic partnership?
Deep Blue's minders could not do much while the game was going on, but could tweak the programming between games. This is an example of a cybernetic partnership between computers and humans. Richard Susskind's (1989) Latent Damage System was based on a partnership in which the domain expert (Philip Capper) provided the heuristics which were cybernetically involved with computer logic. Typical decision support systems involve cybernetic partnerships in terms of domain expertise, heuristics and user decision making. The human makes the complex decisions which then lead to further computer analysis, and so on. Of course, there is no necessary implication of the dominance of humans over computers, but of divisions of labour. The issue is how to determine the division of labour across different cultural systems. This is the reason for the cultural analysis of the type suggested.
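The shape of such a partnership can be sketched in a few lines of code. The example below is purely illustrative and not drawn from the Latent Damage System or any real system: all names, the six-year limitation period, and the discoverability question are invented for the sketch. The computer performs the mechanical date arithmetic; the evaluative judgment is deferred to a human expert, whose answer feeds back into the analysis.

```python
# Illustrative sketch of a human-computer "cybernetic partnership" in a
# decision support system. All names and rules here are hypothetical.

from datetime import date

LIMITATION_YEARS = 6  # hypothetical limitation period for the sketch

def years_between(start: date, end: date) -> float:
    """Mechanical computation: elapsed time in years."""
    return (end - start).days / 365.25

def assess_claim(damage_date: date, today: date, human_judgment) -> str:
    """The computer handles the arithmetic; the human decides the open question."""
    elapsed = years_between(damage_date, today)
    if elapsed > LIMITATION_YEARS:
        # The borderline, evaluative question is delegated to the human expert.
        if human_judgment("Was the damage reasonably discoverable earlier?"):
            return "claim time-barred"
        return "claim may proceed (latent damage)"
    return "claim within limitation period"

# Usage: the human's answer feeds back into further computer analysis.
result = assess_claim(date(1990, 1, 1), date(1997, 11, 5),
                      human_judgment=lambda question: False)
print(result)  # -> claim may proceed (latent damage)
```

The design point is the division of labour: neither party dominates, but each contributes what its "culture" does well.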
The weakness of humans
Kasparov had an off day in the final game (Anand 1997). Computers do not have such off days. Computers can consistently perform giant feats of number crunching. An impressive feature of Deep Blue's performance is that it becomes virtually unbeatable in endgame situations, when the options are fewer and thus more amenable to the number crunching power of Deep Blue. It is not surprising that the computer legal practice systems which have been successful until now have been precisely in those domains in which the computer can keep track of transactions and transaction requirements, whereas humans might be prone to error. The fact that hardly any of these systems qualify as true AI systems is not necessarily significant.
The Rules of the Game
The final consideration is that Deep Blue is alleged to have won because the conditions of the game favoured it. This is the most tantalizing aspect of the issue as far as law is concerned. If only law can be quantified so that it becomes amenable to number crunching, if it could be converted to algorithms, if judicial discretion could be effectively reduced (that is, if law could be jurimetricised), then the problems for AI systems of logic and legal reasoning and legal cultures would disappear. It is possible that such shifts in legal cultures are already taking place, that underneath the ideological adherence of most lawyers to the uncertainty of law, there is a surreptitious development of a jurimetric culture of quantification.
As early as the seventies, it was being suggested that legislation could use algorithms to communicate complex ideas (e.g. Twining and Miers 1976). There has been an increasing use of mathematical formulae in legislation, most spectacularly in the United Kingdom with the Child Support Act 1990, under which child support is calculated in accordance with a complex mathematical formula. There are obvious advantages of such formulae. The Child Support provisions would be extremely difficult to describe in prose, whereas they are easily comprehensible as a formula. Moreover, the formula can be complex and take account of a wide variety of factors. But most significantly, it can be amenable to the development of "expert type" calculation systems which can provide for the parties and the civil servants consistent methods of calculation once the factors have been ascertained. That is, the rules of the game are made to fit computer logic. Such computation would also save considerable legal resources. I suggested at the 12th BILETA Conference, in response to De Mulder and Noortwijk's (1997) paper, that this development may be taking place. More significantly, in a recent unpublished paper, Zariski has begun to analyse tendencies in academic literature which show an increasing tilt towards quantification.
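The appeal of an "expert type" calculation system can be illustrated with a sketch. The formula below is deliberately simplified and hypothetical; it is not the actual Child Support Act 1990 formula, and all rates, thresholds and names are invented. The point is only that, once the factual "factors" are ascertained, the calculation is purely mechanical and therefore perfectly consistent between users.

```python
# Illustrative only: a simplified, hypothetical maintenance formula --
# NOT the actual Child Support Act 1990 formula. Rates are invented.

def maintenance_due(net_income: float, n_children: int,
                    protected_income: float = 80.0) -> float:
    """Weekly maintenance under a made-up rate table (15% / 20% / 25%)."""
    rates = {1: 0.15, 2: 0.20}           # hypothetical percentage rates
    rate = rates.get(n_children, 0.25)   # three or more children: top rate
    assessable = max(net_income - protected_income, 0.0)
    return round(assessable * rate, 2)

# Two officials (or a parent and a civil servant) running the same facts
# always reach the same figure -- consistency is the point.
print(maintenance_due(300.0, 2))  # -> 44.0
```

Whatever one thinks of jurimetrics, a rule of this shape is trivially computable, which is exactly what makes formula-based legislation so congenial to computer systems.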
If this brings comfort to the AI/jurimetrics communities, then they should beware of the popular resistance against the crude formula of the Child Support Act. The objections to a lack of consideration for the human factors were such as to force the government to concede an element of discretion in the formula in the most recent legislative reforms.
Conclusions
The implications of chess games between computers and humans are not as simple as they appear at first sight. Neither are the implications for AI and law, and perhaps it is a mistake to draw analogies. But having been asked to do so by my hosts at this conference, as a good guest it is necessary to oblige. I have made four suggestions. Firstly, that like the programmers of Deep Blue, and some AI researchers, it is necessary to accept the specificity of computer and human legal cultures and not simply adopt legal theories which appear most easily palatable for AI workers. This would paradoxically liberate AI workers from trying to invent computer systems which mimic human legal reasoning, and hence allow them to concentrate on tasks which computer systems can properly accomplish.
Another suggested lesson from Kasparov v Deep Blue is that AI work is better conceived in terms of a cybernetic partnership between computers and human programmers, domain experts and users. The third lesson, that of the tendency for humans to make mistakes, suggests a proper role for AI precisely in those areas where human weakness is a significant factor.
The final lesson is much more controversial. Changing the rules, or at least the conditions, of the game to suit AI is superficially attractive, but raises the deepest human and political issues. Informed decisions cannot be made without proper research. Until now the focus of AI and law research has been on the development of practical systems, research into computing logic systems, or analysis of legal reasoning. I would like to suggest a greater focus on analysis of both computer and legal cultures, and in particular of the way in which legal cultures might be shifting towards jurimetrics. Only when we have done this can we speculate about the implications for human values.
References
Abel, R. ed. 1982. The Politics of Informal Justice. Vols 1 and 2. New York: Academic Press.
Anand, V. 1997. More questions than answers. Kasparov v Deep Blue website http://www.chess.ibm.com/home/may11/story3.html.
Arthurs, H. 1985. Without the Law: Administrative Justice and Legal Pluralism in Mid-19th Century England. Toronto: University of Toronto Press.
Baudrillard, J. 1983. In the Shadow of the Silent Majorities. New York: Semiotext(e).
Bench-Capon, T. and Visser, P. 1996. Deep models, ontologies and legal knowledge-based systems. In van Kralingen et al. eds. Legal Knowledge Based Systems. Jurix '96. Tilburg UP 1996, pp. 3-15.
Capper, P. and Susskind, R. 1989. Latent Damage: The Expert System. London: Butterworths.
De Vries, W.S., van den Herik, H.J. and Schmidt, A.H.J. 1991. Separate modelling of user system cooperation. In Breuker, J.A. et al. eds. Legal Knowledge Based Systems: Model-Based Legal Reasoning. Lelystad: Koninklijke Vermande.
Deleuze, Gilles and Félix Guattari 1994. What is Philosophy? Translated by Hugh Tomlinson and Graham Burchell. London: Verso.
Derrida, J. 1978. Writing and Difference. London: Routledge.
Fish, S. 1980. Is There a Text in This Class? The Authority of Interpretive Communities. Cambridge, Mass.: Harvard University Press.
Foucault, M. 1979. The History of Sexuality, Vol 1. London: Allen Lane.
Kasparov vs Deep Blue website. 1997. Meet the Players. http://www.chess.ibm.com/meet/html/d.html.
Luhmann, Niklas. 1985. A Sociological Theory of Law. Translated by Elizabeth King and Martin Albrow, edited by Martin Albrow. London: Routledge & Kegan Paul.
Moore, S.F. 1973. Law and Social Change: The semi-autonomous social field as an appropriate subject of study. Law and Society Review 7, 719-46.
Stork, D. 1997. The end of an era, the beginning of another? HAL, Deep Blue and Kasparov. Kasparov vs Deep Blue website http://www.chess.ibm.com/learn/html/e.8.1.html.
Teubner, Gunther 1993. Law as an Autopoietic System. Translated by Anne Bankowska and Ruth Adler, edited by Zenon Bankowski. Oxford: Blackwell Publishers.
Twining, W. and Miers, D. 1976. How to Do Things with Rules. 1st ed. London: Weidenfeld & Nicolson. Ch 9.
Zariski, A. 1997. Virtual words and the fate of law (unpublished paper).