AI and Legal Tech in Context: Privacy and Security Commons
1. AI and Legal Tech in Context:
Governing Privacy and Security Commons
How effective privacy and information security depend on
formal and informal institutions that encourage sharing
knowledge, information, and data.
Michael Madison
University of Pittsburgh School of Law & Pitt Cyber
ACBA February 2018
@profmadison
& knowledge-commons.net
2. Artificial intelligence / algorithms / cognitive
computing raise numerous non-technical public
policy questions in legal settings as well as
elsewhere
Privacy concerns ● security concerns ● transparency concerns ●
accountability concerns ● accessibility concerns ● monopoly &
competition concerns ● cooperation & exclusivity concerns ● “nature
of humanity” concerns
Normative / conceptual / theoretical questions: What role(s) should AI /
algorithms play?
Descriptive / research-based questions: How do systems for
generating, distributing, accessing, and managing information operate
in practice?
3. The research question
How is effective privacy / security accomplished?
The hypothesis: governance
Privacy and security are grounded in commons
institutions (i.e., they depend on formal and informal
group-based patterns of practice and belief) and
require more than technical solutions (e.g.,
authentication techniques) or legal rules (e.g.,
criminal enforcement).
4. The research strategy
We study information and knowledge commons,
institutions that generate and protect information by
sharing it in the context of rule-based systems.
Commons are institutionalized sharing of resources among
members of some group or community, solving some
social dilemma.
Not a place. Not a thing. Not “the commons.”
5. The research to date
• Defined the Knowledge Commons Research
Framework (Madison et al. 2010) (building on
Ostrom 1990)
• Governing Knowledge Commons (Frischmann et
al. Oxford UP 2014)
• Governing Medical Knowledge Commons
(Strandburg et al. Cambridge UP 2017)
• Governing Privacy Commons (forthcoming
Cambridge UP)
• Governing University Commons (forthcoming
Cambridge UP)
6. Privacy and security commons research in progress
In each case study context, identify: the informational dilemma(s) to be solved; the
resources to be managed; the group or community; the formal and informal rules
(internal and external) by which information in the group or community is governed;
and the outcomes, good or bad.
Privacy/security examples involve (i) a mix of privacy and collaboration rules
governing information resources, (ii) deployed in order to promote valuable practices:
• Anonymous, private, secure voting systems encourage democratic
participation and lead to aggregating political preferences in fair ways
• Financial institutions’ secure collection & management of customer data
encourages participation in financial markets
• Secure sharing of private information in social networks (both close-knit &
loose-knit) can promote healthy community and society
• Chatham House Rule for confidential meetings encourages productive
collaboration
7. Strengths and weaknesses
• Contextual approach leads to learning more about variance in communities,
obstacles/dilemmas, objectives, and institutions beyond tech firms, beyond
markets, beyond governments
• Overlaps and intersections among commons institutions can be explored
• Bottom up learning about normative values
• Possibility of improving institutional design via design principles
BUT
• The approach doesn’t work at the extremes: privacy with n = 1; privacy with
n = everyone
• The approach is complicated by working with physical / material resources
• Sidelines normative debate and values
• Essentially ethnographic; needs dedicated research community
8. Payoffs: Artificial intelligence, algorithms, and
cognitive computing in context
• Commons governance – sharing information resources to generate productive
outcomes – is historical, traditional, and effective.
• Distinguish between information system as infrastructure (single resource,
multiple uses & users) and system as proprietary service (an exclusive thing).
Commons is more likely to be effective as governance for infrastructure.
• Payoff 1: Exclusivity matters most when AI is used in consulting one-to-one with
clients and delivering services to clients. Example: predictive analytics.
• Payoff 2: Commons may matter more, and may trump exclusivity, where legal
tech / AI operates as infrastructure – e.g., resource in advocacy and/or judicial
administration. Example: DNA testing, election security. Similar: case text
databases.
• Payoff 3: Governance cannot be divorced from hard values questions.
Capabilities of AI may challenge distinctions between humans and tech in
framing big “what is justice?” questions. If AI can write briefs (as Westlaw can,
or will soon), and if AI can adjudicate disputes (as insurance carriers may soon
do, with simple claims), then why have lawyers? That’s not a rhetorical question.