Mass Declassification (Sept 23, 2010, v2.1)
My public presentation as delivered to the Public Interest Declassification Board (PIDB), which is trying to determine the best way to declassify and release over 400M classified documents.

Here is a look at the DeepQA architecture. This is like looking inside the brain of the Watson system from about 30,000 feet up. Remember, natural language is ambiguous, polysemous, and tacit, and its meaning is often highly contextual. Bottom line: the computer needs to consider many possible meanings, attempting to find the inference paths that are most confidently supported by the data.

The primary computational principle supported by the DeepQA architecture is to assume and maintain multiple interpretations of the question, to generate many plausible answers or hypotheses, and to collect and process many different evidence paths that might support or refute those hypotheses. Each component in the system adds assumptions about what the question means, what the content means, what the answer might be, or why it might be correct.

DeepQA is implemented as an extensible architecture and was designed from the outset to support interoperability across independently developed analytics. For this reason it was implemented using UIMA, a framework and OASIS standard for interoperable text and multi-modal analysis contributed by IBM to the open-source community and now an Apache project (http://uima.apache.org). Over 100 different algorithms, implemented as UIMA components, were developed, advanced, and integrated into this architecture to build Watson.

In the first step, Question and Category Analysis, parsing algorithms decompose the question into its grammatical or syntactic components. Other algorithms identify and tag specific semantic entities like names, places, or dates. In particular, the type of thing being asked for, if it is indicated at all, will be identified. We call this the LAT, or Lexical Answer Type, such as FISH, CHARACTER, or COUNTRY.

In Query Decomposition, different assumptions are made about whether and how the question might be decomposed into sub-questions. The original question and each identified sub-part follow parallel paths through the system.

In Hypothesis Generation, DeepQA performs a variety of very broad searches for each of several interpretations of the question. These searches are performed over a combination of unstructured data (natural language documents) and structured data (available knowledge bases). The goal of this step is to generate possible answers to the question and/or its sub-parts. At this point there is not a lot of confidence in these possible answers, since little intelligence has been applied to understanding the content that might relate to the question. The focus is on generating a broad set of hypotheses, or, for this application, what we call "Candidate Answers". To implement this step for Watson we used multiple open-source text and knowledge-base search components.

DeepQA acknowledges that resources are ultimately limited, and some parameterized judgment about which candidate answers are worth pursuing further must be made, given constraints on time and available hardware. Based on a trained threshold for optimizing the tradeoff between accuracy and latency, DeepQA uses soft filtering: different lightweight algorithms judge which candidates are worth gathering evidence for and which should get less attention and continue through the computation as-is. In contrast, with a hard filter, candidates falling below the threshold would be eliminated from consideration entirely at this point.
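Sketched minimally, assuming a single lightweight score per candidate (an illustration of the soft-filtering idea, not Watson's actual components or threshold):

```python
# A minimal sketch of soft filtering: lightweight scores decide which
# candidates get full evidence gathering, but low scorers are kept and
# ride along unenriched, unlike a hard filter, which would discard them.
def soft_filter(candidates, light_score, threshold=0.2):
    """Split candidates into (enrich, pass_through); nothing is dropped."""
    enrich, pass_through = [], []
    for c in candidates:
        (enrich if light_score(c) >= threshold else pass_through).append(c)
    return enrich, pass_through
```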
In Hypothesis and Evidence Scoring, the candidate answers are first scored independently of any additional evidence by deeper analysis algorithms. This may, for example, include typing algorithms: algorithms that produce a score indicating how likely it is that a candidate answer is an instance of the Lexical Answer Type determined in the first step, for example Country, Agent, Character, City, Slogan, or Book. Many of these algorithms may fire, using different resources and techniques to come up with a score. What is the likelihood that "Washington", for example, refers to a "General", a "Capital", a "State", a "Mountain", a "Father", or a "Founder"?

Evidence, in this case more documents, passages, and more structured facts, is collected for the many candidate answers. Each piece of evidence is subjected to many independently developed algorithms that deeply analyze the evidentiary passages and score the likelihood that the passage supports or refutes the correctness of the candidate answer.

In the Synthesis step, if the question had been decomposed into sub-parts, one or more synthesis algorithms will fire with varying levels of certainty. They apply methods for inferring a coherent final answer from the constituent elements derived from the question's sub-parts.

Finally, arriving at the last step, Final Merging and Ranking, are many possible answers, each paired with many pieces of evidence, and each of these scored by many algorithms to produce hundreds of feature scores, all giving some evidence for the correctness of each candidate answer. Trained models are applied to weigh the relative importance of these feature scores. These models are trained with machine-learning methods to predict, based on past performance, how best to combine all these scores to produce a final, single confidence number for each candidate answer and to produce the final ranking of all candidates. The answer with the strongest confidence would be Watson's final answer, and Watson would try to buzz in provided that top answer's confidence was above a certain threshold.

The DeepQA system defers commitments and carries possibilities through the entire process while searching for increasingly broad contextual evidence and more credible inferences to support the most likely candidate answers. All the algorithms used to interpret questions, generate candidate answers, score answers, collect evidence, and score evidence are loosely coupled but work holistically by virtue of DeepQA's pervasive machine-learning infrastructure. No one component could realize its impact on end-to-end performance without being integrated and trained with the other components, and they are all evolving simultaneously. In fact, what had a 10% impact on some metric one day might, a month later, contribute only 2% to overall performance due to evolving component algorithms and interactions. This is why the system is regularly trained, evaluated, and retrained as it develops.

DeepQA is a complex system architecture designed to extend incrementally, in both data and algorithms, to deal with the challenges of natural language processing applications and to adapt to new domains of knowledge. The Jeopardy! Challenge greatly inspired its design and implementation for the Watson system. -David A. Ferrucci
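As a rough illustration of that final step, the sketch below combines per-candidate feature scores with a logistic model. The feature names, weights, and buzz threshold are all invented for demonstration; Watson's trained models are far richer.

```python
import math

def confidence(features, weights, bias=0.0):
    """Logistic combination of feature scores into one confidence number."""
    z = bias + sum(w * features.get(name, 0.0) for name, w in weights.items())
    return 1.0 / (1.0 + math.exp(-z))

# Invented features and weights, standing in for trained models:
weights = {"type_match": 2.1, "passage_support": 1.4, "popularity": 0.3}
candidates = {
    "Washington": {"type_match": 0.9, "passage_support": 0.7},
    "Lincoln":    {"type_match": 0.9, "passage_support": 0.2},
}

ranked = sorted(candidates,
                key=lambda c: confidence(candidates[c], weights), reverse=True)
top = ranked[0]
BUZZ_THRESHOLD = 0.5    # buzz in only if the top confidence clears this
if confidence(candidates[top], weights) > BUZZ_THRESHOLD:
    print("Final answer:", top)
```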

Presentation Transcript

  • Mass Declassification: What If? Jeff Jonas, IBM Distinguished Engineer; Chief Scientist, IBM Entity Analytics; [email_address]; September 23, 2010
  • The Ask
    • What emerging technology or innovative approaches come to mind … which may have applicability to this task?
    • Use your imagination. What if?
    • Not talking about any specific products
    • Not focusing on the widely available COTS/GOTS technologies (OCR, document management, case management, workflow, etc.)
  • The Problem at Hand
    • Volumes may be beyond human, brute-force review (@5 min/ea = 18,382 FTEs; see the sketch after this list)
    • Necessitates some form of machine triage
      • Red: A disclosure risk
      • Yellow: A possible disclosure risk
      • Green: No disclosure risk
    • Reliable machine triage requires substantially better prediction systems
    • Even then, advanced means for humans to deal with the remaining large volumes of “possibles” is still required
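The FTE arithmetic on this slide checks out under one assumption, a 2,040-hour working year, which the deck does not state but which reproduces the figure exactly:

```python
# Back-of-the-envelope check of the "18,382 FTEs" figure on the slide.
# Assumption: one FTE-year = 2,040 working hours (not stated in the deck).
docs = 450_000_000              # target document count (from the deck)
minutes_per_doc = 5             # review time per document
fte_minutes = 2_040 * 60        # minutes of review capacity per FTE-year

print(f"{docs * minutes_per_doc / fte_minutes:,.0f} FTE-years")  # -> 18,382
```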
  • Background
    • Early ’80s: Founded Systems Research & Development (SRD), a custom software consultancy
    • 1989 – 2003: Built numerous systems for Las Vegas casinos including a technology known as Non-Obvious Relationship Awareness (NORA)
    • 2001/2003: Funded by In-Q-Tel
    • 2005: IBM acquires SRD
    • Cumulatively: I have had a hand in a number of systems with multiple billions of rows describing hundreds of millions of entities
    • Affiliations:
      • Member, Markle Foundation Task Force on National Security in the Information Age
      • Senior Associate, Center for Strategic and International Studies (CSIS)
      • Distinguished Research Faculty (adjunct), Singapore Management University, School of Information Systems
      • Member, EPIC advisory board
      • Board Member, US Geospatial Intelligence Foundation (USGIF), the GEOINT organizing body
  • In Today’s Session
    • Intro to context accumulating systems
    • Predictions and data points needed for mass declassification
    • Strawman architecture
    • Challenges
    • Q&A
  • Context Accumulating Systems
  • From Pixels to Pictures to Insight [diagram: Observations → Context → Relevance → Consumer (an analyst, a system, the sensor itself, etc.), with Contextualization as the step that turns observations into context]
    • Context, definition of:
    • Better understanding something by taking into account the things around it.
  • Without Context [email_address]
  • Consequences
    • Algorithms flat-lining (e.g., alert queues)
    • Enterprise amnesia on the rise
    • Overwhelmed by false positives and false negatives? You have seen nothing yet
    • Not enough humans to fix this with brute force
    • Risk assessment becomes the risk
  • Context Accumulation [diagram: one identity observed as Trusted Supplier, Job Applicant, Stolen Identity, Known Terrorist] [email_address]
  • Puzzle Metaphor Primer
    • Imagine an ever-growing pile of puzzle pieces of varying sizes, shapes and colors
    • What it represents is unknown – there is no picture on hand
    • Is it one puzzle, 15 puzzles, or 1,500 puzzles?
    • Some pieces are duplicates and some are missing
    • Some pieces are incomplete, low quality, or have been misinterpreted
    • Some pieces may even be professionally fabricated lies
    • Until you take the pieces to the table, you don’t know what you are dealing with
  • How Context Accumulates
    • With each new observation, one of three assertions is made: 1) un-associated; 2) near-like neighbors; or 3) connections (see the sketch after this list)
    • Asserted connections must favor the false negative
    • New observations sometimes reverse earlier assertions
    • Some observations produce novel discovery
    • As the working space expands, computational effort increases
    • The emerging picture helps focus collection interests
    • Given sufficient observations, there can come a tipping point
    • Thereafter, confidence improves while computational effort decreases!
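A minimal sketch of the three assertions referenced in the list above. The thresholds, the score function, and the data structures are illustrative assumptions, not the NORA implementation; note the connection threshold is set high so asserted connections favor the false negative.

```python
# Each new observation is compared against accumulated context and one
# of three assertions is made. Thresholds are illustrative.
MATCH, NEAR = 0.90, 0.60   # high MATCH bar: favor the false negative

def accumulate(context, observation, score):
    """Assert: connection, near-like neighbor, or un-associated."""
    best, s = max(((e, score(e, observation)) for e in context),
                  key=lambda pair: pair[1], default=(None, 0.0))
    if s >= MATCH:                        # 3) connection: merge
        best["observations"].append(observation)
    elif s >= NEAR:                       # 2) near-like neighbor: keep a hint
        context.append({"observations": [observation], "near": best})
    else:                                 # 1) un-associated: a new entity
        context.append({"observations": [observation]})
    # A fuller system would also revisit earlier assertions here, since
    # new observations sometimes reverse them.
```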
  • False Negatives Overstate The Universe [chart: observations vs. unique identities vs. true population]
  • Counting Is Difficult
    File 1: Mark Smith, 6/12/1978, 443-43-0000
    File 2: Mark R Smith, (707) 433-0000, DL: 00001234
  • The Rise and Fall of a Population [chart: observations vs. unique identities vs. true population]
  • Data Triangulation
    File 1: Mark Smith, 6/12/1978, 443-43-0000
    File 2: Mark R Smith, (707) 433-0000, DL: 00001234
    New Record: Mark Randy Smith, 443-43-0000, DL: 00001234
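Below is a minimal sketch of the triangulation above, simplified to the two shared identifiers; the matching rule (exact identifier equality) is an illustrative assumption.

```python
# The new record shares one identifier with each of two previously
# separate files, gluing them into a single identity.
records = {
    "File 1":     {"name": "Mark Smith",       "ssn": "443-43-0000"},
    "File 2":     {"name": "Mark R Smith",     "dl": "00001234"},
    "New Record": {"name": "Mark Randy Smith", "ssn": "443-43-0000",
                   "dl": "00001234"},
}

def linked(a, b):
    """Two records link if they share any exact identifier value."""
    shared = (set(a) & set(b)) - {"name"}
    return any(a[k] == b[k] for k in shared)

assert linked(records["New Record"], records["File 1"])   # via the SSN
assert linked(records["New Record"], records["File 2"])   # via the DL
assert not linked(records["File 1"], records["File 2"])   # no direct link
# Transitively, all three resolve to one identity: the count just improved.
```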
  • Increasing Accuracy and Performance [chart: observations vs. unique identities vs. true population]
  • “Expert Counting” is Fundamental to Prediction
    • Is it 5 people each with 1 account … or is it 1 person with 5 accounts?
    • If one cannot count … one cannot estimate vector or velocity (direction and speed).
    • Without vector and velocity … prediction is nearly impossible.
    • Therefore, if you can’t count, you can’t predict (see the sketch below)
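A toy illustration of the counting argument; the events and the resolution outcome are invented.

```python
# Five account openings observed over four months.
events = [("2010-01", "acct-1"), ("2010-02", "acct-2"), ("2010-03", "acct-3"),
          ("2010-03", "acct-4"), ("2010-04", "acct-5")]

# Naive count: 5 accounts could be 5 people; no vector, no velocity.
# Expert count: suppose entity resolution shows all 5 belong to 1 person.
people = 1                                 # illustrative resolution outcome
months = 4                                 # 2010-01 through 2010-04
velocity = len(events) / people / months   # 1.25 new accounts per month
print(f"{velocity:.2f} accounts/month")    # direction and speed -> prediction
```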
  • Mass Declassification Predictions
  • Mass Declassification Predictions
    • Whose equity is it?
    • Machine triage – disposition
    • Queue prioritization
  • Using What Data Points?
    • FOR EXAMPLE:
    • 450M target documents
    • Dirty words
    • Previous declassifications
    • Previous declassification denials
    • FOIAs
    • Intellipedia
    • Wikipedia
    • WikiLeaks
    • Deceased persons
    • Publicly available accounts/facts
  • Open Source Discovery/Scoring
    • “Height of Pakistan’s Mufasa missile.”
        • What is 15.5 meters?
          • New York Times, Sept 21, 2010, C3
            • “ Pakistan unveils Mufasa 7 Warhead”
          • Wikipedia: Mufasa_7_Warhead
  • Context Accumulation [diagram: context accumulating on “Mufasa 7 Warhead”: Classified – Asserted, Dirty Word, FOIA March 2010, Open Source Reference]
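A minimal sketch of the diagram above: observations accumulating as context on a single term. The tag names mirror the slide; the logic and structures are illustrative.

```python
# Context starts with what is asserted about the term.
context = {"Mufasa 7 Warhead": {"classified_asserted", "dirty_word"}}

# New observations arrive and accumulate on the same term:
context["Mufasa 7 Warhead"] |= {"foia_march_2010", "open_source_reference"}

# An open-source reference is evidence toward the later policy question:
# "What related information is already available in the public domain?"
if "open_source_reference" in context["Mufasa 7 Warhead"]:
    print("related information already public")
```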
  • Context Accumulation + Statistics
    • Document Element            | Total | Declass | Class-Default | Class-Asserted
    • Author: “Billy K”           | 4,503 | 1,600   | 403           | 0
    • Codeword: “Tomatoe”         | 4,818 | 4,600   | 218           | 0
    • Classification: “SI/TK/001” | 23    | 22      | 1             | 0
    • Actors: “Salam Ahmed”       | 782   | 700     | 82            | 0
    Declassification dispositions … becoming a force multiplier. The more human dispositions, the more automated dispositions.
      Human Triage | Auto Triage
      5,000        | 20
      10,000       | 4,000
      100,000      | 65,000
      1,000,000    | 17,000,000
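Read as a decision rule, statistics like these could drive automated triage. A minimal sketch, in which the red/yellow/green thresholds are invented for illustration; only the table's columns come from the slide.

```python
# Auto-triage a document from the disposition history of its elements.
def triage(elements):
    """elements: (total, declass, class_default, class_asserted) per element."""
    if any(asserted > 0 for *_, asserted in elements):
        return "red"          # an equity holder has asserted classification
    rate = min(declass / total for total, declass, *_ in elements)
    return "green" if rate > 0.95 else "yellow"   # illustrative cutoff

# Two elements of one hypothetical document, using rows from the table:
doc = [(4503, 1600, 403, 0),    # Author: "Billy K"
       (4818, 4600, 218, 0)]    # Codeword: "Tomatoe"
print(triage(doc))  # -> "yellow": "Billy K" declassifies only ~36% of the time
```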
  • Policy Questions
    • What related information is already available in the public domain?
      • Evidence: Exists in open source
    • What damage might conceivably result from disclosure and what benefits might ensue?
      • Evidence: Same text already released (by same equity holder)
  • Strawman Architecture
  • Strawman Architecture [diagram: 450M Docs + Historical Dispositions + Dirty Words, etc. → Feature Extraction & Classification → Context Accumulation → Predictions(*) → Workflow System → Dispositions. (*) Recommendations: equity of, disposition, priority]
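A runnable sketch of this strawman pipeline follows. Every function body is a placeholder invented for illustration (historical dispositions are omitted for brevity); only the data flow mirrors the diagram.

```python
def extract_features(doc, dirty_words):
    """Placeholder feature extraction: author plus dirty-word hits."""
    words = set(doc["text"].lower().split())
    return {"author": doc["author"], "dirty_hits": words & dirty_words}

def accumulate_context(context, doc_id, features):
    """Placeholder context accumulation, keyed by author."""
    context.setdefault(features["author"], []).append((doc_id, features))

def predict(context, doc):
    """Placeholder prediction: equity, disposition, and queue priority."""
    hits = any(f["dirty_hits"] for _, f in context[doc["author"]])
    return {"doc": doc["id"],
            "equity": doc["author"],                  # whose equity is it?
            "disposition": "red" if hits else "green",
            "priority": 1 if hits else 3}             # queue prioritization

def pipeline(docs, dirty_words):
    context = {}
    for doc in docs:
        accumulate_context(context, doc["id"],
                           extract_features(doc, dirty_words))
        yield predict(context, doc)   # recommendations into the workflow system

for rec in pipeline([{"id": 1, "author": "Billy K", "text": "tomatoe harvest"}],
                    dirty_words={"tomatoe"}):
    print(rec)
```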
  • Another Idea: Crowd Sourcing
    • Can you predict specific people, with the right privileges and knowledge, to whom selected documents can be routed for evaluation?
    • Can you publish machine-triage recommendations to a wiki or other form of internal broadcast for community crowd sourcing?
  • Another Idea: Better Classification
    • Using the overall declassification platform to assist in proper classification (real-time)
    • And, better pre-tagging to assist in future auto-declassification
  • Challenges
  • Challenges
    • Entity extraction is imperfect
    • Predictions may still not be good enough, often enough
    • Not in English
    • The user work surface and its distribution
    • Consequences of an inappropriate release
    • With super access and super tools, this may call for stronger audit and insider-threat protections
    • Your contracting cycle and the creation of the system might take until mid-2011 or 2012 or 2013
  • Closing Thoughts
  • Closing Thoughts
    • Contextualization is essential to better prediction
    • There are not enough humans to ask every question every day
    • “Human attention directing” systems are critical to the mission
    • The data must find the data, the relevance must find the user
  • Worst Case Scenario
    • Rich context enables better hints for users, results in faster dispositions
    • Rich context enables improved sequencing of the work
  • Related Blog Posts
    • Smart Sensemaking Systems, First and Foremost, Must be Expert Counting Systems
    • Data Finds Data
    • Puzzling: How Observations Are Accumulated Into Context
    • The Fast Last Puzzle Piece
    • Algorithms At Dead-End: Cannot Squeeze Knowledge Out Of A Pixel
    • How to Use a Glue Gun to Catch a Liar
    • It Turns Out Both Bad Data and a Teaspoon of Dirt May Be Good For You
    • Smart Systems Flip-Flop
  • Blogging At: www.JeffJonas.TypePad.com (information management, privacy, national security, and triathlons). Questions?
  • Mass Declassification: What If? Jeff Jonas, IBM Distinguished Engineer; Chief Scientist, IBM Entity Analytics; [email_address]; September 23, 2010