
Belief: learning about new problems from old things


Talk given in NYC, February 2017, about what belief is and how it's manipulated across humans and machines

Published in: Data & Analytics


  1. Belief: learning about new problems from old things
  2. Why am I interested in belief? • Long-time love of autonomy • Most data science is based on belief • Lean, Agile etc are based on belief • Deep learning systems are creating assholes • America’s belief systems are being hacked
  3. Agenda • Understanding beliefs • Why do techs care about this? • What goes wrong? • What could help?
  4. Understanding Beliefs
  5. Belief • Pistis: confidence in a ‘fact’ • Doxa: acceptance of a ‘fact’ • Normative belief: what you think other people expect you to believe…
  6. You don’t escape the machine [diagram: sensor/effector loop]
  7. Truth != Belief
  8. Truth != Belief
  9. Learning = Forming Beliefs
  10. Beliefs can be shared
  11. Beliefs can be shared
  12. Beliefs can be hacked: A Facebook ‘like’, he said, was their most “potent weapon”. “Because using artificial intelligence, as we did, tells you all sorts of things about that individual and how to convince them with what sort of advert. And you knew there would also be other people in their network who liked what they liked, so you could spread. And then you follow them. The computer never stops learning and it never stops monitoring.” [diagram: sensor/effector loop with contagion and adaptation]
  13. Beliefs can be ‘wrong’
  14. The Internet is made of beliefs
  15. Why do techs care about this?
  16. Lean Enterprise
  17. Lean Enterprise includes beliefs: It’s All About Value Hypotheses
  18. Hypothesis • A testable, definite statement • (Null hypothesis: a hypothesis that you’re trying to prove false, e.g. “these results have the same underlying statistics”) • “Our mailing lists are too wide” • “More people will vote for us if we target our advertisements to them”
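A hypothesis like “more people will vote for us if we target our advertisements to them” can be tested against its null hypothesis (“both groups behave the same”) with a standard two-proportion z-test. This is a minimal sketch; the function name and the click counts are invented for illustration, not from the talk:

```python
import math

def two_proportion_z(x1, n1, x2, n2):
    """z statistic for H0: both groups share one underlying rate."""
    p_pool = (x1 + x2) / (n1 + n2)          # pooled rate under H0
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n1 + 1 / n2))
    return (x1 / n1 - x2 / n2) / se

# 120 clicks from 1000 targeted adverts vs 90 from 1000 untargeted.
z = two_proportion_z(120, 1000, 90, 1000)
print(round(z, 2))  # a |z| above ~1.96 rejects H0 at the 5% level
```

A large z here would let us reject the null hypothesis and keep believing the targeting works; a small one means the experiment gave us no reason to update.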
  19. Lean Value Trees • Mission: transform the way the US Government does business • Goal: get elected to power • Strategic bet: use data analytics to increase ‘friendly’ voter numbers • Promise of value: larger turnout in ‘friendly’ demographics • Hypotheses and experiments • Promise of value: smaller turnout in ‘unfriendly’ • Strategic bet: use behavioral computing to increase approval ratings • Strategic bet: use propaganda techniques to destroy opposing views • Strategic bet: change the way votes are counted • Goal: reallocate wealth
  20. Trimming the tree: beliefs • Mission: transform the way the US Government does business • Goal: get elected to power • Strategic bet: use data analytics to increase ‘friendly’ voter numbers • Strategic bet: use behavioral computing to increase approval ratings • Strategic bet: use propaganda techniques to destroy opposing views • Goal: reallocate wealth
  21. Hypotheses and experiments: totally beliefs!
  22. = Optimising for other people’s beliefs
  23. And potentially adjusting them…
  24. What goes wrong?
  25. Ways to go wrong with a model ❖ Bad inputs ❖ Biased classifications ❖ Missing demographics ❖ Bad models ❖ Unclean inputs, assumptions etc ❖ Lazy interpretations (e.g. clicks == interest) ❖ Trained once in a changing world ❖ Willful abuse ❖ Gaming with ‘wrong’ data (propaganda etc)
  26. Ways to go wrong with a human ❖ Manipulation ❖ Bias ❖ Censorship ❖ Privacy violations ❖ Social discrimination ❖ Property right violations ❖ Market power abuses ❖ Cognitive effects ❖ Heteronomy
  27. Ways humans go wrong • Imperfect recall • Unconscious bias • Confirmation bias • Mental immune systems • Familiarity backfire effect • Memory traces • Emotions = stronger traces
  28. The frame problem
  29. Uncertainty and Unknowns
  30. Human bias in big data
  31. Not always wrong: drifts and shifts
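The “trained once in a changing world” failure above is what drift detection guards against: a model’s beliefs were formed on one distribution, and the world quietly moves. A crude sketch of the idea, with an invented function name and threshold:

```python
def drifted(train_values, live_values, threshold=0.5):
    """Flag drift when the live feature mean strays far from the
    mean the model was trained on (a deliberately naive check)."""
    train_mean = sum(train_values) / len(train_values)
    live_mean = sum(live_values) / len(live_values)
    return abs(live_mean - train_mean) > threshold

# Training-time feature values vs what production now sees:
print(drifted([1.0, 1.2, 0.9], [2.1, 2.4, 2.0]))  # True: the world moved
```

Real systems would use a proper statistical test per feature, but the point of the slide stands either way: a belief that was once right can simply go stale.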
  32. What if somebody lies?
  33. (PS Make sure it isn’t you)
  34. What could help?
  35. “Ignore everything they say, watch everything they do.” –Unknown
  36. Verification, Validation
  37. Autonomy and Proxies
  38. “Conflicting information of uncertain reliability is endemic to intelligence analysis, as is the need to make rapid judgments on current events even before all the evidence is in” –Richard Heuer (“Psychology of Intelligence Analysis”). Find People Already Doing This
  39. Statistics as a coding for ‘belief’
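The standard way statistics encodes belief is Bayes’ rule: a belief is a probability, and evidence moves it. A minimal sketch (the function name and numbers are mine, not from the talk):

```python
def update_belief(prior, p_evidence_if_true, p_evidence_if_false):
    """Bayes' rule: return P(H | E) from P(H), P(E | H), P(E | not H)."""
    p_evidence = (p_evidence_if_true * prior
                  + p_evidence_if_false * (1 - prior))
    return p_evidence_if_true * prior / p_evidence

# Start at 50% belief; observe evidence 3x more likely if H is true.
belief = update_belief(0.5, 0.6, 0.2)
print(belief)  # -> 0.75: the evidence raised our degree of belief
```

Iterating this update over a stream of evidence is exactly “learning = forming beliefs” in numeric form, which is why probabilistic models are a natural home for the talk’s vocabulary.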
  40. Structured Analytic Techniques
  41. Comparing multiple hypotheses: ACH • Hypothesis. Create a set of potential hypotheses. • Evidence. List evidence and arguments for each hypothesis. • Diagnostics. List evidence against each hypothesis. • Refinement. Review findings so far, find gaps in knowledge. • Inconsistency. Estimate relative likelihood of each hypothesis. • Sensitivity. Run sensitivity analysis on evidence, arguments. • Conclusions and evidence. Present most likely explanation, and reasons why other explanations were rejected.
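The distinctive step in ACH is the inconsistency scoring: hypotheses are ranked by how much evidence argues *against* them, not for them. A toy sketch of that step, with an invented scenario and scores (-1 inconsistent, +1 consistent):

```python
# Each piece of evidence scores each hypothesis: -1 argues against it,
# +1 is consistent with it. All names and weights here are hypothetical.
evidence = {
    "server logs show foreign IPs": {"insider": -1, "outsider": +1},
    "door badge used at 2am":       {"insider": +1, "outsider": -1},
    "malware compiled off-site":    {"insider": -1, "outsider": +1},
}

def inconsistency(hypothesis):
    """ACH ranks by inconsistency: count evidence arguing against."""
    return sum(1 for scores in evidence.values() if scores[hypothesis] < 0)

ranked = sorted(["insider", "outsider"], key=inconsistency)
print(ranked)  # -> ['outsider', 'insider']: least-inconsistent first
```

Note the asymmetry: lots of confirming evidence does not rescue a hypothesis that one solid piece of evidence contradicts, which is ACH’s defence against confirmation bias from the earlier slides.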
  42. ACH in practice
  43. What could help with changing beliefs?
  44. Humans as adjustable systems
  45. Human networks as adjustable systems
  46. Countering political ‘beliefs’ • Teaching people about disinformation / questioning • Making belief differences visible • Breaking down belief ‘silos’ • Building credibility standards • New belief carriers (e.g. meme wars)
  47. Thank you @bodaceacat
