
CanSecWest 2019: Infosec Frameworks for Misinformation

Talk given by Sara-Jayne Terp, Pablo Breuer at CanSecWest 2019 on information security frameworks for disinformation

  1. 1. INFOSEC FRAMEWORKS FOR MISINFORMATION SARA “SJ” TERP AND PABLO BREUER CANSECWEST 2019
  2. 2. TALK OBJECTIVES • Describe the problem • Establish a common language • Introduce a framework • Talk about what we can do with the framework
  3. 3. Describing the Problem Misinformation
  4. 4. SOCIAL ENGINEERING AT SCALE
        Facebook Group     Shares        Interactions
        Blacktivists       103,767,792   6,182,835
        Txrebels           102,950,151   3,453,143
        MuslimAmerica       71,355,895   2,128,875
        Patriototus         51,139,860   4,438,745
        Secured.Borders      5,600,136   1,592,771
        Lgbtun               5,187,494   1,262,386
  5. 5. INTENT TO DECEIVE Force adversary to make a decision or take an action based on information that I: • Hide • Give • Change (or change the context of) • Deny/degrade • Destroy Enable my decisions based upon knowing yours “Operations to convey selected information and indicators to audiences to influence their emotions, motives, and objective reasoning, and ultimately the behavior of governments, organizations, groups, and individuals”
  6. 6. ARTEFACTS
  7. 7. ARTEFACTS
  8. 8. Describing the problem Why misinformation is different now
  9. 9. INSTRUMENTS OF NATIONAL POWER Resources available in pursuit of national objectives… and how to influence other nation-states: Diplomatic, Informational, Military, Economic (DIME)
  10. 10. NATION-STATE MISINFORMATION
        From       To
        Brazil     Brazil
        China      China, Taiwan, US
        Iran       India, Pakistan
        Russia     Armenia, France, Germany, Netherlands, Philippines, Serbia, UK, USA, Ukraine, World
        Saudi      Qatar
        Unknown    France, Germany, USA
  11. 11. MISINFORMATION STRATEGIES Distort Distract Divide Dismay Dismiss
  12. 12. WHAT’S DIFFERENT NOW?
  13. 13. OTHER ACTORS AND THEIR MOTIVATIONS • State and non-state actors • Entrepreneurs • Grassroots groups • Private influencers
  14. 14. RESPONSE: NOT JUST ADMIRING THE PROBLEM
  15. 15. MISINFORMATION PYRAMID
  16. 16. MISINFOSEC: MISINFORMATION + INFOSEC All cyberspace operations are based on influence. - Pablo Breuer
  17. 17. MISINFORMATION VIEWED AS… • Information security (Gordon, Grugq, Rogers) • Information operations / influence operations (Lin) • A form of conflict (Singer, Gerasimov) • [A social problem] • [News source pollution]
  18. 18. ATTACK. DEFEND. NETWORKS. LOOKED FAMILIAR.
  19. 19. MAYBE THERE WERE THINGS WE COULD USE
  20. 20. ADDING MISINFORMATION TO INFOSEC “Prevention of damage to, protection of, and restoration of computers, electronic communications systems, electronic communications services, wire communication, and electronic communication, including information contained therein, to ensure its availability, integrity, authentication, confidentiality, and nonrepudiation” - NSPD-54
  21. 21. INFOSEC ALREADY INCLUDES COGNITIVE
  22. 22. PSYOPS AND INFOSEC AREN’T JOINED UP Information Operations PSYOPS Computer Network Operations
  23. 23. INFOSEC SUPPORT TO MISINFORMATION TRACKING
  24. 24. THERE’S NO COMMON LANGUAGE “We use misinformation attack (and misinformation campaign) to refer to the deliberate promotion of false, misleading or mis-attributed information. Whilst these attacks occur in many venues (print, radio, etc), we focus on the creation, propagation and consumption of misinformation online. We are especially interested in misinformation designed to change beliefs in a large number of people.”
  25. 25. MISINFOSEC COMMUNITIES ● Industry ● Academia ● Media ● Community ● Government
  26. 26. FIRST OUTPUT: MISINFOSEC FRAMEWORK STANDARDS
  27. 27. FRAMEWORKS Underpinning misinformation
  28. 28. STAGE-BASED MODELS ARE USEFUL Kill chain stages: RECON → WEAPONIZE → DELIVER → EXPLOIT → CONTROL → EXECUTE → MAINTAIN. ATT&CK tactics: Persistence, Privilege Escalation, Defense Evasion, Credential Access, Discovery, Lateral Movement, Execution, Collection, Exfiltration, Command and Control
  29. 29. WE CHOSE THE ATT&CK FRAMEWORK
  30. 30. AND STARTED MAPPING MISINFORMATION ONTO IT Stages: Initial Access, Create Artefacts, Insert Theme, Amplify Message, Command And Control. Example techniques: Account takeover, Steal existing artefacts, Create fake emergency, Repeat messaging with bots, Create fake real-life events, Create fake group, Deepfake, Create fake argument, Parody account, Buy friends, Deep cover
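To make the mapping concrete, here is a minimal sketch of the framework as a data structure in Python. The stage names are the ones on the slide above; the placement of each technique under a stage is illustrative only, not the authors' final mapping.

```python
# Stages and techniques from the slide; the stage-to-technique assignments
# below are illustrative guesses, not the authors' published mapping.
MISINFO_FRAMEWORK = {
    "Initial Access": ["Account takeover", "Create fake group", "Deep cover"],
    "Create Artefacts": ["Steal existing artefacts", "Deepfake", "Parody account"],
    "Insert Theme": ["Create fake emergency", "Create fake argument",
                     "Create fake real-life events"],
    "Amplify Message": ["Repeat messaging with bots", "Buy friends"],
    "Command And Control": [],
}

def stages_for(technique):
    """Return every stage a technique appears under (techniques can recur)."""
    return [stage for stage, techniques in MISINFO_FRAMEWORK.items()
            if technique in techniques]

if __name__ == "__main__":
    print(stages_for("Deepfake"))  # ['Create Artefacts'] under this illustrative mapping
```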
  31. 31. POPULATING THE FRAMEWORK • Campaigns • e.g. Internet Research Agency, 2016 US elections • Incidents • e.g. Columbian Chemicals • Failed attempts • e.g. Russia - France campaigns
  32. 32. HISTORICAL CATALOG
  33. 33. HISTORICAL CATALOG: DATASHEET
        • Summary: Early Russian (IRA) “fake news” stories. Completely fabricated; very short lifespan.
        • Actor: probably IRA (source: Recorded Future)
        • Timeframe: Sept 11 2014 (1 day)
        • Presumed goals: test deployment
        • Artefacts: text messages, images, video
        • Related attacks: These were all well-produced fake news stories, promoted on Twitter to influencers through a single dominant hashtag -- #BPoilspilltsunami, #shockingmurderinatlanta, …
        • Method: 1. Create messages, e.g. “A powerful explosion heard from miles away happened at a chemical plant in Centerville, Louisiana #ColumbianChemicals” 2. Post messages from fake Twitter accounts; include handles of local and global influencers (journalists, media, politicians, e.g. @senjeffmerkley) 3. Amplify, by repeating messages on Twitter via fake Twitter accounts
        • Result: limited traction
        • Counters: None seen. Fake stories were debunked very quickly.
  34. 34. FEEDS INTO TECHNIQUES LIST • Behavior: two groups meeting in same place at same time • Intended effect: IRL tension / conflict • Requirements: access to groups, group trust • Detection: • Handling: • Examples: Record fields: Title, Description, Short_Description, Intended_Effect, Behavior, Resources, Victim_Targeting, Exploit_Targets, Related_TTPs, Kill_Chain_Phases, Information_Source, Kill_Chains, Handling
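A hedged sketch of how one techniques-list entry could be captured as a record, using the field names from the slide (lightly normalised to Python identifiers). The example title and values are placeholders drawn from the slide's bullets, not an official schema.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class TechniqueRecord:
    """One techniques-list entry, using the slide's field names (normalised)."""
    title: str
    description: str = ""
    short_description: str = ""
    intended_effect: str = ""
    behavior: str = ""
    resources: List[str] = field(default_factory=list)
    victim_targeting: str = ""
    exploit_targets: List[str] = field(default_factory=list)
    related_ttps: List[str] = field(default_factory=list)
    kill_chain_phases: List[str] = field(default_factory=list)
    information_source: str = ""
    kill_chains: List[str] = field(default_factory=list)
    handling: str = ""

# Placeholder entry populated from the slide's bullets:
irl_conflict = TechniqueRecord(
    title="Competing real-life events",  # hypothetical name for illustration
    behavior="Two groups meeting in the same place at the same time",
    intended_effect="IRL tension / conflict",
    resources=["access to groups", "group trust"],
)
```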
  35. 35. THIS IS WHAT A FINISHED FRAMEWORK LOOKS LIKE
  36. 36. FINDING TECHNIQUES Tracking incidents and artefacts
  37. 37. INCIDENT ANALYSIS Top-down (strategic): info ops ❏ What are misinformation creators likely to do? What, where, when, how, who, why? ❏ What do we expect to see? ❏ What responses and impediments to responses were there? Bottom-up (tactical): data science ❏ Unusual hashtag, trend, topic, platform activity? ❏ Content from ‘known’ trollbots, 8/4chan, r/thedonald, RussiaToday etc ❏ What are trackers getting excited about today?
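For the bottom-up item on unusual hashtag or trend activity, here is a minimal detection sketch, assuming tweets have already been collected into a CSV with hypothetical 'timestamp' and 'hashtag' columns: flag any hashtag whose latest hourly volume jumps well above its recent average.

```python
import pandas as pd

# Hypothetical input: one row per (tweet, hashtag); columns: timestamp, hashtag
tweets = pd.read_csv("collected_tweets.csv", parse_dates=["timestamp"])

# Hourly counts per hashtag
hourly = (tweets.set_index("timestamp")
                .groupby("hashtag")
                .resample("1H")
                .size()
                .rename("count")
                .reset_index())

def spikes(df, window=24, factor=5.0):
    """Flag hashtags whose most recent hour is > factor x their trailing mean."""
    flagged = []
    for tag, grp in df.groupby("hashtag"):
        counts = grp.sort_values("timestamp")["count"]
        if len(counts) <= window:
            continue
        baseline = counts.iloc[-window - 1:-1].mean()
        if baseline > 0 and counts.iloc[-1] > factor * baseline:
            flagged.append((tag, int(counts.iloc[-1]), baseline))
    return flagged

for tag, latest, baseline in spikes(hourly):
    print(f"Unusual activity: #{tag} latest={latest} baseline~{baseline:.1f}")
```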
  38. 38. Top-down analysis Means of implementing influence strategies
  39. 39. STRATEGIES Distort Distract Divide Dismay Dismiss
  40. 40. DISTORTION TECHNIQUES • Distort facts: match intended outcome • Exaggerate: rhetoric & misrepresent facts • Generate: realistic false artefacts • Mismatch: links, images, and claims to change context of information
  41. 41. DISTRACTION TECHNIQUES • String along: respond to anyone who engages to waste time • Play dumb: pretend to be naive, gullible, stupid • Redirect: draw engagement to your thread • Dilute: add other accounts to dilute threads • Threadjack: change narrative in existing thread
  42. 42. DIVISION TECHNIQUES • Provoke: create conflicts and confusion among community members • Dehumanize: demean and denigrate target group • Hate speech: attack protected characteristics or classes • Play victim: claim victim status • Dog-whistle: use coded language to indicate insider status • Hit and run: attack and delete after short time interval • Call to arms: make open calls for action
  43. 43. DISMAY TECHNIQUES • Ad hominem: make personal attacks, insults & accusations • Assign threats: name and personalize enemy • Good old-fashioned tradecraft
  44. 44. DISMISSAL TECHNIQUES • Last word: respond to hostile commenters then block them so they can’t reply • Brigading: coordinate mass attacks or reporting of targeted accounts or tweets • Shit list: add target account(s) to insultingly named list(s)
  45. 45. Bottom-up analysis Collecting Artefacts to find incidents
  46. 46. MISINFORMATION PYRAMID
  47. 47. RESOURCES Trollbot lists: • https://botsentinel.com/ Tools: • APIs / python libraries / Pandas • https://github.com/IHJpc2V1cCAK/socint • https://labsblog.f-secure.com/2018/02/16/searching-twitter-with-twarc/ Existing datasets • https://github.com/bodacea/misinfolinks
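One simple way to use those resources together is to cross-reference collected accounts against a trollbot list. A sketch, assuming a locally saved handle list and a tweet CSV with an 'author' column (both file and column names are hypothetical):

```python
import pandas as pd

# Hypothetical local exports: one handle per line, plus a tweet dataset.
with open("trollbot_handles.txt") as fh:
    trollbots = {line.strip().lower().lstrip("@") for line in fh if line.strip()}

tweets = pd.read_csv("collected_tweets.csv")
tweets["author_norm"] = tweets["author"].str.lower().str.lstrip("@")

suspect = tweets[tweets["author_norm"].isin(trollbots)]
print(f"{len(suspect)} of {len(tweets)} tweets came from listed trollbot accounts")
print(suspect["author"].value_counts().head(10))
```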
  48. 48. ARTEFACTS: ACCOUNTS
  49. 49. ARTEFACTS: IMAGES
  50. 50. ARTEFACTS: TEXT (WORDS, HASHTAGS, URLS ETC)
  51. 51. ARTEFACTS: DOMAINS
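The artefact types above (accounts, text, hashtags, URLs, domains) can be pulled out of raw post text with simple standard-library regexes. A deliberately simplified sketch; real tweet objects carry these as structured entities, so regexes are only a fallback for plain text:

```python
import re
from urllib.parse import urlparse

HASHTAG = re.compile(r"#\w+")
MENTION = re.compile(r"@\w+")
URL = re.compile(r"https?://\S+")

def extract_artefacts(text):
    """Pull simple artefacts (hashtags, accounts, URLs, domains) from one post."""
    urls = URL.findall(text)
    return {
        "hashtags": HASHTAG.findall(text),
        "accounts": MENTION.findall(text),
        "urls": urls,
        "domains": [urlparse(u).netloc for u in urls],
    }

sample = ("A powerful explosion heard from miles away happened at a chemical "
          "plant in Centerville, Louisiana #ColumbianChemicals @senjeffmerkley "
          "http://example.com/story")
print(extract_artefacts(sample))
```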
  52. 52. MOVING UP: CONTENT AND CONTEXT ANALYSIS • Metadata analysis • Social network analysis • Text analysis (frequency, sentiment etc) • Time series analysis • Visual inspection (Bokeh, Gephi etc) • Correlation • Models, e.g. clustering and classification • Narrative analysis
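As a concrete example of the social network analysis item above, here is a minimal sketch, assuming a retweet edge list has already been extracted into a hypothetical CSV with 'retweeter' and 'original_author' columns: build a directed graph and look at which accounts sit at the centre of amplification.

```python
import pandas as pd
import networkx as nx

# Hypothetical edge list: one row per retweet
edges = pd.read_csv("retweet_edges.csv")  # columns: retweeter, original_author

G = nx.DiGraph()
for _, row in edges.iterrows():
    G.add_edge(row["retweeter"], row["original_author"])

# Accounts whose content is most amplified (highest in-degree)
amplified = sorted(G.in_degree(), key=lambda kv: kv[1], reverse=True)[:10]
print("Most-amplified accounts:", amplified)

# Loosely connected clusters of accounts retweeting each other
clusters = list(nx.weakly_connected_components(G))
print(f"{len(clusters)} components; largest has {max(len(c) for c in clusters)} accounts")
```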
  53. 53. ANALYSIS: BEHAVIOURS
  54. 54. ANALYSIS: RELATIONSHIPS
  55. 55. EXPERT TRACKERS @katestarbird #digitalsherlocks @josh_emerson @conspirator0 @r0zetta @fs0c131y
  56. 56. WHY BUILD FRAMEWORKS? … what do we do with them?
  57. 57. COMPONENTWISE UNDERSTANDING AND RESPONSE • Lingua Franca across communities • Defend/countermove against reused techniques, identify gaps in attacks • Assess defence tools & techniques • Plan for large-scale adaptive threats (hello, Machine Learning!) • Build an alert structure (e.g. ISAC, US-CERT, Interpol)
  58. 58. WE NEED TO DESIGN AND SHARE RESPONSES
  59. 59. WE NEED TO BUILD COMMUNITIES ● Industry ● Academia ● Media ● Community ● Government
  60. 60. WE NEED INTELLIGENCE SHARING AND COORDINATION
  61. 61. WE NEED FRAMEWORKS
  62. 62. SPECIAL THANK YOUS
  63. 63. THANK YOU Sara “SJ” Terp Bodacea Light Industries sarajterp@gmail.com @bodaceacat CDR Pablo C. Breuer U.S. Special Operations Command / SOFWERX Pablo.Breuer@sofwerx.org @Ngree_H0bit
  64. 64. Community • Parody-based counter-campaigns (e.g. riffs on “Q”) • SEO-hack misinformation sites • Dogpile onto misinformation hashtags • Divert followers (typosquat trolls, spoof messaging etc) • Identify and engage with affected individuals • Educate, verify, bring into the light
  65. 65. Offense: Potentials for Next • Algorithms + humans attack algorithms + humans • Shift from trolls to ‘nudging’ existing human communities (‘useful idiots’) • Subtle attacks, e.g. ’low-and-slows’, ‘pop-up’, etc • Massively multi-channel attacks • More commercial targets • A well-established part of hybrid warfare
  66. 66. Defence: Potentials for Next • Strategic and tactical collaboration • Trusted third-party sharing on fake news sites / botnets • Misinformation version of ATT&CK, SANS20 frameworks • Algorithms + humans counter algorithms + humans • Thinking the unthinkable • “Countermeasures and self-defense actions”
  67. 67. Non-state Misinformation
  68. 68. Indexing, not Censorship
