7. DISARM Foundation 2022
Cognitive Security is Information Security applied to disinformation+
“Cognitive security is the application of information security principles, practices, and tools to misinformation, disinformation, and influence operations. It takes a socio-technical lens to high-volume, high-velocity, and high-variety forms of ‘something is wrong on the internet’. Cognitive security can be seen as a holistic view of disinformation from a security practitioner’s perspective.”
8. DISARM Foundation 2022
Earlier Definitions: Cognitive Security: both of them

“Cognitive Security is the application of artificial intelligence technologies, modeled on human thought processes, to detect security threats.” - XTN

MLSec - machine learning in information security
● ML used in attacks on information systems
● ML used to defend information systems
● Attacking ML systems and algorithms
● Adversarial AI

“Cognitive Security (COGSEC) refers to practices, methodologies, and efforts made to defend against social engineering attempts‒intentional and unintentional manipulations of and disruptions to cognition and sensemaking” - cogsec.org

CogSec - social engineering at scale
● Manipulation of individual beliefs, belonging, etc
● Manipulation of human communities
● Adversarial cognition
9. DISARM Foundation 2022
Earlier Definitions: Social Engineering: both of them

“the use of centralized planning in an attempt to manage social change and regulate the future development and behavior of a society.”
● Mass manipulation etc

“the use of deception to manipulate individuals into divulging confidential or personal information that may be used for fraudulent purposes.”
● Phishing etc
11. DISARM Foundation 2022
Cyber Security vs Cognitive Security: Objects

Cyber security objects:
● Computers
● Networks
● Internet
● Data
● Actions

Cognitive security objects:
● People
● Communities
● Internet
● Beliefs
● Actions
Image: DISARM Foundation
12. DISARM Foundation 2022
Things to worry about: Hybrid incidents
● Hybrid: cyber + cognitive + physical
● Cyber supporting cognitive
● Cognitive supporting cyber
● Cyber attack forms adapted to cognitive
Image: Verizon DBIR https://www.verizon.com/business/resources/reports/dbir/
15. DISARM Foundation 2022
Ecosystem Assessment

Information Landscape
• Information seeking
• Information sharing
• Information sources
• Information voids

Harms Landscape
• Motivations
• Sources / starting points
• Effects
• Misinformation narratives
• Hateful speech narratives
• Crossovers
• Tactics and techniques
• Artifacts

Response Landscape
• Monitoring organisations
• Countering organisations
• Coordination
• Existing policies
• Technologies
• etc
16. DISARM Foundation 2022
Information Landscape

● Actors
● Channels
● Influencers
● Groups
● Messaging
● Narratives and memes
● Tools

Information types:
● Verified information
● Rumours
● Misinformation
● Conspiracies
● Information voids / deserts

People and accounts:
● Seeking information - using search, questions, influencers etc
● Sharing information through channels
● Posting information
17. DISARM Foundation 2022
Harms Component Landscape
● Actors
○ Nation-states, individuals, companies, DaaS companies
● Channels
○ Where people seek, share, and post information
○ Where people are encouraged to go
● Influencers
○ Not about follower counts: an account might have large influence over smaller groups
● Groups
○ Created to create or spread disinfo. Often real members, fake creators. Lots of themes. Often closed groups.
● Messaging
○ The cognitive bias codex lists about 200 biases: each of these is a vulnerability
● Narratives and memes
○ Narratives designed to spread fast / be sticky. Often on a theme, often repeated
● Tools
○ Bots, personas, network analysis, marketing tools, IFTTT etc
18. DISARM Foundation 2022
Response landscape

Thousands of response groups, many more potential groups, and only sporadic coordination.
● Media view: MDM (mis/dis/mal-information); falseness and intent to harm
● Military view: psyops/MISO
● Communications view: management of trust
● Infosec view: information protection

Image: DISARM Foundation
20. DISARM Foundation 2022
Tools: desk surveys, collaboration, GAMES

Learning game
● A fun experience that teaches you something
● Useful for training large numbers of people simultaneously

Red team / purple team
● Test an organisation’s defences by thinking like a bad guy
● Useful for finding system vulnerabilities, and predicting future moves

Tabletop exercise
● Key people responding to a simulated event
● Useful for creating cohesive teams. Often large scale

Simulation
● Imitation of processes and environment
● Useful for automated “what if” tests
● http://www.theknowledgeguru.com/games-vs-simulations-choosing-right-approach/
● https://www.edutopia.org/sims-vs-games
● Roozenbeek, van der Linden, “Inoculation theory and Misinformation”, NATO report, 2021
22. DISARM Foundation 2022
Disinformation as a risk management problem
Manage the risks, not the artifacts
● Risk assessment, reduction, remediation
● Risks: How bad? How big? How likely? Who to?
● Attack surfaces, vulnerabilities, potential losses / outcomes
Manage resources
● Mis/disinformation is everywhere
● Detection, mitigation, response
● People, technologies, time, attention
● Connections
Image: https://www.risklens.com/infographics/fair-model-on-a-page
23. DISARM Foundation 2022
FAIR adaptation - sneak peek
● Assets: what are you protecting?
○ E.g. election authority reputation
● Threats: protecting from whom?
○ We know this bit…
● Threat effects: CIA+
● Losses: what do you stand to lose?
○ What are the harms? How do you estimate them?
● Stakeholders: who should care?
● Vulnerabilities: what increases your likelihood of an event?
○ Unaware population
○ Information voids etc
● Controls: what can you do to reduce risk?
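To make the adaptation concrete, here is a minimal Monte Carlo sketch in the FAIR style: annualised risk as loss event frequency times loss magnitude. Every distribution, parameter, and the `simulate_annual_loss` helper are illustrative assumptions for this deck, not values or code from FAIR, RiskLens, or DISARM.

```python
import random

def simulate_annual_loss(n_trials: int = 10_000) -> float:
    """Toy FAIR-style Monte Carlo: risk = loss event frequency x loss magnitude.

    Every distribution and parameter here is an illustrative assumption,
    not a value from FAIR, RiskLens, or DISARM.
    """
    total = 0.0
    for _ in range(n_trials):
        # Threat event frequency: disinfo campaigns targeting us per year.
        threat_events = random.uniform(1, 12)
        # Vulnerability: probability an attempt becomes a loss event
        # (driven by e.g. an unaware population, unfilled information voids).
        vulnerability = random.uniform(0.05, 0.4)
        loss_events = threat_events * vulnerability
        # Loss magnitude per event (e.g. reputational harm to an election
        # authority), in arbitrary currency units.
        magnitude = random.lognormvariate(10, 1)
        total += loss_events * magnitude
    return total / n_trials

print(f"Estimated annualised loss: {simulate_annual_loss():,.0f}")
```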
24. DISARM Foundation 2022
Risk Effect: Parkerian Hexad
Confidentiality, integrity, availability
■ Confidentiality: data should only be visible to people who are authorized to see it
■ Integrity: data should not be altered in unauthorized ways
■ Availability: data should be available to be used

Possession, authenticity, utility
■ Possession: controlling the data media
■ Authenticity: accuracy and truth of the origin of the information
■ Utility: usefulness (e.g. losing the encryption key)
Image: Parkerian Hexad, from https://www.sciencedirect.com/topics/computer-science/parkerian-hexad
Image: https://www.staffhosteurope.com/blog/2019/03/cybersecurity-and-the-parkerian-hexad
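For readers who prefer code, the hexad also fits in a small enumeration; the values paraphrase the definitions above, and this is just an aide-mémoire, not part of any DISARM tooling.

```python
from enum import Enum

class ParkerianAttribute(Enum):
    """The six Parkerian attributes, paraphrased from the slide above."""
    CONFIDENTIALITY = "visible only to people authorized to see it"
    INTEGRITY = "not altered in unauthorized ways"
    AVAILABILITY = "available to be used"
    POSSESSION = "control of the data media"
    AUTHENTICITY = "accuracy and truth of the origin of the information"
    UTILITY = "usefulness (e.g. not losing the encryption key)"

# Author's illustration: imposter news sites mainly attack authenticity,
# distorted facts mainly attack integrity.
for attr in (ParkerianAttribute.AUTHENTICITY, ParkerianAttribute.INTEGRITY):
    print(f"{attr.name}: {attr.value}")
```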
25. DISARM Foundation 2022
Risk component: Digital harms frameworks
● Physical harm: e.g. bodily injury, damage to physical assets (hardware, infrastructure, etc.)
● Psychological harm: e.g. depression, anxiety from cyberbullying, cyberstalking etc.
● Economic harm: financial loss, e.g. from data breaches, cybercrime etc.
● Reputational harm: e.g. organization: loss of consumers; individual: disruption of personal life; country: damaged trade negotiations
● Cultural harm: increase in social disruption, e.g. misinformation creating real-world violence
● Political harm: e.g. disruption of political processes or government services, from e.g. internet shutdowns, botnets influencing votes

Plus responder harms: psychological damage, security risks.

Image: https://dai-global-digital.com/cyber-harm.html
34. DISARM Foundation 2022
DISARM Blue: Countermeasures Framework

Phases and tactic stages (mirroring the DISARM Red kill chain):
● Planning: Strategic Planning, Objective Planning
● Preparation: Develop People, Develop Networks, Microtargeting, Develop Content, Channel Selection
● Execution: Pump Priming, Exposure, Go Physical, Persistence
● Evaluation: Measure Effectiveness

Example countermeasures (grouped as in the original matrix):
● Prebunking; humorous counter narratives; mark content with ridicule / decelerants; expire social media likes/retweets; influencer disavows misinfo; cut off banking access; dampen emotional reaction; remove / rate limit botnets; social media amber alert; etc.
● Have a disinformation response plan; improve stakeholder coordination; make civil society more vibrant; red team disinformation and design mitigations; enhanced privacy regulation for social media; platform regulation; shared fact checking database; repair broken social connections; pre-emptive action against disinformation team infrastructure; etc.
● Media literacy through games; tabletop simulations; make information provenance available; block access to disinformation resources; educate influencers; buy out troll farm employees / offer jobs; legal action against for-profit engagement farms; develop compelling counter narratives; run competing campaigns; etc.
● Find and train influencers; counter-social engineering training; ban incident actors from funding sites; address truth in narratives; marginalise and discredit extremist groups; ensure platforms are taking down accounts; name and shame disinformation influencers; denigrate funding recipient / project; infiltrate in-groups; etc.
● Remove old and unused accounts; unravel Potemkin villages; verify project before posting fund requests; encourage people to leave social media; deplatform message groups and boards; stop offering press credentials to disinformation outlets; free open library sources; social media source removal; infiltrate disinformation platforms; etc.
● Fill information voids; stem flow of advertising money; buy more advertising than disinformation creators; reduce political targeting; co-opt disinformation hashtags; mentorship: elders, youth, credit; hijack content and link to information; honeypot social community; corporate research funding full disclosure; real-time updates to factcheck database; remove non-relevant content from special interest groups; content moderation; prohibit images in political channels; add metadata to original content; add warning labels on sharing; etc.
● Rate-limit engagement; redirect searches away from disinfo; honeypot: fake engagement system; bot to engage and distract trolls; strengthen verification methods; verified IDs to comment or contribute to polls; revoke whitelist / verified status; microtarget likely targets with counter messages; train journalists to counter influence moves; tool transparency and literacy in followed channels; ask media not to report false info; repurpose images with counter messages; engage payload and debunk; debunk / defuse fake expert credentials; don’t engage with payloads; hashtag jacking; etc.
● DMCA takedown requests; spam domestic actors with lawsuits; seize and analyse botnet servers; poison monitoring and evaluation data; bomb link shorteners with calls; add random links to network graphs.

Image: DISARM Foundation
36. DISARM Foundation 2022
Tools

DISARM objects work with all STIX-compatible systems
● MITRE ATT&CK Navigator
● EEAS using DISARM STIX objects in OpenCTI
● Compatible with many other information security tools

DISARM objects already embedded in tools
● DISARM already in every MISP instance

User-friendly standalone tools
● DISARM Foundation is building the DISARM Explorer app to make non-technical use of DISARM easier.
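As a concrete illustration of STIX compatibility, a DISARM technique can travel as a STIX 2.1 attack-pattern object. This sketch builds the JSON by hand so no particular STIX library is assumed; the UUID and timestamps are placeholders, and only the technique ID and name (T0008, from the Creator Behaviours slide later in this deck) come from DISARM.

```python
import json

# A DISARM technique expressed as a STIX 2.1 attack-pattern object,
# built by hand so no particular STIX library is assumed.
disarm_technique = {
    "type": "attack-pattern",
    "spec_version": "2.1",
    "id": "attack-pattern--00000000-0000-4000-8000-000000000000",  # placeholder UUID
    "created": "2022-01-01T00:00:00.000Z",   # placeholder timestamp
    "modified": "2022-01-01T00:00:00.000Z",  # placeholder timestamp
    "name": "Create fake or imposter news sites",
    "external_references": [
        {
            "source_name": "DISARM",
            "external_id": "T0008",
            "url": "https://github.com/disarmfoundation",
        }
    ],
}

print(json.dumps(disarm_technique, indent=2))
```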
40. DISARM Foundation 2022
Example Information Landscape
• Traditional media
  • Newspapers
  • Radio - including community radio
  • TV
• Social media
  • Facebook
  • WhatsApp
  • Twitter
  • YouTube / Telegram / etc
• Others
  • Word of mouth
41. DISARM Foundation 2022
Example Threat Landscape
• Motivations
  • Geopolitics mostly absent
  • Party politics (internal, inter-party)
• Actors
• Activities
  • Manipulate faith communities
  • Discredit the election process
  • Discredit / discourage journalists
  • Attention (more drama)
• Risks / severities
• Sources
  • WhatsApp
  • Blogs
  • Facebook pages
  • Online newspapers
  • Media
• Routes
  • Hijacked narratives
  • WhatsApp to blogs, and vice versa
  • WhatsApp forwarding
  • Facebook to WhatsApp
  • Social media to traditional media
  • Social media to word of mouth
42. DISARM Foundation 2022
Creator Behaviours
● T0007: Create fake Social Media Profiles / Pages / Groups
● T0008: Create fake or imposter news sites
● T0022: Conspiracy narratives
● T0023: Distort facts
● T0052: Tertiary sites amplify news
● T0036: WhatsApp
● T0037: Facebook
● T0038: Twitter
Image: DISARM Foundation
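A minimal sketch of what tagging an incident with these technique IDs might look like, assuming a simple in-memory structure; the incident record itself is invented for illustration, while the IDs and names are copied from the list above.

```python
# Technique IDs and names copied from the slide above; the incident
# record itself is invented for illustration.
TECHNIQUES = {
    "T0007": "Create fake Social Media Profiles / Pages / Groups",
    "T0008": "Create fake or imposter news sites",
    "T0022": "Conspiracy narratives",
    "T0023": "Distort facts",
    "T0052": "Tertiary sites amplify news",
}

incident = {
    "name": "example-election-incident",        # hypothetical
    "observed_techniques": ["T0008", "T0023"],  # hypothetical observations
}

for tid in incident["observed_techniques"]:
    print(f"{tid}: {TECHNIQUES[tid]}")
```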
43. DISARM Foundation 2022
Example Response Landscape (Needs / Work / Gaps)

Risk Reduction
● Media and influence literacy
● Information landscaping
● Other risk reduction

Monitoring
● Radio, TV, newspapers
● Social media platforms
● Tips

Analysis
● Tier 1 (creates tickets)
● Tier 2 (creates mitigations)
● Tier 3 (creates reports)
● Tier 4 (coordination)

Response
● Messaging
○ Prebunk
○ Debunk
○ Counternarratives
○ Amplification
● Actions
○ Removal
○ Other actions
● Reach
44. DISARM Foundation 2022
Responder Behaviours
● C00009: Educate high profile influencers on best practices
● C00008: Create shared fact-checking database
● C00042: Address truth contained in narratives
● C00030: Develop a compelling counter narrative (truth based)
● C00093: Influencer code of conduct
● C00193: Promotion of a “higher standard of journalism”
● C00073: Inoculate populations through media literacy training
● C00197: Remove suspicious accounts
● C00174: Create a healthier news environment
● C00205: Strong dialogue between the federal government and private sector to encourage better reporting
Image: DISARM Foundation
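The same coding scheme makes response planning scriptable. Below is a sketch of a lookup from technique IDs to candidate countermeasure IDs; the IDs are DISARM codes from these slides, but the particular pairings are this author's illustrative assumptions, not official DISARM mappings.

```python
# Technique and countermeasure IDs are DISARM codes from these slides,
# but the pairings are illustrative assumptions, not official mappings.
COUNTERS_FOR_TECHNIQUE = {
    "T0008": ["C00008"],            # imposter news sites -> shared fact-checking database
    "T0022": ["C00042", "C00030"],  # conspiracy narratives -> address truth, counter-narrative
    "T0007": ["C00197"],            # fake profiles -> remove suspicious accounts
}

def suggest_counters(observed: list[str]) -> set[str]:
    """Collect candidate countermeasure IDs for the observed techniques."""
    return {c for t in observed for c in COUNTERS_FOR_TECHNIQUE.get(t, [])}

print(suggest_counters(["T0008", "T0022"]))  # {'C00008', 'C00042', 'C00030'} in some order
```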
45. DISARM Foundation 2022
Practical: Resource Allocation
• Tagging needs and groups with AMITT labels
• Building collaboration mechanisms to reduce lost tips and repeated collection
• Designing for future potential surges
• Automating repetitive jobs to reduce load on humans
Image: DISARM Foundation
47. DISARM Foundation 2022
DISARM Foundation: where we’ve been

Credibility Coalition Misinfosec WG
● Slack
● https://medium.com/@credibilitycoalition/misinfosec-framework-99e3bff5935d
● Created the AMITT models

CogSecCollab
● https://cogsec-collab.org/
● Maintained the AMITT models
● Mentored new organisations
● Ran disinfo & extremism deployments
● Ran the CTI League disinformation team
● MITRE branched AMITT, as SP!CE

DISARM Foundation
● https://www.disarm.foundation/
● https://github.com/disarmfoundation
● Re-merged AMITT and SP!CE
● Maintains the DISARM models

Misinfosec’s original definition: “deliberate promotion… of false, misleading or mis-attributed information; focus on online creation, propagation, consumption of disinformation. We are especially interested in disinformation designed to change beliefs or emotions in a large number of people.”
48. DISARM Foundation 2022
Cognitive Security course

What we’re dealing with
1. Introduction
   a. Disinformation reports, ethics
   b. Researcher risks
2. Fundamentals (objects)
3. Cogsec risks

Human aspects
1. Human system vulnerabilities and patches
2. Psychology of influence

Building better models
1. Frameworks
2. Relational frameworks
3. Building landscapes

Investigating incidents
1. Setting up an investigation
2. Misinformation data analysis
3. Disinformation data analysis

Improving our responses
1. Disinformation responses
2. Monitoring and evaluation
3. Games, red teaming and simulations

Where this is heading
1. Cogsec as a business
2. Future possibilities
49. DISARM Foundation 2022
Sociotechnical Ethical Hacking course

First, do no harm
1. Ethics = risk management
2. Don’t harm others (harms frameworks)
3. Don’t harm yourself (permissions etc)
4. Fix what you break (purple teaming)

It’s systems all the way down
1. Infosec = systems (sociotechnical infosec)
2. All systems can be broken (with resources)
3. All systems have back doors (people, hardware, process, tech etc)

Psychology is important
1. Reverse engineering = understanding someone else’s thoughts
2. Social engineering = adapting someone else’s thoughts
3. Algorithms think too (adversarial AI)

Be curious about everything
1. Curiosity is a hacker’s best friend
2. Computers are everywhere (IoT etc)
3. Help is everywhere (how to search, how to ask)

Cognitive security
1. Yourself (systems thinking)
2. Social media (social engineering)
3. Elections (mixed security modes)

Physical security
1. Locksports (vulnerabilities)
2. Buildings and physical (don’t harm self)

Cyber security
1. Web, networks, PCs
2. Machine learning (adversarial AI)
3. Maps and algorithms (back doors)
4. Assembler (microcontrollers)
5. Hardware (IoT)
6. Radio (AISB etc)

Systems that move
1. Cars (CAN buses and bypasses)
2. Aerospace (reverse engineering)
3. Satellites (remote commands)
4. Robotics / automation (don’t harm others)
51. DISARM Foundation 2022
Exercise rules
● You’re limited by your own resources: money, people, time, assets
● You’re allowed to outsource
● You’re aware of consequences from your actions
● You may or may not encounter countermeasures
● Any narrative, behaviour, asset you can think of is in bounds
● You *will* be asked to fix what you broke before leaving the exercise
52. DISARM Foundation 2022
Suggested actions
Follow the DISARM Red framework
● Set goals
● Gather information
○ Find weaknesses
● Plan activities
● Prepare
○ Decide on materials, narratives, behaviours, channels, influencers etc
● Exploit weaknesses
○ Deploy
● Measure (and adjust as needed)
● Leave
○ What do you leave in place? What do you keep for the next one, etc.?
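One way to keep an exercise honest about these steps is to write the plan down as data. A minimal sketch follows, assuming a simple dataclass; the stage fields track the suggested actions above, and the class itself is an illustrative assumption, not part of DISARM Red.

```python
from dataclasses import dataclass, field

@dataclass
class RedExercisePlan:
    """Skeleton for a DISARM Red exercise plan.

    The fields follow the suggested actions above; the class itself is
    an illustrative assumption, not part of the DISARM framework.
    """
    goals: list[str] = field(default_factory=list)
    weaknesses: list[str] = field(default_factory=list)    # from "gather information"
    materials: list[str] = field(default_factory=list)     # narratives, channels, influencers...
    deployments: list[str] = field(default_factory=list)   # "exploit weaknesses"
    measures: list[str] = field(default_factory=list)      # success metrics, adjusted as needed
    teardown: list[str] = field(default_factory=list)      # "leave": fix what you broke

plan = RedExercisePlan(goals=["shift sentiment on topic X"])  # hypothetical goal
plan.teardown.append("delete exercise accounts")
print(plan)
```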
54. DISARM Foundation 2022
Disinformation as a service
“Doctor Zhivago’s services were priced very specifically, as seen below:
● $15 for an article up to 1,000 characters
● $8 for social media posts and commentary up to 1,000 characters
● $10 for Russian to English translation up to 1,800 characters
● $25 for other language translation up to 2,000 characters
● $1,500 for SEO services to further promote social media posts and traditional media articles, with a time frame of 10 to 15 days
Raskolnikov, on the other hand, had less specific pricing:
● $150 for Facebook and other social media accounts and content
● $200 for LinkedIn accounts and content
● $350–$550 per month for social media marketing
● $45 for an article up to 1,000 characters
● $65 to contact a media source directly to spread material
● $100 per 10 comments for a given article or news story”
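As a back-of-envelope bridge to Scenario 1 below, Doctor Zhivago's quoted prices can be dropped into a simple cost calculator; the campaign mix is invented for illustration.

```python
# Doctor Zhivago's quoted prices, in USD (from the list above).
PRICES = {
    "article": 15,         # per article up to 1,000 characters
    "social_post": 8,      # per social media post/comment up to 1,000 characters
    "seo_package": 1_500,  # SEO promotion package, 10 to 15 day time frame
}

# Hypothetical order sized for a $10,000, two-week brief (see Scenario 1).
order = {"article": 40, "social_post": 500, "seo_package": 2}

total = sum(PRICES[item] * qty for item, qty in order.items())
print(f"Campaign cost: ${total:,}")  # -> Campaign cost: $7,600 (within budget)
```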
55. DISARM Foundation 2022
Scenario 1: DaaS
● Player: You run a disinformation as a service company
○ It used to be a marketing company, but disinfo pays better
○ You’re based in the Philippines
● Brief: to run a campaign against a US company
○ Your customer is a rival company in Russia
○ They’ve paid you $10,000 for this
○ And expect results within 2 weeks because there’s a regulatory summit then
● Resources:
○ You have 5 people available
○ You have existing assets from other campaigns: Social media accounts, fake news websites
● Plan: Over to you
○ What do you do (narratives, techniques etc)? What resources do you need and use? What are your measures of success?
56. DISARM Foundation 2022
Scenario 2: Pink Slime
Player: you’re a high-profile individual with a network of fake news sites
● You started with one site selling alternative health treatments
● Then discovered that clicks paid you a lot - especially if you game Google’s algorithms to get top slot
Brief: adtech exchanges are cracking down on your ad funding
● What else are you going to do to make money?
● How can you maximise this?
Resources:
● You have a team of 40 people total. Many of them are managing social media and content on your sites, but you also have web developers, strategists, and access to DaaS companies
● You control 400 fake news sites. 40 of these haven’t been found by factcheckers yet
Plan: What do you do (narratives, techniques etc)? What resources do you need and use? What are your measures of success?