The document discusses cognitive security and the activities of the DISARM Foundation. It defines cognitive security as applying information security principles to misinformation, disinformation, and influence operations. It outlines the information, threat, and response landscapes relevant to cognitive security. It then describes the DISARM Foundation's work in communities, collaborations, mentoring, and research over the past year related to cognitive security and disinformation risk assessment.
5. Cognitive Security is Information Security applied to disinformation+
“Cognitive security is the application of information security principles, practices, and tools to misinformation, disinformation, and influence operations. It takes a socio-technical lens to high-volume, high-velocity, and high-variety forms of ‘something is wrong on the internet’.”

Cognitive security can be seen as a holistic view of disinformation from a security practitioner’s perspective.
6. Earlier Definitions: Cognitive Security (both of them)
“Cognitive Security is the application of artificial intelligence technologies, modeled on human thought processes, to detect security threats.” - XTN

MLSec - machine learning in information security:
● ML used in attacks on information systems
● ML used to defend information systems
● Attacking ML systems and algorithms
● Adversarial AI
“Cognitive Security (COGSEC) refers to practices, methodologies, and efforts made to defend against social engineering attempts - intentional and unintentional manipulations of and disruptions to cognition and sensemaking” - cogsec.org

CogSec - social engineering at scale:
● Manipulation of individual beliefs, belonging, etc.
● Manipulation of human communities
● Adversarial cognition
7. Earlier Definitions: Social Engineering (both of them)
“the use of centralized planning in an attempt to manage social change and regulate the future development and behavior of a society.”
● Mass manipulation etc.

“the use of deception to manipulate individuals into divulging confidential or personal information that may be used for fraudulent purposes.”
● Phishing etc.
9. Information Landscape
● Actors
● Channels
● Influencers
● Groups
● Messaging
● Narratives and memes
● Tools

● Verified information
● Rumours
● Misinformation
● Conspiracies
● Information voids / deserts

People and accounts:
● Seeking information - using search, questions, influencers etc.
● Sharing information through channels
● Posting information
14. Groups
Social media groups set up to create or spread disinformation:
● Often real members, fake creators
● Lots of themes
● Often closed groups
Image: https://accountabletech.org/campaign/stop-group-recs/
15. Messaging
Narratives designed to spread fast and be “sticky”:
● Often on a theme
● Often repeated
Image: https://www.njhomelandsecurity.gov/analysis/false-text-messages-part-of-larger-covid-19-disinformation-campaign
17. Hybrids, and other attack types from infosec
● Hybrid: cyber + cognitive + physical
● Cyber supporting cognitive
● Cognitive supporting cyber
● Cyber attack forms adapted to cognitive
Image: Verizon DBIR https://www.verizon.com/business/resources/reports/dbir/
18. Other attack types from psychology
Cognitive bias codex: a chart of about 200 biases.
Each of these is a vulnerability.
Image: https://commons.wikimedia.org/wiki/File:Cognitive_bias_codex_en.svg
21. Media view: Mis/Dis/Mal information
“deliberate promotion… of false, misleading or mis-attributed information
focus on online creation, propagation, consumption of disinformation
We are especially interested in disinformation designed to change beliefs or emotions in a large number of people”
Image: First Draft, Information Disorder, Claire Wardle, 2017
26. Information Security vs Cognitive Security: Objects
Information security: computers, networks, internet, data, actions.
Cognitive security: people, communities, internet, beliefs, actions.
Image: DISARM Foundation
27. Disinformation as a risk management problem
Manage the risks, not the artifacts:
● Risk assessment, reduction, remediation
● Risks: How bad? How big? How likely? Who to?
● Attack surfaces, vulnerabilities, potential losses / outcomes

Manage resources:
● Mis/disinformation is everywhere
● Detection, mitigation, response
● People, technologies, time, attention
● Connections

(See the risk-scoring sketch below.)
Image: https://www.risklens.com/infographics/fair-model-on-a-page
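To make “How bad? How big? How likely?” concrete, here is a minimal Python sketch of FAIR-style triage (loss event frequency × loss magnitude), in the spirit of the RiskLens model the slide cites. The narratives, numbers, and field names are illustrative assumptions, not DISARM outputs.

```python
# Hypothetical sketch: scoring disinformation risks FAIR-style
# (loss event frequency x loss magnitude) so that scarce analyst
# time goes to the narratives that matter most.
from dataclasses import dataclass

@dataclass
class DisinfoRisk:
    narrative: str
    likelihood: float   # estimated loss event frequency (events/year)
    magnitude: float    # estimated harm per event (arbitrary score)

    @property
    def exposure(self) -> float:
        # FAIR-style annualized exposure: frequency x magnitude
        return self.likelihood * self.magnitude

risks = [
    DisinfoRisk("election-process distrust", likelihood=12, magnitude=8.0),
    DisinfoRisk("health misinformation", likelihood=50, magnitude=3.0),
    DisinfoRisk("journalist harassment", likelihood=20, magnitude=5.0),
]

# Triage: handle the highest-exposure narratives first
for r in sorted(risks, key=lambda r: r.exposure, reverse=True):
    print(f"{r.narrative}: exposure={r.exposure:.0f}")
```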
28. Using the Parkerian Hexad
Confidentiality, integrity, availability:
■ Confidentiality: data should only be visible to people who are authorized to see it
■ Integrity: data should not be altered in unauthorized ways
■ Availability: data should be available to be used

Possession, authenticity, utility:
■ Possession: controlling the data media
■ Authenticity: accuracy and truth of the origin of the information
■ Utility: usefulness (e.g. losing the encryption key)

(See the checklist sketch below.)
Image: Parkerian Hexad, from https://www.sciencedirect.com/topics/computer-science/parkerian-hexad
Image: https://www.staffhosteurope.com/blog/2019/03/cybersecurity-and-the-parkerian-hexad
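As a worked example, here is a hedged Python sketch that treats the six hexad attributes as a checklist for an incident; the incident and its violated attributes are invented for illustration.

```python
# Illustrative sketch: the Parkerian Hexad as an incident checklist.
# The six attributes come from the slide; the example incident is made up.
from enum import Enum

class Hexad(Enum):
    CONFIDENTIALITY = "only authorized people can see the data"
    INTEGRITY = "data is not altered in unauthorized ways"
    AVAILABILITY = "data is available to be used"
    POSSESSION = "we control the data media"
    AUTHENTICITY = "origin of the information is accurate"
    UTILITY = "data is still useful (e.g. key not lost)"

# Which attributes does a hypothetical incident violate?
incident_violations = {
    Hexad.AUTHENTICITY,  # e.g. a forged "official" health notice
    Hexad.INTEGRITY,     # e.g. quotes altered before re-sharing
}

for attr in Hexad:
    status = "VIOLATED" if attr in incident_violations else "ok"
    print(f"{attr.name:15} {status}: {attr.value}")
```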
29. Digital harms frameworks
● Physical harm: e.g. bodily injury, damage to physical assets (hardware, infrastructure, etc.)
● Psychological harm: e.g. depression, anxiety from cyberbullying, cyberstalking, etc.
● Economic harm: financial loss, e.g. from data breaches, cybercrime, etc.
● Reputational harm: e.g. organization: loss of consumers; individual: disruption of personal life; country: damaged trade negotiations
● Cultural harm: increase in social disruption, e.g. misinformation creating real-world violence
● Political harm: e.g. disruption to political processes or government services, from e.g. internet shutdowns or botnets influencing votes
Image: https://dai-global-digital.com/cyber-harm.html
30. Responder Harms Management
Psychological damage
● Disinformation can be distressing material. It's not just the hate speech and _really_ bad images that you know are difficult to look at - it's also difficult to spend day after day reading material designed to change beliefs and wear people down. Be aware of your mental health, and take steps to stay healthy.
● (This, by the way, is why we think automating as many processes as make sense is good - it stops people from having to interact so much with all the raw material.)

Security risks
● Disinformation actors aren't always nice people. Operational security (opsec: protecting things like your identity) is important.
● You might also want to keep your disinformation work separate from your day job. Opsec can help here too.
31. Ecosystem Assessment
Information Landscape
• Information seeking
• Information sharing
• Information sources
• Information voids

Threat Landscape
• Motivations
• Sources / starting points
• Effects
• Misinformation narratives
• Hateful speech narratives
• Crossovers
• Tactics and techniques
• Artifacts

Response Landscape
• Monitoring organisations
• Countering organisations
• Coordination
• Existing policies
• Technologies
• etc.
40. DISARM Blue: Countermeasures Framework

The matrix arranges countermeasures under the red-framework phases and tactic stages:
● Planning: Strategic Planning, Objective Planning
● Preparation: Develop People, Develop Networks, Microtargeting, Develop Content, Channel Selection
● Execution: Pump Priming, Exposure, Go Physical, Persistence
● Evaluation: Measure Effectiveness

Example countermeasures from the matrix (each column ends in “etc.”):
● Prebunking; humorous counter narratives; mark content with ridicule / decelerants; expire social media likes/retweets; influencer disavows misinfo; cut off banking access; dampen emotional reaction; remove / rate-limit botnets; social media amber alert
● Have a disinformation response plan; improve stakeholder coordination; make civil society more vibrant; red team disinformation and design mitigations; enhanced privacy regulation for social media; platform regulation; shared fact-checking database; repair broken social connections; pre-emptive action against disinformation team infrastructure
● Media literacy through games; tabletop simulations; make information provenance available; block access to disinformation resources; educate influencers; buy out troll farm employees / offer jobs; legal action against for-profit engagement farms; develop compelling counter narratives; run competing campaigns
● Find and train influencers; counter-social engineering training; ban incident actors from funding sites; address truth in narratives; marginalise and discredit extremist groups; ensure platforms are taking down accounts; name and shame disinformation influencers; denigrate funding recipient / project; infiltrate in-groups
● Remove old and unused accounts; unravel Potemkin villages; verify projects before posting fund requests; encourage people to leave social media; deplatform message groups and boards; stop offering press credentials to disinformation outlets; free open library sources; social media source removal; infiltrate disinformation platforms
● Fill information voids; stem the flow of advertising money; buy more advertising than disinformation creators; reduce political targeting; co-opt disinformation hashtags; mentorship (elders, youth, credit); hijack content and link to information; honeypot social community; corporate research funding full disclosure; real-time updates to factcheck database; remove non-relevant content from special interest groups; content moderation; prohibit images in political channels; add metadata to original content; add warning labels on sharing
● Rate-limit engagement; redirect searches away from disinfo; honeypot fake engagement system; bot to engage and distract trolls; strengthen verification methods; verified IDs to comment or contribute to polls; revoke whitelist / verified status; microtarget likely targets with counter messages; train journalists to counter influence moves; tool transparency and literacy in followed channels; ask media not to report false info; repurpose images with counter messages; engage payload and debunk; debunk / defuse fake expert credentials; don’t engage with payloads; hashtag jacking
● DMCA takedown requests; spam domestic actors with lawsuits; seize and analyse botnet servers; poison monitoring and evaluation data; bomb link shorteners with calls; add random links to network graphs

Image: DISARM Foundation
46. Example Information Landscape
• Traditional media
  • Newspapers
  • Radio - including community radio
  • TV
• Social media
  • Facebook
  • WhatsApp
  • Twitter
  • YouTube / Telegram / etc.
• Others
  • Word of mouth
47. Example Threat Landscape
• Motivations
  • Geopolitics mostly absent
  • Party politics (internal, inter-party)
• Actors
• Activities
  • Manipulate faith communities
  • Discredit election process
  • Discredit/discourage journalists
  • Attention (more drama)
• Risks / severities
• Sources
  • WhatsApp
  • Blogs
  • Facebook pages
  • Online newspapers
  • Media
• Routes
  • Hijacked narratives
  • WhatsApp to blogs, and vice versa
  • WhatsApp forwarding
  • Facebook to WhatsApp
  • Social media to traditional media
  • Social media to word of mouth
48. Creator Behaviours
● T0007: Create fake Social Media Profiles / Pages / Groups
● T0008: Create fake or imposter news sites
● T0022: Conspiracy narratives
● T0023: Distort facts
● T0052: Tertiary sites amplify news
● T0036: WhatsApp
● T0037: Facebook
● T0038: Twitter

(See the tagging sketch below.)
Image: DISARM Foundation
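These technique codes lend themselves to simple automation. Below is an illustrative Python sketch that tallies technique tags across incident reports to show which creator behaviours dominate; the incident data is invented, and only the T-codes come from the slide.

```python
# Illustrative sketch: tallying DISARM technique tags across incident
# reports to see which creator behaviours dominate.
from collections import Counter

incident_tags = [
    ["T0007", "T0022"],           # fake groups pushing a conspiracy narrative
    ["T0008", "T0023"],           # imposter news site distorting facts
    ["T0022", "T0052", "T0038"],  # conspiracy amplified by tertiary sites on Twitter
]

tally = Counter(tag for tags in incident_tags for tag in tags)
for code, count in tally.most_common():
    print(code, count)
```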
49. Example Response Landscape (Needs / Work / Gaps)
Risk Reduction
● Media and influence literacy
● Information landscaping
● Other risk reduction

Monitoring
● Radio, TV, newspapers
● Social media platforms
● Tips

Analysis
● Tier 1 (creates tickets)
● Tier 2 (creates mitigations)
● Tier 3 (creates reports)
● Tier 4 (coordination)

Response
● Messaging
  ○ prebunk
  ○ debunk
  ○ counternarratives
  ○ amplification
● Actions
  ○ removal
  ○ other actions
● Reach
50. Responder Behaviours
● C00009: Educate high profile influencers on best practices
● C00008: Create shared fact-checking database
● C00042: Address truth contained in narratives
● C00030: Develop a compelling counter narrative (truth based)
● C00093: Influencer code of conduct
● C00193: Promotion of a “higher standard of journalism”
● C00073: Inoculate populations through media literacy training
● C00197: Remove suspicious accounts
● C00174: Create a healthier news environment
● C00205: Strong dialogue between the federal government and private sector to encourage better reporting

(See the technique-to-countermeasure sketch below.)
Image: DISARM Foundation
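Because both behaviours and responses carry identifiers, observed “red” techniques can be linked to candidate “blue” countermeasures in code. The sketch below assumes a hand-built lookup table; the T-to-C pairings are illustrative guesses, not the DISARM Foundation’s official mapping.

```python
# Hypothetical sketch: mapping observed DISARM "red" techniques to
# candidate "blue" countermeasures. The T/C identifiers are from the
# slides; the mapping itself is illustrative, not the official one.
TECHNIQUE_COUNTERS = {
    "T0007": ["C00197"],            # fake accounts -> remove suspicious accounts
    "T0008": ["C00008", "C00174"],  # imposter news sites -> fact-check db, healthier news
    "T0022": ["C00030", "C00042"],  # conspiracy narratives -> counter narrative, address truth
}

observed = ["T0022", "T0007", "T0022"]

# De-duplicate and collect candidate responses for an incident
candidates = sorted({c for t in observed for c in TECHNIQUE_COUNTERS.get(t, [])})
print("Candidate countermeasures:", candidates)
```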
51. Practical: Resource Allocation
• Tagging needs and groups with AMITT labels
• Building collaboration mechanisms to reduce lost tips and repeated collection (see the de-duplication sketch below)
• Designing for future potential surges
• Automating repetitive jobs to reduce load on humans
Image: DISARM Foundation
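As one way to implement “reduce lost tips and repeated collection”, here is a hypothetical Python sketch that de-duplicates incoming tips before they become tickets; the tip format and normalization rule are assumptions.

```python
# Hypothetical sketch of tip de-duplication: normalize incoming tips
# and drop ones already seen, so duplicates route to an existing ticket.
import hashlib

seen: set[str] = set()

def tip_key(url: str, claim: str) -> str:
    # Normalize so near-identical tips from different channels collide
    norm = f"{url.strip().lower()}|{claim.strip().lower()}"
    return hashlib.sha256(norm.encode()).hexdigest()

def ingest(url: str, claim: str) -> bool:
    """Return True if the tip is new and should create a ticket."""
    key = tip_key(url, claim)
    if key in seen:
        return False  # duplicate: attach to the existing ticket instead
    seen.add(key)
    return True

print(ingest("https://example.com/post/1", "5G causes flu"))   # True
print(ingest("https://example.com/post/1 ", "5G Causes Flu"))  # False
```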
53. Tools
DISARM objects work with all STIX-compatible systems:
● MITRE ATT&CK Navigator
● EEAS using DISARM STIX objects in OpenCTI
● Compatible with many other information security tools

DISARM objects already embedded in tools:
● DISARM is already in every MISP instance

User-friendly versions on their way:
● The DISARM Foundation is building the DISARM Explorer app to make non-technical use of DISARM easier.

(See the STIX sketch below.)
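For example, a DISARM technique can be expressed as a STIX 2.1 attack-pattern with the stix2 Python library, ready to flow into STIX-compatible tools such as OpenCTI. This is a minimal sketch; field values beyond the T-code are illustrative.

```python
# Minimal sketch: a DISARM technique as a STIX 2.1 attack-pattern.
# The external_id follows the slides; other values are illustrative.
from stix2 import AttackPattern

pattern = AttackPattern(
    name="Create fake or imposter news sites",
    external_references=[{
        "source_name": "DISARM",
        "external_id": "T0008",
    }],
)
print(pattern.serialize(pretty=True))
```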
54. Cognitive Security course
What we’re dealing with
● Introduction: disinformation reports, ethics; researcher risks
● Fundamentals (objects)
● Cogsec risks

Human aspects
● Human system vulnerabilities and patches
● Psychology of influence

Building better models
● Frameworks
● Relational frameworks
● Building landscapes

Investigating incidents
● Setting up an investigation
● Misinformation data analysis
● Disinformation data analysis

Improving our responses
● Disinformation responses
● Monitoring and evaluation
● Games, red teaming and simulations

Where this is heading
● Cogsec as a business
● Future possibilities
55. Sociotechnical Ethical Hacking course
First, do no harm
● Ethics = risk management
● Don’t harm others (harms frameworks)
● Don’t harm yourself (permissions etc.)
● Fix what you break (purple teaming)

It’s systems all the way down
● Infosec = systems (sociotechnical infosec)
● All systems can be broken (with resources)
● All systems have back doors (people, hardware, process, tech etc.)

Psychology is important
● Reverse engineering = understanding someone else’s thoughts
● Social engineering = adapting someone else’s thoughts
● Algorithms think too (adversarial AI)

Be curious about everything
● Curiosity is a hacker’s best friend
● Computers are everywhere (IoT etc.)
● Help is everywhere (how to search, how to ask)

Cognitive security
● Yourself (systems thinking)
● Social media (social engineering)
● Elections (mixed security modes)

Physical security
● Locksports (vulnerabilities)
● Buildings and physical (don’t harm self)

Cyber security
● Web, networks, PCs
● Machine learning (adversarial AI)
● Maps and algorithms (back doors)
● Assembler (microcontrollers)
● Hardware (IoT)
● Radio (AISB etc.)

Systems that move
● Cars (CAN buses and bypasses)
● Aerospace (reverse engineering)
● Satellites (remote commands)
● Robotics / automation (don’t harm others)
56. DISARM Adoption
Active users
● European Union (EEAS) - power user, coding and sharing incidents
● MITRE (as SPICE model) - using with clients in tabletop exercises etc.
● DROG (created HarmonySquare / BadNewsGame) - using in tabletop exercises
● CIRCL (Computer Incident Response Center Luxembourg) - integrated into MISP software
● Alliance4Europe - using in tabletop exercises and intelligence with OpenCTI

Aware / testing
● NATO
● Canada
● US State Dept (on GEC tools list as SPICE)
● National Science Foundation (aware)
● NSA (aware)
● EU/NATO Hybrid Threat Center
57. THANK YOU
SJ Terp @bodaceacat
Dr. Pablo Breuer @Ngree_H0bit
https://www.disarm.foundation/