2. INST408C: Cognitive Security
introduction
disinformation reports, ethics
researcher risks
fundamentals (objects)
cogsec risks
human system vulnerabilities and patches
psychology of influence
frameworks
relational frameworks
building landscapes
setting up an investigation
misinformation data analysis
disinformation data analysis
disinformation responses
monitoring and evaluation
games, red teaming and simulations
cogsec as a business
future possibilities
3. Cognitive Security: both of them
“Cognitive Security is the application of artificial
intelligence technologies, modeled on human
thought processes, to detect security threats.” -
XTN
MLSec - machine learning in information security
● ML used in attacks on information systems
● ML used to defend information systems
● Attacking ML systems and algorithms (see the sketch at the end of this slide)
● “Adversarial AI”
“Cognitive Security (COGSEC) refers to practices, methodologies, and efforts made to defend against social engineering attempts – intentional and unintentional manipulations of and disruptions to cognition and sensemaking” - cogsec.org
CogSec - social engineering at scale
● Manipulation of individual beliefs,
belonging, etc
● Manipulation of human communities
● Adversarial cognition
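To make the “attacking ML systems” bullet concrete, here is a minimal FGSM-style adversarial perturbation against a hand-rolled logistic regression classifier. This is a sketch under stated assumptions: the weights, features, and epsilon are invented for illustration, not taken from any real system.

```python
import numpy as np

# Toy logistic regression "spam filter": p(spam) = sigmoid(w.x + b)
w = np.array([0.9, -1.2, 0.4])   # assumed weights, illustration only
b = 0.1

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

x = np.array([1.0, 0.5, -0.3])   # a feature vector the model scores as spam
p = sigmoid(w @ x + b)

# FGSM-style attack: step in the direction that increases the loss for the
# true label y=1, pushing the score toward "not spam".
grad_x = (p - 1.0) * w           # d(cross-entropy)/dx for y=1
x_adv = x + 0.2 * np.sign(grad_x)

print(f"score before: {p:.2f}, after: {sigmoid(w @ x_adv + b):.2f}")
```

The same gradient-following idea scales up to image and text classifiers; only the gradient computation gets harder.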
4. Social Engineering: both of them
“the use of centralized planning in an attempt to
manage social change and regulate the future
development and behavior of a society.”
● Mass manipulation etc
“the use of deception to manipulate individuals
into divulging confidential or personal
information that may be used for fraudulent
purposes.”
● Phishing etc
7. Channels
Lots of channels:
● Where people seek, share, and post information
● Where people are encouraged to go
Image: https://d1gi.medium.com/the-election2016-micro-propaganda-machine-383449cc1fba
8. Influencers
Users or accounts with influence over a network
● Not necessarily the accounts with the most followers
● The accounts with the most influence
● Might have large influence over smaller groups (see the centrality sketch below)
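One way to make “influence, not follower count” concrete is to rank accounts by a graph centrality measure instead of degree. A minimal sketch with networkx on a synthetic graph; the graph and the choice of betweenness centrality are illustrative assumptions, not a claim about any real platform’s data.

```python
import networkx as nx

# Synthetic follower graph; a real analysis would ingest platform data.
G = nx.barabasi_albert_graph(200, 2, seed=1)

degrees = dict(G.degree())
by_followers = max(degrees, key=degrees.get)

# Betweenness rewards accounts that bridge otherwise-separate groups --
# one reasonable proxy for "large influence over smaller groups".
bc = nx.betweenness_centrality(G)
by_influence = max(bc, key=bc.get)

print(f"most followers: node {by_followers}; most bridging: node {by_influence}")
```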
9. Groups
Social media groups set up to create or spread disinformation
● Often real members, fake creators
● Lots of themes
● Often closed groups
10. Messaging
Narratives designed to spread fast and be “sticky”
● Often on a theme
● Often repeated
Image: https://www.njhomelandsecurity.gov/analysis/false-text-messages-part-of-larger-covid-19-disinformation-campaign
14. Media view: Mis/Dis/Mal information
“deliberate promotion… of false, misleading or mis-attributed information;
focus on online creation, propagation, consumption of disinformation.
We are especially interested in disinformation designed to change beliefs or emotions in a large number of people”
20. Different System Boundaries
[Diagram: different system boundaries for action, monitoring, and responsibility, spanning: Internet domains; social media platforms; the organization’s platforms; lawmakers; the organization’s business units; the COG SOC; the Infosec SOC; the organization’s communities; media]
22. CIA: Disinformation as an Integrity problem
• Confidentiality: only the people/systems that are supposed to
have the information do
• Integrity: the information has not been tampered with
• Availability: people can use the system as intended
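To make the integrity leg concrete: in classic infosec, integrity is often checked mechanically, for example by comparing cryptographic hashes. A minimal sketch (the strings are invented examples); the disinformation problem is harder precisely because manipulated content arrives through channels with no such check.

```python
import hashlib

def fingerprint(text: str) -> str:
    # SHA-256 digest of the canonical content
    return hashlib.sha256(text.encode("utf-8")).hexdigest()

original = "Polls are open Tuesday 7am-8pm."
received = "Polls are open Wednesday 7am-8pm."

if fingerprint(received) != fingerprint(original):
    print("integrity violation: content differs from the original")
```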
29. Other work on techniques
e.g. FLICC (John Cook)
Denial tactics:
● Fake experts
● Logical fallacies
● Impossible expectations
● Cherry picking
● Conspiracy theories
Originally designed for climate change denial; the same tactics cross over to HIV/AIDS denial and similar topics
30. AMITT Blue: Countermeasures Framework
Countermeasures mapped against the AMITT tactic stages:
● Planning: Strategic Planning, Objective Planning
● Preparation: Develop People, Develop Networks, Microtargeting, Develop Content, Channel Selection
● Execution: Pump Priming, Exposure, Go Physical, Persistence
● Evaluation: Measure Effectiveness
Example countermeasures (each group below is one column of the original table):
● Prebunking; humorous counter-narratives; mark content with ridicule / decelerants; expire social media likes/retweets; influencer disavows misinfo; cut off banking access; dampen emotional reaction; remove / rate-limit botnets; social media amber alert; etc.
● Have a disinformation response plan; improve stakeholder coordination; make civil society more vibrant; red team disinformation and design mitigations; enhanced privacy regulation for social media; platform regulation; shared fact-checking database; repair broken social connections; pre-emptive action against disinformation team infrastructure; etc.
● Media literacy through games; tabletop simulations; make information provenance available; block access to disinformation resources; educate influencers; buy out troll farm employees / offer jobs; legal action against for-profit engagement farms; develop compelling counter-narratives; run competing campaigns; etc.
● Find and train influencers; counter-social engineering training; ban incident actors from funding sites; address truth in narratives; marginalise and discredit extremist groups; ensure platforms are taking down accounts; name and shame disinformation influencers; denigrate funding recipient / project; infiltrate in-groups; etc.
● Remove old and unused accounts; unravel Potemkin villages; verify projects before posting fund requests; encourage people to leave social media; deplatform message groups and boards; stop offering press credentials to disinformation outlets; free open library sources; social media source removal; infiltrate disinformation platforms; etc.
● Fill information voids; stem the flow of advertising money; buy more advertising than disinformation creators; reduce political targeting; co-opt disinformation hashtags; mentorship (elders, youth, credit); hijack content and link to information; honeypot social community; corporate research funding full disclosure; real-time updates to fact-check database; remove non-relevant content from special interest groups; content moderation; prohibit images in political channels; add metadata to original content; add warning labels on sharing; etc.
● Rate-limit engagement; redirect searches away from disinfo; honeypot fake engagement system; bot to engage and distract trolls; strengthen verification methods; require verified IDs to comment or contribute to polls; revoke whitelist / verified status; microtarget likely targets with counter-messages; train journalists to counter influence moves; tool transparency and literacy in followed channels; ask media not to report false info; repurpose images with counter-messages; engage payload and debunk; debunk / defuse fake expert credentials; don’t engage with payloads; hashtag jacking; etc.
● DMCA takedown requests; spam domestic actors with lawsuits; seize and analyse botnet servers; poison monitoring and evaluation data; bomb link shorteners with calls; add random links to network graphs
34. Seen in other tactical groups, e.g. the Election Integrity Partnership
https://www.atlanticcouncil.org/in-depth-research-reports/the-long-fuse-eip-report-read/
36. Disinformation as a risk management problem
Manage the risks, not the artifacts
• Attack surfaces, vulnerabilities, potential losses / outcomes
• Risk assessment, reduction, remediation
• Risks: How bad? How big? How likely? Who to? (scored in the sketch below)
Mis/disinformation is everywhere:
• Where do you put your resources?
• Detection, mitigation, response
• People, technologies, time, attention
• Connections
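A minimal sketch of the “How bad? How likely? Who to?” triage as a likelihood × impact score, following the common risk-matrix convention; the example risks and the 1–5 scales are assumptions for illustration.

```python
from dataclasses import dataclass

@dataclass
class Risk:
    name: str
    likelihood: int  # 1 (rare) .. 5 (near-certain)
    impact: int      # 1 (minor) .. 5 (severe)

    @property
    def score(self) -> int:
        # classic risk-matrix scoring: likelihood x impact
        return self.likelihood * self.impact

risks = [
    Risk("election-day rumour surge", likelihood=4, impact=5),
    Risk("forged CEO statement", likelihood=2, impact=4),
    Risk("hashtag hijack of a campaign", likelihood=3, impact=2),
]

# Spend detection / mitigation / response resources top-down by score.
for r in sorted(risks, key=lambda r: r.score, reverse=True):
    print(f"{r.score:2d}  {r.name}")
```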
37. Digital harms frameworks
(List from https://dai-global-digital.com/cyber-harm.html)
Physical harm: e.g. bodily injury, damage to physical assets (hardware, infrastructure, etc.)
Psychological harm: e.g. depression, anxiety from cyberbullying, cyberstalking, etc.
Economic harm: financial loss, e.g. from data breaches, cybercrime, etc.
Reputational harm: e.g. organization: loss of consumers; individual: disruption of personal life; country: damaged trade negotiations
Cultural harm: increase in social disruption, e.g. misinformation creating real-world violence
Political harm: e.g. disruption to political processes or government services, from e.g. internet shutdowns, botnets influencing votes
38. Responder Harms Management
Psychological damage
● Disinformation can be distressing material. It’s not just the hate speech and _really_ bad images that are obviously difficult to look at; it’s also difficult to spend day after day reading material designed to change beliefs and wear people down. Be aware of your mental health, and take steps to stay healthy.
● (This, by the way, is why we think automating as many processes as makes sense is good: it stops people from having to interact so much with the raw material.)
Security risks
● Disinformation actors aren’t always nice people. Operational security (opsec: protecting things like your identity) is important.
● You might also want to keep your disinformation work separate from your day job. Opsec can help here too.
39. Disinformation Risk Assessment
Information Landscape
• Information seeking
• Information sharing
• Information sources
• Information voids
Threat Landscape
• Motivations
• Sources / starting points
• Effects
• Misinformation narratives
• Hateful speech narratives
• Crossovers
• Tactics and techniques
• Artifacts
Response Landscape
• Monitoring organisations
• Countering organisations
• Coordination
• Existing policies
• Technologies
• etc.
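The three landscapes can double as a report template. A minimal sketch of that structure as plain data; the field names follow the slide, and every example value is invented.

```python
# Skeleton for a disinformation risk assessment; values are placeholders.
assessment = {
    "information_landscape": {
        "seeking": ["local Facebook groups"],
        "sharing": ["WhatsApp forwards"],
        "sources": ["regional TV", "two newspapers"],
        "voids": ["no official info in minority languages"],
    },
    "threat_landscape": {
        "motivations": ["financial"],
        "narratives": ["health-scare copypasta"],
        "tactics_and_techniques": ["AMITT technique IDs"],
        "artifacts": ["screenshots", "URLs"],
    },
    "response_landscape": {
        "monitoring_orgs": [], "countering_orgs": [],
        "coordination": [], "existing_policies": [], "technologies": [],
    },
}
```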
41. CS-ISAO SERVICE OFFERING
Identification: Understanding Cognitive Security to identify and manage risks (people, assets, data, technology, capabilities, policies/laws/regulations, vulnerabilities, supply chain) and identification of the adversarial domain
Protection: Implementing safeguards to ensure integrity and availability of information systems and assets; ability to limit or contain impacts; provide awareness and education
Detection: Monitoring, detecting, and sharing Cognitive Security intelligence, trends, threats, attacks, and their impacts
Response: Communicating countermeasures (executing response processes, analysis, mitigation, benefitting from lessons learned)
Recovery: Maintaining resilience plans; restoring impacted information, systems, and assets; benefitting from lessons learned
44. Other parts of Social Engineering
● Persuade people to do things that aren’t in their own interests
● Like giving away passwords and other security information
Types:
● Phishing: spoof links / sites (see the look-alike-domain sketch below)
● Spear phishing: highly targeted phishing
● Vishing: by voice, e.g. a fake toll-free number
● Pretexting: impersonation
● Baiting: dropping infected USB drives etc.
● Tailgating: following someone in
● Quid pro quo: helping in return for info
● Watering hole attacks: infecting websites that targets use
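As a toy version of spotting “spoof links / sites”: flag domains that are suspiciously close to, but not equal to, a trusted domain. A minimal sketch using difflib’s similarity ratio; the trusted list and the 0.8 threshold are illustrative assumptions, and real phishing detection uses far richer signals.

```python
import difflib

TRUSTED = ["paypal.com", "umd.edu", "google.com"]  # assumed allowlist

def looks_spoofed(domain: str, threshold: float = 0.8) -> bool:
    # near-match to a trusted domain, without being that domain
    for good in TRUSTED:
        similarity = difflib.SequenceMatcher(None, domain, good).ratio()
        if domain != good and similarity >= threshold:
            return True
    return False

print(looks_spoofed("paypa1.com"))  # True: 'l' swapped for '1'
print(looks_spoofed("umd.edu"))     # False: exact trusted match
```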
45. Denial of Service
Make a system inaccessible
Distributed denial of service (DDoS): use many machines to do this, so the attack appears to come from many places
47. Information Sharing and Analysis Centres
• Sustained by CS-ISAO Members & Sponsors
• Supported by The International Association of Certified
ISAOs (IACI)
• Connects Cognitive Security Domain Public- and Private-
Sector Stakeholders
• Private-Sector Organizations
• Government (US - Federal, State/Local/Tribal/ Territorial
(SLTT), International)
• Critical Infrastructure Owners/Operators
• Other Communities-of-Interest, Public, Disinformation Initiatives/Programs/Organizations, Social Media Organizations, Traditional Media, Relevant Technology and Security Companies, Civil Society Groups, Researchers/SMEs
• Led by the Private Sector, in Cooperation, Coordination
and Collaboration with Government
50. Resource Allocation and Automation
• Tagging needs and groups with AMITT labels
• Building collaboration mechanisms to reduce lost tips and repeated collection
• Designing for future potential surges
• Automating repetitive jobs to reduce load on humans
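A minimal sketch of the tagging-plus-deduplication idea: label incoming tips with AMITT technique IDs and group them so one analyst queue handles each technique. The report texts and the T-number tags here are placeholders, not real AMITT assignments.

```python
from collections import defaultdict

# Incoming tips, tagged with AMITT technique IDs (placeholder values).
reports = [
    {"id": 101, "text": "new accounts amplifying #hashtag", "amitt": ["T0049"]},
    {"id": 102, "text": "copypasta in closed groups",       "amitt": ["T0023"]},
    {"id": 103, "text": "more accounts on same #hashtag",   "amitt": ["T0049"]},
]

# Group by technique so duplicate tips land in one queue instead of
# being collected (and worked) twice by different volunteers.
queues = defaultdict(list)
for report in reports:
    for technique in report["amitt"]:
        queues[technique].append(report["id"])

for technique, ids in queues.items():
    print(f"{technique}: reports {ids} -> single analyst queue")
```

The same labels can drive surge planning: counting open reports per technique shows where extra volunteers or automation would pay off first.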
51. Other attack types from infosec
Ransomware
■ Malware gets onto your system
– (almost always, someone clicks on a link they shouldn’t)
– Malware encrypts the files on your system
■ Actors demand ransom in exchange for decryption / keys
■ Victim pays
– (the victim almost always pays)
■ Victim decrypts files, or
– Something goes wrong and the files are lost
– (victims often discover they forgot to take backups)
52. Other attack types from psychology
Cognitive bias codex: a chart of about 200 biases
Each of these is a vulnerability