Cognitive Security:
All the other things
SJ Terp, 2021
INST408C: Cognitive Security
introduction
disinformation reports, ethics
researcher risks
fundamentals (objects)
cogsec risks
human system vulnerabilities and patches
psychology of influence
frameworks
relational frameworks
building landscapes
setting up an investigation
misinformation data analysis
disinformation data analysis
disinformation responses
monitoring and evaluation
games, red teaming and simulations
cogsec as a business
future possibilities
Cognitive Security: both of them
“Cognitive Security is the application of artificial
intelligence technologies, modeled on human
thought processes, to detect security threats.” -
XTN
MLSec - machine learning in information security
● ML used in attacks on information systems
● ML used to defend information systems
● Attacking ML systems and algorithms
● “Adversarial AI”
“Cognitive Security (COGSEC) refers to
practices, methodologies, and efforts made to
defend against social engineering
attempts‒intentional and unintentional
manipulations of and disruptions to cognition
and sensemaking” - cogsec.org
CogSec - social engineering at scale
● Manipulation of individual beliefs,
belonging, etc
● Manipulation of human communities
● Adversarial cognition
Social Engineering: both of them
“the use of centralized planning in an attempt to
manage social change and regulate the future
development and behavior of a society.”
● Mass manipulation etc
“the use of deception to manipulate individuals
into divulging confidential or personal
information that may be used for fraudulent
purposes.”
● Phishing etc
What we’re dealing with
Actors
Entities behind disinformation
● Nation-states
● Individuals
● Companies
Entities part of disinformation
● DaaS (disinformation-as-a-service) companies
Image: https://gijn.org/2020/07/08/6-tools-and-6-techniques-reporters-
can-use-to-unmask-the-actors-behind-covid-19-disinformation/
Channels
Lots of channels:
Where people seek, share, post
information
Where people are encouraged to go
Image: https://d1gi.medium.com/the-election2016-micro-
propaganda-machine-383449cc1fba
Influencers
Users or accounts with influence over a
network
● Not the most followers
● The most influence
● Might be large influence over smaller
groups.
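A minimal sketch of that distinction (Python, using networkx; the toy network is invented for illustration): betweenness centrality surfaces the brokers, not the biggest hub.

```python
# Toy network: "celeb" has the most connections, but "fan0" and "bridge"
# broker all traffic between the celebrity's audience and a niche community.
import networkx as nx

G = nx.Graph()
G.add_edges_from(("celeb", f"fan{i}") for i in range(6))      # high-follower hub
G.add_edges_from([("fan0", "bridge"), ("bridge", "n1"),
                  ("n1", "n2"), ("n1", "n3"), ("n2", "n3")])  # niche community

degree = dict(G.degree())                    # raw connection count
influence = nx.betweenness_centrality(G)     # brokerage across the network

for node in sorted(G, key=influence.get, reverse=True)[:4]:
    print(f"{node:7s} connections={degree[node]} influence={influence[node]:.2f}")
# The bridging accounts outrank the hub: they control how information
# moves between communities, despite having far fewer connections.
```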
Groups
Social media groups set up to create or
spread disinformation
● Often real members, fake creators
● Lots of themes
● Often closed groups
Messaging
Narratives designed to spread fast and be “sticky”
● Often on a theme
● Often repeated
Image: https://www.njhomelandsecurity.gov/analysis/false-
text-messages-part-of-larger-covid-19-disinformation-
campaign
Tools
● Bots
● IFTTT variants
● Personas
● Network analysis
● Marketing tools
Image: https://twitter.com/conspirator0/status/1249020176382779392
1000s of responders
The need for a
common language
Media view: Mis/Dis/Mal information
“deliberate promotion… of false, misleading or mis-attributed information; focus on online creation, propagation, consumption of disinformation. We are especially interested in disinformation designed to change beliefs or emotions in a large number of people”
Military View: Information Operations
Information Security view: CogSec Layer
Security layers: physical security, cyber security, cognitive security.
What’s different between
cogsec and cybersecurity
Information Security vs Cognitive Security: Objects
Information security objects: computers, networks, internet, data, actions.
Cognitive security objects: people, communities, internet, beliefs, actions.
Narratives replace malware
[Diagram: campaigns contain incidents, which contain narratives and behaviours, which contain artifacts; the layers are mapped against action, monitoring, and who is responsible for each.]
Different System Boundaries
[Diagram: system boundaries, from the internet and its domains, through social media platforms and lawmakers, to an organization's platforms, business units, and communities; the cognitive security SOC, the infosec SOC, and the media each cover different parts of this landscape.]
What we took from
information security
CIA: Disinformation as an Integrity problem
• Confidentiality: only the people/systems that are supposed to
have the information do
• Integrity: the information has not been tampered with
• Availability: people can use the system as intended
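To see why disinformation stresses this triad, a minimal sketch of a classic integrity check (Python, hashlib; the messages are invented): disinformation usually passes it, because the bytes are intact even though the meaning is false.

```python
# Minimal sketch of a classic infosec integrity check (sha256 hashes);
# the messages are invented for illustration.
import hashlib

original = b"Polls are open 7am-8pm on Tuesday."
tampered = b"Polls are open 7am-8pm on Wednesday."

baseline = hashlib.sha256(original).hexdigest()

for message in (original, tampered):
    intact = hashlib.sha256(message).hexdigest() == baseline
    print(intact, message.decode())
# Tampering with stored data fails the check. But a false message created
# from scratch hashes "correctly", so cogsec has to assess integrity at
# the narrative layer, not just the byte layer.
```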
Incident models: STIX / TAXII
COGSEC adaptations to STIX: campaign, incident, narrative, artifact.
Mapped onto other disinformation models: actor, behaviour, content, narrative.
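A hedged sketch of what those adapted objects could look like on the wire (Python emitting STIX-2.1-style JSON; the `x-narrative` type and its properties are illustrative assumptions, not part of core STIX):

```python
# Sketch: a disinformation incident as STIX-2.1-style JSON. "x-narrative"
# is a hypothetical custom object standing in for the COGSEC additions.
import json, uuid

def stix_id(obj_type: str) -> str:
    # STIX ids are "<type>--<uuid4>"
    return f"{obj_type}--{uuid.uuid4()}"

campaign = {
    "type": "campaign",                 # standard STIX domain object
    "spec_version": "2.1",
    "id": stix_id("campaign"),
    "name": "covid-5g-campaign",
}
narrative = {
    "type": "x-narrative",              # hypothetical COGSEC custom object
    "spec_version": "2.1",
    "id": stix_id("x-narrative"),
    "name": "5G towers spread the virus",
}
uses = {
    "type": "relationship",
    "spec_version": "2.1",
    "id": stix_id("relationship"),
    "relationship_type": "uses",
    "source_ref": campaign["id"],
    "target_ref": narrative["id"],
}
bundle = {"type": "bundle", "id": stix_id("bundle"),
          "objects": [campaign, narrative, uses]}
print(json.dumps(bundle, indent=2))
```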
Behaviour models: Cyber killchain and ATT&CK
Killchain phases: recon, weaponize, deliver, exploit, control, execute, maintain.
ATT&CK tactics: persistence, privilege escalation, defense evasion, credential access, discovery, lateral movement, execution, collection, exfiltration, command and control.
AMITT Red: CogSec version of KillChain and ATT&CK
Adtech: sales funnels
Other work on techniques
e.g. FLICC (John Cook)
Denial tactics:
● Fake experts
● Logical fallacies
● Impossible expectations
● Cherry picking
● Conspiracy theories
Originally designed for climate change denial; crosses over to HIV/AIDS denial etc.
AMITT Blue: Countermeasures Framework
Countermeasures arranged under the AMITT Red phases:
● Planning: Strategic Planning, Objective Planning
● Preparation: Develop People, Develop Networks, Microtargeting, Develop Content, Channel Selection
● Execution: Pump Priming, Exposure, Go Physical, Persistence
● Evaluation: Measure Effectiveness
Example countermeasures, grouped by column:
● Prebunking; humorous counter-narratives; mark content with ridicule / decelerants; expire social media likes / retweets; influencer disavows misinfo; cut off banking access; dampen emotional reaction; remove / rate-limit botnets; social media amber alert; etc.
● Have a disinformation response plan; improve stakeholder coordination; make civil society more vibrant; red-team disinformation and design mitigations; enhanced privacy regulation for social media; platform regulation; shared fact-checking database; repair broken social connections; pre-emptive action against disinformation team infrastructure; etc.
● Media literacy through games; tabletop simulations; make information provenance available; block access to disinformation resources; educate influencers; buy out troll farm employees / offer jobs; legal action against for-profit engagement farms; develop compelling counter-narratives; run competing campaigns; etc.
● Find and train influencers; counter-social-engineering training; ban incident actors from funding sites; address truth in narratives; marginalise and discredit extremist groups; ensure platforms are taking down accounts; name and shame disinformation influencers; denigrate funding recipient / project; infiltrate in-groups; etc.
● Remove old and unused accounts; unravel Potemkin villages; verify projects before posting fund requests; encourage people to leave social media; deplatform message groups and boards; stop offering press credentials to disinformation outlets; free open library sources; social media source removal; infiltrate disinformation platforms; etc.
● Fill information voids; stem the flow of advertising money; buy more advertising than disinformation creators; reduce political targeting; co-opt disinformation hashtags; mentorship (elders, youth, credit); hijack content and link to information; honeypot social community; corporate research funding full disclosure; real-time updates to fact-check database; remove non-relevant content from special interest groups; content moderation; prohibit images in political channels; add metadata to original content; add warning labels on sharing; etc.
● Rate-limit engagement; redirect searches away from disinfo; honeypot (fake engagement system); bots to engage and distract trolls; strengthen verification methods; verified IDs to comment or contribute to polls; revoke whitelist / verified status; microtarget likely targets with counter-messages; train journalists to counter influence moves; tool transparency and literacy in followed channels; ask media not to report false info; repurpose images with counter-messages; engage payload and debunk; debunk / defuse fake expert credentials; don't engage with payloads; hashtag jacking; etc.
● DMCA takedown requests; spam domestic actors with lawsuits; seize and analyse botnet servers; poison monitoring and evaluation data; bomb link shorteners with calls; add random links to network graphs
Intelligence community: Countermeasure categories
Detect, deny, disrupt, degrade, deceive, destroy, deter.
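A minimal sketch of how a shared countermeasure list could carry both vocabularies, an AMITT Red phase and an intelligence-community category, so responders can filter it (Python; the phase and category assignments are illustrative, not the official AMITT Blue mapping):

```python
# Each countermeasure carries the AMITT Red phase it counters plus an
# intelligence-community category. Assignments are illustrative only.
COUNTERMEASURES = [
    {"name": "Remove / rate-limit botnets",   "phase": "TA11 Persistence",        "category": "degrade"},
    {"name": "Prebunking",                    "phase": "TA08 Pump Priming",       "category": "deter"},
    {"name": "Honeypot social community",     "phase": "TA04 Develop Networks",   "category": "deceive"},
    {"name": "Shared fact-checking database", "phase": "TA01 Strategic Planning", "category": "detect"},
]

def by_category(category: str) -> list[str]:
    """Filter the shared list by intelligence-community category."""
    return [c["name"] for c in COUNTERMEASURES if c["category"] == category]

print(by_category("deceive"))   # ['Honeypot social community']
```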
Red/Blue teaming: using blue-to-red links
CogSec version of Tiered Security Operations Centers
Seen in other tactical groups, e.g. Election Integrity Project
https://www.atlanticcouncil.org/in-depth-research-reports/the-long-fuse-eip-report-read/
Risk
Management
Disinformation as a risk management problem
Manage the risks, not the artifacts
• Attack surfaces, vulnerabilities,
potential losses / outcomes
• Risk assessment, reduction,
remediation
• Risks: How bad? How big? How
likely? Who to?
Mis/disinformation is everywhere:
• Where do you put your resources?
• Detection, mitigation, response
• People, technologies, time,
attention
• Connections
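A toy sketch of that prioritisation (Python; the narratives and numbers are invented): score each risk by likelihood times impact and spend resources from the top of the list.

```python
# Toy prioritisation: score each narrative by likelihood x impact
# ("How likely? How bad?") and allocate resources from the top.
narratives = [
    {"name": "fake polling hours", "likelihood": 0.8, "impact": 9},
    {"name": "celebrity deepfake", "likelihood": 0.6, "impact": 3},
    {"name": "miracle cure",       "likelihood": 0.9, "impact": 7},
]

for n in narratives:
    n["risk"] = n["likelihood"] * n["impact"]

for n in sorted(narratives, key=lambda n: n["risk"], reverse=True):
    print(f"{n['risk']:4.1f}  {n['name']}")
```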
Digital harms frameworks
(List from https://dai-global-digital.com/cyber-harm.html)
Physical harm: e.g. bodily injury, damage to physical assets (hardware, infrastructure, etc).
Psychological harm: e.g. depression, anxiety from cyberbullying, cyberstalking etc.
Economic harm: financial loss, e.g. from data breaches, cybercrime etc.
Reputational harm: e.g. organization: loss of consumers; individual: disruption of personal life; country: damaged trade negotiations.
Cultural harm: increase in social disruption, e.g. misinformation creating real-world violence.
Political harm: e.g. disruption to political processes or government services, from e.g. internet shutdowns, botnets influencing votes.
Responder Harms Management
Psychological damage
● Disinformation can be distressing material. It's not just the hate speech and _really_ bad images that you know are difficult to look at; it's also difficult to spend day after day reading material designed to change beliefs and wear people down. Be aware of your mental health, and take steps to stay healthy.
● (This, by the way, is why we think automating as many processes as make sense is good: it stops people from having to interact so much with all the raw material.)
Security risks
● Disinformation actors aren't always nice people. Operational security (opsec: protecting things like your identity) is important.
● You might also want to keep your disinformation work separated from your day job. Opsec can help here too.
Disinformation Risk Assessment
Information
Landscape
• Information seeking
• Information sharing
• Information sources
• Information voids
Threat
Landscape
• Motivations
• Sources/ Starting points
• Effects
• Misinformation Narratives
• Hateful speech narratives
• Crossovers
• Tactics and Techniques
• Artifacts
Response
Landscape
• Monitoring organisations
• Countering organisations
• Coordination
• Existing policies
• Technologies
• etc
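A small sketch of how these three landscapes could be captured as one consistent record per investigation (Python; the field names are illustrative assumptions, not an established schema):

```python
# Sketch: the three-landscape risk assessment as a reusable record,
# so assessments are captured consistently across investigations.
from dataclasses import dataclass, field

@dataclass
class RiskAssessment:
    information_landscape: dict = field(default_factory=lambda: {
        "seeking": [], "sharing": [], "sources": [], "voids": []})
    threat_landscape: dict = field(default_factory=lambda: {
        "motivations": [], "narratives": [], "tactics": [], "artifacts": []})
    response_landscape: dict = field(default_factory=lambda: {
        "monitoring_orgs": [], "countering_orgs": [], "policies": []})

ra = RiskAssessment()
ra.information_landscape["voids"].append("no local-language health info")
print(ra.information_landscape)
```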
Lifecycle models
CS-ISAO SERVICE OFFERING
Identification: understanding cognitive security to identify and manage risks (people, assets, data, technology, capabilities, policies/laws/regulations, vulnerabilities, supply chain) and identification of the adversarial domain.
Protection: implementing safeguards to ensure integrity and availability of information systems and assets; ability to limit or contain impacts; provide awareness and education.
Detection: monitoring, detecting and sharing cognitive security intelligence, trends, threats, attacks and their impacts.
Response: communication of countermeasures (executing response processes, analysis, mitigation), benefitting from lessons learned.
Recovery: maintaining resilience plans, restoring impacted information, systems and assets, benefitting from lessons learned.
Emergency Lifecycle Models
From crisis management: Lifecycle management
Other parts of Social Engineering
● Persuading people to do things that aren't in their own interests
● e.g. giving away passwords and other security information
Types:
● Phishing: spoof links / sites
● Spear phishing: highly targeted
● Vishing: by voice, e.g. fake toll-free number
● Pretexting: impersonation
● Baiting: dropping infected USB drives etc
● Tailgating: following someone in
● Quid pro quo - helping in return for info
Watering hole attacks - infect websites that targets use
Denial of Service
Make a system inaccessible
Distributed denial of service (DDoS): use a lot of machines to do this, so the attack appears to come from many places.
What’s still to take
from infosec
Information Sharing and Analysis Centres
• Sustained by CS-ISAO Members & Sponsors
• Supported by The International Association of Certified
ISAOs (IACI)
• Connects Cognitive Security Domain Public- and Private-
Sector Stakeholders
• Private-Sector Organizations
• Government (US - Federal, State/Local/Tribal/ Territorial
(SLTT), International)
• Critical Infrastructure Owners/Operators
• Other Communities-of-Interest, Public, Disinformation
Initiatives/Programs/Organizations, Social Media
Organizations, Traditional Media, Relevant Technology
and Security Companies, Civil Society Groups,
Researchers/SMEs
• Led by the Private Sector, in Cooperation, Coordination
and Collaboration with Government
Shift to trust management
Repeatable
Monitoring and
Evaluation
Resource Allocation and Automation
• Tagging needs and groups with AMITT labels
• Building collaboration mechanisms to reduce lost tips and repeated collection
• Designing for future potential surges
• Automating repetitive jobs to reduce load on humans
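As one example of that automation, a minimal sketch (Python; the keyword lists and tip text are invented) that routes incoming tips by tagging them with AMITT tactic labels:

```python
# Keyword-based tagging of incoming tips with AMITT tactic labels,
# so tips can be routed automatically. Keywords are invented; the
# labels follow the AMITT tactic-ID style.
AMITT_KEYWORDS = {
    "TA05 Microtargeting":    ["targeted ad", "custom audience"],
    "TA07 Channel Selection": ["new group", "migrated to"],
    "TA11 Persistence":       ["backup account", "mirror site"],
}

def tag_tip(tip_text: str) -> list[str]:
    """Return every AMITT label whose keywords appear in the tip."""
    text = tip_text.lower()
    return [label for label, words in AMITT_KEYWORDS.items()
            if any(w in text for w in words)]

print(tag_tip("They opened a backup account and a mirror site overnight"))
# ['TA11 Persistence']
```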
Other attack types from infosec
Ransomware
■ Malware gets onto your system
– (almost always because someone clicks on a link they shouldn't)
– Malware encrypts the files on your system
■ Actors demand a ransom in exchange for decryption keys
■ Victim pays
– (the victim almost always pays)
■ Victim decrypts files, or
– Something goes wrong and the files are lost
– (The victim often discovers they forgot to take backups)
Other attack types from psychology
Cognitive bias codex:
Chart of about 200 biases
Each of these is a vulnerability
THANK YOU
SJ Terp @bodaceacat
Dr. Pablo Breuer @Ngree_H0bit
