The document discusses cognitive security, which involves applying information security principles to disinformation and influence operations. It defines cognitive security and compares it to cyber security. The document then outlines how to assess the information, harms, and response landscapes to understand the ecosystem and risks related to cognitive security. It proposes adapting frameworks like FAIR to conduct disinformation risk assessments and manage risks rather than artifacts. Finally, it discusses tools that can be used for response, including games, red/purple teaming, and simulations.
The document discusses cognitive security and the activities of the DISARM Foundation. It defines cognitive security as applying information security principles to misinformation, disinformation, and influence operations. It outlines the information, threat, and response landscapes relevant to cognitive security. It then describes the DISARM Foundation's work in communities, collaborations, mentoring, and research over the past year related to cognitive security and disinformation risk assessment.
The document discusses narratives and disinformation. It defines narratives as stories that people tell themselves about their identity, community, and world. Narratives are important tools that can be used strategically by those creating disinformation. The document outlines several models for understanding narratives, such as the 4D model describing how disinformation dismisses, distorts, distracts, and dismays. It also discusses ways to counter disinformation narratives, such as debunking, injecting truthful information, prebunking to inoculate against false claims, and engaging respectfully with those promoting misleading narratives. The overall document provides an overview of how narratives are used in information operations and strategies for analyzing and responding to disinformation campaigns.
The document provides an overview of techniques for analyzing influence and disinformation through social network analysis. It discusses how to collect Twitter data using code, create graphs of user relationships in Gephi, and analyze the graphs to identify influential users and communities. It also describes how to use tools like Botometer to investigate suspicious accounts and explore URLs and hashtags of interest found through the network analysis. Exercises guide using search terms to explore graphs created from Twitter data, identify influential users, and investigate artifacts within the networks.
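The influencer-identification step described above can be sketched with plain Python: rank users by in-degree (how often others link to them) in a small interaction graph. This is a minimal illustration, not the document's own code; the edge list and user names are made up, and real data would come from a social-media collection step before visualisation in a tool like Gephi.

```python
# Minimal sketch: ranking users by in-degree in an interaction graph.
# Edges are (source_user, target_user), e.g. source retweeted target.
# All names and edges here are illustrative, not real data.
from collections import Counter

edges = [
    ("alice", "carol"), ("bob", "carol"), ("dave", "carol"),
    ("carol", "eve"), ("bob", "eve"),
]

# Count how often each user appears as a target: a rough influence signal.
in_degree = Counter(target for _, target in edges)

# The highest in-degree accounts are candidates for closer inspection,
# e.g. visual exploration in Gephi or scoring with a bot-detection tool.
for user, score in in_degree.most_common(2):
    print(user, score)
```

In practice the edge list would hold thousands of rows and the ranking would be combined with community detection, but the in-degree heuristic is often the first pass.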
The document provides an overview of the threat environment related to disinformation and malign influence. It discusses various threat components including threat actors like nation-states, disinformation entrepreneurs, and disinformation as a service companies. It also covers threat models, narratives, behaviors, tools, scales, and automation used in disinformation campaigns. The document provides examples of threat landscapes and describes components to consider, such as motivations, actors, activities, potential harms, sources, and routes of disinformation. It also discusses business aspects of the disinformation threat including markets for disinformation as a service and adjacent markets.
This document discusses cognitive security, which involves defending against attempts to intentionally or unintentionally manipulate cognition and sensemaking at scale. It covers various topics related to cognitive security including actors, channels, influencers, groups, messaging, and tools used in disinformation campaigns. Frameworks are presented for analyzing disinformation incidents, adapting concepts from information security like the cyber kill chain. Response strategies are discussed, drawing from fields like information operations, crisis management, and risk management. The need for a common language and ongoing monitoring and evaluation is emphasized.
Distributed defense against disinformation: disinformation risk management an... — Sara-Jayne Terp
This document discusses distributed defense against disinformation through cognitive security operations centers (CogSecCollab). It proposes a multi-pronged approach involving platforms, law enforcement, government, and other actors to address the complex problem of online disinformation. Key aspects include establishing disinformation security operations centers to conduct threat intelligence, incident response, risk mitigation, and enablement activities like training, tools, and processes. The centers would use frameworks to model disinformation campaigns and share indicators across heterogeneous teams in a collaborative manner. Simulations, red teaming, and other techniques are recommended to test defenses and learn from examples.
SJ Terp is an expert in cognitive security who has worked on disinformation response for the European Union, UNDP, and other organizations. They teach cognitive security courses focused on defending against disinformation, and research related topics including risk frameworks and countermeasure strategies. Their work emphasizes adapting information security principles and practices to address high-volume disinformation threats online.
This document provides guidance on data collection for analyzing disinformation and malign influence. It discusses supported analysis including threat intelligence, intelligence analysis, open-source intelligence (OSINT), and data science. It describes tactical tasks like credibility verification, network detection, and activity analysis. Toolsets are presented for data gathering from social media and the web, data storage and sharing, information sharing, and response. Methods of automated data collection using APIs and scrapers as well as manual data collection through OSINT techniques are covered. Formats for structured data like JSON, XML, and CSV are demonstrated.
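The structured-data formats mentioned above can be illustrated with a short sketch showing the same collected record serialised as JSON and as CSV. The field names are illustrative assumptions, not the document's schema.

```python
# Minimal sketch: one collected record in JSON and CSV form.
# Field names ("account", "posted_at", etc.) are illustrative only.
import csv
import io
import json

record = {
    "account": "example_user",
    "posted_at": "2022-05-01T12:00:00Z",
    "text": "sample post",
    "url": "https://example.com/p/1",
}

# JSON: handles nesting well; the usual shape of API responses.
as_json = json.dumps(record)
parsed = json.loads(as_json)

# CSV: flat rows; convenient for spreadsheets and bulk analysis.
buf = io.StringIO()
writer = csv.DictWriter(buf, fieldnames=record.keys())
writer.writeheader()
writer.writerow(record)

print(parsed["account"])
```

APIs typically return JSON, while analysts often flatten selected fields to CSV for sharing and spreadsheet work, so pipelines commonly convert between the two as shown.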
1) The document discusses frameworks for understanding and responding to disinformation, including the AMITT and ATT&CK frameworks.
2) It describes various types of actors involved in spreading disinformation and proposes establishing Disinformation Security Operations Centers to facilitate collaboration between response efforts.
3) The goals of a CogSec SOC are outlined as informing about ongoing incidents, neutralizing disinformation, preventing future incidents, supporting organizations, and acting as a clearinghouse for incident data.
disinformation risk management: leveraging cyber security best practices to s... — Sara-Jayne Terp
This document discusses leveraging cybersecurity best practices to support cognitive security goals related to disinformation and misinformation. It outlines three layers of security - physical, cyber, and cognitive security. It then provides examples of cognitive security risk assessment and mapping the risk landscape. Next, it discusses working together to mitigate and respond to risks through proposed cognitive security operations centers. Finally, it provides a hypothetical example of conducting a country-level risk assessment and designing a response strategy. The document advocates adapting frameworks and standards from cybersecurity to help conceptualize and coordinate cognitive security challenges and responses.
Cyber Threat Intelligence (CTI) primarily focuses on analysing raw data gathered from recent and past events to monitor, detect and prevent threats to an organisation, shifting the focus from reactive to preventive intelligent security measures.
MITRE ATT&CKcon 2.0: AMITT - ATT&CK-based Standards for Misinformation Threat Sharing; Sara Terp and John Gray, Credibility Coalition Misinfosec Working Group
With the focus on security, most organisations test their security defenses via pen-testing. But what about after the network has been compromised? Is there an Advanced Persistent Threat (APT) sitting on the network, and will the defenses be able to detect it?
This talk discusses open-source tools that can help simulate this threat, so that security defenses can be tested against an APT that makes it onto the network.
Cyber Threat Intelligence is a process in which information from different sources is collected, then analyzed to identify and detect threats against any environment. The information collected may be evidence-based knowledge supporting the context, mechanism, indicators, or implications of an existing threat against an environment, and/or knowledge about an upcoming threat that could potentially affect it. Credit: Marlabs Inc
This document outlines a roadmap for developing an effective actionable threat intelligence program. It discusses what threat intelligence is, how it can enable businesses, and provides recommendations for collecting intelligence from internal and external sources. The roadmap involves initially developing a foundation, then formalizing processes, and moving toward maturity with a goal of demonstrating return on investment from averted threats.
ATT&CKing Your Adversaries - Operationalizing cyber intelligence in your own ... — JamieWilliams130
This document discusses operationalizing cyber threat intelligence by emulating adversary behaviors. It explains how to take cyber threat intelligence and map behaviors to the MITRE ATT&CK framework. Specific focus is given to the "Process Doppelgänging" technique, including understanding the behavior, potential detections, and emulating the behavior. The importance of fully emulating operations and expanding emulations through tools like Caldera is also covered.
This document provides guidance on handling media coverage during crises. It discusses defining crises and crisis management. Key players that can ignite crises include the public, government agencies, NGOs, and media. Common crises involving the PCG include shipwrecks and environmental disasters. The document outlines best practices for responding to media, including being prompt, honest, and providing a positive perspective of the agency. It warns against speculation and advises preparing clear messaging to control the narrative. Social media is changing how information spreads and must be monitored during crises.
The document discusses digital activism, which is defined as using digital technology, such as social networks, blogs, email, video and SMS, to achieve political or social change. It provides examples of tools for digital activism and how they can be used, such as using social networks to interact with supporters, blogs to share longer analysis, and video to get attention through emotion. The combination of a goal for change and use of digital technology is what constitutes digital activism.
Cyber Threat Intelligence - It's not just about the feeds — Iain Dickson
The document discusses cyber threat intelligence and how it can support defensive cyber operations. It defines cyber threat intelligence and outlines different data source types that can be used, including internal incident data and external threat intelligence. It describes the Lockheed Martin Cyber Kill Chain and Diamond Models for structuring threat information and identifying gaps. Actionable threat intelligence requires both internal and external data across the cyber kill chain phases to generate useful context. Threat intelligence can help with incident response, penetration testing, and establishing an intelligence-led defensive posture focused on the most relevant threats.
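The gap-identification idea above can be sketched as a small data structure: tag observed indicators with Lockheed Martin Cyber Kill Chain phases, and treat phases with no observations as intelligence gaps. The indicator strings are invented for illustration.

```python
# Minimal sketch: mapping observations onto Cyber Kill Chain phases
# to find coverage gaps. Indicator values are made up for illustration.
PHASES = [
    "reconnaissance", "weaponization", "delivery", "exploitation",
    "installation", "command_and_control", "actions_on_objectives",
]

observed = {
    "delivery": ["phishing email with macro attachment"],
    "command_and_control": ["beacon to 203.0.113.5"],
}

# Phases with no observations are candidate intelligence gaps to
# prioritise for internal or external collection.
gaps = [phase for phase in PHASES if phase not in observed]
print(gaps)
```

A real CTI program would enrich this with internal incident data and external feeds, but the structure (phases as keys, indicators as values, empty phases as gaps) reflects the kill-chain analysis the summary describes.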
Everyone has heard of Purple Team by now, but how many have been able to quantify the value? In this talk, we cover all the roles of a Purple Team: Cyber Threat Intelligence, Red Team, Blue Team, and Exercise Coordination. We were asked to emulate various adversaries, with an increasing order of sophistication, while implementing defenses for the adversary TTPs. We were also asked to not spend any money on new technology. Instead, we had to tune the current security controls. See the results!
Where to find better ideas? +10 categories to explore with examples — Board of Innovation
This document provides tips for finding creative ideas as a team. It suggests getting inspiration from problems users face, observing how people work around frustrations, exploring your company's existing unused assets, tracking trends, researching history and old ideas, observing extreme users, and browsing sources randomly for Eureka moments. The overall message is that being open to diverse sources of information can trigger novel ideas.
SIDE Model and Coordinated Management of Meaning — Sreyoshi Dey
This presentation discusses the Social Identity model of Deindividuation Effects (SIDE) and its application to social media. SIDE argues that anonymity on social media helps users strengthen their individual and group identities through social cues, rather than lose their identity as deindividuation theory suggests. The presentation provides an example of how joining an online cello playing group could help someone discover a new identity and in-group. It also discusses how SIDE answers criticisms of computer-mediated communication by showing how identities can form online.
This document discusses media and propaganda. It defines propaganda as misleading information used to publicize a particular point of view. The document then outlines the history of propaganda, including its early uses in ancient Greece and religious contexts. It describes how propaganda has evolved through different media over time, from vocal to digital. The document also discusses different types of propaganda like wartime, religious, and political propaganda. Finally, it introduces the propaganda model of communication developed by Herman and Chomsky, which describes how propaganda operates through mass media to manipulate populations and shape public attitudes.
This document outlines a presentation on threat hunting with Splunk. The presenter is Ken Westin, a security strategist at Splunk with over 20 years of experience in technology and security. The agenda includes an overview of threat hunting basics and data sources, examining the cyber kill chain through a hands-on attack scenario using Splunk, and advanced threat hunting techniques including machine learning. Log-in credentials are provided for access to hands-on demo environments related to the presentation.
This document provides an overview of Module 2 from a training on disinformation and malign influence. It discusses managing information, influence, and response environments. It defines key concepts like information landscapes, influence landscapes, and response landscapes. It also provides examples of building these landscapes through desk research, data analysis, and interviews. The document outlines the history of information and how different technological developments have impacted the spread of information and misinformation over time. It discusses important considerations for understanding and mapping the different groups and capabilities involved in monitoring and responding to disinformation.
This document provides an introduction to a training on disinformation and malign influence hosted by the DISARM Foundation in 2022. It outlines the course structure, objectives, schedule, and project. The course will define cognitive security and explain how it relates to information security. It will cover topics like influence operations, narratives, behaviors, and risk assessment. Participants will complete exercises after each session and work on a course project to analyze an information harm incident of their choice. The goal is to help participants understand and respond to disinformation threats.
The document discusses risk measurement in the context of disinformation and malign influence training. It begins by outlining why and what organizations aim to measure, such as the effectiveness and value of cognitive security programs. Existing monitoring and evaluation approaches are examined, including logframes commonly used to track outputs and outcomes. The document then reviews existing cognitive security measures like the UK government's RESIST framework and UNICEF's infodemic metrics. It concludes by providing suggestions for different types of performance and effectiveness metrics that can be used, as well as tools for gathering metrics like data analysis, surveys and chatbots.
This document outlines the courses taught by the author in 2021-2022 on topics related to cybersecurity, cognitive security, and sociotechnical systems thinking. It describes two courses in particular - a Sociotechnical Ethical Hacking course and a Cognitive Security course. For each course, it provides an overview of topics covered and approaches taken, which emphasize a holistic view of security that considers both technical and human aspects of systems. It also discusses the author's other related work over the past year, including research, collaborations, mentoring, and community involvement activities.
The document discusses disinformation behaviors and response strategies. It describes building behavior models to understand influence chains and map the disinformation risk landscape. It then covers mitigation behaviors like developing counter-narratives and response behaviors such as prebunking, applying warning labels, and improving coordination between stakeholders. The document advocates developing response plans and counter-techniques to disrupt all phases of the "kill chain" influence model, from planning to evaluation.
7. DISARM Foundation 2022

Cognitive Security is Information Security applied to disinformation+

“Cognitive security is the application of information security principles, practices, and tools to misinformation, disinformation, and influence operations. It takes a socio-technical lens to high-volume, high-velocity, and high-variety forms of ‘something is wrong on the internet’.”

Cognitive security can be seen as a holistic view of disinformation from a security practitioner’s perspective.
8. Earlier Definitions: Cognitive Security: both of them

“Cognitive Security is the application of artificial intelligence technologies, modeled on human thought processes, to detect security threats.” - XTN

MLSec - machine learning in information security
● ML used in attacks on information systems
● ML used to defend information systems
● Attacking ML systems and algorithms
● Adversarial AI

“Cognitive Security (COGSEC) refers to practices, methodologies, and efforts made to defend against social engineering attempts‒intentional and unintentional manipulations of and disruptions to cognition and sensemaking” - cogsec.org

CogSec - social engineering at scale
● Manipulation of individual beliefs, belonging, etc.
● Manipulation of human communities
● Adversarial cognition
9. Earlier Definitions: Social Engineering: both of them

“the use of centralized planning in an attempt to manage social change and regulate the future development and behavior of a society.”
● Mass manipulation etc.

“the use of deception to manipulate individuals into divulging confidential or personal information that may be used for fraudulent purposes.”
● Phishing etc.
11. Cyber Security vs Cognitive Security: Objects

Cyber security: computers, networks, internet, data, actions
Cognitive security: people, communities, internet, beliefs, actions

Image: DISARM Foundation
12. Things to worry about: Hybrid incidents

● Hybrid: cyber + cognitive + physical
● Cyber supporting cognitive
● Cognitive supporting cyber
● Cyber attack forms adapted to cognitive

Image: Verizon DBIR https://www.verizon.com/business/resources/reports/dbir/
15. Ecosystem Assessment

Information Landscape
• Information seeking
• Information sharing
• Information sources
• Information voids

Harms Landscape
• Motivations
• Sources / starting points
• Effects
• Misinformation narratives
• Hateful speech narratives
• Crossovers
• Tactics and techniques
• Artifacts

Response Landscape
• Monitoring organisations
• Countering organisations
• Coordination
• Existing policies
• Technologies
• etc.
16. Information Landscape

● Actors
● Channels
● Influencers
● Groups
● Messaging
● Narratives and memes
● Tools

● Verified information
● Rumours
● Misinformation
● Conspiracies
● Information voids / deserts

People and accounts:
● Seeking information - using search, questions, influencers etc.
● Sharing information through channels
● Posting information
17. Harms Component Landscape

● Actors: nation states, individuals, companies, DaaS companies
● Channels: where people seek, share, and post information; where people are encouraged to go
● Influencers: not about follower counts - an account might have large influence over smaller groups
● Groups: created to create or spread disinfo; often real members, fake creators; lots of themes; often closed groups
● Messaging: the cognitive bias codex lists about 200 biases - each of these is a vulnerability
● Narratives and memes: narratives designed to spread fast / be sticky; often on a theme, often repeated
● Tools: bots, personas, network analysis, marketing tools, IFTTT etc.
18. Response landscape

1000s of response groups. Many more potential groups. Sporadic coordination.

● Media view: MDM (mis/dis/mal-information); falseness and intent to harm
● Military view: psyops / MISO
● Communications view: management of trust
● Infosec view: information protection

Image: DISARM Foundation
20. Tools: desk surveys, collaboration, GAMES

Learning game
● A fun experience that teaches you something
● Useful for training large numbers of people simultaneously

Red team / purple team
● Test an organisation’s defences by thinking like a bad guy
● Useful for finding system vulnerabilities, and predicting future moves

Tabletop exercise
● Key people responding to a simulated event
● Useful for creating cohesive teams. Often large scale

Simulation
● Imitation of processes and environment
● Useful for “what if” automated tests

● http://www.theknowledgeguru.com/games-vs-simulations-choosing-right-approach/
● https://www.edutopia.org/sims-vs-games
● Roozenbeek, van der Linden, “Inoculation Theory and Misinformation”, NATO report, 2021
22. Disinformation as a risk management problem

Manage the risks, not the artifacts
● Risk assessment, reduction, remediation
● Risks: How bad? How big? How likely? Who to?
● Attack surfaces, vulnerabilities, potential losses / outcomes

Manage resources
● Mis/disinformation is everywhere
● Detection, mitigation, response
● People, technologies, time, attention
● Connections

Image: https://www.risklens.com/infographics/fair-model-on-a-page
23. FAIR adaptation - sneak peek

● Assets: what are you protecting?
  ○ e.g. election authority reputation
● Threats: protecting from whom?
  ○ We know this bit…
● Threat effects: CIA+
● Losses: what do you stand to lose?
  ○ What are the harms? How do you estimate them?
● Stakeholders: who should care?
● Vulnerabilities: what increases your likelihood of an event?
  ○ Unaware population, information voids etc.
● Controls: what can you do to reduce risk?
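FAIR bottoms this decomposition out in a quantitative estimate: risk as loss event frequency times loss magnitude. A minimal Monte Carlo sketch of that calculation - all distributions and numbers below are invented for illustration, not calibrated estimates:

```python
import random

# Minimal FAIR-style Monte Carlo: annualised risk = loss event frequency
# x loss magnitude. Parameter values are invented for illustration.
def simulate_annual_loss(trials: int = 10_000, seed: int = 1) -> float:
    rng = random.Random(seed)
    total = 0.0
    for _ in range(trials):
        # Loss event frequency: number of disinformation incidents this year
        events = rng.randint(0, 4)
        # Loss magnitude per event, e.g. cost of reputational-harm remediation
        total += sum(rng.uniform(10_000, 100_000) for _ in range(events))
    return total / trials

print(f"Expected annualised loss: ${simulate_annual_loss():,.0f}")
```

Fixing the seed keeps runs reproducible; in practice the frequency and magnitude distributions would come from calibrated expert estimates, as in FAIR proper.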
24. Risk Effect: Parkerian Hexad

Confidentiality, integrity, availability
■ Confidentiality: data should only be visible to people who are authorized to see it
■ Integrity: data should not be altered in unauthorized ways
■ Availability: data should be available to be used

Possession, authenticity, utility
■ Possession: controlling the data media
■ Authenticity: accuracy and truth of the origin of the information
■ Utility: usefulness (e.g. losing the encryption key)

Image: Parkerian Hexad, from https://www.sciencedirect.com/topics/computer-science/parkerian-hexad
Image: https://www.staffhosteurope.com/blog/2019/03/cybersecurity-and-the-parkerian-hexad
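The hexad gives a six-item checklist for scoring incidents. A small sketch of using it that way - the example incident and its attribute mapping are invented for illustration:

```python
# Sketch: marking which Parkerian hexad attributes an incident touches.
HEXAD = ["confidentiality", "integrity", "availability",
         "possession", "authenticity", "utility"]

incident = {
    "name": "imposter news site",
    # a spoofed outlet mainly attacks the origin and accuracy of information
    "affected": {"authenticity", "integrity"},
}

impact = {attribute: attribute in incident["affected"] for attribute in HEXAD}
print([attribute for attribute, hit in impact.items() if hit])
```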
25. Risk component: Digital harms frameworks

Physical harm: e.g. bodily injury, damage to physical assets (hardware, infrastructure, etc.)
Psychological harm: e.g. depression, anxiety from cyberbullying, cyberstalking etc.
Economic harm: financial loss, e.g. from data breach, cybercrime etc.
Reputational harm: e.g. organization: loss of consumers; individual: disruption of personal life; country: damaged trade negotiations
Cultural harm: increase in social disruption, e.g. misinformation creating real-world violence
Political harm: e.g. disruption in political process, government services from e.g. internet shutdown, botnets influencing votes

Plus responder harms: psychological damage, security risks

Image: https://dai-global-digital.com/cyber-harm.html
34. DISARM Blue: Countermeasures Framework

Kill chain stages: Planning (Strategic Planning, Objective Planning); Preparation (Develop People, Develop Networks, Microtargeting, Develop Content, Channel Selection); Execution (Pump Priming, Exposure, Go Physical, Persistence); Evaluation (Measure Effectiveness).

Example countermeasures across the stages:

● Prebunking
● Humorous counter narratives
● Mark content with ridicule / decelerants
● Expire social media likes / retweets
● Influencer disavows misinfo
● Cut off banking access
● Dampen emotional reaction
● Remove / rate limit botnets
● Social media amber alert
● Etc.

● Have a disinformation response plan
● Improve stakeholder coordination
● Make civil society more vibrant
● Red team disinformation, design mitigations
● Enhanced privacy regulation for social media
● Platform regulation
● Shared fact checking database
● Repair broken social connections
● Pre-emptive action against disinformation team infrastructure
● Etc.

● Media literacy through games
● Tabletop simulations
● Make information provenance available
● Block access to disinformation resources
● Educate influencers
● Buy out troll farm employees / offer jobs
● Legal action against for-profit engagement farms
● Develop compelling counter narratives
● Run competing campaigns
● Etc.

● Find and train influencers
● Counter-social engineering training
● Ban incident actors from funding sites
● Address truth in narratives
● Marginalise and discredit extremist groups
● Ensure platforms are taking down accounts
● Name and shame disinformation influencers
● Denigrate funding recipient / project
● Infiltrate in-groups
● Etc.

● Remove old and unused accounts
● Unravel Potemkin villages
● Verify project before posting fund requests
● Encourage people to leave social media
● Deplatform message groups and boards
● Stop offering press credentials to disinformation outlets
● Free open library sources
● Social media source removal
● Infiltrate disinformation platforms
● Etc.

● Fill information voids
● Stem flow of advertising money
● Buy more advertising than disinformation creators
● Reduce political targeting
● Co-opt disinformation hashtags
● Mentorship: elders, youth, credit
● Hijack content and link to information
● Honeypot social community
● Corporate research funding full disclosure
● Real-time updates to factcheck database
● Remove non-relevant content from special interest groups
● Content moderation
● Prohibit images in political channels
● Add metadata to original content
● Add warning labels on sharing
● Etc.

● Rate-limit engagement
● Redirect searches away from disinfo
● Honeypot: fake engagement system
● Bot to engage and distract trolls
● Strengthen verification methods
● Verified IDs to comment or contribute to poll
● Revoke whitelist / verified status
● Microtarget likely targets with counter messages
● Train journalists to counter influence moves
● Tool transparency and literacy in followed channels
● Ask media not to report false info
● Repurpose images with counter messages
● Engage payload and debunk
● Debunk / defuse fake expert credentials
● Don’t engage with payloads
● Hashtag jacking
● Etc.

● DMCA takedown requests
● Spam domestic actors with lawsuits
● Seize and analyse botnet servers
● Poison monitoring and evaluation data
● Bomb link shorteners with calls
● Add random links to network graphs

Image: DISARM Foundation
36. Tools

DISARM objects work with all STIX-compatible systems
● MITRE ATT&CK Navigator
● EEAS using DISARM STIX objects in OpenCTI
● Compatible with many other information security tools

DISARM objects already embedded in tools
● DISARM already in every MISP instance

User-friendly standalone tools
● DISARM Foundation building DISARM Explorer app to make non-technical use of DISARM easier.
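As a sketch of what a STIX-compatible DISARM object can look like, the snippet below hand-builds a minimal STIX 2.1 attack-pattern dictionary carrying a DISARM technique ID as an external reference. The field layout follows the STIX 2.1 spec, but this is illustrative: a real deployment would use a STIX library (such as the OASIS stix2 package) and the official DISARM object definitions rather than raw dicts.

```python
import json
import uuid

# Sketch: a minimal STIX 2.1 attack-pattern object referencing a DISARM
# technique ID. Illustrative only; not the official DISARM STIX object.
def disarm_technique_to_stix(technique_id: str, name: str) -> dict:
    return {
        "type": "attack-pattern",
        "spec_version": "2.1",
        "id": f"attack-pattern--{uuid.uuid4()}",
        "name": name,
        "external_references": [
            {"source_name": "DISARM", "external_id": technique_id},
        ],
    }

obj = disarm_technique_to_stix(
    "T0007", "Create fake Social Media Profiles / Pages / Groups")
print(json.dumps(obj, indent=2))
```

Because the object is plain STIX 2.1 JSON, tools like ATT&CK Navigator, OpenCTI, and MISP can ingest it alongside cyber threat intelligence.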
40. Example Information Landscape

• Traditional media
  • Newspapers
  • Radio - including community radio
  • TV
• Social media
  • Facebook
  • WhatsApp
  • Twitter
  • YouTube / Telegram / etc.
• Others
  • Word of mouth
41. Example Threat Landscape

• Motivations
  • Geopolitics mostly absent
  • Party politics (internal, inter-party)
• Actors
• Activities
  • Manipulate faith communities
  • Discredit election process
  • Discredit / discourage journalists
  • Attention (more drama)
• Risks / severities
• Sources
  • WhatsApp
  • Blogs
  • Facebook pages
  • Online newspapers
  • Media
• Routes
  • Hijacked narratives
  • WhatsApp to blogs, and vice versa
  • WhatsApp forwarding
  • Facebook to WhatsApp
  • Social media to traditional media
  • Social media to word of mouth
42. Creator Behaviours

● T0007: Create fake Social Media Profiles / Pages / Groups
● T0008: Create fake or imposter news sites
● T0022: Conspiracy narratives
● T0023: Distort facts
● T0052: Tertiary sites amplify news
● T0036: WhatsApp
● T0037: Facebook
● T0038: Twitter

Image: DISARM Foundation
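One benefit of technique codes like these is easy aggregation across incidents. A sketch - the incident data is invented for illustration - tallying which creator behaviours recur across a set of reports:

```python
from collections import Counter

# Sketch: tallying DISARM technique tags across incident reports to see
# which creator behaviours recur. Incident data is invented.
incidents = [
    {"name": "election rumour wave", "techniques": ["T0007", "T0023"]},
    {"name": "imposter news site", "techniques": ["T0008", "T0023", "T0052"]},
    {"name": "conspiracy push", "techniques": ["T0022", "T0023"]},
]

counts = Counter(t for incident in incidents for t in incident["techniques"])
for technique, n in counts.most_common():
    print(technique, n)  # T0023 appears in all three invented incidents
```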
43. Example Response Landscape (Needs / Work / Gaps)

Risk reduction
● Media and influence literacy
● Information landscaping
● Other risk reduction

Monitoring
● Radio, TV, newspapers
● Social media platforms
● Tips

Analysis
● Tier 1 (creates tickets)
● Tier 2 (creates mitigations)
● Tier 3 (creates reports)
● Tier 4 (coordination)

Response
● Messaging: prebunk, debunk, counternarratives, amplification
● Actions: removal, other actions
● Reach
44. Responder Behaviours

● C00009: Educate high profile influencers on best practices
● C00008: Create shared fact-checking database
● C00042: Address truth contained in narratives
● C00030: Develop a compelling counter narrative (truth based)
● C00093: Influencer code of conduct
● C00193: Promotion of a “higher standard of journalism”
● C00073: Inoculate populations through media literacy training
● C00197: Remove suspicious accounts
● C00174: Create a healthier news environment
● C00205: Strong dialogue between the federal government and private sector to encourage better reporting
Image: DISARM Foundation
45. Practical: Resource Allocation

• Tagging needs and groups with AMITT labels
• Building collaboration mechanisms to reduce lost tips and repeated collection
• Designing for future potential surges
• Automating repetitive jobs to reduce load on humans

Image: DISARM Foundation
47. DISARM Foundation: where we’ve been

Credibility Coalition Misinfosec WG
● Slack
● https://medium.com/@credibilitycoalition/misinfosec-framework-99e3bff5935d
● Created AMITT models

CogSecCollab
● https://cogsec-collab.org/
● Maintained AMITT models
● Mentored new organisations
● Ran disinfo & extremism deployments
● Ran CTI League disinformation team
● MITRE branched AM!TT, as SP!CE

DISARM Foundation
● https://www.disarm.foundation/
● https://github.com/disarmfoundation
● Remerged AMITT and SPICE
● Maintains DISARM models

Misinfosec’s original definition: “deliberate promotion… of false, misleading or mis-attributed information; focus on online creation, propagation, consumption of disinformation. We are especially interested in disinformation designed to change beliefs or emotions in a large number of people.”
48. Cognitive Security course

What we’re dealing with
● Introduction: disinformation reports, ethics; researcher risks
● Fundamentals (objects)
● Cogsec risks

Human aspects
● Human system vulnerabilities and patches
● Psychology of influence

Building better models
● Frameworks
● Relational frameworks
● Building landscapes

Investigating incidents
● Setting up an investigation
● Misinformation data analysis
● Disinformation data analysis

Improving our responses
● Disinformation responses
● Monitoring and evaluation
● Games, red teaming and simulations

Where this is heading
● Cogsec as a business
● Future possibilities
49. Sociotechnical Ethical Hacking course

First, do no harm
● Ethics = risk management
● Don’t harm others (harms frameworks)
● Don’t harm yourself (permissions etc.)
● Fix what you break (purple teaming)

It’s systems all the way down
● Infosec = systems (sociotechnical infosec)
● All systems can be broken (with resources)
● All systems have back doors (people, hardware, process, tech etc.)

Psychology is important
● Reverse engineering = understanding someone else’s thoughts
● Social engineering = adapting someone else’s thoughts
● Algorithms think too (adversarial AI)

Be curious about everything
● Curiosity is a hacker’s best friend
● Computers are everywhere (IoT etc.)
● Help is everywhere (how to search, how to ask)

Cognitive security
● Yourself (systems thinking)
● Social media (social engineering)
● Elections (mixed security modes)

Physical security
● Locksports (vulnerabilities)
● Buildings and physical (don’t harm self)

Cyber security
● Web, networks, PCs
● Machine learning (adversarial AI)
● Maps and algorithms (back doors)
● Assembler (microcontrollers)
● Hardware (IoT)
● Radio (AISB etc.)

Systems that move
● Cars (canbuses and bypasses)
● Aerospace (reverse engineering)
● Satellites (remote commands)
● Robotics / automation (don’t harm others)
51. Exercise rules

● You’re limited by your own resources: money, people, time, assets
● You’re allowed to outsource
● You’re aware of consequences from your actions
● You may or may not encounter countermeasures
● Any narrative, behaviour, or asset you can think of is in bounds
● You *will* be asked to fix what you broke before leaving the exercise
52. Suggested actions

Follow the DISARM Red framework:
● Set goals
● Gather information
  ○ Find weaknesses
● Plan activities
● Prepare
  ○ Decide on materials, narratives, behaviours, channels, influencers etc.
● Exploit weaknesses
  ○ Deploy
● Measure (and adjust as needed)
● Leave
  ○ What do you leave in place? What do you keep for the next one? etc.
54. Disinformation as a service

“Doctor Zhivago’s services were priced very specifically, as seen below:
● $15 for an article up to 1,000 characters
● $8 for social media posts and commentary up to 1,000 characters
● $10 for Russian to English translation up to 1,800 characters
● $25 for other language translation up to 2,000 characters
● $1,500 for SEO services to further promote social media posts and traditional media articles, with a time frame of 10 to 15 days

Raskolnikov, on the other hand, had less specific pricing:
● $150 for Facebook and other social media accounts and content
● $200 for LinkedIn accounts and content
● $350–$550 per month for social media marketing
● $45 for an article up to 1,000 characters
● $65 to contact a media source directly to spread material
● $100 per 10 comments for a given article or news story”
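Price lists like Raskolnikov’s make campaign budgets easy to rough out. A sketch using the unit prices quoted above - the order quantities are invented for illustration:

```python
# Rough campaign costing from the Raskolnikov price list quoted above.
# Unit prices come from the slide; the order quantities are invented.
PRICES = {
    "social_media_account": 150,  # Facebook and other accounts + content
    "linkedin_account": 200,
    "article": 45,                # up to 1,000 characters
    "media_contact": 65,          # contact a media source directly
    "comments_batch_of_10": 100,
    "smm_month": 550,             # upper bound of the $350-550/month range
}

order = {
    "social_media_account": 20,
    "article": 10,
    "media_contact": 4,
    "comments_batch_of_10": 15,
    "smm_month": 1,
}

total = sum(PRICES[item] * qty for item, qty in order.items())
print(f"Estimated campaign cost: ${total:,}")  # $5,760 for this invented order
```

This kind of arithmetic is useful in the exercises below for sanity-checking what a $10,000 brief actually buys.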
55. Scenario 1: DaaS

● Player: You run a disinformation-as-a-service company
  ○ It used to be a marketing company, but disinfo pays better
  ○ You’re based in the Philippines
● Brief: run a campaign against a US company
  ○ Your customer is a rival company in Russia
  ○ They’ve paid you $10,000 for this
  ○ And expect results within 2 weeks, because there’s a regulatory summit then
● Resources:
  ○ You have 5 people available
  ○ You have existing assets from other campaigns: social media accounts, fake news websites
● Plan: over to you
  ○ What do you do (narratives, techniques etc.)? What resources do you need and use? What are your measures of success?
56. Scenario 2: Pink Slime

Player: you’re a high-profile individual with a network of fake news sites
● You started with one site selling alternative health treatments
● Then discovered that clicks paid you a lot - especially if you game Google’s algorithms to get the top slot

Brief: adtech exchanges are cracking down on your ad funding
● What else are you going to do to make money?
● How can you maximise this?

Resources:
● You have a team of 40 people total. Many of them are managing social media and content on your sites, but you also have web developers, strategists, and access to DaaS companies
● You control 400 fake news sites. 40 of these haven’t been found by factcheckers yet

Plan: What do you do (narratives, techniques etc.)? What resources do you need and use? What are your measures of success?