The document discusses narratives and disinformation. It defines narratives as the stories people tell themselves about their identity, community, and world, and notes that they are important tools that can be used strategically by those creating disinformation. The document outlines several models for understanding narratives, such as the 4D model, which describes how disinformation dismisses, distorts, distracts, and dismays. It also discusses ways to counter disinformation narratives, such as debunking, injecting truthful information, prebunking to inoculate against false claims, and engaging respectfully with those promoting misleading narratives. Overall, the document provides an overview of how narratives are used in information operations and strategies for analyzing and responding to disinformation campaigns.
The document provides an overview of techniques for analyzing influence and disinformation through social network analysis. It discusses how to collect Twitter data using code, create graphs of user relationships in Gephi, and analyze the graphs to identify influential users and communities. It also describes how to use tools like Botometer to investigate suspicious accounts and explore URLs and hashtags of interest found through the network analysis. Exercises guide using search terms to explore graphs created from Twitter data, identify influential users, and investigate artifacts within the networks.
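The graph analysis described above can be sketched in miniature: given a set of retweet edges, ranking accounts by in-degree is a simple first proxy for influence. All account names below are invented for illustration; real data would come from the collection step.

```python
from collections import Counter

# Hypothetical retweet edges: (retweeter, original_author).
# In practice these would come from collected Twitter data.
edges = [
    ("user_a", "influencer_1"), ("user_b", "influencer_1"),
    ("user_c", "influencer_1"), ("user_b", "influencer_2"),
    ("user_d", "influencer_2"), ("user_a", "user_b"),
]

# Accounts retweeted most often rank highest on in-degree,
# a simple proxy for influence within this network.
in_degree = Counter(author for _, author in edges)
ranked = in_degree.most_common()
for account, score in ranked[:3]:
    print(account, score)
```

Tools like Gephi compute richer measures (betweenness, communities), but an in-degree ranking is often the first pass an analyst makes.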
This document provides guidance on data collection for analyzing disinformation and malign influence. It discusses supported analysis including threat intelligence, intelligence analysis, open-source intelligence (OSINT), and data science. It describes tactical tasks like credibility verification, network detection, and activity analysis. Toolsets are presented for data gathering from social media and the web, data storage and sharing, information sharing, and response. Methods of automated data collection using APIs and scrapers as well as manual data collection through OSINT techniques are covered. Formats for structured data like JSON, XML, and CSV are demonstrated.
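As a small illustration of the structured formats mentioned, the sketch below parses a JSON record (as an API or scraper might return one) and flattens it to CSV for sharing; the field names and values are hypothetical.

```python
import csv
import io
import json

# A hypothetical collected post, as an API or scraper might return it.
raw = '{"id": "123", "author": "example_user", "text": "hello", "retweets": 4}'
record = json.loads(raw)

# Flatten to CSV for analysts who work in spreadsheets.
buf = io.StringIO()
writer = csv.DictWriter(buf, fieldnames=["id", "author", "text", "retweets"])
writer.writeheader()
writer.writerow(record)
print(buf.getvalue())
```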
The document discusses disinformation behaviors and response strategies. It describes building behavior models to understand influence chains and map the disinformation risk landscape. It then covers mitigation behaviors like developing counter-narratives and response behaviors such as prebunking, applying warning labels, and improving coordination between stakeholders. The document advocates developing response plans and counter-techniques to disrupt all phases of the "kill chain" influence model, from planning to evaluation.
The document provides an overview of the threat environment related to disinformation and malign influence. It discusses various threat components including threat actors like nation-states, disinformation entrepreneurs, and disinformation as a service companies. It also covers threat models, narratives, behaviors, tools, scales, and automation used in disinformation campaigns. The document provides examples of threat landscapes and describes components to consider, such as motivations, actors, activities, potential harms, sources, and routes of disinformation. It also discusses business aspects of the disinformation threat including markets for disinformation as a service and adjacent markets.
The document discusses risk assessment for disinformation and malign influence operations. It covers several risk frameworks including FAIR and FullFact. FAIR involves assessing likelihood, exposure, and loss to determine risk levels. FullFact separates risk into 5 levels based on criteria like reach and urgency. The document also discusses calculating potential harms across different domains. Contributor frameworks for analyzing influence operations like ABC and ABCDE are presented. Finally, the document describes conducting a purple team exercise to evaluate risk assessment scenarios.
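The FAIR-style calculation can be illustrated with a toy sketch. This is not the official FAIR methodology, just the likelihood, exposure, and loss factors multiplied in their simplest form, with invented numbers.

```python
# Illustrative only: a toy FAIR-style estimate, not the official FAIR method.
def risk_estimate(likelihood, exposure, loss_per_event):
    """Expected annual loss = how often x how exposed x how costly."""
    return likelihood * exposure * loss_per_event

# Hypothetical scenario: a disinformation narrative expected twice a year,
# reaching 30% of the audience, costing an estimated 50,000 per event.
annual_loss = risk_estimate(likelihood=2.0, exposure=0.3, loss_per_event=50_000)
print(annual_loss)  # 30000.0
```

Comparing such estimates across scenarios is what lets an organization rank risks rather than chase individual artifacts.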
The document discusses cognitive security, which involves applying information security principles to disinformation and influence operations. It defines cognitive security and compares it to cyber security. The document then outlines how to assess the information, harms, and response landscapes to understand the ecosystem and risks related to cognitive security. It proposes adapting frameworks like FAIR to conduct disinformation risk assessments and manage risks rather than artifacts. Finally, it discusses tools that can be used for response, including games, red/purple teaming, and simulations.
This document provides an introduction to a training on disinformation and malign influence hosted by the DISARM Foundation in 2022. It outlines the course structure, objectives, schedule, and project. The course will define cognitive security and explain how it relates to information security. It will cover topics like influence operations, narratives, behaviors, and risk assessment. Participants will complete exercises after each session and work on a course project to analyze an information harm incident of their choice. The goal is to help participants understand and respond to disinformation threats.
The document discusses cognitive security and the activities of the DISARM Foundation. It defines cognitive security as applying information security principles to misinformation, disinformation, and influence operations. It outlines the information, threat, and response landscapes relevant to cognitive security. It then describes the DISARM Foundation's work in communities, collaborations, mentoring, and research over the past year related to cognitive security and disinformation risk assessment.
SJ Terp is an expert in cognitive security who has worked on disinformation response for the European Union, UNDP, and other organizations. They teach cognitive security courses focused on defending against disinformation, and research related topics including risk frameworks and countermeasure strategies. Their work emphasizes adapting information security principles and practices to address high-volume disinformation threats online.
disinformation risk management: leveraging cyber security best practices to s... (Sara-Jayne Terp)
This document discusses leveraging cybersecurity best practices to support cognitive security goals related to disinformation and misinformation. It outlines three layers of security - physical, cyber, and cognitive security. It then provides examples of cognitive security risk assessment and mapping the risk landscape. Next, it discusses working together to mitigate and respond to risks through proposed cognitive security operations centers. Finally, it provides a hypothetical example of conducting a country-level risk assessment and designing a response strategy. The document advocates adapting frameworks and standards from cybersecurity to help conceptualize and coordinate cognitive security challenges and responses.
Cyber threat intelligence (CTI) involves collecting, evaluating, and analyzing cyber threat information using expertise and all-source information to provide insight and understanding of complex cyber situations. CTI can include tactical, operational, and strategic intelligence about security events, indicators of compromise, malware behavior, threat actors, and mapping online threats to geopolitical events over short, medium, and long timeframes. Implementing CTI enables organizations to prepare for and respond to existing and unknown threats through evidence-based knowledge and actionable advice beyond just reactive defense measures.
This document discusses cyber threat intelligence and strategies for defense. It begins with an introduction to cyber threat intelligence and discusses the cyber attack life cycle model from Lockheed Martin. It then addresses questions to consider regarding cyber threats. The document outlines threat intelligence standards and tools like STIX and TAXII, and discusses challenges with SIEM systems. It proposes architectures that incorporate threat intelligence to provide preventive, detective, and fusion capabilities. The presentation concludes with a discussion of data sources and architectures to support cyber threat analysis.
Cyber threat Intelligence and Incident Response (Sandeep Singh, OWASP Delhi)
The broad list of topics includes (but is not limited to):
- What is Threat Intelligence?
- Types of Threat Intelligence
- Intelligence Lifecycle
- Threat Intelligence - Classification & Vendor Landscape
- Threat Intelligence Standards (STIX, TAXII, etc.)
- Open Source Threat Intel Tools
- Incident Response
- Role of Threat Intel in Incident Response
- Bonus Agenda
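As a taste of the standards named in the agenda above, a minimal STIX 2.1 indicator can be assembled by hand. The UUID, timestamps, and domain below are placeholders; real tooling would use a library such as stix2 and exchange objects over TAXII.

```python
import json

# A minimal STIX 2.1 indicator, built by hand to show the shape of the
# standard. All identifiers and values here are placeholders.
indicator = {
    "type": "indicator",
    "spec_version": "2.1",
    "id": "indicator--00000000-0000-4000-8000-000000000000",  # placeholder UUID
    "created": "2022-01-01T00:00:00.000Z",
    "modified": "2022-01-01T00:00:00.000Z",
    "pattern": "[domain-name:value = 'malicious.example.com']",
    "pattern_type": "stix",
    "valid_from": "2022-01-01T00:00:00Z",
}
print(json.dumps(indicator, indent=2))
```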
A talk at Kaspersky Lab's CoLaboratory: Industrial Cybersecurity Meetup #5, with @HeirhabarovT, covering several practical ATT&CK use cases.
Video (in Russian): https://www.youtube.com/watch?v=ulUF9Sw2T7s&t=3078
Many thanks to Teymur for the great technical deep dive.
1) The document discusses frameworks for understanding and responding to disinformation, including the AMITT and ATT&CK frameworks.
2) It describes various types of actors involved in spreading disinformation and proposes establishing Disinformation Security Operations Centers to facilitate collaboration between response efforts.
3) The goals of a CogSec SOC are outlined as informing about ongoing incidents, neutralizing disinformation, preventing future incidents, supporting organizations, and acting as a clearinghouse for incident data.
The vulnerability allows unauthenticated remote code execution through a malformed Content-Type header. It affects vulnerable versions of Apache Struts 2 and can be exploited to gain full system privileges. Remediations include upgrading to a fixed version or changing the multipart parser implementation. The vulnerability was exploited to breach Equifax's systems through a web application, potentially compromising sensitive personal data for over 140 million people.
Cyber Threat Intelligence - It's not just about the feeds (Iain Dickson)
The document discusses cyber threat intelligence and how it can support defensive cyber operations. It defines cyber threat intelligence and outlines different data source types that can be used, including internal incident data and external threat intelligence. It describes the Lockheed Martin Cyber Kill Chain and Diamond Models for structuring threat information and identifying gaps. Actionable threat intelligence requires both internal and external data across the cyber kill chain phases to generate useful context. Threat intelligence can help with incident response, penetration testing, and establishing an intelligence-led defensive posture focused on the most relevant threats.
Measure What Matters: How to Use MITRE ATTACK to do the Right Things in the R... (MITRE ATT&CKcon)
This document discusses how to use the MITRE ATT&CK framework to help quantify cybersecurity risk and prioritize security projects. It outlines some of the challenges in measuring risk impact and likelihood, and how ATT&CK can provide standardized threat data to help estimate risk reduction from security controls. Examples are given showing how ATT&CK tactics and techniques can be mapped to existing security solutions to help compare solutions and demonstrate risk reduction through quantitative metrics. Some limitations are also discussed around needing time to calibrate estimates and the simplifications in the examples.
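The mapping idea can be sketched as a toy coverage check: real ATT&CK technique IDs on one side, hypothetical security controls on the other, with uncovered techniques surfacing as gaps to prioritize.

```python
# Toy example: mapping a handful of ATT&CK technique IDs to the controls
# that claim to address them, then measuring coverage. The technique IDs
# are real ATT&CK identifiers; the control names are hypothetical.
coverage = {
    "T1566": ["email-gateway", "user-training"],  # Phishing
    "T1059": ["edr-agent"],                       # Command and Scripting Interpreter
    "T1071": [],                                  # Application Layer Protocol (gap)
}

covered = [tech for tech, controls in coverage.items() if controls]
gaps = [tech for tech, controls in coverage.items() if not controls]
print(f"{len(covered)}/{len(coverage)} techniques covered")  # 2/3 techniques covered
print("gaps:", gaps)
```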
Threat Hunting Procedures and Measurement Matrice (Vishal Kumar)
This document provides the basics of cyber threat hunting and answers questions such as: What is threat hunting? Why is threat hunting important? How can a threat hunting program be started?
The Diamond Model for Intrusion Analysis - Threat Intelligence (ThreatConnect)
The Diamond Model provides a systematic framework for characterizing organized cyber threats by modeling intrusions as a series of interconnected events. It represents intrusions as a graph of events (diamonds) connected by their core features of adversary, capability, infrastructure, and victim. This allows analysts to consistently track threats over time, correlate related incidents, and infer adversary capabilities. The model also incorporates meta-features to provide additional context for understanding threats at different levels, from singular events to coordinated campaigns. By grouping similar intrusion patterns into activity groups, the Diamond Model enables identifying adversary infrastructure and techniques to better counter evolving threats.
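One way to picture a single Diamond Model event is as a record carrying the four core features plus a phase meta-feature; the values below are invented for illustration.

```python
from dataclasses import dataclass

# A sketch of one Diamond Model event: the four core features, plus a
# kill-chain phase as an example meta-feature. All values are invented.
@dataclass
class DiamondEvent:
    adversary: str       # persona behind the activity
    capability: str      # malware or tool used
    infrastructure: str  # network assets involved
    victim: str          # target of the event
    phase: str           # meta-feature: kill-chain phase

event = DiamondEvent(
    adversary="persona-7",
    capability="credential phisher",
    infrastructure="lookalike-login.example.com",
    victim="example-org",
    phase="delivery",
)
# Correlating events that share features (same infrastructure, same
# capability) is how related incidents get grouped into activity groups.
print(event.phase)
```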
Distributed defense against disinformation: disinformation risk management an... (Sara-Jayne Terp)
This document discusses distributed defense against disinformation through cognitive security operations centers (CogSecCollab). It proposes a multi-pronged approach involving platforms, law enforcement, government, and other actors to address the complex problem of online disinformation. Key aspects include establishing disinformation security operations centers to conduct threat intelligence, incident response, risk mitigation, and enablement activities like training, tools, and processes. The centers would use frameworks to model disinformation campaigns and share indicators across heterogeneous teams in a collaborative manner. Simulations, red teaming, and other techniques are recommended to test defenses and learn from examples.
This document discusses the integration of the PRE-ATT&CK framework with the existing ATT&CK framework. It outlines how PRE-ATT&CK covers adversary activities in the pre-compromise phases of reconnaissance and resource development. Draft tactics and techniques are provided for these phases, including gathering victim information through searches, scans, and spearphishing as well as developing capabilities through acquiring or creating infrastructure, accounts, and tools. The goal is to expand ATT&CK to cover a wider range of the adversary lifecycle.
How to Build a Fraud Detection Solution with Neo4j (Neo4j)
This document discusses how to build a fraud detection solution using Neo4j graph database. It covers typical fraudsters and types of fraud, challenges with traditional fraud detection methods, and how graph databases can provide a more holistic view of relationships to better detect fraud rings and organized crime. The document also outlines a typical fraud detection architecture with Neo4j at the core to power a 360-degree view of transactions in real-time and help detect patterns. It concludes with a demo and Q&A section.
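The shared-identifier pattern behind fraud-ring detection can be sketched in plain Python (the document itself uses Neo4j to do this at scale over live transaction graphs); all accounts and identifiers below are invented.

```python
from collections import defaultdict

# Conceptual sketch of a fraud-ring signal: accounts that share an
# identifier such as a phone number or address form suspicious clusters.
links = [
    ("account_1", "phone:555-0100"), ("account_2", "phone:555-0100"),
    ("account_2", "addr:12 Elm St"), ("account_3", "addr:12 Elm St"),
    ("account_4", "phone:555-0199"),  # shares nothing: not suspicious
]

by_identifier = defaultdict(set)
for account, identifier in links:
    by_identifier[identifier].add(account)

# Any identifier shared by more than one account is a potential ring signal.
rings = {ident: accts for ident, accts in by_identifier.items() if len(accts) > 1}
print(rings)
```

A graph database generalizes this from single shared identifiers to multi-hop relationship chains, which is what makes organized rings visible.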
Role of artificial intelligence in cyber security | The Cyber Security Review (Freelancing)
Emerging technologies put cybersecurity at risk, and even the newest defensive strategies of security professionals fail at some point. Let's look at the latest AI technology in cybersecurity.
Representation Learning on Graphs with Complex Structures
Invited talk, Deep Learning for Graphs and Structured Data Embedding Workshop
WWW2019, San Francisco, May 13, 2019
The document presents a security reference architecture with use cases. It includes sections on user/device security, application security, network security, SASE integration, common identity, converged multi-cloud policy, and securing IoT/OT environments. Diagrams show how different security tools and services fit together across networks, users, applications, and clouds to provide a zero trust architecture.
This document provides an overview of Module 2 from a training on disinformation and malign influence. It discusses managing information, influence, and response environments. It defines key concepts like information landscapes, influence landscapes, and response landscapes. It also provides examples of building these landscapes through desk research, data analysis, and interviews. The document outlines the history of information and how different technological developments have impacted the spread of information and misinformation over time. It discusses important considerations for understanding and mapping the different groups and capabilities involved in monitoring and responding to disinformation.
The document discusses risk measurement in the context of disinformation and malign influence training. It begins by outlining why and what organizations aim to measure, such as the effectiveness and value of cognitive security programs. Existing monitoring and evaluation approaches are examined, including logframes commonly used to track outputs and outcomes. The document then reviews existing cognitive security measures like the UK government's RESIST framework and UNICEF's infodemic metrics. It concludes by providing suggestions for different types of performance and effectiveness metrics that can be used, as well as tools for gathering metrics like data analysis, surveys and chatbots.
We set out to answer these questions and ended up writing “Our Playbook for Digital Crisis Management 3.0.” Born out of our global experience preparing for and responding to brand and corporate crises, it’s now part of our global training program.
We wanted to understand how social media was fundamentally changing the way we approach crisis management. We wanted to marry established crisis practices with the most evolved thinking in social media marketing and social business practices. We also wanted to be highly practical – today’s experts need a suite of apps they can quickly access when a crisis threatens to break.
The document discusses setting up a project to respond to disinformation and malign influence. It covers establishing safety protocols to avoid harming targets, responders, or other stakeholders. It also discusses organizing resources like people, evidence collection, tools, and analysis. The document recommends planning the project using a lifecycle model to identify threats and establish processes for monitoring, detection, response, recovery and learning from lessons. It provides examples like the NIST cybersecurity framework and WHO Europe's full lifecycle risk model.
1. Tips and tricks to remove your own bias from your map Experiences are holistic, personal, and situational, and while you choose a point of view, as a mapmaker it’s up to you to decide which aspects to include and which to leave out.
2. Taking the risk Testing your map, assume it’s wrong, but don’t get stuck in planning.
3. Business Model and Strategic POVs are needed Realize the experts involved in creating the product will probably not be the best experts you will need to deliver the product to the users.
4. Tools to help you Understand common gotchas, and what to avoid.
5. And many more strategies Gain an outside-in view of the individuals' experience with the service.
Our Playbook for Digital Crisis and Issue Management 3.0Ogilvy Consulting
We set out to answer these questions and ended up writing “Our Playbook for Digital Crisis Management 3.0.” Born out of our global experience preparing for and responding to brand and corporate crises, it’s now part of our global training program.
We wanted to understand how social media was fundamentally changing the way we approach crisis management. We wanted to marry established crisis practices with the most evolved thinking in social media marketing and social business practices. We also wanted to be highly practical – today’s experts need a suite of apps they can quickly access when a crisis threatens to break.
How Connected is your Cause? - Fundraising through Fans, Followers & Friends.Dell Social Media
Carly Tatum, International Social Media Manager at Dell, shares how family foundations can use social media to raise awareness and money for their cause. Presented on May 17, 2012, at the Neuroblastoma and Medulloblastoma Translational Research Consortium at Dell Children’s Hospital in Austin, Texas.
Intro to Social Media (WPSP 2012 Summer Institute)Amy Tennison
This document provides an introduction to social media and strategies for developing an effective social media presence. It discusses what social media is, how organizations can leverage social media to support causes and movements, and examples of successful social media campaigns. It then outlines tips for building a social media plan, including starting by listening, identifying relevant platforms, developing compelling content, engaging influencers and fans, and maintaining a personal brand. Resources and tools for implementing each step of a social media strategy are also provided.
FORUM 2013 Social media - a risk management challengeFERMA
This document summarizes a presentation on managing risks related to social media. The presentation covers: opportunities and threats of social media; implications for business models; challenges and opportunities in controlling social media risks; and how to manage those risks. It discusses risks to governments, individuals, and enterprises from social media and provides examples of insurance solutions and best practices for risk management.
This year’s edition highlights five critical trends for communicators in the next 12-18 months. Each is brought to life with real-world examples, implications for businesses and a carefully curated selection of classes from innovative institutions worldwide.
The Study Guide is designed as both a primer and a resource to allow for deep-dives. We hope it piques your curiosity and gives you fluency in new elements of modern media and communications.
How Generative AI can combat disinformation? TitanEurope1
Despite the availability of sophisticated tools and professional fact-checking services, the average citizen remains weaponless against disinformation campaigns. TITAN leverages AI to create a conversational coach that helps users think through a piece of information, question its validity and come to their own decisions around accuracy and truthfulness. This presentation by Antonis Ramfos from ATC, given at EU Regions Week 2023, outlines TITAN's technological approach to support human capabilities. Subscribe for news/updates at www.titanthinking.eu
Designing Social Learning: "Informal" Does Not Mean "Unplanned"Christopher King
Presentation delivered at the GMU 7th Annual Innovations in e-Learning Symposium, Fairfax, VA, 8 June 2011
Where does Social Learning fit on the methodology matrix? That is, when is appropriate to select Social Learning as an instructional strategy? Instructional Designers have a toolbox of learning interactions for all kinds of modalities, situations, topics, audiences, and experience levels; if social learning is more than just a fad (which we think it is), it's time to make room on the ISD shelf for social learning instructional strategies. The session will include short-duration small-group breakouts, some brainstorming within a defined framework, and lots of audience participation. Participants will leave the session with a better understanding of social learning interactions, more resources to help develop social learning events, and greater awareness of their ability to design informal learning.
Misinformation, Disinformation & Hate speech
Tackling Misinformation,
Disinformation, and Hate Speech:
Empowering South Sudanese Youth, a presentation by Emmanuel Bida Thomas a fact-checker at 211 Check a fact-checking and information verification platform in South Sudan dedicated to countering misinformation, disinformation and hate speech.
Communicating the Role of Extension in the Era of MisinformationAmy Cole
The document discusses the challenges of combating misinformation and proposes solutions for extension services to address this problem. It notes that misinformation spreads quickly online and is often simplistic, while factual information can be slow to disseminate. To more effectively counter misinformation, the document recommends that extension services speed up their response time, get ahead of misinformation campaigns through proactive monitoring, and establish an "Extension Content Hub" to control the message and establish their authority on topics. This hub would feature various types of educational content to engage audiences and increase extension's online visibility.
This document outlines the agenda and objectives for a workshop on developing a nonprofit organization's social media strategy. The workshop aims to help participants integrate social media with overall communications plans and address challenges that arise when new technologies are introduced. The agenda includes introductions, presentations on social media strategy principles, small group simulations to develop strategies for different nonprofit scenarios, and time for groups to report out and reflect. The document provides guidance on listening to audiences, engaging stakeholders, identifying influencers, creating and sharing content, and selecting appropriate metrics and platforms to support organizational goals.
Targeted disinformation warfare how and why foreign efforts arearchiejones4
The document discusses targeted disinformation campaigns by foreign actors and provides recommendations for government action. It outlines how disinformation actors create and spread false content on social media to exacerbate societal divisions and undermine democracy. Specifically, it analyzes Russian disinformation tactics used during the Cold War and how they evolved to target liberal democracies using online platforms. The document recommends a four-pronged government response framework to address each stage of the disinformation process by allocating responsibilities, increasing information sharing, making platforms more accountable, and building public resilience against false narratives.
Strategic Digital Marketing (Digital Marketing '15 @ Oulu University)Joni Salminen
The document discusses strategic considerations for digital marketing and channel choice. It begins by defining strategy and outlining some strategic issues in digital marketing, such as choice of channels/platforms and how digital fits in a company's business model. It then discusses factors to consider when choosing marketing channels like customer behavior, profitability, and reach. Approaches to channel choice are compared to Roman military strategies of focusing resources or dividing them. The document also covers adoption of new platforms over time, managing a portfolio of digital marketing channels at the strategic, tactical, and operational levels, and how "operative" marketing can become strategic by being linked to goals and customer interface.
Meet up Milano 14 _ Axpo Italia_ Migration from Mule3 (On-prem) to.pdfFlorence Consulting
Quattordicesimo Meetup di Milano, tenutosi a Milano il 23 Maggio 2024 dalle ore 17:00 alle ore 18:30 in presenza e da remoto.
Abbiamo parlato di come Axpo Italia S.p.A. ha ridotto il technical debt migrando le proprie APIs da Mule 3.9 a Mule 4.4 passando anche da on-premises a CloudHub 1.0.
Gen Z and the marketplaces - let's translate their needsLaura Szabó
The product workshop focused on exploring the requirements of Generation Z in relation to marketplace dynamics. We delved into their specific needs, examined the specifics in their shopping preferences, and analyzed their preferred methods for accessing information and making purchases within a marketplace. Through the study of real-life cases , we tried to gain valuable insights into enhancing the marketplace experience for Generation Z.
The workshop was held on the DMA Conference in Vienna June 2024.
Understanding User Behavior with Google Analytics.pdfSEO Article Boost
Unlocking the full potential of Google Analytics is crucial for understanding and optimizing your website’s performance. This guide dives deep into the essential aspects of Google Analytics, from analyzing traffic sources to understanding user demographics and tracking user engagement.
Traffic Sources Analysis:
Discover where your website traffic originates. By examining the Acquisition section, you can identify whether visitors come from organic search, paid campaigns, direct visits, social media, or referral links. This knowledge helps in refining marketing strategies and optimizing resource allocation.
User Demographics Insights:
Gain a comprehensive view of your audience by exploring demographic data in the Audience section. Understand age, gender, and interests to tailor your marketing strategies effectively. Leverage this information to create personalized content and improve user engagement and conversion rates.
Tracking User Engagement:
Learn how to measure user interaction with your site through key metrics like bounce rate, average session duration, and pages per session. Enhance user experience by analyzing engagement metrics and implementing strategies to keep visitors engaged.
Conversion Rate Optimization:
Understand the importance of conversion rates and how to track them using Google Analytics. Set up Goals, analyze conversion funnels, segment your audience, and employ A/B testing to optimize your website for higher conversions. Utilize ecommerce tracking and multi-channel funnels for a detailed view of your sales performance and marketing channel contributions.
Custom Reports and Dashboards:
Create custom reports and dashboards to visualize and interpret data relevant to your business goals. Use advanced filters, segments, and visualization options to gain deeper insights. Incorporate custom dimensions and metrics for tailored data analysis. Integrate external data sources to enrich your analytics and make well-informed decisions.
This guide is designed to help you harness the power of Google Analytics for making data-driven decisions that enhance website performance and achieve your digital marketing objectives. Whether you are looking to improve SEO, refine your social media strategy, or boost conversion rates, understanding and utilizing Google Analytics is essential for your success.
Discover the benefits of outsourcing SEO to Indiadavidjhones387
"Discover the benefits of outsourcing SEO to India! From cost-effective services and expert professionals to round-the-clock work advantages, learn how your business can achieve digital success with Indian SEO solutions.
6. Disinformation/Malign Influence Training, Disarm Foundation | 2022
Narratives
Narratives are what you believe; stories are how you communicate that.
● Stories don’t have to be true or believed. They can be used as signals
● We tell ourselves stories about
○ who we are and want to be (identity narratives);
○ who we belong to and don’t belong to (in-groups and out-groups);
○ what our world is, and
○ what is happening around us.
Narratives define us as individuals and groups: families, communities, nations.
8. Stories carry meaning, not truth
“The currency of story is not truth, but meaning. That is, what makes a story powerful is not necessarily facts, but how the story creates meaning in the hearts and minds of the listeners. Therefore, the obstacle to convincing people is often not what they don’t yet know but actually what they already do know. In other words, people’s existing assumptions and beliefs can act as narrative filters to prevent them from hearing social change messages.”
— Beautiful Trouble
20. Marketing: Brand Metrics
● Share of Voice - how much this brand is mentioned relative to competing brands.
● Consumer engagement
● Net sentiment (against competitor brands), and co-occurrence of brand with specific positive concepts (“brand health”), across consumers and influencers (and the deltas between these).
● Volume (against competitor brands)
● Message uptake
● Issue tracking - how much this brand is mentioned in conjunction with other narratives, and the themes inside those narratives.
● Relative visibility of brand executives (CEO etc), both relative to each other, and to executives from competing brands.
● Posts, reach, and engagement of mentions by influencers (journalists etc).
  ○ paid media is bought by the brand (paid search optimisation, advertising online, in print, TV, direct mail, affiliate marketing),
  ○ owned media is controlled by the brand (own websites, blogs, mobile apps, social media accounts, brochures, stores),
  ○ earned media is publicity generated through PR (targeting influencers, creating word-of-mouth discussion through engaging in social media, community conversations, blogs, and other user-generated content, brand advocates, and viral marketing in these spaces).
● The simplest sets of brand measures are counts of brand mentions, news mentions, social mentions, and engagement, and the changes in these over time.
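Those simplest counting measures can be sketched with the standard library alone; the mention records below are invented for illustration (real ones would come from a social listening export):

```python
from collections import Counter
from datetime import date

# Hypothetical mention records (date, brand), e.g. from a social listening export
mentions = [
    (date(2022, 1, 5), "BrandA"), (date(2022, 1, 9), "BrandB"),
    (date(2022, 2, 2), "BrandA"), (date(2022, 2, 3), "BrandA"),
    (date(2022, 2, 20), "BrandB"),
]

# Mention volume per (month, brand), for tracking changes over time
volume = Counter((d.strftime("%Y-%m"), b) for d, b in mentions)

def share_of_voice(brand, month):
    """This brand's share of all brand mentions in a given month."""
    total = sum(n for (m, _), n in volume.items() if m == month)
    return volume[(month, brand)] / total if total else 0.0

print(share_of_voice("BrandA", "2022-02"))  # 2 of the 3 February mentions
```

The same grouping extended over longer windows gives the "changes over time" series the slide mentions.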
21. Basics: election example
● What: rumours from people’s uncertainties.
  ○ based on truth and community stories, unknown untruths, community signals, false information, false contexts, existing narratives.
● Who: people looking for information, political operators, countries
● When: before, during, after events
● How: seeded across user-generated content sites, nation state-controlled media, influencers
● Why: Goals include boosting one candidate/party, reducing support for other candidates/parties, creating panic, preventing people from voting, reducing trust in the election system, etc.
23. 4D Model of disinformation campaigns (Nimmo)
Dismiss: if you don’t like what your critics say, insult them.
Distort: if you don’t like the facts, twist them.
Distract: if you’re accused of something, accuse someone else of the same thing.
Dismay: if you don’t like what someone else is planning, try to scare them off.
31. Prebunking
“Inoculates” people against disinfo
FirstDraft lists 3 types:
● fact-based: correcting a specific false claim or narrative
● logic-based: explaining tactics used to manipulate
● source-based: pointing out bad sources of information
Image: https://firstdraftnews.org/articles/a-guide-to-prebunking-a-promising-way-to-inoculate-against-misinformation/
34. Engage: Dennett’s 4 points
1. Attempt to re-express your target’s position so clearly, vividly and fairly that your target says: “Thanks, I wish I’d thought of putting it that way.”
2. List any points of agreement (especially if they are not matters of general or widespread agreement).
3. Mention anything you have learned from your target.
4. Only then are you permitted to say so much as a word of rebuttal or criticism.
37. List of popular narratives and counters
For elections, these include:
● Election date and location changes
● Scandals around politicians
● An election being won by a different candidate to the one declared winner.
Work with your communities to create this list.
● You will likely have top-level narratives with subnarratives, and narratives linked to different stages of an election.
● List each narrative against its prebunk/debunk response.
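One lightweight way to keep such a list is a small table of narratives against their election stage and prebunk/debunk counters. A sketch with invented example entries (real ones come from work with your communities):

```python
import csv
import io

# Invented example entries for illustration only
rows = [
    {"narrative": "Election date has changed", "stage": "pre-election",
     "response": "prebunk", "counter": "Publicise official dates early and often"},
    {"narrative": "A different candidate actually won", "stage": "post-election",
     "response": "debunk", "counter": "Point to certified results and audit trails"},
]

# Write the list as CSV so it can be shared with responders
buf = io.StringIO()
writer = csv.DictWriter(buf, fieldnames=["narrative", "stage", "response", "counter"])
writer.writeheader()
writer.writerows(rows)
print(buf.getvalue())
```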
43. NCFG catechism
Situation:
● 1. situation or opportunity the narrative campaign is being conducted in response to or in preparation for?
Mission:
● 2. desired end-state of the campaign?
Approach:
● 3. relevant target audience?
● 4. attitudes, behaviors, conclusions, and decisions within target audience to be influenced or generated?
● 5. sources and information channels considered most credible by the target audience? Why?
● 6. medium(s) and format(s) target audience most familiar with?
● 7. adversarial campaigns?
● 8. messages target audience already being exposed to?
● 9. methods used to influence target audience?
● 10. resources necessary to carry out this approach?
● 11. desired end-state of campaign? How is end-state measurable?
Milestones:
● 1. metrics used to indicate success of campaign?
● 2. milestones that indicate progress toward desired end-state?
49. Text as “bags of words”
● Sentences:
  ○ “Oh really?”,
  ○ ”Just like with radiation poisoning then.”
● Words: “just”, “like”, “with”, etc
● Character trigrams: “jus”, “ust”, “st ”, “t l”, “ li”, “lik”, “ike”
● Word bigrams: “just like”, “like with”, “with radiation”, “radiation poisoning”
● Stopwords: “with”, “on”, “then”, “what”, “have”, “they”, “been”, “out”, “in”, “the”, etc
Sample raw tweet text:
"RT @KateShemirani: Oh really? Just like with radiation poisoning then. Put enough symptoms down on the diagnosis sheet and you can just abo… RT @Walletwalking1: @Sterling2143 @AAureilus Anyone noticed COVID symptoms are same as 5G exposure. What have they been rolling out in the… RT @ADDiane: Let's tell the people who won't wear masks that it's not for covid, it's for tricking the facial recognition software that dee…"
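The tokenisations above can be reproduced in a few lines with only the standard library; the stopword list here is a small illustrative subset, not a full one:

```python
import re

text = "Just like with radiation poisoning then."
low = text.lower()

# Word tokens
words = re.findall(r"[a-z']+", low)

# Character trigrams (overlapping 3-character windows)
char_trigrams = [low[i:i + 3] for i in range(len(low) - 2)]

# Word bigrams (adjacent word pairs)
word_bigrams = list(zip(words, words[1:]))

# Remove stopwords (illustrative subset)
stopwords = {"with", "on", "then", "the", "in"}
content_words = [w for w in words if w not in stopwords]

print(words)              # ['just', 'like', 'with', 'radiation', 'poisoning', 'then']
print(char_trigrams[:4])  # ['jus', 'ust', 'st ', 't l']
print(content_words)      # ['just', 'like', 'radiation', 'poisoning']
```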
50. Word Importance: Named Entity Recognition
Finds names of people, organisations, locations etc in text
Can use to create social graphs
import spacy

# Load spaCy's small English pipeline
# (install it first with: python -m spacy download en_core_web_sm)
nlp = spacy.load('en_core_web_sm')
sentence = "Bill Gates is selling 5G Covid19 data to Microsoft"
doc = nlp(sentence)
for ent in doc.ents:
    print(ent.text, ent.label_)

Output:
Bill Gates PERSON
5 CARDINAL
Microsoft ORG
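As the slide notes, the extracted entities can seed a social graph: entities appearing in the same document become linked nodes, with edge weights counting co-occurrences. A sketch assuming entity lists have already been produced by an NER pass (the lists here are invented):

```python
from collections import Counter
from itertools import combinations

# Hypothetical per-document entity lists from an NER pass
docs_entities = [
    ["Bill Gates", "Microsoft"],
    ["Bill Gates", "Microsoft", "WHO"],
    ["WHO", "Microsoft"],
]

# Each unordered pair of entities in a document adds 1 to that edge's weight
edges = Counter()
for ents in docs_entities:
    for a, b in combinations(sorted(set(ents)), 2):
        edges[(a, b)] += 1

# The heaviest edges suggest the strongest relationships in the corpus
print(edges.most_common(2))
```

The resulting weighted edge list can be loaded directly into a graph tool such as Gephi for the network analysis covered elsewhere in this training.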
51. Sentiment: (some of) the feels
● Word-based: give (some) words positive/negative scores
○ Use an existing ‘sentiment dictionary’
○ Score some words, use machine learning on the rest
● Document-based: score documents and use machine learning
○ ‘positive’/’negative’ for each sentence
● Semantic/pragmatic: use natural language processing
○ Satire is hard to detect
○ “Nice work bro!”
○ Emoticons are a language too
Sentiment dictionaries:
● Wordstat:
https://provalisresearch.com/products/content-analysis-software/wordstat-dictionary/sentiment-dictionaries/
● Sentiwordnet: https://github.com/aesuli/SentiWordNet
● Emoticon sentiment lexicon: http://people.few.eur.nl/hogenboom/files/EmoticonSentimentLexicon.zip
Example 5-point scale: Very positive / Positive / Neutral / Negative / Very negative
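The word-based approach can be sketched directly; the mini-lexicon below is invented for illustration (real work would load one of the sentiment dictionaries linked above):

```python
# Invented mini-lexicon for illustration; real scores would come from
# a sentiment dictionary such as SentiWordNet
lexicon = {"nice": 1, "great": 2, "work": 0, "awful": -2, "poisoning": -1}

def score(text):
    """Sum per-word sentiment scores; words not in the lexicon count as 0."""
    return sum(lexicon.get(w, 0) for w in text.lower().split())

print(score("Nice work bro!"))       # 1
print(score("awful poisoning"))      # -3
```

Note how the sarcastic “Nice work bro!” still scores positive: word-based scoring cannot see satire, which is exactly the weakness the slide flags.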
58. Json array to Pandas dataframe
Reading Twitter json can be a pain.
● I wrote library clean_twitter.py to convert these to a CSV file
● Or you can use “df = pd.DataFrame([tweetdata]).transpose().reset_index()”
(If you use Spacy, you might get a run error: the fix (“python -m spacy download en” from the terminal window) is at https://stackoverflow.com/questions/54334304/spacy-cant-find-model-en-core-web-sm-on-windows-10-and-python-3-5-3-anacon )
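The flattening those tools perform can also be sketched with the standard library alone: walk the nested tweet dict and emit dotted column names suitable for a CSV row (the sample tweet structure here is simplified):

```python
import json

def flatten(obj, prefix=""):
    """Flatten nested dicts into a single dict with dotted keys."""
    out = {}
    for key, value in obj.items():
        name = f"{prefix}{key}"
        if isinstance(value, dict):
            out.update(flatten(value, name + "."))
        else:
            out[name] = value
    return out

tweet = json.loads(
    '{"id": 1, "user": {"screen_name": "example", "followers_count": 42}, "text": "hello"}'
)
row = flatten(tweet)
print(row)  # {'id': 1, 'user.screen_name': 'example', 'user.followers_count': 42, 'text': 'hello'}
```

pandas offers the same idea ready-made as `pd.json_normalize`.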
59. Topic detection using Latent Dirichlet Analysis
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import LatentDirichletAllocation

no_features = 1000
no_topics = 7

# Build a TF-IDF matrix from the tweet text column
# (dftweets and stop_words come from the earlier preprocessing steps)
tfidf_vectorizer = TfidfVectorizer(max_df=0.95, min_df=2,
                                   max_features=no_features,
                                   stop_words=stop_words)
tfidf = tfidf_vectorizer.fit_transform(dftweets['text'])
tfidf_feature_names = tfidf_vectorizer.get_feature_names()

# Fit LDA on the TF-IDF matrix
lda = LatentDirichletAllocation(n_components=no_topics, max_iter=5,
                                learning_method='online',
                                learning_offset=50., random_state=0).fit(tfidf)

# Print the top 10 words for each topic
no_top_words = 10
for topic_idx, topic in enumerate(lda.components_):
    print("Topic {}:".format(topic_idx))
    print(" ".join([tfidf_feature_names[i]
                    for i in topic.argsort()[:-no_top_words - 1:-1]]))
from https://blog.mlreview.com/topic-modeling-with-scikit-learn-e80d33668730