Artificial Intelligence, Computational Propaganda and What It Means for Marketers


  1. Artificial Intelligence, Computational Propaganda and Cognitive Security. Matt Chessen, Senior Technology Policy Adviser, Office of the Science and Technology Adviser to the Secretary, U.S. Department of State. Opinions expressed are those of the author and do not necessarily represent the views of the Department of State or the US government.
  2. In five years, you won't be able to determine whether you're interacting with a human or a machine online. Eventually, most online speech could be machines talking to machines.
  3. Artificial Intelligence (AI). Intelligence model: AI is an evolving constellation of technologies that enable computers to simulate cognitive processes, such as elements of human thinking; or AI is systems that think like humans, systems that act like humans, systems that think rationally, or systems that act rationally. Discipline/field model: AI is a discipline centered on creating machines that can make decisions well under uncertainty; or AI is a field centered on the problem of designing agents that perceive, learn and act to satisfy some objective.
  4. Artificial Intelligence Waves. Wave 1: Handcrafted knowledge (e.g. TurboTax): good reasoning over narrow domains, but no learning or handling of uncertainty. Wave 2: Statistical learning (e.g. Siri, Alexa; machine learning): good classification and prediction, poor context and reasoning. Wave 3: Contextual adaptation (the future of AI): can build and improve models that explain their decisions. Source: A DARPA Perspective on Artificial Intelligence.
  5. Artificial Intelligence (AI). Machine learning extracts patterns from unlabeled data (unsupervised learning) or efficiently categorizes data according to pre-existing definitions embodied in a labeled data set (supervised learning). Deep learning is a type of machine learning that uses additional, hierarchical layers of processing (analogous to human neuron structure) and large data sets to model high-level abstractions and recognize patterns in complex data. Narrow AI is an expert system at a specific task, such as image recognition. Artificial General Intelligence (AGI) is an AI that matches human intelligence. Artificial Superintelligence (ASI) is an AI that exceeds human capabilities. We are not talking about AGI or ASI here; they are still science fiction.
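The supervised/unsupervised distinction on slide 5 can be made concrete with a toy sketch. This is a minimal illustration in pure Python, with invented data points and labels; real systems use far larger data sets and libraries rather than hand-rolled loops.

```python
# Toy illustration of supervised vs. unsupervised learning.
# All data points and labels below are invented for the example.

def dist2(a, b):
    """Squared Euclidean distance between two 2-D points."""
    return (a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2

# Supervised learning: pre-existing definitions live in a labeled data set.
labeled = [((1.0, 1.2), "cat"), ((0.9, 1.1), "cat"),
           ((4.0, 3.8), "dog"), ((4.2, 4.1), "dog")]

def classify(point):
    """1-nearest-neighbour: copy the label of the closest labeled example."""
    return min(labeled, key=lambda ex: dist2(ex[0], point))[1]

# Unsupervised learning: no labels at all, just group similar points.
def two_means(points, iters=10):
    """A minimal 2-means clustering loop (crude centroid initialization)."""
    def mean(pts):
        return (sum(p[0] for p in pts) / len(pts),
                sum(p[1] for p in pts) / len(pts))
    c0, c1 = points[0], points[-1]
    g0, g1 = [], []
    for _ in range(iters):
        g0 = [p for p in points if dist2(p, c0) <= dist2(p, c1)]
        g1 = [p for p in points if dist2(p, c0) > dist2(p, c1)]
        c0 = mean(g0) if g0 else c0
        c1 = mean(g1) if g1 else c1
    return g0, g1

print(classify((1.1, 1.0)))                # label copied from training data
print(two_means([p for p, _ in labeled]))  # two groups found without labels
```

The classifier can only ever reproduce categories present in its labels; the clustering loop discovers the same grouping with no labels at all, which is the core of the distinction the slide draws.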
  6. Cognitive Security (COGSEC): defending against the unauthorized or illegal use of information and communication technologies (ICTs) for malicious psychological effects.
  7. Computational Propaganda. What is it? ➔ Machine-driven communications designed to manipulate public opinion ➔ Political bots: the automation of political engagement designed to manipulate public opinion. What are the key technologies? ● Social media platforms ● Big data analytics ● Autonomous agents or bots ● Emerging artificial intelligence tools
  8. Here come the bots. Types of computational propaganda bots: ● Propaganda bots: manipulate through disinformation and misinformation ● Follower bots: fake the appearance of broad support (astroturfing) ● Roadblock bots: undermine speech through intimidation or diversions. Common traits: ● High volume, high speed ● Triggered or continuous ● Imitate people, or just use the default "evil egg" avatar ● Operate across a breadth of platforms ● Cheap and easy to deploy
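The "high volume, high speed" trait on slide 8 is also the simplest detection signal. The following is a hypothetical heuristic sketch, not a production detector: the account names, data, and threshold are all invented for illustration, and real bot detection combines many weaker signals.

```python
from collections import Counter

# Hypothetical heuristic: accounts posting far faster than a human could
# plausibly sustain get flagged for review. Threshold is illustrative.
MAX_POSTS_PER_HOUR = 60

def flag_suspected_bots(posts):
    """posts: list of (account, hour_bucket) tuples.
    Returns the set of accounts exceeding the hourly rate threshold."""
    counts = Counter(posts)  # posts per (account, hour) pair
    return {acct for (acct, hour), n in counts.items()
            if n > MAX_POSTS_PER_HOUR}

# Made-up usage: one account posts 100 times in a single hour.
posts = [("@human", 0)] * 5 + [("@suspect", 0)] * 100
print(flag_suspected_bots(posts))
```

A rate threshold alone misses low-volume bots and flags some enthusiastic humans, which is why the deck's later slides argue for dedicated attribution tools rather than single heuristics.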
  9. MADCOMs: AI-enhanced, machine-driven communications used for computational propaganda.
  10. Propagandists + Chatbots + Machine Learning = manipulative bots that can engage in human-like conversations.
  11. AI Chatbots + Dynamically Generated Content + Debating Technologies = bots that can create distracting, persuasive or intimidating speech in real time.
  12. AI Chatbots + Affective Computing = bots that can accurately detect our emotions and simulate emotions through voice, speech or text, optimizing tone for maximum impact.
  13. Big Data Analytics + Psychometric Profiling + Machine Learning = highly personalized propaganda, based on your personality, gender, political preferences, sexual orientation, income, and religion, that becomes more effective over time.
  14. Video Modification + Voice Conversion Technologies = enhanced disinformation that creates a pliable reality. (Lyrebird demo)
  15. Machine Speed and Availability + Digital Economies of Scale = propaganda bots that could be deployed across millions or billions of accounts, are available 24/7/365, and can react instantly to events.
  16. What does this add up to? Overwhelming numbers of propaganda bots that converse like humans, sense and manipulate emotions, generate personalized, persuasive content dynamically, learn constantly, and react to events at machine speed.
  17. The MADCOM future: AI chatbots may not know they are talking to other AI chatbots. Machines will run propaganda against other machines. The Internet could be overrun by machines talking to machines.
  18. Behavior Modification and Gamified Obedience
  19. Facebook Emotional Contagion ● A 2014 unauthorized human experiment on 689,000 users ● Two tests: ○ Reduced users' exposure to positive emotional content ■ Users then posted less positive content ○ Reduced users' exposure to negative emotional content ■ Users then posted less negative content. "Emotions expressed by friends, via online social networks, influence our own moods, constituting, to our knowledge, the first experimental evidence for massive-scale emotional contagion via social networks." What's to keep Facebook from manipulating your feed to keep you happy, clicking, and spending more time online?
  20. What if your credit score didn't just measure your financial reliability, but also measured how good a citizen you are?
  21. Sesame Credit / Social Credit Score ● Sesame Credit is run by Ant Financial, an Alibaba affiliate, in partnership with the Chinese government ● A system for generating social ratings for businesses and individuals ● Voluntary now; planned to become mandatory by 2020 ● Works by ranking people based on their loyalty to the Chinese government and to Chinese brands ○ Rewards: easier approval for loans; access to better jobs; priority during bureaucratic paperwork; faster travel authorizations ○ Punishments: lower Internet speeds; denied access to job offers; ineligibility for loans; slower bureaucratic paperwork ● Your score is also affected by the people in your social network
  22. Black Mirror: Nosedive
  23. Black Mirror: Nosedive (continued)
  24. Gamified Obedience. The system relies on users to pressure others in their network to raise their scores, or risk being ostracized.
  25. Behavior Modification + Gamified Obedience. What happens when authoritarian governments use online behavior modification to generate obedience, subtly altering the online environment for low-score citizens?
  26. Implications ● All of these tools are 'dual-use' and can be used in marketing or propaganda ○ Don't be evil ● Machine-generated speech will increase in sophistication, volume, and persuasiveness ○ Machines will be vying for human attention ○ Humans won't be able to compete without their own AI-enabled tools
  27. Implications ● Unscrupulous competitors may use these tools to undermine brands ○ Should laws require that all paid advertising be attributed? ● How will you know such an attack is underway? ○ Attribution tools and services will be important ● The volume of machine speech will disrupt traditional social media analytics
  28. Implications ● Cybersecurity and COGSEC are crucial ○ Subtle changes to content can have serious negative consequences ● Machines will shape culture through music, memes, and other dynamic content ○ The machines will program us, through culture ● Data is the new oil/electricity/everything ○ Collection is less important than use
  29. "World War III will be a guerrilla information war, with no divisions between military and civilian participation." —Marshall McLuhan
  30. Weaponized Narrative: efforts outside of, but often complementary to, traditional military operations that seek to undermine an opponent's civilization, identity, and will by using information and ideas to generate complexity, confusion, and political and social schisms. Source: ASU Weaponized Narrative Initiative.
  31. Weaponized Narrative. The Enlightenment was based on a search for truth through reason; the US Constitution is an Enlightenment document, and the Founding Fathers were Enlightenment thinkers. Adversaries use weaponized narrative to attack Enlightenment thinking by undermining expertise, critical thought, and belief in objective truth. Fight weaponized narratives by focusing on the stories and themes that bind Americans together.
  32. Recommended Reading. AI and computational propaganda: ● Can Public Diplomacy Survive the Internet? (section on Digital's Dark Side), The Advisory Commission on Public Diplomacy ● The MADCOM Future (Sept 2017, the Atlantic Council), Matt Chessen ● How Russia Hacks Our Democracy, Matt Chessen ● Computational Propaganda Worldwide. Cognitive security and weaponized narrative: ● The Weaponization of Information: the Need for Cognitive Security, Rand Waltzman ● Weaponized Narrative is the New Battlespace, Brad Allenby ● Weaponized Narrative White Paper, ASU Weaponized Narrative Initiative ● The Rise of the Weaponized AI Propaganda Machine, Scout.AI. Russian propaganda methods: ● The Firehose of Falsehood, Chris Paul and Miriam Matthews ● The Menace of Unreality, Peter Pomerantsev and Michael Weiss
  33. When reality becomes fully pliable, how will people even know they're being manipulated? Hyper Reality by Keiichi Matsuda
  34. Artificial Intelligence, Computational Propaganda and Cognitive Security. Matt Chessen, Senior Technology Policy Adviser, Office of the Science and Technology Adviser to the Secretary, U.S. Department of State. Opinions expressed are those of the author and do not necessarily represent the views of the Department of State or the US government.
  35. Backup slides
  36. Are cybersecurity models a good analogue for countering these threats? ● Real-time threat tracking ● Information exchange protocols and standards ● 24/7 incident response ● Collaborative threat mitigation. There is no department or agency of the US government tasked with protecting the US public from foreign computational propaganda or weaponized narratives.
  37. How can the public sort through propaganda, disinformation and manufactured complexity? Is a collective intelligence system the answer? ● Bot detection tools ● Disinformation and propaganda campaign attribution tools ● Exposing malign actors ● Fact-checking articles ● Disinformation site blacklists ● Default browser features ● AI counter-disinformation tools ● Crowdsourced ratings of information ○ Democratic: anyone can contribute ○ Reputation scores to deter trolls ● All tied together through open-source information-sharing standards
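One way to read the "crowdsourced ratings plus reputation scores to deter trolls" idea on slide 37 is as a reputation-weighted average. The sketch below is a minimal illustration with invented numbers; a real collective intelligence system would need Sybil resistance and a mechanism for earning reputation, neither of which the deck specifies.

```python
def weighted_rating(votes):
    """votes: list of (rating in [0, 1], voter reputation in [0, 1]).
    High-reputation voters count more; accounts with reputation near
    zero (e.g. suspected trolls) barely move the aggregate score."""
    total = sum(rep for _, rep in votes)
    if total == 0:
        return None  # no credible votes at all
    return sum(r * rep for r, rep in votes) / total

# Made-up example: three credible raters mark an article as dubious
# while a swarm of near-zero-reputation accounts tries to inflate it.
votes = [(0.2, 0.9), (0.3, 0.8), (0.1, 0.95)] + [(1.0, 0.01)] * 50
print(round(weighted_rating(votes), 2))
```

Despite being outnumbered 50 to 3, the credible raters dominate the result, which is the deterrence property the slide's "reputation scores to deter trolls" bullet is after.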