
JD McCreary Presentation to Williams Foundation, March 22, 2018


JD McCreary, Chief of Disruptive Technology Programs at the Georgia Tech Research Institute, focuses on the centrality of effective decision-making in high-intensity conflict and on leveraging technologies such as artificial intelligence to make those decisions far more effectively.



  1. Combat Decision-Making in High Intensity Conflict
     Williams Foundation, March 2018
     JD McCreary, Chief, Disruptive Technology Programs, Georgia Tech Research Institute
  2. “Engage with what they expect; occupying their minds while you wait for that which they cannot anticipate.” – Sun Tzu
     • Decision superiority is a key element of (future) conflict
     • A complex decision space will require focus on technologies that mitigate limitations on human sensory and cognitive capacity
     • Must develop trust in AI and its reliability through exposure and transparency
  3. AI Arms Race
     • “Artificial intelligence is the future, not only for Russia, but for all humankind. Whoever becomes the leader in this sphere will become the ruler of the world.” – Vladimir Putin, Sep 2017
     • Artificial intelligence will be able to replace a soldier on the battlefield and a pilot in an aircraft cockpit – Viktor Bondarev (former commander of Russia’s Aerospace Force and Chairman of the Federation Council Defense and Security Committee), Nov 2017
     • China’s State Council issued a plan to lead the world in AI by 2030: the “Next Generation Artificial Intelligence Development Plan,” Jul 2017
       • Highlights an approach of military-civil fusion (or civil-military integration) to ensure that advances in AI can be rapidly leveraged for national defense
       • Pursues advances in big data, human-machine hybrid intelligence, swarm intelligence, and automated decision-making, along with autonomous unmanned systems and intelligent robotics
  4. AI-assisted decision-making: platforms to personal assistants, tactical to strategic
     • Russia builds lethal autonomous weapons systems, platforms, swarms
     • China plans to use AI to assist submarine commanders
     • U.S.: Data to Decision, Fleet Tactical Grid, Multi-Domain Command and Control
     • Individuals; sensor/weapon/platform (manned-unmanned teaming); force design/employment
     • Time scales and scope of complexity: human vs. machine
     • Continuous learning, transfer of knowledge (mission, ready room, deployment)
     • Integrated Tasking Order cycle (72 hrs), time-sensitive targeting, kinetic or non-kinetic
     • Grey Zone, Hybrid Warfare
     • Deception, unconstrained solutions (not all playing the same game)
     • Decision superiority preparedness (requirements)
  5. Operational AI-assisted decision-making
     • Cognitive, netted, distributed force (manned/unmanned systems, BM, MDC2)
     • Algorithmic or prototype warfare (speed of software development)
     • Human judgment will remain essential, but decision allocation between humans and machines will be shifting
     • Man-machine teaming (Centaur or R2D2 at the individual, platform, AOC/MOC level)
     • Human expertise/experience, workload, pace, complexity
     • AI-assisted learning, COA generation, decision-making
     • Trust, transparency, training
     • Live, Virtual, Constructive (LVC)
       • AI assists blue force with complexity/scale: training, T&E, ops rehearsal, force/TTP design/validation
       • AI augments/supplants red/white force; dynamic adaptation of force behaviors and environment
     • Extend speed of command, decision superiority
     • Computer, network, (sensor) data … knowledge, learning, decisions
  6. Discussion
  7. OODA vs OODA
     [Diagram: two opposing Observe–Orient–Decide–Act loops]
     • Traditional investment
     • Decision superiority
     • Man-machine teaming
     • AI, visualization
     • Big data, COAs
     • Keep the adversary in the orient phase
     • Introduce “false” decisions
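The advantage the slide is pointing at can be sketched numerically. This is a minimal toy model, not from the presentation: the cycle times and horizon are hypothetical numbers chosen only to show that a faster OODA loop completes more full cycles in the same time, which is what lets one side act repeatedly inside the other's decision cycle.

```python
# Toy comparison of two decision loops over the same time horizon.
# All numbers are hypothetical illustrations, not measured values.

def completed_cycles(cycle_time_s: float, horizon_s: float) -> int:
    """Number of full Observe-Orient-Decide-Act cycles finished in the horizon."""
    return int(horizon_s // cycle_time_s)

# Assumed cycle times: an AI-assisted loop vs. a slower manual loop.
blue_cycles = completed_cycles(cycle_time_s=5.0, horizon_s=60.0)
red_cycles = completed_cycles(cycle_time_s=20.0, horizon_s=60.0)

# The faster side acts several times between each adversary decision,
# which is the "keep them in the orient phase" effect on the slide.
assert blue_cycles > red_cycles
```

The point of the sketch is only the ratio: every action the slower side takes is based on observations that are several blue cycles stale.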
  8. Force Design
     • Outpace adversary decision speed
     • Enhance battlespace awareness, decision speed, and dynamic synchronized actions
     • Computer-aided data fusion and information analysis present warfighting options … enabling decision speed
     • Automated, intelligent analytics and AI designed to improve Commander/Warfighter resource allocation decisions … providing COA analysis, planning, and automated synchronization as a means of creating effective human-machine teaming
  9. The Intelligent Digital Mesh
     • Intelligent: AI is seeping into virtually every technology; with a defined, well-scoped focus it can enable more dynamic, flexible, and potentially autonomous systems.
     • Digital: Blending the virtual and real worlds to create an immersive, digitally enhanced and connected environment.
     • Mesh: The connections between an expanding set of people, processes, devices, content, and services to deliver operational outcomes.
     • Data to Decision, Fleet Tactical Grid, Multi-Domain C2
     • Sensors, networks, data, knowledge, decisions
  10. Decision support technology
     • “The current wave of progress and enthusiasm for AI has been driven by three mutually reinforcing factors: the availability of big data, which provided raw material for dramatically improved machine learning approaches and algorithms, which in turn relied on the capabilities of more powerful computers.” – Dec 2016 White House report
     • Task vs. data representation
     • Supervised (classification), unsupervised (pattern analysis)
     • Deep learning neural networks
     • Value, policy optimization
     • Rational reasoning (IBM, Google)
     • Memory augmented (deliberative; Differentiable Neural Computer)
     • Imitation learning (simulated data)
     • Reinforcement learning (DeepMind – AlphaGo Zero; OpenAI – Dota 2, self-play)
     • Transfer learning
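Of the techniques listed, reinforcement learning (the approach behind AlphaGo Zero and OpenAI's Dota 2 agents) can be sketched in a few lines. This is a minimal tabular Q-learning example on a toy five-state corridor of my own invention, not anything from the talk: the agent learns a value for each state-action pair purely by trial and error, with reward only at the goal.

```python
import random

# Toy environment (hypothetical): a 5-state corridor; stepping right from
# the last interior state reaches the goal, yields reward 1, and ends the episode.
N_STATES = 5
ACTIONS = (1, -1)  # step right, step left

def step(state, action):
    nxt = max(0, state + action)  # left wall is reflecting
    if nxt >= N_STATES - 1:
        return N_STATES - 1, 1.0, True   # reached the goal
    return nxt, 0.0, False

def q_learning(episodes=500, alpha=0.5, gamma=0.9, eps=0.1, seed=0):
    """Tabular Q-learning with epsilon-greedy exploration."""
    rng = random.Random(seed)
    q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
    for _ in range(episodes):
        s, done = 0, False
        while not done:
            # Explore with probability eps, otherwise act greedily.
            if rng.random() < eps:
                a = rng.choice(ACTIONS)
            else:
                a = max(ACTIONS, key=lambda act: q[(s, act)])
            s2, r, done = step(s, a)
            target = r if done else r + gamma * max(q[(s2, b)] for b in ACTIONS)
            q[(s, a)] += alpha * (target - q[(s, a)])
            s = s2
    return q

q = q_learning()
# After training, "right" is the greedy action in every non-terminal state.
assert all(q[(s, 1)] > q[(s, -1)] for s in range(N_STATES - 1))
```

The same update rule, scaled up with deep networks in place of the table and self-play in place of the fixed environment, is the family of methods the slide names.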
  11. User Interface (UI)
     • Ender’s Game, Star Trek, Interstellar, Minority Report, Watson, Alexa, chat bots, HoloLens
     • Computer’s job: interpret the user’s instructions, process the data, present results to the user.
     • Human’s job: provide Commander’s intent; understand the information presented in the context of intended goals or the surrounding environment.
     • Moving away from designing Graphical User Interfaces (GUIs), which require the user’s full attention, and toward designing less obtrusive interaction:
       • proactively initiate conversation about goals
       • limit interaction to a few natural questions and responses
       • factor in a large number of observations and assumptions
       • present the results
     • iPad vs. laptop, mobile apps
     • Conversational, visual, tactile/haptic
     • Personalized UI/AI
  12.
     • Develop 4 slides for APC
       • 2 tech
         • Computing, edge, fog, neural nets, AI algorithms, learning, transfer, DNC (inflective)
         • Interface/trust, AR/VR/MR, conversational, haptic, LVC (transition to ops)
       • 2 ops value
         • Tactical/Operational (individual/collective, platforms, ops centers)
         • Force (execution/design/experiment)
     • “Static” decision-making (pre-conflict) vs. “dynamic” (real-time)
     • 5th gen – more valuable as “decision makers”? Persistence of combat loads (kinetic vs. non-kinetic; persistence of info)
     • Centaur approach
     • Russia – hybrid warfare; China – informationized warfare (info dom)
     • Even if you are not a fan of cyber/EW, you must respect the potential of manned/unmanned teaming
     • Algorithmic or prototype warfare (speed of software development)
     • Human judgment will remain essential, but the line of decision allocation between humans and machines will be shifting in coming years.
  13. AI beats pilot
     • The secret to ALPHA’s superhuman flying skills is a decision-making system called a genetic fuzzy tree, a subtype of fuzzy logic algorithms. The system approaches complex problems much like a human would, says Ernest, breaking the larger task into smaller subtasks, which include high-level tactics, firing, evasion, and defensiveness. By considering only the most relevant variables, it can make complex decisions with extreme speed. As a result, the AI can calculate the best maneuvers in a complex, dynamic environment over 250 times faster than its human opponent can blink.
     • ALPHA is currently viewed as a research tool for manned and unmanned teaming in a simulation environment.
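The fuzzy-logic core of a system like ALPHA can be illustrated in miniature. This sketch is not ALPHA's actual genetic fuzzy tree; the membership functions, rule set, and thresholds are all hypothetical. It shows the basic mechanism: crisp inputs are mapped to graded memberships, rules combine them with min (fuzzy AND), and the rule outputs are aggregated into one decision value, all with a handful of cheap arithmetic operations, which is where the speed comes from.

```python
# Hypothetical membership functions for a single "evasion urgency" decision.

def close(dist_km):
    """Membership in 'threat is close': 1 at 0 km, falling to 0 at 20 km."""
    return max(0.0, min(1.0, 1.0 - dist_km / 20.0))

def fast_closing(rate_ms):
    """Membership in 'closing fast': 0 at 0 m/s, rising to 1 at 300 m/s."""
    return max(0.0, min(1.0, rate_ms / 300.0))

def evasion_urgency(dist_km, rate_ms):
    # Rule 1: IF close AND closing fast THEN urgency is high (consequent 1.0)
    r1 = min(close(dist_km), fast_closing(rate_ms))
    # Rule 2: IF close THEN urgency is at least medium (consequent 0.5)
    r2 = close(dist_km)
    # Each rule scales its consequent by its firing strength;
    # aggregate by taking the strongest recommendation.
    return max(r1 * 1.0, r2 * 0.5)

# A near, fast-closing threat demands far more evasion than a distant, slow one.
assert evasion_urgency(2.0, 250.0) > evasion_urgency(18.0, 50.0)
```

A genetic fuzzy tree organizes many such small rule blocks into a cascade (tactics feeding firing, evasion, defensiveness) and uses a genetic algorithm to tune the membership functions and rules, but each evaluation is still just this kind of lightweight arithmetic.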
  14.
     • HMI – NLP, perception, visualization, audio, verbal, conversational, tactile
     • Decisions – autonomous machines, COA generation
     • The software agent is an intelligent nonhuman agent (IA) that has clear objectives, is able to monitor its environment, and is autonomous in the sense that it can generate courses of action (COAs) to obtain its objectives
     • Mixed-initiative autonomy, wherein humans and IAs share decision-making: the IA has a degree of autonomy and communicates with its human supervisor. The focus of this report is on the decision-making relationship between IAs and their human supervisor(s) in future battlespaces that are dynamic, dangerous, and complex
     • Trust, transparency
     • Industry, gaming
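The mixed-initiative pattern described on this slide can be sketched as a small program. Every class, method, and parameter name here is a hypothetical illustration, not a real system: the IA observes its environment and generates ranked COAs toward its objective, while the human supervisor retains final selection authority, including the right to pick a lower-ranked option.

```python
from dataclasses import dataclass

@dataclass
class CourseOfAction:
    name: str
    expected_value: float  # the agent's own scoring of this option

class IntelligentAgent:
    """Toy IA: monitors observations and generates ranked COAs."""

    def __init__(self, objective: str):
        self.objective = objective

    def generate_coas(self, observations: dict) -> list:
        threat = observations.get("threat_level", 0.0)
        # Hypothetical scoring: higher threat favors repositioning or engaging.
        candidates = [
            CourseOfAction("hold_position", 1.0 - threat),
            CourseOfAction("reposition", threat * 0.8),
            CourseOfAction("engage", threat * 0.6),
        ]
        return sorted(candidates, key=lambda c: c.expected_value, reverse=True)

def human_select(coas, approved_names):
    """Supervisor takes the highest-ranked COA they are willing to approve."""
    for coa in coas:
        if coa.name in approved_names:
            return coa
    return None  # nothing approved: the agent must not act

agent = IntelligentAgent(objective="screen the task group")
coas = agent.generate_coas({"threat_level": 0.9})
assert coas[0].name == "reposition"  # the agent's top-ranked option
choice = human_select(coas, approved_names={"hold_position"})
assert choice.name == "hold_position"  # supervisor overrode the agent's ranking
```

The division of labor is the point: the IA proposes and ranks, the human disposes, and the shared COA list is the communication channel between them.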
  15.
     • Operator understanding of the IA’s decision process has three levels: (1) operator perception of the IA’s actions and plans, (2) comprehension of the IA’s reasoning process, and (3) understanding of the IA’s predicted outcomes.
     • Whatever the IA’s knowledge representation, it must translate to formats that humans understand, to help foster a common language between the IA and its operator.
     • Three gradations of language processing: command processing, controlled language processing, and open-ended NLP.
  16. Decisions
     • We have not been making all the wrong decisions; AI just presents us with available options based on more data and with less instinct than we are accustomed to.
     • Assessing situations without perfect information is not just a human trait but a daily necessity, and something we may not realize is constantly happening. From negotiations to ranking new business opportunities, we make informed decisions toward our desired outcomes. Moving the needle upward involves having more relevant details from more relevant sources.
  17. Discussion of the Common Operational Picture (COP)
     • Crew- and user-specific task-related information, rather than commonality.
       • Implications: the system needs to understand each user’s task context and what data would be relevant to those tasks, and to present the relevant data elements in a way that makes sense for the task(s).
     • Data for decision support tools and user views of networks and effects, rather than a single (picture) portrayal of platforms.
       • Implications: we need to agree on open standards for data models, and protocols for exchanging that data, not just a standard set of icons.
     • Prioritising near over distant, and aggregating distant things, rather than treating all information the same way. This is in accordance with Tobler’s law: “everything is related to everything else, but near things are more related than distant things.” In geospatial terms, these distances are in metres; in network terms, they are in milliseconds.
       • Implications: we need a system architecture that provides decentralised fusion and aggregation, to support decentralised execution and to manage throughput over a congested or contested network connection when things are getting emotional. This requires adaptive rules for filtering; policy for when and how aggregation can support security outcomes for “special” capabilities; and detailed understanding of how the filtering and aggregation affect decision support tools.
     • Alerting of exceptions, rather than reporting ‘as expected’.
       • Implications: system definition of what is exceptional given the operational context, and the level of alerting that is appropriate given the user needs, network capability, and the other alerts.
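The near/far prioritisation above can be sketched as a filtering rule. The threshold, track identifiers, and view structure here are hypothetical, invented only to illustrate the mechanism: tracks inside the threshold are reported individually, while distant tracks collapse into one aggregate element, which is what keeps throughput manageable on a congested link.

```python
# Hypothetical Tobler-style view builder: report near tracks individually,
# aggregate distant ones into a single summary element.

def build_user_view(tracks, near_threshold_km=50.0):
    """tracks: list of (track_id, distance_km) pairs relative to the user."""
    near, far = [], []
    for track_id, dist_km in tracks:
        (near if dist_km <= near_threshold_km else far).append((track_id, dist_km))
    view = {"individual": near}
    if far:
        # Distant contacts are summarized, not itemized.
        view["aggregate"] = {
            "count": len(far),
            "nearest_km": min(d for _, d in far),
        }
    return view

tracks = [("T1", 10.0), ("T2", 45.0), ("T3", 120.0), ("T4", 300.0)]
view = build_user_view(tracks)
assert len(view["individual"]) == 2
assert view["aggregate"]["count"] == 2
```

In a real architecture the threshold would be adaptive (tightening as the link degrades) and the aggregation policy would have to respect the security and decision-support constraints the slide lists, but the shape of the rule is the same.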
  18. Interactions
     • On the Rise: Human Augmentation, Ambient Experiences, Spatial Computing, Gait Analysis, Volumetric Displays, Brain-Computer Interface, Electrovibration, VPA-Enabled Wireless Speakers
     • At the Peak: Smart Robots, Transparent Displays, Gesture Control Devices, Emotion Detection/Recognition, Virtual Assistants, Machine Learning, Muscle-Computer Interface, NLP, Intelligent Apps
     • Sliding Into the Trough: Mixed Reality, Gaze Control, Speech-to-Speech Translation, Flexible Display, Smart Fabrics, Augmented Reality, Head-Mounted Displays, Sensor Fusion
     • Climbing the Slope: Face Recognition, Virtual Reality, Biometric Authentication Methods, Ambient Displays
     • Entering the Plateau: Speech Recognition, Handwriting Recognition
  19. Trust
     • Data scientists want to build models with high accuracy, but they also want to understand the workings of the model so that they can communicate their findings to their target audience.
     • Users want to know why a model gives a certain prediction. They want to know how they will be affected by those decisions and whether they are being treated fairly. They need sufficient information to decide whether to opt out.
     • Regulators and lawmakers want to protect people and make sure the decisions made by models are fair and transparent.
  20.
     • Department of Defense (DOD) Directive 3000.09 (2012) mandates safeguards for autonomous weapons, as shown in the following:
       • “Semi-autonomous weapon systems that are onboard or integrated with unmanned platforms must be designed such that, in the event of degraded or lost communications, the system does not autonomously select and engage individual targets or specific target groups that have not been previously selected by an authorized human operator.”
       • “The system design . . . addresses and minimizes the probability or consequences of failures that could lead to unintended engagements or to loss of control of the system.”
       • “In order for operators to make informed and appropriate decisions in engaging targets, the interface between people and machines for autonomous and semi-autonomous weapon systems shall: (a) be readily understandable to trained operators, (b) provide traceable feedback on system status, and (c) provide clear procedures for trained operators to activate and deactivate system functions.”
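The first quoted safeguard is, at heart, a simple predicate, and writing it out makes the design constraint concrete. This is a hypothetical illustration of the quoted rule, not any real weapon-control API: with communications intact, a human authorizes each engagement; with communications degraded or lost, only targets previously selected by an authorized operator may be engaged.

```python
# Hypothetical engagement-permission check illustrating the DoDD 3000.09
# degraded-communications safeguard quoted above.

def may_engage(target_id, comms_ok, operator_authorizes, preauthorized_targets):
    """Return True only when engagement is permitted under the safeguard."""
    if comms_ok:
        return operator_authorizes  # human in the loop decides in real time
    # Lost or degraded comms: only previously human-selected targets.
    return target_id in preauthorized_targets

pre = {"TGT-007"}
# Pre-selected target remains engageable after comms loss.
assert may_engage("TGT-007", comms_ok=False, operator_authorizes=False,
                  preauthorized_targets=pre)
# A new target cannot be autonomously selected while comms are down.
assert not may_engage("TGT-013", comms_ok=False, operator_authorizes=True,
                      preauthorized_targets=pre)
```

The directive's other two requirements (minimizing failure consequences, and a traceable, understandable operator interface) are about how such a predicate is implemented, surfaced, and audited rather than about its logic.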
  21. AI & ML: Terminology, Mythology
     • Centaur AI / Centaur
     • Close the orient & decision loop
     • Outpace adversary decision speed
  22. Terms
     • The term “artificial intelligence” is applied when a machine mimics “cognitive” functions that humans associate with other human minds, such as “learning” and “problem solving”
     • Goals of AI research: perception, knowledge, reasoning, planning, learning, natural language processing, and the ability to move and manipulate objects
     • Capabilities generally classified as AI include understanding human speech, images, and videos; autonomous cars; competing at a high level in strategic game systems (e.g., Dota 2 and Go); military simulations; and interpreting complex data.
