AN ASSESSMENT FRAMEWORK FOR FULL MOTION VIDEO ANALYSIS-
BASED PRODUCTION: DEFINING INTELLIGENCE PRODUCT MEASURES OF
EFFECTIVENESS AND PERFORMANCE
A Master's Thesis
Submitted to the Faculty
of
American Public University System
by
Tori Hendrix Duckworth
In Partial Fulfillment of the
Requirement for the Degree
of
Master of Arts
July 2015
American Public University System
Charles Town, WV
The author hereby grants the American Public University System the right to display
these contents for educational purposes.
The author assumes total responsibility for meeting the requirements set by United States
copyright law for the inclusion of any materials that are not the author’s creation or in the
public domain.
© Copyright 2015 by Tori Hendrix Duckworth
All rights reserved.
DEDICATION
I dedicate this thesis to my husband, Phillip Duckworth. Without his patience, support,
and barista skills the completion of this thesis would never have been possible.
ACKNOWLEDGMENTS
I wish to thank Professor Christian Shuler of the American Public University System,
both for his dedication to my academic progression and for his mentorship on this project. I
would also like to thank Dr. Roger Travis of the University of Connecticut for investing the
time required to develop my formal research and writing techniques early in my academic life.
Thanks also to Mr. Michael Bailey, Mr. William Elkins, Mr. Jeremy Jarrell, Mr.
Linn Wisner, Mr. Bill Majors, Mr. Mikel Gault, Mr. Daniel Thomas and Mr. Donray Kirkland
for their personal and professional investments toward the continuous improvement of
intelligence analysis and production. The above gentlemen have always provided a sounding
board, a support network and a sanity check for my intelligence research and analysis endeavors,
this thesis proving no exception.
ABSTRACT OF THE THESIS
AN ASSESSMENT FRAMEWORK FOR FULL MOTION VIDEO ANALYSIS-
BASED PRODUCTION: DEFINING INTELLIGENCE PRODUCT MEASURES OF
EFFECTIVENESS AND PERFORMANCE
by
Tori Hendrix Duckworth
American Public University System, July 5, 2015
Charles Town, West Virginia
Professor Christian Shuler, Thesis Professor
This paper discusses an original mixed-method analytic approach to assessing and
measuring analyst performance and product effectiveness in Air Force Distributed Common
Ground Station (DCGS) Full Motion Video (FMV) analysis-based production. The method
remains consistent with Joint Publication doctrine on Measures of Effectiveness (MOE) and
Measures of Performance (MOP) and considers established methodologies for assessment in
Combat and Stability Operations, but refines the processes down to the micro-tactical level in
order to theoretically assess intelligence support functions and how MOP and MOE can be
determined for intelligence products. The research focuses specifically on those products derived
from FMV analysis produced per customer request in support of real-time contingency
operations. The outcome is an assessment process based on the Department of Defense Attributes
of Intelligence Excellence. The assessment framework can not only be used to assess FMV
analysis-based production, but can also prove a valuable assessment tool in other production-heavy
sectors of the Intelligence Community. The next phase of research will be to validate the
developed method through the observation and analysis of its practical application.
TABLE OF CONTENTS
SECTION
I. INTRODUCTION
II. LITERATURE REVIEW
   DCGS Operations
   Mission Assessment
   Variable Identification and Control
   Measuring Performance and Effectiveness
III. METHODOLOGY
   Research Design Overview
   Collection Methodology
   Specification of Operational Variables
   Assessment Methodology
IV. ANALYSIS AND FINDINGS
   Measures of Effectiveness
   Measures of Performance
   Theoretical Application
V. CONCLUSION
LIST OF FIGURES
FIGURE
1. FMV Analysis-Based Production Mission Assessment
2. Measures of Effectiveness
3. Timeliness Factor Levels
4. Accuracy Factor Levels
5. Customer Satisfaction Factor Levels
6. Task and Effectiveness Assessment
7. Measures of Performance
SECTION I
INTRODUCTION
Assessment is an essential tool for those leaders and decision makers responsible for the
successful execution of military operations, actions and programs. The process of operationally
assessing military operations is crucial to the identification and implementation of those
adjustments required to meet intermediate and long term goals (Center for Army Lessons
Learned 2010, 1). Assessment supports the mission by charting progress in evolving situations.
Assessment allows for the determination of event impacts as they relate to overall mission
accomplishment and provides data points and trends for judgments about progress in monitored
mission areas (Center for Army Lessons Learned 2010, 4). In this way, assessment assists
military commanders in determining where opportunities exist and where course corrections,
modifications and adjustments must be made (U.S. Joint Chiefs of Staff 2011b, III-44). Without
operational assessment, both the commander and his staff would be hard-pressed to keep pace
with quickly evolving situations and the commander’s decision making cycle could lack focus
due to a lack of understanding about the dynamic environment (Center for Army Lessons
Learned 2010, 4). Assessments are the commander’s keystone to maintaining accurate situational
understanding and are vital to appropriately revising the commander's visualization and operational approach
(U.S. Joint Chiefs of Staff 2011b, III-44). From major strategic operations to the smallest of
tactical missions, assessment is a crucial aspect of mission management; the Air Force
Intelligence mission and its associated mission subsets are no exception.
Much as assessment helps commanders and staffs understand mission progress, intelligence
helps commanders and staffs understand the operational environment. Intelligence identifies
enemy capabilities and vulnerabilities, projects probable intentions and actions, and is a critical
aspect of both the planning process and the execution of operations. Intelligence provides
analyses that help commanders decide which forces or munitions to deploy and how to employ
them in a manner that accomplishes mission objectives (U.S Joint Chiefs of Staff 2013, I-18).
Most importantly, intelligence is a prerequisite for achieving information superiority, the state
that is attained when a competitive advantage is derived from the ability to exploit a superior
information position (Alberts, Garstka, and Stein 2000, 34). The role of intelligence in Air Force
operations is to provide warfighters, policy makers, and the acquisition community
comprehensive intelligence across the conflict spectrum. This role is designed to build the
foundation for information superiority and mission success by delivering on-time, tailored
intelligence (U.S Department of the Air Force 1997, 1). Key to operating in today’s information
dominated environment and getting the right information to the right people in order for them to
make the best possible decisions is the intelligence mission subset Intelligence, Surveillance and
Reconnaissance (ISR) (U.S. Department of the Air Force 2012b, 1).
The U.S Joint Chiefs of Staff Joint Publication 1-02, Department of Defense (DOD)
Dictionary of Military and Associated Terms defines ISR as “an activity that synchronizes and
integrates the planning and operation of sensors, assets, processing, exploitation, and
dissemination systems in direct support of current and future operations,” (U.S. Joint Chiefs of
Staff 2010, X). ISR is a strategic resource that enables commanders to both make
decisions and execute those decisions more rapidly and effectively than the adversary (U.S Joint
Chiefs of Staff 2011a, III-3). The fundamental job of U.S Air Force Airmen supporting the ISR
mission is to analyze, inform and provide commanders at every level with the knowledge
required to prevent surprise, make decisions, command forces and employ weapons (U.S
Department of the Air Force 2013a, 6). In addition to enabling information superiority, ISR is
critical to the maintenance of U.S. decision advantage, the ability to prevent strategic surprise,
understand emerging threats and track known threats, while adapting to the changing world
(Report of the DNI Authorities Hearing 2008, 1).
The joint ISR mission consists of multiple separate elements, spread across uniformed
services, component commands and intelligence specialties; each element in its own right must
be successful for the integrated ISR whole to support operations as required (U.S. Department of
the Air Force 2012b, 1). One such ISR element is Full-Motion Video (FMV) Analysis and
Production. FMV analysis and production is a tactical intelligence function. Tactical intelligence
is used by commanders, planners, and operators for planning and conducting engagements and
special missions. Relevant, accurate, and timely tactical intelligence allows tactical units to
achieve positional and informational advantage over their adversaries (U.S. Joint Chiefs of Staff
2013, I-25). Tactical intelligence addresses the threat across the range of military operations by
identifying and assessing the adversary’s capabilities, intentions, and vulnerabilities, as well as
describing the physical environment. Tactical intelligence seeks to identify when, where, and in
what strength the adversary will conduct their tactical level operations. Physical identification of
the adversary and their operational networks, which FMV analysis and production can facilitate,
allows for enhanced situational awareness, targeting, and watchlisting to track, hinder, and/or
prevent insurgent movements at the regional, national, or international levels (U.S. Joint Chiefs
of Staff 2013, I-25). FMV analysis and production is conducted, in part, by the Air Force
Distributed Common Ground System (DCGS) Enterprise.
The Air Force DCGS, also referred to as the AN/GSQ-272 SENTINEL weapon system,
is the Air Force’s primary ISR collection, processing, exploitation, analysis and dissemination
system. The DCGS functions as a distributed ISR operations capability which provides
worldwide, near real-time intelligence support simultaneously to multiple theaters of operation
through various reachback communication architectures. ISR provides the capability to collect
raw data from many different intelligence sensors across the terrestrial, aerial, and space layers
of the intelligence enterprise (U.S Department of the Army 2012b, 4-12). The Air Force
currently produces intelligence from data collected by a variety of sensors on the U-2, RQ-4
Global Hawk, MQ-1 Predator, MQ-9 Reaper, MC-12 and other ISR platforms (Air Force
Distributed Common Ground System, 2014). ISR synchronization involves the employment of
assigned and allocable platforms and sensors, to include FMV sensors, against specified
collection targets; once the raw data is collected, it is routed to the appropriate processing and
exploitation system, such as a DCGS node, so that it may be converted via exploitation and
analysis into usable information and disseminated to the customer in a timely fashion (U.S Joint
Chiefs of Staff 2013, I-11). The analysis and production portion of the intelligence process
involves integrating, evaluating, analyzing, and interpreting information into a finished
intelligence product. Properly formatted intelligence products are then disseminated to the
requester, who integrates the intelligence into the decision-making and planning processes (U.S.
Joint Chiefs of Staff 2012, III-33). The usable information disseminated to the customer is the
FMV analysis-based product. In the DCGS, FMV analysis-based products are produced by the
Geospatial Analyst (GA) and Multi-Source Analyst (MSA) mission crew positions (U.S. Air Force ISR Agency 2013b, 59).
The steps of the above process are routinely combined into the acronym PED:
Processing, Exploitation and Dissemination. Every intelligence discipline and complementary
intelligence capability is different, but each conducts PED activities to support timely and
effective intelligence operations. PED activities facilitate timely, relevant, usable, and tailored
intelligence (U.S. Department of the Army 2012, 4-13). PED activities are prioritized and
focused on intelligence processing, analysis, and assessment to quickly support specific
intelligence collection requirements and facilitate improved intelligence operations. PED began
as a processing and analytical support structure for unique systems and capabilities like FMV
derived from unmanned aircraft systems (U.S Department of the Army 2012, 4-12).
Intelligence is essential to maintaining aerospace power, information dominance and
decision advantage; accordingly, the Air Force provides premier intelligence products and
services to all its aerospace customers including warfighters, policy makers and acquisition
managers (U.S Department of the Air Force 2004, 1). Intelligence is produced and provided to
combat forces even at the lowest level via the integration of data collected by the various ISR
platforms with Processing, Exploitation and Dissemination (PED) crews. The PED crews are
comprised of intelligence specialists capable of robust, multi-intelligence analysis and
production (Air Force Distributed Common Ground System, 2014). DCGS mission sets depend
greatly on which type of sensor the PED crew is tasked to exploit; the scope of this research will
focus exclusively on FMV analysis-based production, that is, products
based on the exploitation and analysis of FMV sensors.
Air Force ISR, with capabilities such as satellites and unmanned aerial vehicles, is a mission
set ever more integral to global operational success. The President’s 2012 Defense Strategic
Guidance directs the US military to begin transitioning from current operations to future
challenge preparation; as that transition into what will likely be a highly volatile, unpredictable
future occurs, Air Force ISR will be fundamental to ensuring U.S. and Allied force freedom of
action (U.S Department of the Air Force 2013a, 2013). Further complicating such a transition is
the reality of the currently austere fiscal environment. Balancing intelligence and remote sensing
capabilities across multiple Areas of Responsibility (AOR) with limited resources will demand
the development of more efficient ISR mission management and execution processes.
Intelligence organizations do recognize the need to continually improve their process
performance and the analytical arm of ISR Operations, the DCGS, is no exception. Having a
significant role in the maximization of ISR capabilities, the DCGS Enterprise should take every
step to ensure system-wide PED efficiency, an objective that requires an established mission
assessment framework.
Assessment is an essential tool that allows mission managers to monitor the performance of
tactical actions and to determine whether the desired effects are created in order to support goal
achievement. Assessment entails two distinct tasks: continuously monitoring the situation and
the progress of the operations and evaluating the operation against Measures of Effectiveness
(MOEs) and Measures of Performance (MOPs) to determine progress relative to the mission,
objectives, and end states (U.S Joint Chiefs of Staff 2011b, III-45). All assessments should
answer two questions: Are things being done right? Are the right things being done? Those two
questions are perhaps best answered by the MOP and MOE assessment criteria. It is by
monitoring available information and using MOP and MOE as assessment tools during planning,
preparation, and mission execution that commanders and staffs determine progress toward
completing tasks, achieving objectives and attaining the military end state (U.S Joint Chiefs of
Staff 2011b, D-10). It is accepted that effectiveness is a measure associated with the problem
domain (what is to be achieved) and that performance measures are associated with the solution
domain (how is the problem to be solved) (Smith and Clark 2006, N-2). In other words, MOEs
help determine if a task is achieving its intended results (if the task is the right thing to do) and
MOPs help determine if a task is completed properly (if the task is done right).
More specifically, MOE concern how well a system tracks against its purpose (Sproles 1997,
17). MOEs are criteria used to assess changes in system behavior, capability, or operational
environment and are tied to measuring the attainment of an end state, achievement of an
objective, or creation of an effect. MOEs are based on effects assessment, the process of
determining whether an action, program, or campaign is productive and can achieve the desired
results in operations (Center for Army Lessons Learned 2010, 6). Should the desired result be
achieved, then the outcome may be deemed effective; however, simply achieving the result may
not measure the degree of effectiveness. For example, within the context of computer
programming, an effective program is one which produces the correct outcome, based on various
constraints. Given that the desired result was achieved, it is generally accepted that a program
which is faster and uses fewer resources is more effective (Smith and Clark 2006, N-2).
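To make the sense of "degree of effectiveness" concrete, the following minimal sketch (Python, with hypothetical function and program names not drawn from Smith and Clark) treats correctness as the binary effectiveness criterion and elapsed time as one way to grade the degree of effectiveness between two programs that both achieve the desired result:

import time

def assess_program(program, test_input, expected_output):
    # Illustrative only: "effective" means the desired result was achieved;
    # elapsed time then refines the degree of effectiveness.
    start = time.perf_counter()
    result = program(test_input)
    elapsed = time.perf_counter() - start
    return result == expected_output, elapsed

# Two hypothetical programs that produce the same correct outcome.
fast = lambda xs: sum(xs)
slow = lambda xs: sum(x for x in sorted(xs))  # same result, more work

for name, prog in [("fast", fast), ("slow", slow)]:
    effective, seconds = assess_program(prog, list(range(10000)), 49995000)
    print(f"{name}: effective={effective}, elapsed={seconds:.6f}s")

Both programs are deemed effective; the faster one is the more effective under the resource criterion.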
MOPs are a discrete assessment tool based on task assessment. MOPs are criteria used to
assess actions that are tied to measuring task accomplishment and essentially confirm or deny
that a task was performed properly. Agencies, organizations, and military units will usually
perform a task assessment, based on established MOPs, to ensure work is performed to at least a
minimum standard (Center for Army Lessons Learned 2010, 5).
When discussing product effectiveness, it is important to distinguish between how
effectively the product was produced and how effective the produced product was in achieving
its objective. Task assessment does have an effect on effects assessment, as any task not
performed to standard will have a negative impact on effectiveness (Center for Army Lessons
Learned 2010, 5). Thus, both effectiveness and consequently performance are significant in this
study. It is the objective of this research to demonstrate that analyst performance can be assessed
using MOPs and that product effectiveness can be measured, at least in part, by the identification
of repeatable MOEs. To provide accurate assessments of performance and effectiveness, any
MOPs and MOEs identified as assessment standards must be measurable, relevant and
responsive.
Whether conducted explicitly or implicitly, measurement is the mechanism for extracting
information from empirical observation (Bullock 2006, 17). Assessment measures must have
qualitative or quantitative standards, or metrics, against which they can be measured. To
effectively measure change, whether positive or negative, a baseline measurement should be
established prior to execution in order to facilitate accurate assessment (U.S Joint Chiefs of Staff
2011b, D-10). MOEs and MOPs measure a distinct aspect of an operation and thus require
distinct and discrete qualitative or quantitative metrics against which to be measured (Center for
Army Lessons Learned 2010, 8). Determining and applying qualitative and quantitative metrics
is doubly difficult for intelligence support operations, like FMV analysis-based production
because few traditional war fighting metrics apply (Darilek et al. 2001, 101). Determining
effectiveness is not as straightforward as assessing the number of tanks destroyed by an
offensive and weighing that against the objective number of tanks destroyed. While metrics can
be defined, they are generally more one-sided. The focus is centered more on the ability of the
analyst to operate in an operational environment than on how well an analyst operates relative to
an enemy force (Darilek et al. 2001, 101). Obtaining assessment insight is dependent on having
feasible implementation methods, as well as reliable models, for approaching the task of
measurement (Sink 1991, 25). Each MOP and MOE discussed in this paper will be assigned
either a qualitative or a quantitative metric and each metric will be discrete.
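As a minimal sketch of that convention (the structure and field names below are hypothetical, introduced only for illustration), each MOP or MOE carries exactly one discrete metric and a pre-execution baseline against which change can be measured:

from dataclasses import dataclass

@dataclass
class Measure:
    # Hypothetical structure: one MOP or MOE with a single discrete metric.
    name: str        # e.g., "Timeliness"
    kind: str        # "MOP" or "MOE"
    metric: str      # "quantitative" or "qualitative"
    baseline: float  # value established prior to execution

    def change(self, observed: float) -> float:
        # The sign convention depends on the measure; for timeliness in
        # minutes, a negative change (faster delivery) is an improvement.
        return observed - self.baseline

timeliness = Measure("Timeliness", "MOE", "quantitative", baseline=15.0)
print(timeliness.change(12.0))  # -3.0: three minutes faster than baseline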
In addition to being measurable, MOPs and MOEs must also be relevant to tactical,
operational, or strategic objectives as applicable as well as to the operational environment and
the desired end state (Center for Army Lessons Learned 2010, 8). The relevancy criterion
prevents the collection and analysis of information that is of no value and helps ensure efficiency
by eliminating redundant and futile efforts (U.S Joint Chiefs of Staff 2011b, D-9). The relevancy
of each MOP and MOE presented in this paper will be demonstrated through a discussion of its
relationship to either the performance or effectiveness of FMV analysis-based production.
Lastly, MOP and MOE must be responsive. Both MOP and MOE must identify changes
quickly and accurately enough to allow commanders and staffs to respond effectively (Center for
Army Lessons Learned 2010, 9). When considering the time required for an action to produce a
desired result within the operational environment, the designated MOP and MOE should respond
accordingly; the result of action implementation should be capable of being measured in a timely
manner (U.S Joint Chiefs of Staff 2011b, D-10). DCGS FMV analysis-based production occurs
in near real-time. As such, any proposed production MOP or MOE should be readily derived
from available data or mission observation.
MOEs and MOPs are simply assessment criteria; in and of themselves they do not
represent assessment. MOEs and MOPs require relevant information in the form of indicators for
evaluation (U.S Joint Chiefs of Staff 2011b, D-3). In the context of assessments, an indicator is
an item of information that provides insight into an MOP or MOE. A single indicator may
inform multiple MOPs and MOEs, and a single MOP or MOE may be made up of several observable and
collectable indicators (U.S. Joint Chiefs of Staff 2011b, D-4). For MOPs, indicators should
consider the capability required to perform assigned tasks. Indicators used to perform MOE
analysis should inform changes to the operational environment. Indicators provide evidence that
a certain condition exists or certain results have or have not been attained, and enable decision
makers to direct changes to ongoing operations to ensure the mission remains focused on the end
state (U.S. Joint Chiefs of Staff 2011b, D-4). Indicators inform commanders and staff on the
current status of operational elements which may be used to predict performance and efficiency.
Air Force production centers produce and provide those intelligence products based on
validated/approved customer requirements (U.S. Department of the Air Force 2012b, 4).
Tailoring, adapting a product to ensure it meets the customer’s format and content requests, helps
to ensure the relevancy of the intelligence product. For example, if a customer requests a highly
tailored product versus a standard baseline product, such a request may indicate an increased
production time, potentially impacting the timeliness, and thus the effectiveness, of the product.
In this way, the level of customization typically requested by customers for a certain product
type would serve as an indicator for the timeliness MOE, which is further discussed in the MOE
subsection of Section IV, Analysis and Findings.
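The relationships described above are many-to-many and can be sketched as a simple mapping; the indicator and measure names below are illustrative only, loosely following the customization example just given:

# Illustrative mapping: each MOP/MOE aggregates one or more indicators,
# and a single indicator may inform several measures at once.
indicators_by_measure = {
    "MOE: Timeliness": ["requested customization level", "dissemination lag"],
    "MOE: Accuracy": ["quality-control error rate"],
    "MOP: Task performed to standard": ["quality-control error rate",
                                        "training task grade"],
}

def measures_informed_by(indicator):
    # Invert the mapping: which measures does one indicator feed?
    return [measure for measure, indicators in indicators_by_measure.items()
            if indicator in indicators]

print(measures_informed_by("quality-control error rate"))
# ['MOE: Accuracy', 'MOP: Task performed to standard']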
Well-devised MOPs and MOEs, supported by effective information management, help
commanders and staffs understand the linkage between specific tasks, the desired effects, the
commander’s objectives and end state (U.S Joint Chiefs of Staff 2011b, D-10). Among the most
important analytical tasks in the intelligence mission set is effectiveness and performance
analysis, and yet, DCGS FMV operations have no enterprise-wide framework for mission
assessment in which clear production MOPs or MOEs are defined. The highly dynamic nature of
intelligence analysis, where even basic tasks are further complicated by analyst discretion and
technique, makes the systematic evaluation of DCGS product effectiveness highly problematic,
yet not impossible.
The question then arises: what repeatable data can be identified and how can the FMV
analysis-based production mission be assessed in order to derive MOPs and MOEs? This paper
proposes an assessment framework specifically identified to enable the quantification and
qualification of DCGS analyst production performance and product effectiveness. The demands
of the modern operational environment require that intelligence products disseminated to the
customer are anticipatory, timely, accurate, usable, complete, relevant, objective, and available
(U.S Joint Chiefs of Staff 2013, III-33). Known collectively as the Attributes of Intelligence
Excellence, those intelligence product requirements are the basis from which performance and
product effectiveness variables will be derived.
Given their importance to industries, organizations and institutions, the topics of MOE, MOP
and operational assessment are very well researched and sound methodologies for deriving MOE
and MOP exist even for most major military operation types. From the global strategic ISR
mission down, FMV analysis-based production is micro-tactical in nature. As discussed in
Section II, Literature Review, though the provision of products is a basic intelligence function,
analytic methodologies for the performance of product effectiveness analysis have not been
refined for such a niche subset of the intelligence mission. Expanded upon in Section III,
Methodology, this paper presents a theoretical process for identifying both MOP and MOE for
DCGS FMV analysis-based production. Following an outline of the assessment methodology,
Section IV, Analysis and Findings, will demonstrate that repeatable, measurable DCGS FMV
analysis-based production measures can be identified.
SECTION II
LITERATURE REVIEW
No shortage of literature exists concerning the importance of MOEs and MOPs and how
MOEs and MOPs may be assessed, especially with respect to military operations. While
assessment is required at every level of military operations, strategic, operational and tactical, a
majority of the relevant literature available focuses either on operational or combat tactical
assessments. FMV analysis-based production, an operational function of the DCGS Enterprise, is
not a combat, but rather a support function, and when compared to operational functions such as
Village Stability Operations, may be better classified as micro-tactical (when assessed
independent of the operation being supported). Thus, for such a small scale, niche function, no
predominant school of thought exists on the matter, i.e., no prior research has attempted to
answer this research question and no theoretical framework exists which can be applied wholly
to the problem set. At least, no prior research exists on the research topic that is publicly
available. Due to the highly sensitive nature of DCGS Operations, any amount of relevant
literature may exist on classified networks; however, only publicly available publications will
be utilized in this research. Many open source publications exist which are critical to
understanding how FMV-analysis based production MOP and MOE may be derived. Most are
not academic or peer-reviewed research, but rather are military doctrinal publications. Prior
existing research concerning the aggregately developed theoretical framework of this study
also includes military doctrinal publications, but more so includes research-based case studies.
Publications critical to understanding the nature of the problem set, considered
background information, are detailed in the sections DCGS Operations and Variable Identification
and Control. To assess production, as it is conducted during DCGS Operations, the operating
environment (OE) must be understood to the greatest degree possible. Given the volume of
possible variables and the challenges associated with illuminating their relationships, an
understanding of operational variables affecting analyst production, performance and product
effectiveness is similarly critical. Publications relating to the theoretical framework for deriving
FMV analysis-based MOPs and MOEs are discussed in the section Measuring Performance and
Effectiveness. The uniqueness of the problem set precludes the application of any pre-proposed
theoretical framework in its entirety. Rather, the problem set in question required the
development of a new model for determining MOPs and MOEs, one which examines the
relationship between the two assessment tools as well as the effect of some critically impacting
variables on each.
DCGS Operations
One prerequisite for identifying DCGS mission assessment metrics and deriving FMV
analysis-based production MOPs and MOEs is a thorough understanding of DCGS mission
operations. For this research, DCGS mission operations focus on the production aspect of
operations and includes production training, evaluation and execution; whereas executing
production may most predominantly determine product effectiveness, how well the production
analyst was trained, and how well they performed during their evaluation prove useful as MOP
indicators.
The four most pertinent publications to DCGS training are Air Force Instruction (AFI)
14-202 Volume 1 (2015) concerning intelligence training, the Air Force ISR Agency (AFISRA)
supplement to AFI 14-202 Volume 1 (2013) specifying ISR intelligence training, AFISRA
Instruction 14-153 Volume 1 (2013) concerning AF DCGS intelligence training and the 480th
ISR Wing supplement to AFISRA Instruction 14-153 Volume 1 (2013) which further refines the
document for DCGS units, all of which are subordinate to the 480th
ISR Wing. The stated
objective of AFI 14-202 Volume 1 (2015) and its AFISRA supplement (2013) is “to develop and
maintain a high state of readiness… by ensuring all personnel performing intelligence duties
attain and maintain the qualifications and currencies needed to support their unit’s mission
effectively…” (U.S Department of the Air Force 2015b, 4). The publications, having stated the
impact of training on mission effectiveness, discuss types of training, the flow of training,
individual training and currency requirements, mission readiness levels as well as the grading
rubric for training tasks, all data critical to identifying analyst/production performance metrics
and effectiveness indicators. The same holds true for AFISRA Instruction 14-153 Volume 1, Air
Force DCGS Intelligence Training (2013) and its 480th
ISR Wing Supplement (2013). Naturally
more applicable to DCGS training than the aforementioned AFI and AFISRA supplements, the
publications not only concur with the previous publications, but additionally provide guidance
for the assessment of training programs and outlines mission training elements, more so in the
480th
ISR Wing Supplement (Air Force ISR Agency 2013b, 7-31). The Training Task List
(TTL) defines training requirements for skill set development as they support the operational
duties necessary for each position. Those duties, for some positions, include the creation, quality
control and dissemination of data, information and textual/graphic intelligence products (Air
Force ISR Agency 2013b, 17; 28-29). These training publications are detailed to the point of
specifying exact passing grades for training tasks, a quantitative standard of great interest to MOP
research.
AFI 14-202 Volume 2 (2015) and the 480th
ISR Wing Supplement to the AFISRA
Supplement to AFI 14-202 Volume 2 (2014) are the Air Force and DCGS doctrinal publications
covering the Intelligence unit Commander’s Standardization/Evaluation Program. The purpose
of these publications is “to provide commanders a tool to validate mission readiness and the
effectiveness of intelligence personnel, including documentation of individual member
qualifications and capabilities specific to their duty position,” (U.S Department of the Air Force
2014, 7). The publications, having stated the importance of standardization and evaluation in
measuring personnel and mission effectiveness, discuss how the Standardization/Evaluation
Program evaluates intelligence personnel on their knowledge of the duty position and applicable
procedures wherein the task (mission performance) phase includes evaluating a demonstrated
task, like that of production (U.S Department of the Air Force 2014, 38). Because the evaluation
profile used to fulfill the task phase requisites must incorporate all appropriate requirements and
allow accurate measurements of the proficiency of the evaluatee, as with training, the
quantitative data collected during evaluations must be considered when defining MOP indicators
for FMV analysis-based production (U.S Department of the Air Force 2014, 44).
In addition to serving as a baseline for DCGS mission assessment and providing insight
into performance and effectiveness affecting variables as discussed in their respective sections
below, AFI 14-153 Volume 3 (2013), “AF DCGS Operations Procedures” and its 480th
ISR
Supplement (2013) define the DCGS mission and provide guidance and requirements for the
execution of DCGS operations. Aligning well with this research is the specifically stated DCGS
program objective to “enhance Air Force DCGS mission effectiveness,” (U.S Department of the
Air Force 2014, 6). Moreover, when researching elements that may affect analyst performance
during production efforts, knowing which analysts conduct the production is imperative. AFI 14-
153 Volume 3 (2013) breaks down mission execution roles and responsibilities by mission crew
position. Of most interest are the positions of the Geospatial Analyst (GA), responsible for the
creation of FMV-analysis based products, and that of the Multi-Source Analyst (MSA),
responsible for the creation of multi-intelligence report products (U.S Department of the Air
Force 2014, 13-18). Because final products are created and disseminated during the mission
execution phase of DCGS Operations, the DCGS Operations publications above are perhaps of
more research value than either the Training or the Standardization/Evaluations Air Force
publications with respect to contextualizing DCGS operations for assessment.
Publications concerning the DCGS are limited; especially those published academically
and not as military doctrine. Two peripheral sources concerning the DCGS are Brown’s 2009
“Operating the Distributed Common Ground System,” and Langley’s 2012
“Occupational Burnout and Retention of Distributed Common Ground System (DCGS)
Intelligence Personnel.” Brown’s article focuses on the human factors of the AN/GSQ-272
SENTINEL (DCGS) weapon system and how the human factor influences net-centric
operations (Brown 2009, 2). The tone of the article is that of a story, full of details that
experience may corroborate but that the referenced sources did not. Because it falls beyond this
thesis’s scope and its details are questionably supported, the article will not be incorporated in this
research beyond mentioning that nothing in it contradicted any of the other referenced sources.
Though much more professional in tone and with all points very well supported, Langley’s 2012
report similarly fell beyond the scope of this research, but importantly, also did not contradict
any information presented in related research. In formulating a background knowledge of DCGS
operations, military doctrinal publications must exclusively be relied upon.
Mission Assessment
Before MOE and MOP can be valuable to mission assessment, the desired end
state must first be defined and mission success criteria developed. Mission success criteria
describe the standards for determining mission accomplishment and should be applied to all
phases of all operations, including FMV analysis-based production (U.S Joint Chiefs of Staff
2011b, IV-10). The initial set of success criteria determined during mission analysis, usually in
the mission planning phase, becomes the basis for assessment. If the mission is unambiguous and
limited in time and scope, mission success criteria could be readily identifiable and linked
directly to the mission statement (U.S Joint Chiefs of Staff 2011b, IV-10). The Air Force DCGS
mission statement states that the “Air Force DCGS provides actionable, multi-discipline
intelligence derived from multiple ISR platforms” (U.S Air Force ISR Agency 2013d, 5). That
the mission statement does not detail which attributes comprise actionable intelligence
complicates the mission success criteria and requires mission analysis to indicate several
potential success criteria, analyzed via MOE and MOP, for the desired effect and task
respectively (U.S Joint Chiefs of Staff 2011b, IV-10).
Research into mission assessment for intelligence production, specifically production
derived from FMV, is significant not only because requirements and precedents exist for
intelligence product metric collection and assessment, but also because of the impact such
products have on the operating environment and the extent to which those products are poised
to evolve over the next few decades. Requirements, precedents and assessment guidance are
outlined across several Air Force and Joint doctrinal and exploratory military publications. The
importance of ISR, and of the DCGS, as well as a look at future operations is explored in the
Air Force publication Air Force ISR 2023 (2013) and in a corroborating journal article, A
Culminating Point for Air Force Intelligence Surveillance and Reconnaissance, published in the
Air & Space Power Journal by Kimminau (2012). In sum, the literature reviewed in this section
provides an overview of existing military mission assessment and lays the groundwork for the
increasing importance of intelligence production task (MOP) and effect (MOE) assessment.
The military publications reviewed which outline requirements or precedents for
intelligence product assessment can be divided into Air Force publications and Joint
publications. Air Force publications include AFI 14-201, Intelligence Production and
Applications, the aforementioned AFI 14-153 Volume 3, Air Force Distributed Common
Ground System (AF DCGS) Operations Procedures, along with its 480th
ISR Wing Supplement
(2014) and the 480th ISR Wing Supplement to AFISRA Instruction 14-121 (2013), Geospatial
Intelligence Exploitation and Dissemination Quality Control Program Management. AFI 14-201
applies to all Air Force organizations from which intelligence production is requested, a
category into which the DCGS, as a Production Center, falls; however, the AFI is directed more
towards those intelligence agencies producing “stock products,” which the DCGS most usually
does not (U.S Department of the Air Force 2002, 20). The AFI does, however, reveal an
intelligence production precedent in requiring Headquarters (HQ), Director of ISR to conduct
annual reviews of production processes and metrics and to measure customer (requirement)
satisfaction (U.S Department of the Air Force 2002, 6-20). AFI 14-153 Volume 3 (2013) and its
480th
ISR Wing Supplement (2013) deal directly with AF DCGS Operations and include
instructions for mission assessment, most usefully outlining assessment roles and
responsibilities, albeit vaguely. While every analyst is responsible for conducting mission
management activities during mission execution, the party responsible for documenting mission
highlights and deficiencies, keeping mission statistics, performing trend analysis and quality
control, and assessing overall mission effectiveness is the Department of Operations Mission
Management (U.S Department of the Air Force 2014, 39). Mission management efforts are
further discussed in AFISRA Instruction 14-121 (2013), which details the quality control and
documentation process for DCGS products, a process which must include the identification and
analysis of trends by type of error, flight or work center (U.S Air Force ISR Agency 2013e,
7). All three Air Force publications set a standard requiring intelligence producers to conduct
mission analysis; none of the three, however, mentions MOPs, MOEs or possible metrics, or
specifically suggests how such mission analysis should be conducted. While the Air Force
requires mission analysis towards greater effectiveness, the responsibility appears to fall to
individual organizations and units to conduct mission assessment in accordance with the broad
guidance set forth by Air Force doctrine as they see fit. How DCGS FMV units conduct mission
analysis and how they may do so better is central to this research; thus, AFI 14-153 Volume 3 is
an invaluable publication that sets the baseline for existing DCGS mission assessment guidance.
Joint Publications (JP) 3-0 (2011) and 5-0 (2011), both concerning Joint (multi-service)
operations, discuss mission assessment as well as how MOPs and MOEs may be used as
assessment criteria, albeit with an exclusive focus on the assessment of combat operations. JP 5-
0, in its direction of assessment processes and measures, defines evaluation, assessment, MOE
and MOP, sets the criteria for assessment, and discusses the value of MOPs and MOEs at every
level of operations (U.S Joint Chiefs of Staff 2011b, xvii-D-10). JP 3-0, though less in depth
when addressing mission assessment than JP 5-0, similarly defines MOE and MOP, lending
continuity between the documents, and further discusses assessment at the strategic,
operational and tactical levels. Of most concern to this research is assessment at the tactical
level, the assessment level about which JP 3-0 and JP 5-0 diverge. In JP 5-0, tactical level
assessment can be conducted using MOPs to evaluate task accomplishment, a discussion which
reflects the impact of assessment on operations and more so makes the case for comprehensive,
integrated assessment plans (U.S Joint Chiefs of Staff 2011b, D-6). In JP 3-0, tactical-level
assessment is described as using both MOPs and MOEs to evaluate task accomplishment, while
the rest of the paragraph aligns with JP 5-0 almost verbatim (U.S Joint Chiefs of Staff 2011a, II-
10). Both JPs define MOE and MOP in exactly the same way, where MOEs are a tool with
which to conduct effects assessment and MOPs are a tool with which to conduct task assessment;
JP 3-0, however, later suggests that MOE and MOP are both utilized for task assessment,
contradicting its earlier guidance about MOE being a discrete tool concerning
effects assessment (U.S Joint Chiefs of Staff 2011a, GL-13; U.S Joint Chiefs of Staff 2011b,
III-45). The use of MOPs for task assessment and MOEs for effects assessment is corroborated
by every other relevant source, and so JP 3-0’s contradiction will be considered an oversight.
In both Air Force and Joint Publications, a strong emphasis is placed on the conduct
of mission assessment, the latter going so far as to incorporate MOPs and MOEs specifically.
The importance of mission assessment is apparent in published literature, not only by military
requirements, precedent and guidance, but also because of the impact of intelligence production
on operations and the importance of assessment in quickly evolving mission sets, like that of
ISR.
Sentinel Force 2000+ (1997), a quick reference document for the intelligence career
field, defines the Air Force Intelligence vision, identifies intelligence consumers and highlights
the importance of intelligence in Air Force operations: with respect to intelligence, “this role is
designed to build the foundation for information superiority and mission success by delivering on-
time, tailored intelligence,” (U.S Department of the Air Force 1997, 2). The publication
additionally designates seven Goal Focus Areas of Air Force Intelligence designed to assure
U.S. information dominance. One of the Focus Areas is Products and Services; the
corresponding goal is to “focus on customer needs…modernize and continuously improve our
products and services to see, shape and dominate the operating environment,” (U.S Department
of the Air Force 1997, 3). Intelligence is critical to information superiority and mission success.
Production is a critical component of intelligence and an Air Force goal for almost two decades
has been product improvement. The objective for improvement exists, and yet, how can
intelligence product improvement be assessed/measured?
Air Force ISR 2023: Delivering Decision Advantage (2013) accounts for both the
importance of intelligence, specifically ISR, to military operations and discusses the projected
evolution of ISR over the next decade. ISR provides commanders with the knowledge they need
to prevent surprise, make decisions, command forces and employ weapons; it enables them to
maintain decision advantage, protect friendly forces, hold targets and apply deliberate,
discriminate kinetic and non-kinetic combat power (U.S Department of Air Force 2013a, 6).
The challenge for ISR is to maintain those competencies required to provide commanders with
the knowledge they need while reconstituting and refocusing ISR toward future operations, all
under the constraints of an austere fiscal environment (U.S Department of Air Force 2013a, 2-
4). Focusing heavily on ISR capabilities, the normalization of operations and the strengthening
of collaborative partnerships, the report focuses on the revolution of analysis and exploitation, a
topic well within the scope of this research. “The most important and challenging part of our
analysis and exploitation revolution is the need to shift to a new model of intelligence analysis
and production,” (U.S Department of Air Force 2013a, 13). With an actively evolving mission
set where intelligence production is on the brink of great change, any requirement for tactical
mission assessment becomes all the more important; otherwise, how can positive or negative
changes in production or products be assessed relative to the baseline? A mission subset as
critical as ISR production, especially when poised to rapidly evolve, requires established,
repeatable assessment measures.
Closely paralleling Air Force ISR 2023 (2013) with respect to the future of the ISR
Enterprise is Kimminau’s 2012 Air & Space Power Journal article, A Culminating Point for Air
Force Intelligence Surveillance and Reconnaissance. Kimminau (2012) concurs with the
imperative for ISR: “success in war depends on superior information. ISR underpins every
mission that the DOD executes,” (Kimminau 2012, 117). The article further discusses the future
of ISR through 2030. While Kimminau focuses less on an analytical and production revolution,
he does elaborate on several of the projected evolution areas that do overlap with Air Force ISR
2023 (2013). Though not as useful to this research without a discussion of analysis and
production, Kimminau’s article serves best in corroborating Air Force ISR 2023 (2013) in its view
of ISR as dynamic, revolutionary and ever more important.
Variable Identification and Control
Perhaps the most challenging aspect of researching MOPs and MOEs in non-combat
micro-tactical mission operations is the identification and control of variables. Production is just
one element of DCGS operations, albeit an extremely interconnected element dependent on
human analytical training and preferences, the quality of received data, management style, the
status of communications, mission parameters and potentially several hundred other variables.
One significant research challenge is in determining which variables to hold constant and which
variables directly affect the task of production (MOPs) and/or product effectiveness (MOEs).
As discussed in the DCGS Operations section above, several Air Force doctrinal
publications provide guidance on DCGS training, evaluation, mission execution and
mission management. Those same publications provide insight into which variables may affect
each respective aspect of mission operations. Another publication, AFI 14-132, Geospatial-
Intelligence (2012), offers insight into additional mission variables, not through references to
DCGS operations, but rather to the field of Geospatial-Intelligence (GEOINT) itself. GEOINT
encompasses all imagery, Imagery Intelligence (IMINT) and geospatial information, a category
into which FMV exploitation and production falls (U.S Department of the Air Force 2012a, 3).
In fact, a prerequisite to qualify as a GA, the position which creates FMV analysis-based
products, is to be in the Geospatial Intelligence Analyst career field, Air Force Specialty Code
(AFSC) 1N1X1 (Air Force ISR Agency 2013b, 24). AFI 14-132 (2012) identifies the DCGS as
a key exploiter and producer of Air Force GEOINT; as a GEOINT production center, the DCGS
must report up the chain of command an analysis of the quality and timeliness
of GEOINT products, revealing timeliness and quality to be significant variables in GEOINT
product effectiveness (U.S Department of the Air Force 2012a, 14). The AFI additionally
discusses GEOINT baseline and core functions revealing the types of operations GEOINT
supports and the types of GEOINT exploited for that support, variables that must be considered
in context. For example, would the timeliness variable have the same impact on
performance and effectiveness across all mission types supported, or, do some mission types
necessarily value timeliness more or less than others?
The Raytheon Company’s 2001 Establishing System Measures of Effectiveness by John
Green corroborates the assessment that timeliness factors into product effectiveness. In addition
to addressing the imperative for performance analysis and developing definitions for system
MOPs and MOEs, the report most importantly weighs in on timeliness as an assessment
variable. “Time is often used… [and] is attractive as an effectiveness measure,” (Green 2001,
4). Green (2001) goes on to say that: “time is the independent variable and outcomes of processes
occur with respect to time,” (Green 2001, 4). Similarly, in this research, the only dependent
variable is effectiveness, where all other uncontrolled variables, to include time, are considered
independent. Green (2001) not only nods to the common practice of using time to measure
effectiveness, but highlights the logic behind time as an independent rather than dependent
variable.
The most comprehensive categorization of intelligence mission variables can be found in
a 2003 Central Intelligence Agency publication by Rob Johnson, Developing a Taxonomy of
Intelligence Analysis Variables. Johnson (2003) provides a classification of 136 variables
potentially affecting intelligence analysis broken down into four exclusive categories: systemic
variables, systematic variables, idiosyncratic variables and communicative variables. The list
does not suggest that any of the variables affect human performance or product effectiveness,
rather it serves identification purposes so that experimental methods may isolate and control
some variables in order to further isolate those variables affecting analysis (Johnson 2003, 3-4).
FMV products are dependent on FMV analysis, hence FMV analysis-based production. The
focus of this research is on the effectiveness of products and also on analyst performance in
producing said products; thus, many of those variables identified as potentially affecting
analysis will also potentially affect production. The list focuses more on cognitive and
situational-type variables such as analytic methodology, decision strategies, collaboration, etc.
rather than variables like timeliness, quality, precision, etc. which are the focus of this research;
however, the idea of controlling some variables in order to assess what impact others may have
on the dependent variable, here effectiveness, is a goal of this research.
RAND’s 1995 publication, A Method for Measuring the Value of Scout/Reconnaissance,
by Veit and Callero explores the topic of variable identification and control, the most relevant
piece of which concerned variable definitions. The case study clearly defined two variables of
interest to FMV analysis-based production: timeliness and precision. Timeliness was defined as
the “time between information collection and its availability for use by the situation assessor”
and precision was defined as “the level of detail about enemy weapons systems reported,” (Veit
and Callero 1995, xv). The timeliness definition is applicable verbatim; the precision definition
is geared exclusively towards scout/reconnaissance but can be easily amended to fit analysis
and production precision. Most interestingly, Veit and Callero (1995) introduced the concept of
variable factor levels. To better understand the relationship between variables, several factor
levels can be assigned to a variable in order to understand the degree of impact. For example,
the precision variable can be further specified by categorically establishing three different levels
of detail. Factor levels are further discussed in Analysis and Findings. While useful for its
variable definitions and factor level methodology, Veit and Callero (1995) was found to be
more valuable in its proposed methodology for deriving MOEs.
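A brief sketch of both ideas follows; the timeliness computation mirrors the definition quoted above, while the three precision level descriptions are hypothetical stand-ins rather than the case study's own levels:

from datetime import datetime

def timeliness(collected: datetime, available: datetime):
    # Per Veit and Callero (1995): time between information collection and
    # its availability for use by the situation assessor.
    return available - collected

# Categorical factor levels assigned to the precision variable so that the
# degree of impact can be examined level by level.
PRECISION_LEVELS = {
    1: "general activity noted",
    2: "object type identified",
    3: "specific object and attributes reported",
}

delta = timeliness(datetime(2015, 7, 1, 14, 0), datetime(2015, 7, 1, 14, 12))
print(delta, "|", PRECISION_LEVELS[3])
# 0:12:00 | specific object and attributes reported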
The most significant publication concerning potential variables for the FMV production
mission subset was JP 2-0 (2013), Joint Intelligence. Though the variables discussed were not
strictly identified as variables or put directly into the context of mission assessment, JP 2-0
introduced the concept of Attributes of Intelligence Excellence, a list of variables thought to be
critical in the production of effective products. Attributes of Intelligence Excellence include
anticipatory, timely, accurate, usable, complete, relevant, objective and available (U.S. Joint
Chiefs of Staff 2013, III-33). JP 2-0 additionally acknowledged that the effectiveness of
intelligence operations is assessed by devising metrics and indicators associated with those
attributes (U.S. Joint Chiefs of Staff 2013, I-22). Over the course of this research, along with
precision, each attribute listed will be assessed as a potential MOP, MOE or MOP/MOE
indicator.
Measuring Performance and Effectiveness
“There is no more critical juncture in implementing [a] successful assessment…than the
moment of methods selection,” (Johnson et al. 1993, 153). Joint Publication 3-0 (2011), Joint
Operations, and Joint Publication 5-0 (2011), Joint Operations Planning, both discuss the
importance of conducting assessment at every phase of military operations, both at the task
assessment level (MOPs) and effects assessment level (MOEs). Much attention has been paid to
defining MOP and MOE for Combat Operations, Stability Operations, Reconnaissance and even
Intelligence Simulations; however, publicly available data assessing the niche operational field
of FMV analysis-based production is lacking. No “one size fits all” methodology exists for
determining MOPs and MOEs, nor does any existing methodology adequately apply to the FMV
analysis-based production assessment problem set. Several publications do detail MOP and/or
MOE derivation methodologies, portions of which do relate to this problem set and aggregately
those publications enable the foundation of a new theoretical framework.
The Center for Army Lessons Learned 2010 Handbook 10-41, Assessments and
Measures of Effectiveness in Stability Operations outlines the Tactics, Techniques and
Procedures (TTPs) for conducting mission assessment and for determining MOEs for stability
operations. The handbook details the necessity for assessment, clearly distinguishes between
task-based assessment (doing things right) and effect-based assessment (doing the right thing),
and delineates when MOPs and MOEs are appropriately applied. The handbook provides insight into the Army's
Stability Operations Assessment Process, including how task and effect-based assessment
support deficiency analysis. “The process of assessment and measuring effectiveness of military
operations, capacity building programs, and other actions is crucial in identifying and
implementing the necessary adjustments to meet intermediate and long term goals in stability
operations,” (Center for Army Lessons Learned 2010, 1). All such guidance is highly relevant to
27
the assessment process herein proposed for FMV analysis-based production, despite the Stability
Operation-specific assessment is beyond the scope of this research.
Greitzer’s 2005 Methodology, Metrics and Measures for Testing and Evaluation of
Intelligence Analysis Tools is a case study involving a practical framework for assessing
intelligence analysis tools based on almost-exclusively quantitative assessment. While the case
study’s framework did not align with this research’s mixed-method theoretical assessment
approach, the case study did identify several potentially replicable performance measures, both
qualitative (customer satisfaction, etc.) and quantitative (accuracy, etc.) all of which pertain
specifically to intelligence analysis. Such performance measures are pertinent to the discussion
of MOPs across any intelligence function and are especially pertinent to the analysis of FMV
analysis-based production. The case study also defined and distinguished between measurement
and metrics, definitions which were adopted for the FMV-analysis based production assessment.
The highly applicable discussion of quantitative and qualitative performance measures, as well
as of metrics, heavily influenced the theoretical framework of this research.
Bullock’s 2006 Theory of Effectiveness Measurement dissertation titularly appears
applicable to FMV-analysis based production effectiveness measurement, especially when
defining effectiveness measurement as “the difference, or conceptual distance from a given
system state to some reference system state (e.g. desired end-state);” however, the foundation of
Bullock’s effectiveness measurement was applied mathematics with all relationships between
variables explained as mathematically stated relational systems (Bullock 2006, iv-12). The
Bullock (2006) study approached the problem of effectiveness measurement quantitatively, a
method which may have applied to quantifiable FMV-analysis based production variables, but
which did not provide an applicable framework for unquantifiable (qualitative and quasi-
28
quantitative) variables.
Chapter Seven of Measures of Effectiveness for the Information-Age Army, “MOEs for
Stability and Support Operations” (Darilek et al. 2001) is a case study revolving around MOEs
for non-combat operations, a category applicable to intelligence support functions (when
assessed independently of the operation they are supporting). The case study theoretically
examined how Dominant Maneuver in Humanitarian Assistance might be assessed and how
MOE might be derived. The method of theoretical examination perfectly aligned with the
proposed theoretical examination of FMV-analysis based production MOE, although the
specifics of assessing Humanitarian Assistance were too disparate for the analytical framework
to be duplicated. As with the ISR mission, Darilek et al. (2001) emphasized the changing nature
of Assistance Operations, partially to highlight the necessity of assessment. Of all the reviewed
publications, Darilek et al. (2001) had the methodology which most closely applied to the FMV-
analysis based production mission set, most probably because it was the only theoretical case
study to focus on non-combat operations.
Evaluating the effectiveness of any intelligence analysis-related problem set presents
several methodological challenges (Greitzer and Allwein 2005, 1). Because intelligence
production cannot avoid the human elements of analyst preference and technique, assessing
analyst performance (was the product created correctly) and product effectiveness (did the
product meet its objective) is far more subjective than assessing traditional combat operations,
such as whether a projectile hit and incapacitated its intended target or whether or not a route was
successfully cleared. Coupled with the difficulties associated with valuing intelligence in a
strictly quantitative, i.e., numerical, fashion, the previously proposed MOP/MOE assessment
frameworks above cannot be wholly adopted. Rather, this unique problem set required the
development of a new model for determining MOP and MOE, one which examines the
relationship between the two as well as the effect of all variables when examined in the FMV
production operating environment. The model proposed does include effectiveness criteria
(variable factors) adapted from A Method for Measuring the Value of Scout/Reconnaissance (Veit
and Callero 1995) and a flow of assessment loosely based upon Chapter Seven of Measures of
Effectiveness for the Information-Age Army, "MOEs for Stability and Support Operations"
(Darilek et al. 2001).
The overarching hypothesis in this research, that MOP and MOE are identifiable,
measurable and can be utilized to evaluate FMV analysis-based products in the context of
circumstances defined by mission variables, is dependent on several sub-hypotheses:
• Sub-Hypothesis 1: At least one MOE and one MOP can be identified for DCGS FMV
analysis-based products
• Sub-Hypothesis 2: For each DCGS FMV production MOE and MOP, at least one
indicator can be defined based on variable and objective analysis
• Sub-Hypothesis 3: A discrete, measurable metric can be assigned to each identified
MOE and MOP
Figure 1 demonstrates the relationship between MOP, MOE, indicators and factor levels. The
outline below illustrates the theoretical framework via which the above hypotheses will be
tested; a minimal data-structure sketch follows the outline. Using a top-down approach, the research will:
1. Identify MOP and MOE: which independent variables impact product quality or
effectiveness and meet MOE/MOP criteria (repeatable, measurable, relevant and
responsive)?
2. Identify MOP and MOE factor levels: what factor levels are required to assess degree of
contribution to operational effectiveness?
3. Identify performance and effectiveness indicators: what items of information provide
insight into MOP and MOE analysis?
4. Identify MOP and MOE metrics: how will MOE and MOP be measured?
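To make the structure of this framework concrete, the following minimal Python sketch
(illustrative only; the class and field names are hypothetical and not drawn from doctrine)
models a single measure together with its metric, indicators, and factor levels:

    from dataclasses import dataclass, field

    @dataclass
    class Measure:
        # A single MOP or MOE with its metric, indicators, and factor levels.
        name: str                                         # e.g., "timeliness"
        kind: str                                         # "MOP" or "MOE"
        metric: str                                       # unit of measurement
        indicators: list = field(default_factory=list)    # information informing analysis
        factor_levels: list = field(default_factory=list) # ordered degrees of impact

    timeliness = Measure(
        name="timeliness",
        kind="MOE",
        metric="production time in minutes, tasking to dissemination",
        indicators=["product customization"],
        factor_levels=["very timely", "timely", "not timely"],
    )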
Figure 1: FMV Analysis-Based Production Mission Assessment
SECTION III
METHODOLOGY
In support of determining FMV analysis-based production MOP and MOE, this
case study's theoretical analysis will consider how multiple independent variables affect one
dependent effectiveness variable, a precedent set by multiple organizational performance studies
in which performance is defined as a dependent variable and the studies seek to determine
variables that produce variations in or effects on performance (March and Sutton 1998, 698). In
accordance with methodological precedent discussed in Case Studies and Theory Development
in the Social Sciences by George and Bennett (2005), the research objective focuses not on
outcomes of the dependent effectiveness variable, i.e., the impact of product effectiveness on
ISR operations, but on the importance and impact of independent variables, i.e., on how certain
predefined variables affect product effectiveness (George and Bennett 2005, 80-81).
Due to environmental constraints, including the highly sensitive and classified nature of
FMV production, this study is limited to secondary rather than primary research and theoretical
rather than practical analysis. Rather than testing the effects of mission variables via the observation of
production analysts or surveying the effectiveness of specific FMV product types, the primary
research objective in this case study is to explore and explain the relationship between multiple
independent variables and a single dependent effectiveness variable. Independent variables of
theoretical interest include timeliness, accuracy, and usability.
Research Design Overview
The research problem was addressed via both archival and analytic research strategies
(Buckley, Buckley, and Chiang 1976, 15). The archival strategy involved quantitative and
qualitative research methods whereby research was conducted in the secondary domain
exclusively through the collection and content analysis of publicly available data, primarily
journal articles and military publications. The archival research design heavily considers George
and Bennett's 2005 "structured-focused" research design method (George and Bennett 2005, 73-
75). The research objective, to derive FMV analysis-based production MOE and MOP, can best
be described as "Plausibility Probe" research, wherein the study explores an original theory and
hypotheses to determine whether more comprehensive, operational, primary research and testing
is warranted. Theory development, not operational testing, is the research design focus (George
and Bennett 2005, 73-75).
The lack of previously existing theories, hypotheses and research concerning niche ISR
mission effectiveness necessitated the modification of the initial Plausibility Probe research in
order to build upon and develop new operational variables and intelligence variable definitions
(George and Bennett 2005, 70-71). The uniqueness of the problem set additionally required the
use of internal logic. As previously discussed, the highly classified nature of intelligence
operations coupled with the originality of the niche problem set made observational research,
empirical research and source meta-analysis impossible. Because of information and personnel
access restrictions, hypothesis testing was necessarily relegated to theoretical exploration and the
analytical research strategy was required to satisfactorily answer the research question.
The analytic research method made use primarily of internal logic, both deductive and
inductive in nature. Deductive research involved broad source collection and content
analysis in order to determine all data relevant to the problem set and to draw original
conclusions from multiple disparate data sets. For example, extensive research was conducted on
both MOE and MOP determination methods and how the military conducts operational
assessment in order to determine method overlap and to develop refined assessment
methodologies where overlap failed to occur. Inductive research involved taking all relevant
aspects of the problem set and further collecting source material relevant to each subtopic in order
to comprehensively understand how each subtopic relates to the overarching problem set.
For example, once the independent variables were identified, each was extensively researched in
order to better define the variables within the context of the intelligence mission and to
determine any known relationships to the dependent variable, here product effectiveness. By
utilizing the analytic research strategy, the comparability of this research to previously existing
studies will be limited, especially with respect to variable analysis. The research design is further
outlined in the following sections: Collection Methodology, Specification of Operational
Variables, and Assessment Methodology.
Collection Methodology
The research collection methodology for this thesis involves a two-pronged content
analysis approach. Deriving a theory from a set of data and then testing it on the same data is
thought by some to be an illegitimate research practice (George and Bennett 2005, 76). Thus,
two sets of data will be reviewed: literature concerning FMV PED operations (theoretical testing
data) and literature discussing MOE and MOP evaluation case studies (methodological theory
data). Testing data will be collected almost exclusively from military publications, all obtained
either through the U.S. Air Force e-Publishing document repository, the U.S. Army Electronic
Publication repository, or the U.S. Department of Defense (DOD) Joint Electronic Library.
Because the American Public University System Digital Library yielded few results related to
this highly specialized research topic, theoretical data had to be collected from known academic
defense journals, primarily the Air University Press Air and Space Power Journal digital
archives, known defense research corporations, primarily RAND Corporation online research
publications, and the Defense Technical Information Center digital repository. Due to the
extremely limited amount of publicly available and relevant data in either of the above
categories, case selection bias as well as competing/contradictory analysis was naturally
mitigated.
Key to testing the stated hypothesis and sub-hypotheses is the identification and analysis
of operational variables which may affect product effectiveness (MOEs), effectiveness
indicators, those elements of the mission related to MOEs, MOPs, and quantitative/qualitative
metrics for both MOEs and MOPs; these tasks were accomplished through an in-depth content
analysis of the many Air Force doctrinal publications detailing Geospatial Intelligence
(GEOINT) and DCGS operations (training, evaluation, and mission execution). For those
operational variables identified, additional publications relevant to each variable were consulted
for further examination of related content.
The second content analysis approach includes a close review of previously published
theses, dissertations, journal articles and technical reports concerning the derivation of MOEs
and MOPs, with specific attention paid to those addressing analysis, intelligence analysis,
military operations and/or theoretical MOE/MOP assessments. All methodologically applicable
portions of previously conducted MOE/MOP case studies will be incorporated into the analysis
of the FMV analysis-based production mission subset.
Specification of Operational Variables
As previously mentioned, the single dependent variable in this case study will be DCGS
FMV-analysis based product effectiveness. The execution of the FMV PED mission involves
almost immeasurable and innumerable systemic, systematic, idiosyncratic, and communicative
variables, all of which could arguably impact mission effectiveness (Johnson 2003, 4). Given
that the micro-tactical nature of FMV production necessarily narrowed the scope of this research
and that research resources were limited to publicly available data, most variables will be held
constant and many more will remain unknown. This study will be limited to the examination of
only those variables which most greatly affect product effectiveness and analyst performance; all
other variables, such as leadership management style, communications and systems status, etc.,
will be held as fixed and non-impacting.
this thesis is not to identify every variable which may impact effectiveness, but rather to show
that any impacting variables (MOEs) can be identified at all. As described in the Research
Design Overview section above, this Plausibility Probe research is intended to initially address
the research question of whether or not MOE and MOP can be derived for FMV-analysis based
products; given the volume of DCGS FMV PED operational variables, the research is thus
designed to avoid any impact that negated (fixed/non-impacting) variables may have on the
validity of the analysis.
The research design further aims to isolate the impacts on product effectiveness by
assessing the influence of variance in independent variables. Variable factor levels were
developed for each independent variable as a sensitive method of describing variable variances
(George and Bennett 2005, 80-81).
Should this theoretical assessment framework ever be practically applied, classified
mission data, site-specific objectives, and local TTPs can be incorporated into the assessment
process to derive additional MOEs should those designated in this research fail to
comprehensively reflect requirements for product effectiveness.
Based on an in-depth review of available literature, those independent variables identified
as impacting DCGS FMV analysis-based product effectiveness (MOEs) include: product
timeliness, product accuracy, and product usability. Those independent variables identified as
reflective of analyst performance (MOPs) also include product timeliness and product accuracy.
The Analysis and Findings section of the thesis will revolve predominantly around an
examination of how those independent variables, or potential MOEs/MOPs, impact the overall
effectiveness of the product and reflect the performance of the production analysts.
Assessment Methodology
Following the data collection, all prior research critical to understanding DCGS
operations, along with research concerning intelligence performance variables, was analyzed. A
prerequisite to any effectiveness analysis is the identification of an objective against which to
measure effectiveness. The primary phase of analysis was thus an investigation into the objective
of DCGS FMV analysis-based production.
Subsequently, variable analysis commenced. Impact analysis of each independent
variable was not practically conducted; rather, the relationship between each independent
variable (MOE) and the dependent variable, product effectiveness, was theoretically examined. In
précis, the research examined how product timeliness, accuracy and usability affect DCGS
FMV-analysis based product effectiveness. A discussion of MOPs followed, focusing on the
relationship between product accuracy and product effectiveness. The hypothesis, that MOE for
FMV-analysis based production can be identified, was tested through the examined relationship
between the independent and dependent variables. To further refine and quantify/qualify each
MOE and MOP, variable factor levels were determined for each independent variable. For
example, for the independent variable (MOE) "accuracy," those factor levels would be: accurate
(100%), marginally accurate (98-99.9%), or inaccurate (<98%). In defining variable factor
levels, not only can the independent variables be assessed in terms of whether or not they impact
product effectiveness, but the degree of impact can be assessed as well.
Although this study is theoretical, one follow-on objective is the practical application of
the assessment framework. Such a practical application will require not only the identification of
MOE and MOP but also metrics and indicators for each identified MOE and MOP. Following
MOE identification and metric determination, a secondary analysis was conducted to determine
effectiveness and performance indicators for each MOE and MOP, i.e., whether a correlative
relationship could be demonstrated between certain mission execution elements and each
MOE/MOP. Lastly, because of the subjective nature of intelligence, should this research become
operationally incorporated, the case study examines potential biases, both of the production
analyst and of the intelligence consumer.
SECTION IV
ANALYSIS AND FINDINGS
Because traditional military assessment measures and operational MOEs are not effective
in accurately assessing the niche combat mission support element of FMV production, the
challenge is in determining what attributes make the intelligence provided by FMV analysis-
based products “actionable”. According to Joint Publication 2-0, Joint Intelligence, the
effectiveness of intelligence operations is assessed by devising metrics and indicators associated
with Attributes of Intelligence Excellence (U.S. Joint Chiefs of Staff 2013, I-22). It is here
proposed that so long as they meet the MOE and MOP criteria, MOEs, MOPs and MOE/MOP
indicators can be derived from the established list of Attributes of Intelligence Excellence:
anticipatory, timely, accurate, usable, complete, relevant, objective and available (U.S. Joint
Chiefs of Staff 2013, III-33). The focus of this research is FMV analysis-based production
MOEs; however, because aggregate analyst performance can impact the overall effectiveness of
the products, MOPs will also be discussed as they relate to MOEs.
Excessive numbers of MOEs or MOPs become unmanageable and risk the cost of
collection efforts outweighing the value and benefit of assessment (Center for Army Lessons
Learned 2010, 8). To maximize the assessment benefit, some attributes can be combined;
complete can be considered an accuracy indicator and relevant can be considered a usability
indicator. Anticipatory, objective and available each fail to meet MOE criteria. As discussed in
their individual sections below, of the eight Attributes of Intelligence Excellence, which can be
said to comprise FMV product success criteria, only timely, accurate (encompassing
completeness), and usable (encompassing relevance) succeed as effective MOE.
Concerning the anticipatory attribute, to be effective, intelligence must anticipate the
informational needs of the customer. Anticipating the intelligence needs of the customer requires
that the intelligence staff identify and fully understand the current tasked mission, the customer’s
intent, all relevant aspects of the operational environment and all possible friendly and adversary
courses of action. Most importantly, anticipation requires the aggressive involvement of
intelligence personnel in operational planning at the earliest possible time (U.S. Joint Chiefs of
Staff 2013, II-7). Anticipatory products certainly may be more actionable, and thus more
effective, than non-anticipatory products, and those FMV analysts responsible for production,
through training, experience, and an in-depth knowledge of the operational environment, are most
likely capable of providing anticipatory intelligence. Anticipatory intelligence, however, fails to
meet requirements for an FMV analysis-based production MOE. Because MOE must be
measurable to be effective, the first complication with the anticipatory attribute is in designating
a metric. Unlike timeliness, which can be measured in seconds, minutes or hours, or accuracy
which can be measured on a percentage scale of one to 100, determine a standardized unit of
measurement for the degree to which a product is anticipatory is not currently feasible.
Additionally, anticipatory, or predictive, intelligence is generally used to assess threat
capabilities and emphasize and identify the most dangerous adversary Course of Action (COA)
and the most likely adversary COA; such analysis is not typically conducted during near
real-time operations by the PED Crew, but by intelligence staff tasked to those determinations (U.S.
Department of the Army 2012, 2-2). Further complicating the issue of measurement is the issue
of replicability. Even if a quantitative or qualitative method of measuring anticipation was
devised, the same metric may not be applicable to the same product type under different mission
tasking circumstances. DCGS intelligence is derived from multiple ISR platforms, may be
analyzed by active duty, Air National Guard or Air Force Reserve forces and is provided to
COCOMs, Component Numbered Air Forces, national command authorities and any number of
their subordinates across the globe (U.S. Air Force ISR Agency 2014, 6). Depending on
exploitation requirements, analysts may be tasked to exploit FMV in support of several discrete
mission sets including situational awareness, high-value targeting, activity monitoring, etc. (U.S.
Joint Chiefs of Staff 2012, III-33). The nature of different customers and mission sets may
impact the degree to which anticipatory intelligence is even possible. Consider, for example, the
case of dynamic retasking, where a higher priority collection requirement arises and the tasked
asset is immediately retasked to support it (U.S. Department of the Air Force 2007, 15). If an asset is
dynamically retasked and products are immediately requested thereafter, the PED crew would be
operating under severely limited knowledge of the new OE and anticipating a new customer’s
production expectations beyond expressed product requirements would be very difficult if not
infeasible. The intent of this research is to identify FMV analysis-based product MOE which are
repeatable, measurable, relevant, responsive and standardized across product and mission types.
The attribute anticipatory is neither easily measurable nor repeatable given the number of
mission variables impacting the feasibility of analysts making anticipatory assessments. A well-
produced product disseminated in support of a dynamically retasked mission may be perfectly
effective in achieving its goal of actionable intelligence despite not being anticipatory. While it
may be more effective if it were anticipatory, the failure of that attribute to meet MOE criteria
disqualifies it as an appropriate MOE.
Due to the decisive and consequential impact of intelligence on operations and the
reliance of planning and operations decisions on intelligence, it is important for intelligence
analysts to maintain objectivity and independence in developing assessments. Intelligence, to the
greatest extent possible, must be vigilant in guarding against biases. In particular, intelligence
analysts should both recognize each adversary as unique, and realize the possible biases
associated with their assessment type (U.S. Joint Chiefs of Staff 2013, II-8). As with the
anticipatory attribute, the objective attribute presents problems with both measurability and
replicability. Also comparable to the anticipatory attribute, objectivity is highly important to an
effective product. Because not only each mission but each production analyst is unique, so too
are each analyst's assessment biases, any of which would undermine product objectivity.
Further complicating objectivity as a viable MOE is the difficulty in determining a standard metric
by which to measure objectiveness and a baseline against which to compare the objectivity of
products. Both anticipating the customer’s needs and remaining objective during analysis and
production should be trained, heavily reinforced and corrected prior to dissemination as required.
Biased analysis and production degrade product accuracy and effectiveness through the
misinterpretation, underestimation or overestimation of events, the adversary, or the
consequences of friendly force action. Objectivity does impact the effectiveness of DCGS FMV
analysis-based products; however, the inability of the objective attribute to be measured in a
standardized, replicable fashion disqualifies the attribute as a potential MOE.
ISR-derived information must be readily accessible to be effective, hence the
availability attribute. Availability is a function of not only timeliness and usability, discussed in
their respective sections below, but also the appropriate security classification, interoperability,
and connectivity (U.S. Joint Chiefs of Staff 2013, II-8). Naturally both ISR personnel and the
consumer should have the appropriate clearances to access and use the information (U.S.
Department of the Air Force 2007, 15). Product dissemination to the customer is necessarily
dependent on full network connectivity. While a product would be completely ineffective if it
were never received by the customer or if it were classified above what the customer could
access, throughout the theoretical exploration of this research problem full product availability
will be assumed. During the discussion of specific MOE below, network connectivity will be
held constant at fully functional and classification will be assumed to be the derivative
classification of the tasked mission, ensuring both analyst and customer access.
Measures of Effectiveness
Dismissing anticipatory, objective and available as attributes suitable to MOE selection,
the applicable attributes of timely, accurate, usable, complete, and relevant remain. It is against
those attribute criteria, either as MOE or as MOE indicators (see Figure 2), that the effectiveness
of DCGS FMV analysis-based products in this thesis will be measured.
Figure 2: Measures of Effectiveness
Timeliness
Commanders have always required timely intelligence to enable operations. Because
intelligence is a continuous process that directly supports the operations process, operations and
intelligence are inextricably linked (U.S. Department of the Army 2012, v). Timely intelligence
is essential to prevent surprise by the adversary, conduct defensive operations, seize the
initiative, and utilize forces effectively (U.S. Department of the Air Force 2007, 5). To be
effective, the commander’s decision and execution cycles must be consistently faster than the
adversary’s; thus, the success of operations to some extent hinges upon timely intelligence (U.S.
Joint Chiefs of Staff 2013, xv).
The responsive nature of Air Force ISR assets is an essential enabler of timeliness (U.S.
Department of the Air Force 2007, 5). In the current environment of rapidly improving
technologies, having the capability to provide timely information is vital (U.S. Department of the
Air Force 1999, 40). The dynamic nature of ISR can provide information to aid in a
commander's decision-making cycle and constantly improve the commander's view of the OE,
that is, if the products are received in time to plan and execute operations as required. Because
timeliness is so critical to intelligence effectiveness, as technology evolves, every effort should
be made to streamline and automate processes to reduce timelines from tasking through product
dissemination (U.S. Department of the Air Force 2007, 5).
Timeliness is vital to actionable intelligence; the question is, does 'late' intelligence
equate to ineffective intelligence? According to the concept of LTIOV (Latest Time Information
of Value), there comes a time when intelligence is no longer useful. LTIOV is the time by which
information must be delivered to the requestor in order to provide decision makers with timely
intelligence (U.S. Department of the Army 1994, Glossary-7). The implication is that
information received by the end user after the LTIOV is no longer of value, and thus, no longer
effective. To be effective at all, intelligence must be available when the commander requires it
(U.S. Joint Chiefs of Staff 2013, II-7).
All efforts must be made to ensure that the personnel and organizations that need access
to required intelligence will have it in a timely manner (U.S. Joint Chiefs of Staff 2013, II-7).
The DCGS is one intelligence dissemination system capable of providing near real-time
situational awareness and threat and target status (U.S. Department of the Air Force 1999, 40). In
the fast-paced world of near real-time intelligence assessment, production time is measured not
in days or hours, but in minutes. Consequently, the most appropriate metric for product
timeliness is the production time, from tasking to dissemination availability, in minutes.
The GA and MSA PED Crew positions are not only responsible for the creation and
dissemination of intelligence products, but also for ensuring their production meets the DCGS
Enterprise timeline (time limit) for that product (U.S. Department of the Air Force 2013a; 29).
For this discussion of timeliness as an MOE, it will be assumed that the established DCGS
Enterprise product timelines are congruent with product intelligence of value timelines, i.e., so
long as an analyst successfully produces a product within its timeline, that product will be
considered timely by the consumer. The DCGS Enterprise product timeline will serve as the
baseline against which products will be compared. In assessing the degree of product timeliness,
the three assigned factor levels, outlined in Figure 3, will be: 1. The product was completed early
(very timely); 2. The product was completed on time (timely); or 3. The product was completed
late (not timely). In this way, the effectiveness of a product may be at least partially assessed
based upon the number of minutes it took to produce the product, measured against the
established timeline for that type of product.
Figure 3: Timeliness Factor Levels
For intelligence to be relevant in ISR, the ISR product must be tailored to the
warfighter's needs (U.S. Department of the Air Force 1999, 10). The GA and MSA are both
trained to produce their respective products within applicable timelines; however, if the end user
should request a highly tailored product, it may push production time past the established
timeline. Product customization may then be considered an indicator for the timeliness MOE.
When assessing product effectiveness, it is not single products that determine
effectiveness, but the aggregate of products within a genre. For example, it may be determined
that 90 percent of Product A is produced within Factor Level 1, early. To elevate the analyst
baseline, the timeline may be adjusted to bring the remaining ten percent of products up to par
and increase product effectiveness through product timeliness. For assessment standardization
across the enterprise, the designation 'early' must be specified. Here, to be considered very
timely, the product must be completed 50 percent earlier than the maximum allotted production
time. For example, if the timeline is ten minutes, and a product is completed in five, that product
would be considered very timely. On the other hand, as ISR evolves and products are used to
support ever more dynamic operations, if product type B is habitually requested at a high level of
customization, resulting in 80 percent of products falling into Factor Level 3, not timely, the
DCGS may want to consider adjusting that product's timeline, refining the procedure for
producing that product, or introducing a more applicable product. Products the vast majority of
which fall into Factor Level 2, timely, may be considered effective, at least so far as reaching the
customer while the intelligence is still of value is concerned.
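As a minimal sketch of how the timeliness factor levels and the 50 percent 'very timely'
threshold described above might be computed (the function names and encoding are illustrative
assumptions, not established DCGS procedure):

    def timeliness_factor_level(production_minutes, timeline_minutes):
        # Level 1 (very timely): completed in half the allotted time or less.
        # Level 2 (timely): completed within the allotted time.
        # Level 3 (not timely): completed after the allotted time.
        if production_minutes <= 0.5 * timeline_minutes:
            return 1
        if production_minutes <= timeline_minutes:
            return 2
        return 3

    # Effectiveness is judged over the aggregate of a product type, not one product.
    times = [5, 6, 9, 12, 4]  # hypothetical production times for Product A, in minutes
    levels = [timeliness_factor_level(t, 10) for t in times]
    share_not_timely = levels.count(3) / len(levels)  # trend input for timeline review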
Accuracy
Intelligence production analysts should continuously strive to achieve the highest
possible level of accuracy in their products. The accuracy of an intelligence product is
paramount to the ability of that intelligence analyst and their parent unit to attain and maintain
credibility with intelligence consumers (U.S. Joint Chiefs of Staff 2013, II-6). To achieve the
highest standards of excellence, intelligence products must be factually correct, relay the
situation as it actually exists, and provide an understanding of the operational environment based
on the rational judgment of available information (U.S. Department of the Air Force 2007, 4).
Especially for geospatial intelligence products, as FMV analysis-based products are,
geopositional accuracy, suitable for geolocation or weapons employment, is a crucial product
requirement (U.S. Department of the Air Force 1999, 10). Intelligence accuracy additionally
involves the appropriate classification labels, proper grammar, and product completeness.
For a product to be complete, it must convey all of the necessary components (U.S.
Department of the Army 2012, 2-2). For example, if DCGS standardized product A requires the
inclusion of four different graphics and only three were produced, the product would be
incomplete and thus inaccurate. The necessary components of a product also include all of the
significant aspects of the operating environment that may close intelligence gaps or impact
mission accomplishment (U.S. Joint Chiefs of Staff 2013, II-8). Complete intelligence answers
all of the customer’s questions to the greatest extent possible. For example, if a product is
requested to confirm or deny the presence of rotary- and fixed-wing aircraft at an airport, and the
production analyst annotates and assesses the fixed-wing aircraft but neglects to annotate and
assess the rotary-wing aircraft, then the product is incomplete and thus inaccurate. Similarly, if a
production analyst fails to provide coordinates when required, the product would also be
incomplete and inaccurate.
Incompleteness may be the variety of inaccuracy that most impacts the effectiveness of
an intelligence product. While grammar errors, errors in coordinates and even errors in
classification, as critical as those inaccuracies may be, are most likely the result of analyst
mistakes or oversight, incomplete products may hint at an inability to produce the product, access
certain information or understand production requirements. For example, if a product required
analysts to obtain a certain type of imagery and analysts did not possess that skillset, one would
expect to see a corresponding trend in incomplete products. Similarly, if a standardized product
was modified and analysts were not difference-trained, one may expect to see a trend in analysts
failing to input the newly required information. In this way, product completeness can serve as an
indicator for the accuracy MOE. If a product is found to be incomplete, without looking any
further it is likely that the product will already be too inaccurate to be effective. If a product is
found to be complete, however, even with grammar errors, the product may still be found
effective, despite not being perfectly accurate.
When DCGS products are accounted for and quality controlled, they are measured
against a baseline checklist, which, when used, assigns the product an accuracy score on a scale
of zero to 100 percent (U.S. Air Force ISR Agency 2013e; 5). The metric for the accuracy MOE
will also be a percentage point system. The baseline checklist is designed to identify both major
and minor product errors. A major error is one that causes, or should have caused, the product to
be reissued. Major errors may include erroneous classification, incomplete and erroneous
information, inaccurate geolocation data, or improper distribution (U.S. Air Force ISR Agency
2013e; 5). The checklist has been designed with a numeric point system for each item evaluated
and is weighted so that any one major error will result in a failing product, as will an
accumulation of minor errors. Any product which has more than two percentage points deducted
will be considered a failing product (U.S. Air Force ISR Agency 2013e; 5). Accordingly, 98
percent is the standard against which the quality of intelligence products should be continuously
evaluated.
Using a scale of zero to 100 percent, with anything below 98 percent resulting in a
failing product, the accuracy MOE will be broken down into three different factor levels (Figure
4): 1. An accurate product with no errors, resulting in 100%; 2. A marginally accurate product
with only minor errors, resulting in 98-99.9%; or 3. An inaccurate product with major errors or
several minor errors, resulting in below 98%.
Figure 4: Accuracy Factor Levels
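The checklist logic above might be sketched as follows (a minimal illustration; the point
weights are assumptions chosen so that any major error, or an accumulation of minor errors,
deducts more than two points and fails the product):

    def accuracy_score(minor_errors, major_errors):
        # Each minor error deducts one point (assumed weight); any major error
        # deducts three points, guaranteeing a failing score below 98 percent.
        return max(0.0, 100.0 - minor_errors * 1.0 - major_errors * 3.0)

    def accuracy_factor_level(score):
        # Map a checklist score to the three accuracy factor levels.
        if score == 100.0:
            return 1  # accurate
        if score >= 98.0:
            return 2  # marginally accurate (minor errors only)
        return 3      # inaccurate (a major error or accumulated minor errors)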
Any passing product meeting the scores in Factor Level 1 or Factor Level 2 may be
considered effective, while any failing product, falling into Factor Level 3, may be considered
ineffective. As discussed in the Timeliness section above, the effectiveness assessment of a
product is not based on individual products, but on the aggregate of products of the same genre.
If, for example, Product A is habitually found to be inaccurate, as a product it would not be
considered effective. The problem may be that parts of the product are operationally outdated
and information required by standard production is no longer available. Or, in the case of a new
product, analysts may not have adequate training to satisfy product requests accurately.
Whatever the cause is determined to be, accuracy as a measure of effectiveness would provide
the commander and staff insight into when products need to be updated or amended, or when
analysts need further training in order for a product to be effective.
Usability
Usability is an assessment attribute that overlaps with all other intelligence product MOE
and MOE indicators; timeliness, product customization, accuracy, and completeness are all, at
least to some degree, prerequisites for a usable product. Beyond those attributes, usability may
be understood as interpretability, relevance, precision and confidence level; in that respect
usability may serve as an additional, discrete MOE.
Intelligence must be provided to the customer in a format suitable for immediate
comprehension (U.S. Joint Chiefs of Staff 2013, II-7). Products must be in the correct data file
specifications for both display and archiving (U.S. Department of the Army 2012, 2-1). Providing
usable products also requires production analysts to understand how to deliver the intelligence to
the customer in context. Customers operate under mission, operational, and time constraints that
shape their intelligence requirements and determine how much time they have to review the
intelligence products provided, especially considering the near real-time operational tempo of
ISR operations. Customers must be able to quickly apply the received intelligence and may not
have sufficient time to analyze complex intelligence products; therefore, to be usable, the product
must be easily interpreted (U.S. Joint Chiefs of Staff 2013, II-7). To be easily understandable, a
function of interpretability, product text should be direct and approved acronyms and brevity
terms should be used as much as possible (U.S. Joint Chiefs of Staff 2013, II-8). The GA and
MSA are both trained in reviewing their products to ensure they meet established reporting
guidance (U.S. Air Force ISR Agency 2013b; 29). It is probable that the DCGS Enterprise has
established reporting guidance in order to make products more easily interpretable, at least for
the standardization of data file types.
Intelligence must be relevant to the planning and execution of the operation at hand, and
aid the commander in the accomplishment of the mission (U.S. Joint Chiefs of Staff 2013, II-8).
To be effective, ISR products must be responsive to the commander’s or decision maker’s needs.
Intelligence products must enable strategic, operational, and tactical users to better understand
the operational environment systematically, spatially, and temporally, allowing them to orient
themselves to the current and predicted situation with the goal of enabling decisive action (U.S.
Department of the Air Force 2007, 4). Products must contribute to the customer’s understanding
of the adversary and other significant aspects of the OE, but also not burden the commander with
extraneous intelligence that is of minimal or no importance to the current mission. To produce
relevant intelligence, the PED Crew must remain cognizant of the customer’s intent and the
intelligence products must support and satisfy the customer’s requirements (U.S. Joint Chiefs of
Staff 2013, II-8). The customer’s requirements are presented to the PED Crew in the form of
Essential Elements of Information (EEIs). EEIs are the most critical information requirements
needed by the customer to assist in reaching a decision (U.S. Joint Chiefs of Staff 2013, I-7). For
the intelligence product to be relevant and thus effective, the product must satisfy the tasked
EEIs.
Related to relevance, intelligence products must provide the required level of detail and
complexity to answer the requirements; the products must be precise. Precise threat location,
tracking, and target capabilities are especially essential for success during mission execution
(U.S. Joint Chiefs of Staff 2013, I-25). Precision can be defined as the level of detail reported
and can be understood by categorizing the concept into three levels: detection, classification and
recognition (Veit and Callero 1995, 11). The detection level, where only the location of objects
can be discerned without discrimination, can be considered low precision. The classification
level, where discrimination between objects can be discerned, such as the difference between
tracked and wheeled vehicles, can be considered moderate precision. The recognition level, where
discrimination can occur between similar objects, such as between tracked vehicles, can be
considered highly precise (Veit and Callero 1995, 11). During practical application, the precision
levels can be amended to better apply to specific mission sets. If the objective of the product is to
confirm or deny the presence of a certain object at a certain location, a certain degree of
precision may be required. If the product is not precise enough, the product may not be useful to
the consumer, and thus ineffective.
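The three precision levels form an ordered scale, which might be encoded as follows (a
sketch under the assumption that a product must report at or above the required level to be
precise enough):

    from enum import IntEnum

    class Precision(IntEnum):
        # Ordered precision levels adapted from Veit and Callero (1995).
        DETECTION = 1       # low: objects located but not discriminated
        CLASSIFICATION = 2  # moderate: object classes discriminated
        RECOGNITION = 3     # high: similar objects discriminated

    def precise_enough(reported, required):
        # A product satisfies the requirement only at or above the required level.
        return reported >= required

    precise_enough(Precision.CLASSIFICATION, Precision.RECOGNITION)  # False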
Production analysts use confidence levels to distinguish between what is known and
supported by the facts of the situation and what are untested assumptions (U.S. Joint Chiefs of
Staff 2013, A-1). High confidence assessments are well-corroborated by information from proven
sources, include minimal assumptions, utilize strong logical inferences and methods, and contain
no or only minor intelligence gaps. Medium confidence assessments are partially corroborated by
information from good sources, include several assumptions, utilize a mix of strong and weak
inferences and methods, and contain minimal intelligence gaps. Low confidence assessments are
uncorroborated by information from good or marginal sources, include many assumptions,
utilize mostly weak logical inferences and minimal methods application, and contain glaring
intelligence gaps (U.S. Joint Chiefs of Staff 2013, A-1). A customer's determination of appropriate
actions may rest on knowing whether intelligence is fact or assumption, a knowledge conveyed
via the use of confidence levels (U.S. Joint Chiefs of Staff 2013, A-1). In this regard, low
confidence levels equate to data uncertainty (Greitzer 2005, 7). Depending on the operational
environment, that uncertainty may lead the intelligence consumer to disregard the provided
intelligence, undermining not only the overall usability of the product but consequently the
product’s effectiveness.
Of interpretability, relevance, precision and confidence level, both precision and
confidence level may serve as indicators for the usability MOE. The primary distinction
between the usability elements that can serve as indicators and those that cannot is whether or not
the element can be internally assessed. How relevant or how interpretable a product is can
ultimately be decided only by the customer that requested the product. The precision and
confidence of the intelligence provided in the product, however, can be readily assessed by
mission managers and quality controllers within the DCGS.
Given the negative impact of low confidence assessments on overall product usability,
products with consistently low confidence assessments may be indicative of undermined product
effectiveness due to a lack of analyst confidence and training or due to product incompatibility
with the assigned task. For example, if the purpose of Product C is to identify certain objects, and
Product C is disseminated primarily with low confidence ratings, that trend could indicate
inadequate analyst training in source corroboration and assessment methodologies, or the product
may be ill-suited to the task and new production procedures may be required. For the same
reasons an analyst could not make high confidence assessments, an analyst's assessment may
have been lacking in its level of detail; thus, low precision may also be indicative of product
ineffectiveness.
Because the ultimate determination of product usability must be subjectively made by
the end user, the metric for product usability can only be customer satisfaction, based
qualitatively on customer feedback. Feedback is the basis for learning, adaptation and subsequent
adjustment (U.S. Joint Chiefs of Staff 2011a, xxv). Feedback is a requirement for ensuring that
any actions taken are moving the situation in a favorable direction (Bullock 2006, 2). The Air
Force doctrinally supports the necessity of consumer feedback. As published in AFI 14-201, "Air
Force customers will provide timely response to production center requests for clarification,
negotiation and customer feedback," (U.S. Department of the Air Force 2002, 5). Air Force
production centers are also instructed to "measure customer (requirement) satisfaction," (U.S.
Department of the Air Force 2002, 5). Production centers are additionally tasked to participate in
operations-oriented, end-user/producer technical exchange meetings and working groups, as
applicable to the unit mission in order to ensure the relevance of products and to actively
collaborate with end users to better ensure responses meet intelligence needs (U.S. Department
of the Air Force 2012a, 20-21).
Feedback is critical to effectiveness assessment, and though the Air Force doctrinally
supports the provision of feedback by intelligence product consumers, the frequency, quality and
content of feedback provided to the DCGS PED Crew is unclear. While internally assessed
product confidence level and precision may contribute to usability estimates, the ultimate
determination of whether or not a product was effective must rest on customer satisfaction, and
customer satisfaction can only be measured through customer feedback.
Customer satisfaction with a type of product could be determined by way of a survey, grading
rubric, or open-ended comments. Whatever feedback mechanisms are utilized, to effectively
contribute to mission assessment they must be standardized. The amount of time a customer can
allocate to the provision of feedback when compared to mission essential tasks must be
considered, and yet, the importance of feedback to effective intelligence must not be
underestimated. As previously discussed, product effectiveness within the scope of this research
does not apply to the effectiveness of a single product, but of the aggregate of products of a
certain type. The research objective is not to determine whether all individual products are
effective, but rather to determine whether or not Product A is an effective type of intelligence
product. Thus, customer feedback is not required after the receipt of every product. Rather,
assuming repeat DCGS customers, only periodic feedback is needed to ensure that established
products remain effective across the evolution of the ISR mission and that newly introduced
DCGS products are in fact effective in providing actionable intelligence when and where it is
required.
The customer satisfaction metric can be divided into three factor levels (Figure 5): 1. The
product completely satisfied the requirements and the customer was highly satisfied; 2. The
product mostly satisfied the requirements and the customer was satisfied; or 3. The product did
not satisfy the requirements and the customer was dissatisfied.
Figure 5: Customer Satisfaction Factor Levels
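A standardized feedback rubric might map the customer's responses onto these factor levels as
sketched below (the 80 percent threshold is an illustrative assumption):

    def satisfaction_factor_level(share_of_requirements_met):
        # Map the share of tasked requirements (EEIs) the customer reports
        # satisfied onto the three customer satisfaction factor levels.
        if share_of_requirements_met >= 1.0:
            return 1  # completely satisfied / highly satisfied
        if share_of_requirements_met >= 0.8:
            return 2  # mostly satisfied / satisfied
        return 3      # requirements not satisfied / dissatisfied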
Measures of Performance
A predominant component of this research is the theoretical exploration of the DOD-
established Attributes of Intelligence Excellence towards the identification of FMV analysis-
based production MOE. Because product quality is inextricably linked to product effectiveness,
the identification of MOP is relevant to the product effectiveness discussion. For FMV
analysis-based production, MOP should provide insight into a production analyst's ability to
correctly perform the task of producing DCGS FMV intelligence products. In the niche
intelligence mission subset of FMV analysis and production, two of the Attributes of Intelligence
Excellence previously identified as MOE also serve as MOP: product timeliness and accuracy. It
is vital here, once again, to distinguish between task assessment and effectiveness assessment.
Though the metrics and factor levels for MOP are the same as two of the MOE attributes, the
original data input and findings differ; despite overlapping MOE and MOP, the respective
analyses are discrete. As Figure 6 demonstrates below, MOP involves the timeliness and
accuracy assessment of a single product in order to determine how well the analyst performed in
the production task. MOE involves the aggregate timeliness and accuracy assessment of all
products of a certain type in order to determine how effective the product itself is in delivering
actionable intelligence to the consumer.
Figure 6: Task and Effectiveness Assessment
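The task-versus-effectiveness distinction might be expressed computationally as follows (a
minimal sketch with hypothetical names: the MOP function scores one analyst's product, while
the MOE function aggregates over every product of a type):

    def mop_timely(product_minutes, timeline_minutes):
        # Task assessment: did this analyst meet the timeline on this product?
        return product_minutes <= timeline_minutes

    def moe_timeliness(all_product_minutes, timeline_minutes):
        # Effectiveness assessment: share of a product type delivered on time.
        on_time = sum(1 for m in all_product_minutes if m <= timeline_minutes)
        return on_time / len(all_product_minutes)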
Products falling into the 'not timely' factor level during effectiveness analysis should
cause the DCGS to consider adjusting that product's timeline, refining the procedure for
producing that product, or introducing a more applicable product. Conversely, during task
analysis, a single analyst who regularly fails to produce tasked products within the allotted time
does not reflect ineffective products, but rather poor analyst performance, a result with a
completely separate mitigation COA. While usability may also be utilized to assess analyst
performance, given consumer time constraints, feedback on every product is infeasible.
Consequently, no standardized, repeatable data set can be provided to assess performance
through product usability; therefore, usability does not meet all of the previously established
criteria to be considered an MOP. As shown in Figure 7 below, as a byproduct of the differences
between task performance and product effectiveness analysis, the timeliness and accuracy
performance indicators differ from their respective effectiveness indicators.
Figure 7: Measures of Performance
Proposed FMV analysis-based production MOP indicators include training grades,
qualification scores and examination scores. In order to participate in production functions
during an FMV mission, analysts must be qualified for a production position on the AN/GSQ-272
SENTINEL weapon system (U.S. Department of the Air Force 2015b, 4). In order to develop
and maintain a high state of readiness, mission managers ensure that all personnel performing
intelligence duties attain and maintain the qualifications and currencies needed to support their
unit's mission effectively and to the minimum standards required by each community (U.S. Department
of the Air Force 2015b, 4). Each analyst is additionally responsible for maintaining qualifications
and currencies associated with their assigned positions (U.S. Department of the Air Force 2015b,
9-11). When assessing performance measures and indicators, only qualified analysts should be
considered in the data set as unqualified analysts do not contribute to production metrics.
The performance and progress of DCGS trainees through each training item are
documented on an Intelligence Gradesheet, which provides a standardized depiction of how well
a trainee is progressing through individual training events. The Intelligence Gradesheet
additionally provides trainers and supervisors a means to improve trainee performance for
specific tasks through rapid written feedback on the trainee’s abilities during each event (U.S.
Department of the Air Force 2015b, 15). Through the course of training, how well a trainee
completes production tasks is extensively documented with predefined quantitative metrics. "Grade
1" is given when performance is correct, efficient, skillful, and without hesitation. "Grade 2"
is given when performance is essentially correct and the analyst recognizes and corrects errors.
"Grade 3" is given when performance indicates a lack of ability or knowledge (U.S. Department
of the Air Force 2015b, 23).
Training grades may serve as an MOP indicator because training, to some degree,
indicates future analytic and production performance. Documentation of Grade 1 on production
tasks correlates to a well-trained analyst with demonstrated product proficiency who can be
expected to perform accurate and timely production during mission execution. Documented
training grades of 2 or 3 may indicate analysts who struggle with production and who may
produce untimely, inaccurate or unusable products during mission execution.
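As a sketch of how training grades might feed an MOP indicator (the flagging rule, that a
majority of production-task grades at 2 or 3 warrants attention, is an illustrative assumption
rather than Air Force policy):

    def flag_production_risk(gradesheet):
        # gradesheet maps production task names to grades on the 1-3 scale.
        grades = list(gradesheet.values())
        return sum(1 for g in grades if g >= 2) > len(grades) / 2

    # Hypothetical trainee: strong on one task, weak on two others -> flagged.
    flag_production_risk({"annotate": 1, "build_product": 2, "disseminate": 3})  # True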
While every analyst responsible for production during live missions has successfully
passed the qualification test for their position and has demonstrated at least a baseline
proficiency in every mission category, qualification scores can still be useful as a second MOP
indicator. Qualifications in part measure how well analysts can apply their training to the
performance of mission tasks, including production. When an evaluation occurs, the same
grading criteria are used in evaluations as were used during the training process to ensure
continuity and that all personnel are trained to the minimum standards required to pass an
evaluation (U.S. Department of the Air Force 2015a, 23). Following an evaluation, an evaluatee
will receive one of three scores for every evaluation area; the evaluation area of greatest interest
to this research is the Intelligence Products area (U.S. Department of the Air Force 2015a, 70). A
score of “Q” means the individual is qualified. “Q” is the desired level of performance where the
individual demonstrated a satisfactory knowledge of all required information, performed
intelligence duties within prescribed tolerances and accomplished the assigned mission. A score
of “Q-” means the individual is Qualified with Discrepancies. “Q-” indicates that the individual
is qualified to perform the assigned task, but requires debriefing or additional training as
determined by the evaluator. A score of “U” means the individual is Unqualified (U.S.
Department of the Air Force 2015a, 23). Because only qualified individuals may produce DCGS
FMV analysis-based products, only the performance of qualified individuals is relevant to this
research; the “U” score will not be considered.
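A minimal sketch of how evaluation results might be screened follows; the record layout
and analyst names are illustrative assumptions rather than an established evaluation data
format:

```python
# A minimal sketch of screening evaluation scores as an MOP indicator.
# The record layout and analyst names are hypothetical.
evaluations = [
    {"analyst": "Analyst1", "area": "Intelligence Products", "score": "Q"},
    {"analyst": "Analyst2", "area": "Intelligence Products", "score": "Q-"},
    {"analyst": "Analyst3", "area": "Intelligence Products", "score": "U"},
]

# "U" individuals may not produce during missions, so they fall outside
# the assessment data set entirely.
qualified = [e for e in evaluations if e["score"] in ("Q", "Q-")]

# A "Q-" in the Intelligence Products area may indicate a propensity toward
# subpar production performance and a candidate for additional training.
watch_list = [e["analyst"] for e in qualified if e["score"] == "Q-"]
print("Debrief/additional training candidates:", watch_list)
```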
Evaluation scores may serve as MOP indicators because those individuals who receive a
“Q” score in the Intelligence Products evaluation area can reasonably be expected to reproduce
their evaluation results and produce products at the desired level of performance during a
mission. Similarly, a “Q-” in the Intelligence Products evaluation area may indicate a
propensity toward subpar performance during mission execution. A “Q-”
means that the evaluatee “created and disseminated/or submitted data, information and/or
intelligence products using the appropriate methods with minor deviations. Did not always meet
applicable timelines. Deviations, omissions, or discrepancies did not jeopardize mission
accomplishment, safety or security” (U.S. Department of the Air Force 2015a, 70).
In addition to being evaluated, the DCGS PED crew must also pass periodic written
intelligence examinations on knowledge requisites in order to maintain their currency.
Examinations typically comprise open or closed book written examinations for mission-
specific and Area of Responsibility (AOR) certifications (U.S. Department of the Air Force
2015b, 14). Intelligence examinations measure an analyst’s knowledge of procedures, threats and
other information deemed essential for effective intelligence operations. The minimum passing
grade for examinations is 85 percent (U.S. Department of the Air Force 2015b, 13-14). Analysts
failing to achieve the examination requirement of at least 85 percent will not be able to
participate in production activities during a mission and therefore fall outside the scope of this
research; however, analysts who receive scores of 85 to 90 percent only marginally possess the
requisite knowledge to perform their mission position duties. Poor and marginal testing
performance on examinations can often indicate areas requiring increased training emphasis
(U.S. Department of the Air Force 2015b, 14). Scores within the 85 to 90 percent range
may indicate analysts who will struggle with the performance of mission tasks.
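The marginal-knowledge band can be expressed as a simple screening rule; the following
sketch assumes the 85 percent passing floor and treats the 85 to 90 percent band as marginal,
with analyst names and scores invented for illustration:

```python
# A minimal sketch flagging marginal examination performance as an MOP
# indicator. The 85/90 thresholds follow the discussion above; names and
# scores are hypothetical.
MIN_PASSING = 85
MARGINAL_CEILING = 90

exam_scores = {"Analyst1": 97, "Analyst2": 86, "Analyst3": 84}

for analyst, score in exam_scores.items():
    if score < MIN_PASSING:
        status = "unqualified -- excluded from the production data set"
    elif score <= MARGINAL_CEILING:
        status = "marginal -- may struggle with mission tasks; emphasize training"
    else:
        status = "proficient"
    print(f"{analyst}: {score} percent -> {status}")
```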
As discussed in the Measures of Effectiveness subsection above, timeliness and accuracy
with respective metrics of minutes and percent meet all of the criteria to be effective MOE/MOP.
During effects assessment, timeliness and accuracy apply to product types, with trend and
pattern analysis focused on discrete genres of products. During task assessment, timeliness and
accuracy apply to individual products, with trend and pattern analysis focused instead on analysts
or PED crews/flights. Though timeliness and accuracy serve as both MOEs and MOPs when
conducting FMV analysis-based production mission assessment, indicators differ. Whereas
MOEs have unique product effectiveness indicators for timeliness and accuracy, the same
indicators for MOPs may be assigned to both timeliness and accuracy.
Theoretical Application
In this section, the proposed MOEs of timeliness, accuracy, and usability will be
theoretically applied to the DCGS FMV mission set in order to better demonstrate how they may
be used as assessment tools in the operational environment. This theoretical case study will
consider a Still Image product, here defined as an annotated snapshot of FMV imagery capable
of being disseminated to the end user in near real-time, when produced in support of the Pattern
of Life (POL) mission set, the analysis and geospatial referencing of a target or individual’s
day-to-day interactions with their environment (Mason, Foss, and Lam 2011, 6). In support of
POL mission subsets, the Still Image product may be used to graphically display significant
activity or objects of interest meeting EEI requirements or as requested by the customer. For
example, if POL analysis is being conducted on a Compound of Interest (COI) and one EEI is to
report any vehicle traffic, the arrival of a sedan outside the COI would be deemed significant
activity. The PED Crew would create an annotated Still Image of the vehicle detailing its
appearance, location relative to the COI, any interaction between the vehicle/driver/passenger and
the COI, etc., and then immediately disseminate the product to the customer so that the end users
may determine if, based on that significant information, new COAs or FMV requirements need
to be devised. For Still Images produced in support of POL missions to be considered actionable
intelligence, the goal of DCGS production, they must enable the customer/end user to make COA
decisions.
To determine whether Still Imagery products are effective in POL missions, product data
must be collected and assessed. How, and in what context, the product data is assessed is critical
to the assessment outcome. That being said, it is the proposed MOEs that provide an assessment
framework for determining whether or not Still Imagery products are effective in satisfying certain
EEIs during POL missions. For this case study, it will be assumed that over the course of mission
assessment, no negative performance indicators were identified and all PED Crew production
analysts, through MOP analysis, were found to perform at or above standard production
performance levels. Holding MOP constant, for the Still Image product to be baseline effective,
it must then only be timely, accurate and usable.
Product timelines (time limits) are set by the DCGS Enterprise and each product has a
single timeline regardless of tasked mission set. As one of the simpler DCGS products, the Still
Image will have an assigned timeline of “X” minutes (versus hours). If Still Image products are
completed within “X” minutes and reach the customer/end user while still of value, then those
products were not ineffective. However, if Still Image products are habitually disseminated after
“X” minutes, then the product should be considered ineffective and either the product will need to
be amended, analysts will require additional training, or both. Similarly, the product may be
ineffective if the product timeline does not fit the situation. If most Still Images produced in
support of POL missions are of vehicles, and vehicles usually arrive and depart compounds
within “Y” minutes, where Y<X, then even products disseminated within “X” minutes are too
late to be actionable; that is, if the products habitually arrive after the vehicles
of interest have already departed the COIs, then the intelligence is no longer actionable, as
customers do not have the option of redirecting the FMV asset from the COI to follow the
vehicle should they determine, based on the graphic, that tracking the vehicle is a higher priority.
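The situational-timeliness logic above can be summarized in a short sketch; the timeline
“X,” the activity window “Y,” and the dissemination times are purely illustrative values, not
DCGS Enterprise timelines:

```python
# A minimal sketch of the timeliness logic described above: a product may
# meet its Enterprise timeline X yet still be situationally late when the
# activity window Y is shorter than X. All values are illustrative.
TIMELINE_X_MIN = 10.0        # assumed Still Image timeline, in minutes
ACTIVITY_WINDOW_Y_MIN = 6.0  # assumed typical vehicle dwell time at a COI

def assess_timeliness(dissemination_min: float) -> str:
    if dissemination_min > TIMELINE_X_MIN:
        return "ineffective: missed the assigned timeline"
    if dissemination_min > ACTIVITY_WINDOW_Y_MIN:
        return "on time but situationally late: vehicle likely departed"
    return "timely and actionable"

for minutes in (5.0, 8.0, 12.0):
    print(f"disseminated in {minutes} min -> {assess_timeliness(minutes)}")
```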
In addition to products, information is passed in real time to the customer. Usually retasking
decisions can be made instantaneously; however, graphic products allow customers to view data
in a more easily comprehensible, often more in-depth manner, and so situations may exist
where products are the deciding factor in customer decision making. If customer feedback
reveals that on-time products are still deemed situationally late, then new, more quickly
produced products may need to be introduced, and analysts would need to be trained to default to those
new products in certain situations. For example, when conducting POL and significant vehicle
activity is assessed, analysts may be trained to produce a “clean,” unannotated Still Image
instead of the DCGS standard Still Image to ensure the product is timely, and thus, effective.
Without assessing product timeliness within the context of MOE, however, such analyses may
not be operationally conducted. Establishing timeliness as an MOE directs mission managers to
collect the appropriate product data, here whether or not products are meeting their timelines,
and to assess that data, along with timeliness-focused customer feedback, to ensure that DCGS
Enterprise-directed timelines are appropriate for the tasked OE.
As with timeliness, establishing accuracy as an FMV analysis-based production MOE
directs both the collection of product data and its assessment to ensure that products are meeting
their objectives. DCGS site product accuracy data collection is dictated by Air Force ISR
Agency Instruction 14-121, Geospatial Intelligence Exploitation and Dissemination Quality
Control Program Management. Quality Control requires the evaluation of FMV analysis-based
product accuracy on a percentage scale. Beyond sending quarterly reports to higher headquarters,
how each site utilizes the collected accuracy data is up to that site (U.S. Air Force ISR Agency
2013e, 4). With accuracy as an established product MOE, mission managers will be expected to
at least conduct product accuracy trend analysis by error type, product type and PED Crew. For
example, if Still Images in support of POL missions are habitually missing georeference
information across PED Crews, additional training may be required, the product template may
need to be updated to call attention to the required field, or both. If only one PED Crew is
neglecting the georeference input field, then mission manager time and effort may be spared in
template updates and full training sessions by only conducting continuation training with the
deficient crew. If incorrect or misleading intelligence is acted upon, the negative repercussions
could be quite serious. Thus, for intelligence to be effective, it must be accurate, and that
includes even the simplest of micro-tactical FMV products. Designating accuracy as an MOE is
critical to directing product assessment towards increased product accuracy and efficiently
utilized mission management resources.
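Accuracy trend analysis of the kind described above might be sketched as follows; the
quality control records, error categories, and crew names are hypothetical and do not reflect
AFISRA quality control formats:

```python
# A minimal sketch of product accuracy trend analysis by error type,
# product type, and PED Crew. Records and category names are illustrative.
from collections import Counter

qc_errors = [  # (product type, error type, crew) -- hypothetical records
    ("still_image", "missing_georeference", "Crew1"),
    ("still_image", "missing_georeference", "Crew1"),
    ("still_image", "annotation_error", "Crew2"),
    ("storyboard", "missing_georeference", "Crew1"),
]

trend_dimensions = {
    "error_type": Counter(error for _, error, _ in qc_errors),
    "product_type": Counter(product for product, _, _ in qc_errors),
    "crew": Counter(crew for _, _, crew in qc_errors),
}

for dimension, counts in trend_dimensions.items():
    print(dimension, dict(counts))

# If one crew dominates an error type, targeted continuation training may
# suffice; if the error spans crews, the product template itself may need
# updating.
```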
Above all, the customer/end user must find the product usable. Even a perfectly accurate,
on-time product is ineffective if the customer does not utilize it, not least because it wastes
analyst and PED Crew efforts which could be better directed to activities the customer does find
useful. For example, periodic customer feedback may reveal low usability ratings on Still
Imagery when disseminated in support of POL missions despite high timeliness and accuracy
ratings. Further discussions with the customer may expose the customer’s exclusive reliance on
real-time, PED Crew-disseminated intelligence. The customer may make all POL mission
decisions in real time prior to receiving even the timeliest of Still Image products. Whereas the
customer may find Still Images invaluable to conducting post-strike Battle Damage Assessments,
the nature of the POL mission may negate the value of Still Images altogether. In this
hypothetical case, PED Crew directives may be amended from automatically producing Still
Image products when significant activity is observed during POL missions to only producing
Still Image products when requested by the customer, thereby more efficiently executing the
DCGS FMV mission. Without establishing usability as an MOE, however, established customer
feedback requirements may not facilitate such back-and-forth between mission managers and the
customer as in the above example, and mission inefficiencies caused by ineffective products
may never be recognized.
As discussed in the Conclusion below, MOEs themselves do not comprise mission
assessment. MOEs are assessment tools. In identifying MOEs, an assessment framework is
provided to focus data collection efforts and analysis in a way that can efficiently determine
whether products are meeting their objective of providing actionable intelligence.
SECTION V
CONCLUSION
Well-devised MOEs and MOPs are critical to helping commanders and their staffs
understand the links between tasks, end states, and lines of effort (Center for Army Lessons
Learned 2010, 8). The research presented in this paper theoretically establishes that both MOEs
and MOPs can be identified for DCGS FMV analysis-based production. MOEs and MOPs
contribute to mission assessment, but they do not themselves comprise the assessment.
Supporting data must be collected and organized into presentable formats for trend and pattern
analysis (Center for Army Lessons Learned 2010, 9). MOE analysis involves product type trend
analysis in order to determine if the products made a difference in the OE and if the product
objective, the provision of actionable intelligence, was met. MOP analysis involves individual
and PED crew/flight trend analysis in order to identify performance deficiencies.
Effective mission assessment will result either in findings supporting the continuation of current
operations, actions, programs and resource allocations, or in findings which require reprioritization,
redirection or adjustments to increase goal attainment and enhance success (Center for Army
Lessons Learned 2010, 9).
Data collection supporting mission assessment is doctrinally supported for DCGS
operations, with the exception of customer satisfaction, for which, based on publicly available
information, the establishment and standardization of DCGS feedback mechanisms remain
unclear. All DCGS units are already required to send a monthly product Quality Control Status and Trends
Analysis Report to higher headquarters; that quality control mandatorily includes the evaluation
of product accuracy and timeliness (U.S. Air Force ISR Agency 2013b, 4-6).
Individual site support staffs are primarily responsible for mission assessment. Each
DCGS node’s Department of Operations Mission Management is charged with the establishment
and maintenance of mission statistics and the trends analysis program (U.S. Air Force ISR
Agency 2013d, 30). Mission Management monitors program effectiveness and provides
feedback to all levels in order to improve both the quality control process and the overall
production efforts of all analysts (U.S. Air Force ISR Agency 2013b, 7). They publish trends for
feedback to intelligence personnel at least quarterly, initiate actions to determine the root cause
of any negative trends and provide quarterly trends to the squadron commander (U.S. Air Force ISR
Agency 2013d, 30). They are responsible for ensuring individuals are properly trained and
equipped to meet their mission requirements by emphasizing a stringent, multi-layered pre- and
post-release quality control process for FMV analysis-based products (U.S. Air Force ISR
Agency 2013b, 4). Most specifically, Mission Management is tasked with the identification and
analysis of trends by types of errors, flights or work centers (U.S. Air Force ISR Agency 2013b,
7). Each DCGS node’s Commander’s Standardization/Evaluation Program is responsible for
trend analysis data. Standardization/Evaluation is tasked with conducting trend analysis at least
annually, including trend identification for unit evaluation and examination results (U.S.
Department of the Air Force 2015a, 10). Each DCGS node’s Department of Operations Training
is tasked with monitoring training programs to ensure training objectives are met and with convening
a semi-annual Training Review Panel in order to discuss training progression, changes to
requirements and any positive or negative trends (U.S. Air Force ISR Agency 2013c, 10-11). At
the higher headquarters level, the 480th ISR Wing, responsible for operating the AF DCGS
Enterprise, monitors, tracks, and provides feedback on products and other analyses, including the
quality and timeliness of FMV products and analyses (U.S. Department of the Air Force 2012a,
14).
The requirement for assessment, mission monitoring and trend analysis, to include roles
and responsibilities, is doctrinally addressed for the DCGS Enterprise. It is the responsibility of
the commander to ensure that those responsible for conducting assessment are adequately
resourced, as they must be in order to be effective (U.S. Joint Chiefs of Staff 2011b, D-9). Unit
commanders must detail the procedures for mission crews to create, collect, compile and forward
the mission data (timeliness, accuracy and usability metrics) necessary for the unit commander,
Director of Operations, and mission backshops (support staffs) to assess mission effectiveness
and report to other organizations as necessary (U.S. Air Force ISR Agency 2013d, 39). The
commander and staffs should ensure that resource requirements for data collection efforts and
analysis are built into the mission strategy and monitored to ensure compliance (U.S. Joint Chiefs
of Staff 2011b, D-9). Collecting mission data and recognizing which Attributes of Intelligence
Excellence may serve as MOE and MOP is not enough to conduct mission assessment.
MOE and MOP trends and patterns must be derived from the collected mission metrics,
and when analyst production performance or product effectiveness goals and objectives are not
being met, deficiency analysis must be conducted (Heuer 1999, 161). Deficiency analysis is used
when the task assessment and effects assessment indicate failures, whether in analysts not producing
things right (MOP) or in products not achieving the right things (MOE) (Heuer 1999, 161). For mission
assessment to be accurate, however, data collection bias must be mitigated to the greatest extent
possible.
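The routing of assessment findings into deficiency analysis can be stated compactly; the
following sketch simply encodes the MOP/MOE distinction above, with boolean inputs standing
in for completed task and effects assessments:

```python
# A minimal sketch of routing task and effects assessment outcomes into
# deficiency analysis, per the MOP/MOE distinction above. Inputs are
# simplified booleans for illustration.
def deficiency_analysis(task_standard_met: bool, effects_objective_met: bool) -> list:
    """Return the deficiency analyses warranted by the assessment outcomes."""
    findings = []
    if not task_standard_met:
        findings.append("MOP deficiency: analysts are not producing things right")
    if not effects_objective_met:
        findings.append("MOE deficiency: products are not achieving the right things")
    return findings or ["continue current operations and resource allocations"]

print(deficiency_analysis(task_standard_met=True, effects_objective_met=False))
```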
Bias is one of the major sources of analytical failure and may greatly affect
intelligence analysts and the subsequent intelligence product, in part because it is difficult to
recognize bias when it occurs (Achananuparp et al. 2005, 30). Research into the psychology of
intelligence analysis conducted by the Central Intelligence Agency has yielded several
concerning conclusions that may impact the validity of mission data. The analyst, the consumer, and the
mission assessor, that is, the one who evaluates analytical performance, all exercise hindsight. They take
their current state of knowledge and compare it with what they or others did, or could, or should
have known before the current knowledge was received (Heuer 1999, 161). In general, analysts
normally overestimate the accuracy of their past judgments and intelligence consumers normally
underestimate how much they learned from intelligence reports. When consumers of intelligence
evaluate the quality of the intelligence product, they assess how much they learned from the
product that they did not previously know. During this assessment, there is a consistent tendency
for most consumers to underestimate the contribution made by new information and, thus, they
undervalue the intelligence product (Heuer 1999, 161). Unfortunately, such biases cannot easily
be overcome. To even attempt to mitigate evaluation biases, both analysts and consumers should
be educated on bias tendencies and actively consider their objectivity when assessing their own
performance or the effectiveness of a product.
To meet the DCGS Enterprise objective of delivering actionable intelligence to
consumers, intelligence products must be timely, accurate and usable. Especially as the ISR
mission continues to evolve, assessing the effectiveness of products will be vital to informing the
commander’s decisions concerning whether to maintain or change the existing production
strategy. As discussed in this research, the application of MOEs in mission analysis allows for
production effectiveness assessments, and both MOEs and MOPs have been demonstrably
identified for the DCGS Enterprise FMV analysis-based production mission.
The research and theoretical process for FMV analysis-based production MOP and MOE
explored in this paper may be readily extended beyond the FMV mission subset to other
DCGS Enterprise mission functions. This research may further prove useful as a
framework for other operationally focused and product-oriented sections of the Intelligence
Community, especially GEOINT production centers. In customizing the assessment process to
best align with their mission set, units should consider whether or not to apply a weighted
approach to the assessment attributes.
In this research, usability held elevated importance over timeliness and accuracy, which
were not ranked in order of importance; however, units may find that their mission set
emphasizes the importance of one attribute over the others. Units may also find that mission
assessment should be conducted differently for different mission sets. FMV is a subset of
GEOINT, a mission which enables area familiarization and monitoring, feasibility assessments,
target development, weapons effectiveness, collateral damage estimates, the formulation of
courses of action and estimated consequences of execution as well as strategic, operational and
tactical planning for combat, peacekeeping, and humanitarian assistance/disaster relief missions
(U.S. Department of the Air Force 2012a, 3-4). Recognizing and balancing the subtle
differences relative to timeliness and accuracy is one of the critical art forms for good
intelligence (U.S. Joint Chiefs of Staff 2013, II-7). Less time-sensitive missions, such as target
development, may naturally hold accuracy above timeliness in overall product effectiveness. In
time-sensitive missions, such as targeting, personnel recovery or threat warnings, dissemination
immediacy is critical for decision makers (U.S. Joint Chiefs of Staff 2013, III-33). Usually, the
need to balance timeliness and accuracy should favor timeliness; any errors in products could
and should be followed up on later (U.S. Joint Chiefs of Staff 2013, II-7). Each unit would need
to decide for itself whether or not to consider weighted MOE or MOP in their individual
assessment frameworks, and if they choose to weight the attributes, guidance should be locally
published to ensure assessment continuity.
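One way a unit might implement locally published weighting guidance is sketched below;
the attribute weights and ratings are invented for illustration and would in practice be set by
the unit’s own guidance:

```python
# A minimal sketch of a locally weighted effectiveness score, assuming a
# unit elects to weight the assessment attributes. Weights and ratings
# are illustrative.
def weighted_effectiveness(ratings: dict, weights: dict) -> float:
    """Combine per-attribute ratings (0-1) using unit-defined weights."""
    total_weight = sum(weights.values())
    return sum(ratings[attr] * w for attr, w in weights.items()) / total_weight

# A target development unit might weight accuracy above timeliness, whereas
# a time-sensitive targeting unit would invert the two.
weights = {"timeliness": 0.3, "accuracy": 0.5, "usability": 0.2}
ratings = {"timeliness": 0.9, "accuracy": 0.7, "usability": 1.0}
print(f"weighted effectiveness: {weighted_effectiveness(ratings, weights):.2f}")
```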
This research is, as previously discussed, a theoretical exploration of how product
effectiveness may be assessed based on established DOD Attributes of Intelligence Excellence.
The research goal was not to comprehensively address every possible MOE or MOP for assessing
DCGS FMV analysis-based production, but rather to demonstrate that MOEs and MOPs can be
identified at all and to present several potential options for both. Should this research be
practically applied, each DCGS node or intelligence production center should consider its OE
and mission specifics to determine the addition or subtraction of MOE and MOP as relevant.
References
Achananuparp, Palakorn, Park, S. Joon, Proctor, Jason M., Wania, Christine and Zhou, Nan.
2005. An Analysis of Tools that Support the Cognitive Processes of Intelligence
Analysis. Pennsylvania State University.
Air University Press. “Air and Space Power Journal.” Accessed June 25, 2015.
http://www.airpower.maxwell.af.mil/.
Alberts, David S., Garstka, John J., and Stein, Frederick P. 2000. Developing and leveraging
information superiority. Washington, D.C.: C4ISR Cooperative Research Program.
Brown, Jason M. 2009. "Operating the Distributed Common Ground System: A Look at the
Human Factor in Net-Centric Operations." Air & Space Power Journal 23, no. 4: 51-
57. International Security & Counter Terrorism Reference Center, EBSCOhost (accessed
May 16, 2015).
Buckley, John, Buckley, Marlene, and Chiang, Hung-Fu. 1976. Research methodology and
business decisions. New York: National Association of Accountants.
Bullock, Richard K. 2006. “Theory of effectiveness measurement.” Ph.D. diss., Air Force
Institute of Technology.
Center for Army Lessons Learned. 2010. Assessments and Measures of Effectiveness in Stability
Operations tactics, techniques, and procedures. Handbook 10-41. Washington, D.C.
Darilek, Richard E., Walter L. Perry, Jerome Bracken, John Gordon and Brian Nichiporuk. 2001.
MOEs for stability and support operations. In Measures of effectiveness for the
Information-Age Army. Santa Monica: RAND Corporation.
George, Alexander L. and Bennett, Andrew. 2005. Case studies and theory development in the
social sciences. Cambridge: Massachusetts Institute of Technology Press.
Green, John M. 2001. Establishing system measures of effectiveness. San Diego: Raytheon
Systems Corporation.
Greitzer, Frank L., and Allwein, Kelcy. 2005. Metrics and measures for intelligence analysis
task difficulty. In Panel Session, 2005 International Conference on Intelligence Analysis
Methods and Tools. Vienna, VA.
Greitzer, Frank L. 2005. "Methodology, metrics, and measures for testing and evaluation of
intelligence analysis tools.” Richland: Battelle Memorial Institute Pacific Northwest
Division.
Heuer, Richards J. 1999. Hindsight biases in evaluation of intelligence reporting. In Psychology
of intelligence analysis. Washington, D.C.: Center for the Study of Intelligence.
Johnson, Rob. 2003. Developing a taxonomy of intelligence analysis variables. Central
Intelligence Agency Center for the Study of Intelligence. Washington, D.C.
Johnson, R., McCormick, R. D., Prus, J. S., and Rogers, J. S. 1993. Assessment options for the
college major. In Making a difference: Outcomes of a decade of assessment in higher
education. San Francisco: Jossey-Bass.
Kimminau, Jon. 2012. “A Culminating Point for Air Force Intelligence, Surveillance and
Reconnaissance." Air & Space Power Journal 26, no. 6: 113-129.
Langley, John K. 2012. “Occupational burnout and retention of Air Force Distributed Common
Ground System (DCGS).” Ph.D. diss., Pardee RAND Graduate School.
March, James and Sutton, Robert. 1997. “Organizational Performance as a Dependent Variable.”
Organization Science 8, no. 6 (November-December). Accessed May 3, 2015.
http://www.wiggo.com/mgmt8510/readings/readings3/march1997orgsci.pdf.

Mason, Tony, Foss, Suzanne, and Lam, Vinh. 2011. Using ArcGIS for intelligence analysis. Esri
Federal User Conference.
RAND Corporation. “Research.” Accessed June 25, 2015. http://www.rand.org/research.html.
Report of the DNI Authorities Hearing before the Senate Select Committee on Intelligence.
2008. By J. Michael McConnell, Director of National Intelligence. Washington, D.C.:
Government Printing Office.
Sink, D. Scott. 1991. The role of measurement in achieving world class quality and productivity
management. Industrial Engineering 23, no. 6: 23-28.
Smith, Neill, and Clark, Thea. 2006. "A framework to model and measure system
effectiveness." Australian Government Department of Defence. 11th ICCRTS.
Sproles, Noel. 1997. Identifying success through measures. PHALANX 30, no. 4: 16-31.
Veit, Clairice T., and Monti D. Callero. 1995. “A method for measuring the value of
scout/reconnaissance.” Santa Monica: RAND Arroyo Center. No. RAND-MR-476-A.
U.S. Air Force ISR Agency. 2013. Air Force Distributed Common Ground System (AF DCGS)
Evaluation Criteria. AFISRA Instruction 14-153 Volume 2. Washington, D.C.

U.S. Air Force ISR Agency. 2013. Air Force Distributed Common Ground System (AF DCGS)
Intelligence Training. AFISRA Instruction 14-153 Volume 1 480th ISR Wing Supplement.
Washington, D.C.

U.S. Air Force ISR Agency. 2013. Air Force Distributed Common Ground System (AF DCGS)
Intelligence Training. AFISRA Instruction 14-153 Volume 1. Washington, D.C.

U.S. Air Force ISR Agency. 2013. Air Force Distributed Common Ground System (AF DCGS)
Operations Procedures. AFISRA Instruction 14-153 Volume 3. Washington, D.C.

U.S. Air Force ISR Agency. 2013. Geospatial Intelligence Exploitation and Dissemination
Quality Control Program Management. AFISRA Instruction 14-121 480th ISR Wing
Supplement. Washington, D.C.

U.S. Air Force ISR Agency. 2014. Air Force Distributed Common Ground System (AF DCGS)
Operations Procedures. AFISRA Instruction 14-153 Volume 3 480th ISR Wing
Supplement. Washington, D.C.
U.S. Air Force. “Air Force Distributed Common Ground System.” Accessed June 27, 2015.
http://www.af.mil/AboutUs/FactSheets/Display/tabid/224/Article/104525/air-force-
distributed-common-ground-system.aspx.

U.S. Department of Defense. “Defense Technical Information Center.” Accessed June 25, 2015.
http://www.dtic.mil/dtic/.

U.S. Department of Defense. “Joint Electronic Library.” Accessed June 25, 2015.
http://www.dtic.mil/doctrine/.
U.S. Department of the Air Force. 1997. Sentinel Force 2000+. Air Force Intelligence Force
Development Guide. Washington, D.C.

U.S. Department of the Air Force. 1999. Intelligence, Surveillance and Reconnaissance
Operations. Air Force Doctrine Document 2-5.2. Washington, D.C.

U.S. Department of the Air Force. 2002. Intelligence Production and Applications. Air Force
Instruction 14-201. Washington, D.C.

U.S. Department of the Air Force. 2004. Intelligence, Surveillance, and Reconnaissance (ISR)
Planning, Resources, and Operations. Air Force Policy Directive 14-1. Washington, D.C.

U.S. Department of the Air Force. 2007. Intelligence, Surveillance and Reconnaissance
Operations. Air Force Doctrine Document 2-9. Washington, D.C.

U.S. Department of the Air Force. 2012. Geospatial-Intelligence (GEOINT). Air Force
Instruction 14-132. Washington, D.C.

U.S. Department of the Air Force. 2012. Global Integrated Intelligence, Surveillance, &
Reconnaissance Operations. Air Force Doctrine Document 2-0. Washington, D.C.

U.S. Department of the Air Force. 2013. Air Force ISR 2023: Delivering decision advantage.
Washington, D.C.

U.S. Department of the Air Force. 2013. Intelligence Training. Air Force Instruction 14-202
Volume 1 AFISRA Supplement. Washington, D.C.

U.S. Department of the Air Force. 2014. Intelligence Standardization/Evaluation Program. Air
Force Instruction 14-202 Volume 2 480th ISR Wing Supplement. Washington, D.C.

U.S. Department of the Air Force. 2015. Intelligence Standardization/Evaluation Program. Air
Force Instruction 14-202 Volume 2. Washington, D.C.

U.S. Department of the Air Force. 2015. Intelligence Training. Air Force Instruction 14-202
Volume 1. Washington, D.C.

U.S. Department of the Air Force. “Air Force e-Publishing.” Accessed June 25, 2015.
http://www.e-publishing.af.mil/.

U.S. Department of the Army. “Army Electronic Publications.” Accessed June 25, 2015.
http://armypubs.army.mil/.
U.S. Department of the Army. 1994. Intelligence Preparation of the Battlefield. Field Manual 34-
130. Washington, D.C.

U.S. Department of the Army. 2012. Intelligence. Army Doctrine and Training Publication 2-0.
Washington, D.C.

U.S. Joint Chiefs of Staff. 2010. Department of Defense Dictionary of Military and Associated
Terms. Joint Publication 1-02. Washington, D.C.

U.S. Joint Chiefs of Staff. 2011. Joint Operations. Joint Publication 3-0. Washington, D.C.

U.S. Joint Chiefs of Staff. 2011. Joint Operation Planning. Joint Publication 5-0. Washington,
D.C.

U.S. Joint Chiefs of Staff. 2012. Joint and National Intelligence Support to Military Operations.
Joint Publication 2-01. Washington, D.C.

U.S. Joint Chiefs of Staff. 2013. Joint Intelligence. Joint Publication 2-0. Washington, D.C.

Hendrix_Capstone_2015

  • 1.
    AN ASSESSMENT FRAMEWORKFOR FULL MOTION VIDEO ANALYSIS- BASED PRODUCTION: DEFINING INTELLIGENCE PRODUCT MEASURES OF EFFECTIVENESS AND PERFORMANCE A Master Thesis Submitted to the Faculty of American Public University System by Tori Hendrix Duckworth In Partial Fulfillment of the Requirement for the Degree of Master of Arts July 2015 American Public University System Charles Town, WV
  • 2.
    i The author herebygrants the American Public University System the right to display these contents for educational purposes. The author assumes total responsibility for meeting the requirements set by United States copyright law for the inclusion of any materials that are not the author’s creation or in the public domain. © Copyright 2015 by Tori Hendrix Duckworth All rights reserved.
  • 3.
    ii DEDICATION I dedicate thisthesis to my husband, Phillip Duckworth. Without his patience, support, and barista skills the completion of this thesis would never have been possible.
  • 4.
    iii ACKNOWLEDGMENTS I wish tothank Professor Christian Shuler of the American Public University System, both for his dedication to my academic progression and for his mentorship on this project. I would also like to thank Dr. Roger Travis of the University of Connecticut for investing the time required to develop my formal research and writing techniques early in my academic life. Thanks also to Mr. Michael Bailey, Mr. William Elkins, and Mr. Jeremy Jarrell, Mr. Linn Wisner, Mr. Bill Majors, Mr. Mikel Gault, Mr. Daniel Thomas and Mr. Donray Kirkland for their personal and professional investments toward the continuous improvement of intelligence analysis and production. The above gentlemen have always provided a sounding board, a support network and a sanity check for my intelligence research and analysis endeavors, this thesis proving no exception.
  • 5.
    iv ABSTRACT OF THETHESIS AN ASSESSMENT FRAMEWORK FOR FULL MOTION VIDEO ANALYSIS- BASED PRODUCTION: DEFINING INTELLIGENCE PRODUCT MEASURES OF EFFECTIVENESS AND PERFORMANCE by Tori Hendrix Duckworth American Public University System, July 5, 2015 Charles Town, West Virginia Professor Christian Shuler, Thesis Professor This paper discusses an original mixed-method analytic approach to assessing and measuring analyst performance and product effectiveness in Air Force Distributed Common Ground Station (DCGS) Full Motion Video (FMV) analysis-based production. The method remains consistent with Joint Publication doctrine on Measures of Effectiveness (MOE) and Measures of Performance (MOP) and considers established methodologies for assessment in Combat and Stability Operations, but refines the processes down to the micro-tactical level in order to theoretically assess intelligence support functions and how MOP and MOE can be determined for intelligence products. The research focuses specifically on those products derived from FMV analysis produced per customer request in support of real-time contingency operations. The outcome is an assessment process, based on Department of Defense Attributes of Intelligence Excellence. The assessment framework can not only be used to assess FMV analysis-based production, but prove a valuable assessment tool in other production-heavy sectors of the Intelligence Community. The next phase of research will be to validate the developed method through the observation and analysis of its practical application.
  • 6.
    v TABLE OF CONTENTS SECTIONPAGE I. INTRODUCTION ........................................................................................................ 1 II. LITERATURE REVIEW ........................................................................................... 12 DCGS Operations .................................................................................................... 13 Mission Assessment................................................................................................. 16 Variable Identification and Control ......................................................................... 22 Measuring Performance and Effectiveness.............................................................. 25 III. METHODOLOGY ..................................................................................................... 31 Research Design Overview...................................................................................... 31 Collection Methodology .......................................................................................... 33 Specification of Operational Variables.................................................................... 34 Assessment Methodology........................................................................................ 36 IV. ANALYSIS AND FINDINGS ................................................................................... 38 Measures of Effectiveness ....................................................................................... 42 Measures of Performance ........................................................................................ 54 Theoretical Application ........................................................................................... 59 V. CONCLUSION........................................................................................................... 64
  • 7.
    vi LIST OF FIGURES FIGUREPAGE 1. FMV Analysis-Based Production Mission Assessment………………………………..29 2. Measures of Effectiveness.……………………………………………………………..42 3. Timeliness Factor Levels……………………………………………………………….44 4. Accuracy Factor Levels..………………………………………………………………48 5. Customer Satisfaction Factor Levels…………………………………………………...54 6. Task and Effectiveness Assessment…………………………………………………....55 7. Measures of Performance……………………………………………………………....56
  • 8.
    1 SECTION I INTRODUCTION Assessment isan essential tool for those leaders and decisions makers responsible for the successful execution of military operations, actions and programs. The process of operationally assessing military operations is crucial to the identification and implementation of those adjustments required to meet intermediate and long term goals (Center for Army Lessons Learned 2010, 1). Assessment supports the mission by charting progress in evolving situations. Assessment allows for the determination of event impacts as they relate to overall mission accomplishment and provides data points and trends for judgments about progress in monitored mission areas (Center for Army Lessons Learned 2010, 4). In this way, assessment assists military commanders in determining where opportunities are where course corrections, modifications and adjustments must be made (U.S. Joint Chiefs of Staff 2011b, III-44). Without operational assessment, both the commander and his staff would be beleaguered to keep pace with quickly evolving situations and the commander’s decision making cycle could lack focus due to a lack of understanding about the dynamic environment (Center for Army Lessons Learned 2010, 4). Assessments are the commander’s keystone to maintaining accurate situational understanding and are vital to appropriately revising their visualization and operational approach (U.S. Joint Chiefs of Staff 2011b, III-44). From major strategic operations to the smallest of tactical missions, assessment is a crucial aspect of mission management; the Air Force Intelligence mission and its associated mission subsets are no exception. Much as assessment helps commanders and staffs understand mission progress, intelligence helps commanders and staffs understand the operational environment. Intelligence identifies enemy capabilities and vulnerabilities, projects probable intentions and actions, and is a critical
  • 9.
    2 aspect of boththe planning process and the execution of operations. Intelligence provides analyses that help commanders decide which forces or munitions to deploy and how to employ them in a manner that accomplishes mission objectives (U.S Joint Chiefs of Staff 2013, I-18). Most importantly, intelligence is a prerequisite for achieving information superiority, the state that is attained when a competitive advantage is derived from the ability to exploit a superior information position (Alberts, Garstka, and Stein 2000, 34). The role of intelligence in Air Force operations it to provide warfighters, policy makers, and the acquisition community comprehensive intelligence across the conflict spectrum. This role is designed to build the foundation for information superiority and mission success by delivering on-time, tailored intelligence (U.S Department of the Air Force 1997, 1). Key to operating in today’s information dominated environment and getting the right information to the right people in order for them to make the best possible decisions is the intelligence mission subset Intelligence, Surveillance and Reconnaissance (ISR) (U.S. Department of the Air Force 2012b, 1). The U.S Joint Chiefs of Staff Joint Publication 1-02, Department of Defense (DOD) Dictionary of Military and Associated Terms defines ISR as “an activity that synchronizes and integrates the planning and operation of sensors, assets, processing, exploitation, and dissemination systems in direct support of current and future operations,” (U.S. Joint Chiefs of Staff 2010, X). ISR is a strategic resource that enables commanders the ability to both make decisions, and execute those decisions more rapidly and effectively than the adversary (U.S Joint Chiefs of Staff 2011a, III-3). The fundamental job of U.S Air Force Airmen supporting the ISR mission is to analyze, inform and provide commanders at every level with the knowledge required to prevent surprise, make decisions, command forces and employ weapons (U.S Department of the Air Force 2013a, 6). In addition to enabling information superiority, ISR is
  • 10.
    3 critical to themaintenance of U.S. decision advantage, the ability to prevent strategic surprise, understand emerging threats and track known threats, while adapting to the changing world (Report of the DNI Authorities Hearing 2008, 1). The joint ISR mission consists of multiple separate elements, spread across uniformed services, component commands and intelligence specialties; each element in its own right must be successful for the integrated ISR whole to support operations as required (U.S. Department of the Air Force 2012b, 1). One such ISR element is Full-Motion Video (FMV) Analysis and Production. FMV analysis and production is a tactical intelligence function. Tactical intelligence is used by commanders, planners, and operators for planning and conducting engagements, and special missions. Relevant, accurate, and timely tactical intelligence allows tactical units to achieve positional and informational advantage over their adversaries (U.S. Joint Chiefs of Staff 2013, I-25). Tactical intelligence addresses the threat across the range of military operations by identifying and assessing the adversary’s capabilities, intentions, and vulnerabilities, as well as describing the physical environment. Tactical intelligence seeks to identify when, where, and in what strength the adversary will conduct their tactical level operations. Physical identification of the adversary and their operational networks as FMV analysis and production can facilitate, allows for enhanced situational awareness, targeting, and watchlisting to track, hinder, and/or prevent insurgent movements at the regional, national, or international levels (U.S. Joint Chiefs of Staff 2013, I-25). FMV analysis and production is conducted, in part, by the Air Force Distributed Common Ground System (DCGS) Enterprise. The Air Force DCGS, also referred to as the AN/GSQ-272 SENTINEL weapon system, is the Air Force’s primary ISR collection, processing, exploitation, analysis and dissemination system. The DCGS functions as a distributed ISR operations capability which provides
  • 11.
    4 worldwide, near real-timeintelligence support simultaneously to multiple theaters of operation through various reachback communication architectures. ISR provides the capability to collect raw data from many different intelligence sensors across the terrestrial, aerial, and space layers of the intelligence enterprise (U.S Department of the Army 2012b, 4-12).The Air Force DCGS currently produces intelligence from data collected by a variety of sensors on the U-2, RQ-4 Global Hawk, MQ-1 Predator, MQ-9 Reaper, MC-12 and other ISR platforms (Air Force Distributed Common Ground System, 2014). ISR synchronization involves the employment of assigned and allocable platforms and sensors, to include FMV sensors, against specified collection targets; once the raw data is collected, it is routed to the appropriate processing and exploitation system, such as a DCGS node, so that it may be converted via exploitation and analysis into usable information and disseminated to the customer in a timely fashion (U.S Joint Chiefs of Staff 2013, I-11). The analysis and production portion of the intelligence process involves integrating, evaluating analyzing, and interpreting information into a finished intelligence product. Properly formatted intelligence products are then disseminated to the requester, who integrates the intelligence into the decision-making and planning processes (U.S. Joint Chiefs of Staff 2012, III-33). The usable information disseminated to the customer is the FMV analysis-based product. In the DCGS, FMV analysis-based products are produced by the GA and MSA mission crew positions (U.S. Air Force ISR Agency 2013b, 59). The combination of the above process is routinely combined into the acronym PED, Processing, Exploitation and Dissemination. Every intelligence discipline and complementary intelligence capability is different, but each conducts PED activities to support timely and effective intelligence operations. PED activities facilitate timely, relevant, usable, and tailored intelligence (U.S. Department of the Army 2012, 4-13). PED activities are prioritized and
  • 12.
    5 focused on intelligenceprocessing, analysis, and assessment to quickly support specific intelligence collection requirements and facilitate improved intelligence operations. PED began as a processing and analytical support structure for unique systems and capabilities like FMV derived from unmanned aircraft systems (U.S Department of the Army 2012, 4-12). Intelligence is essential to maintaining aerospace power, information dominance and decisions advantage; accordingly, the Air Force provides premier intelligence products and services to all its aerospace customers including warfighters, policy makers and acquisition managers (U.S Department of the Air Force 2004, 1). Intelligence is produced and provided to combat forces even at the lowest level via the integration of data collected by the various ISR platforms with Production, Exploitation and Dissemination (PED) crews. The PED crews are comprised of intelligence specialists capable of robust, multi-intelligence analysis and production (Air Force Distributed Common Ground System, 2014). DCGS mission sets depend greatly on which type of sensor the PED crew is tasked to exploit; the scope of this research will focus exclusively on Full Motion Video (FMV) analysis-based production, that is, products based on the exploitation and analysis of Full Motion Video sensors. Air Force ISR, with capabilities such as satellites and unmanned aerial vehicles, is a mission set ever more integral to global operational success. The President’s 2012 Defense Strategic Guidance directs the US military to begin transitioning from current operations to future challenge preparation; as that transition into what will likely be a highly volatile, unpredictable future occurs, Air Force ISR will be fundamental to ensuring U.S. and Allied force freedom of action (U.S Department of the Air Force 2013a, 2013). Further complicating such a transition is the reality of the currently austere fiscal environment. Balancing intelligence and remote sensing capabilities across multiple Areas of Responsibility (AOR) with limited resources will demand
  • 13.
    6 the development ofmore efficient ISR mission management and execution processes. Intelligence organizations do recognize the need to continually improve their process performance and the analytical arm of ISR Operations, the DCGS, is no exception. Having a significant role in the maximization of ISR capabilities, the DCGS Enterprise should take every step to ensure system-wide PED efficiency, an objective that requires an established mission assessment framework. Assessment is an essential tool that allows mission managers to monitor the performance of tactical actions and to determine whether the desired effects are created in order to support goal achievement. Assessment entails two distinct tasks: continuously monitoring the situation and the progress of the operations and evaluating the operation against Measures of Effectiveness (MOEs) and Measures of Performance (MOPs) to determine progress relative to the mission, objectives, and end states. (U.S Joint Chiefs of Staff 2011b, III-45). All assessments should answer two questions: Are things being done right? Are the right things being done? Those two questions are perhaps best answered by the MOP and MOE assessment criteria. It is by monitoring available information and using MOP and MOE as assessment tools during planning, preparation, and mission execution that commanders and staffs determine progress toward completing tasks, achieving objectives and attaining the military end state (U.S Joint Chiefs of Staff 2011b, D-10). It is accepted that effectiveness is a measure associated with the problem domain (what is to be achieved) and that performance measures are associated with the solution domain (how is the problem to be solved) (Smith and Clark 2006, N -2). In other words MOE help determine if a task is achieving its intended results (if the task the right thing to do) and MOPs help determine if a task is completed properly (if the task is done right). More specifically, MOE concern how well a system tracks against its purpose (Sproles 1997,
  • 14.
    7 17). MOEs arecriteria used to assess changes in system behavior, capability, or operational environment and are tied to measuring the attainment of an end state, achievement of an objective, or creation of an effect. MOEs are based on effects assessment, the process of determining whether an action, program, or campaign is productive and can achieve the desired results in operations (Center for Army Lessons Learned 2010, 6). Should the desired result be achieved, then the outcome may be deemed effective; however, simply achieving the result may not measure the degree of effectiveness. For example, within context of computer programming, an effective program is one which produces the correct outcome, based on various constraints. Given that the desired result was achieved, it is generally accepted that a program which is faster and uses less resources is more effective (Smith and Clark 2006, N -2). MOPs area discrete assessment tool based on task assessment. MOPs are criteria used to assess actions that are tied to measuring task accomplishment and essentially confirm or deny that a task was performed properly. Agencies, organizations, and military units will usually perform a task assessment, based on established MOPs, to ensure work is performed to at least a minimum standard (Center for Army Lessons Learned 2010, 5). When discussing product effectiveness, it important to distinguish between how effectively the product was produced and how effective the produced product was in achieving its objective. Task assessment does have an effect on effects assessment, as any task not performed to standard will have a negative impact on effectiveness (Center for Army Lessons Learned 2010, 5). Thus, both effectiveness and consequently performance are significant in this study. It is the objective of this research to demonstrate that analyst performance can be assessed using MOPs and that product effectiveness can be measured, at least in part, by the identification of repeatable MOEs. To provide accurate assessments of performance and effectiveness, any
  • 15.
    8 identified MOPs andMOEs as assessment standards must be measurable, relevant and responsive. Whether conducted explicitly or implicitly, measurement is the mechanism for extracting information from empirical observation (Bullock 2006, 17). Assessment measures must have qualitative or quantitative standards, or metrics, against which they can be measured. To effectively measure change, whether positive or negative, a baseline measurement should be established prior to execution in order to facilitate accurate assessment (U.S Joint Chiefs of Staff 2011b, D-10). MOEs and MOPs measure a distinct aspect of an operation and thus require distinct and discrete qualitative or quantitative metrics against which to be measured (Center for Army Lessons Learned 2010, 8). Determining and applying qualitative and quantitative metrics is doubly difficult for intelligence support operations, like FMV analysis-based production because few traditional war fighting metrics apply (Darilek et al. 2001, 101). Determining effectiveness is not as straight forward as assessing the number of tanks destroyed by an offensive and weighing that against the objective number of tanks destroyed. While metrics can be defined, they are generally more one-sided. The focus is centered more on the ability of the analyst to operate in an operational environment than on how well an analyst operates relative to an enemy force (Darilek et al. 2001, 101). Obtaining assessment insight is dependent on having feasible implementation methods, as well as reliable models, for approaching the task of measurement (Sink 1991, 25). Each MOP and MOE discussed in this paper will be assigned either a qualitative or a quantitative metric and each metric will be discrete. In addition to being measurable, MOPs and MOEs must also be relevant to tactical, operational, or strategic objectives as applicable as well as to the operational environment and the desired end state (Center for Army Lessons Learned 2010, 8). The relevancy criterion
  • 16.
    9 prevents the collectionand analysis of information that is of no value and helps ensure efficiency by eliminating redundant and futile efforts (U.S Joint Chiefs of Staff 2011b, D-9). The relevancy of each MOP and MOE presented in this paper will be demonstrated through a discussion of its relationship to either the performance or effectiveness of FMV analysis-based production. Lastly, MOP and MOE must be responsive. Both MOP and MOE must identify changes quickly and accurately enough to allow commanders and staffs to respond effectively (Center for Army Lessons Learned 2010, 9). When considering the time required for an action to produce a desired result within the operational environment, the designated MOP and MOE should respond accordingly; the result of action implementation should be capable of being measured in a timely manner (U.S Joint Chiefs of Staff 2011b, D-10). DCGS FMV analysis-based production occurs in near real-time. As such, any proposed production MOP or MOE should be readily derived from available data or mission observation. MOEs and MOPs are simply assessment criteria; in and of themselves they do not represent assessment. MOEs and MOPs require relevant information in the form of indicators for evaluation (U.S Joint Chiefs of Staff 2011b, D-3). In the context of assessments, an indicator is an item of information that provides insight into MOP or MOE. A single indicator may inform multiple MOPs and MOEs and MOPs and MOEs may be made up of several observable and collectable indicators (U.S. Joint Chiefs of Staff 2011b, D-4). For MOPs, indicators should consider the capability required to perform assigned tasks. Indicators used to perform MOE analysis should inform changes to the operational environment. Indicators provide evidence that a certain condition exists or certain results have or have not been attained, and enable decision makers to direct changes to ongoing operations to ensure the mission remains focused on the end state (U.S. Joint Chiefs of Staff 2011b, D-4). Indicators inform commanders and staff on the
  • 17.
    10 current status ofoperational elements which may be used to predict performance and efficiency. Air Force production centers produce and provide those intelligence products based on validated/approved customer requirements (U.S. Department of the Air Force 2012b, 4). Tailoring, adapting a product to ensure it meets the customer’s format and content requests, helps to ensure the relevancy of the intelligence product. For example, if a customer requests a highly tailored product versus a standard baseline product, such a request may indicate an increased production time potentially impacting the timeliness, and thus the effectiveness, of the product. In this way, the level of customization typically requested by customers for a certain product type would serve as an indicator for the timeliness MOE, which is further discussed in the MOE subsection of Section IV, Analysis and Findings. Well-devised MOPs and MOEs, supported by effective information management, help commanders and staffs understand the linkage between specific tasks, the desired effects, the commander’s objectives and end state (U.S Joint Chiefs of Staff 2011b, D-10). Among the most important analytical tasks in the intelligence mission set is effectiveness and performance analysis, and yet, DCGS FMV operations have no enterprise-wide framework for mission assessment in which clear production MOPs or MOEs are defined. The highly dynamic nature of intelligence analysis, where even basic tasks are further complicated by analyst discretion and technique, makes the systematic evaluation of DCGS product effectiveness highly problematic but yet, not impossible. The question then arises: what repeatable data can be identified and how can the FMV analysis-based production mission be assessed in order to derive MOPs and MOEs? This paper proposes an assessment framework specifically identified to enable the quantification and qualification of DCGS analyst production performance and product effectiveness. The demands
of the modern operational environment require that intelligence products disseminated to the customer be anticipatory, timely, accurate, usable, complete, relevant, objective, and available (U.S. Joint Chiefs of Staff 2013, III-33). Also known as the Attributes of Intelligence Excellence, it is from those intelligence product requirements that performance and product effectiveness variables will be derived. Given their importance to industries, organizations and institutions, the topics of MOE, MOP and operational assessment are well researched, and sound methodologies for deriving MOE and MOP exist for most major military operation types. Relative to the global strategic ISR mission, however, FMV analysis-based production is micro-tactical in nature. As discussed in Section II, Literature Review, though the provision of products is a basic intelligence function, analytic methodologies for product effectiveness analysis have not been refined for such a niche subset of the intelligence mission. Expanded upon in Section III, Methodology, this paper presents a theoretical process for identifying both MOP and MOE for DCGS FMV analysis-based production. Following an outline of the assessment methodology, Section IV, Analysis and Findings, will demonstrate that repeatable, measurable DCGS FMV analysis-based production measures can be identified.
SECTION II
LITERATURE REVIEW

No shortage of literature exists concerning the importance of MOEs and MOPs and how MOEs and MOPs may be assessed, especially with respect to military operations. While assessment is required at every level of military operations (strategic, operational and tactical), a majority of the relevant literature focuses either on operational or on combat tactical assessments. FMV analysis-based production, an operational function of the DCGS Enterprise, is not a combat but rather a support function, and when compared to operational functions such as Village Stability Operations, may be better classified as micro-tactical (when assessed independent of the operation being supported). Thus, for such a small-scale, niche function, no predominant school of thought exists on the matter, i.e., no prior research has attempted to answer this research question and no theoretical framework exists which can be applied wholly to the problem set. At least, no prior research exists on the research topic that is publicly available. Due to the highly sensitive nature of DCGS Operations, any amount of relevant literature may exist on classified networks; however, only publicly available publications will be utilized in this research. Many open source publications exist which are critical to understanding how FMV analysis-based production MOP and MOE may be derived. Most are not academic or peer-reviewed research, but rather military doctrinal publications. The prior research informing the aggregate theoretical framework of this study also includes military doctrinal publications, but consists more of research-based case studies. Publications critical to understanding the nature of the problem set, considered background information, are detailed in the sections DCGS Operations and Variable Identification and Control. To assess production, as it is conducted during DCGS Operations, the operating
environment (OE) must be understood to the greatest degree possible. Given the volume of possible variables and the challenges associated with illuminating their relationships, an understanding of the operational variables affecting analyst production, performance and product effectiveness is similarly critical. Publications relating to the theoretical framework for deriving FMV analysis-based MOPs and MOEs are discussed in the section Measuring Performance and Effectiveness. The uniqueness of the problem set precludes the application of any pre-proposed theoretical framework in its entirety. Rather, the problem set in question required the development of a new model for determining MOPs and MOEs, one which examines the relationship between the two assessment tools as well as the effect of some critically impacting variables on each.

DCGS Operations

One prerequisite for identifying DCGS mission assessment metrics and deriving FMV analysis-based production MOPs and MOEs is a thorough understanding of DCGS mission operations. For this research, DCGS mission operations focus on the production aspect of operations and include production training, evaluation and execution; whereas executing production may most predominantly determine product effectiveness, how well the production analyst was trained and how well they performed during their evaluation prove useful as MOP indicators. The four publications most pertinent to DCGS training are Air Force Instruction (AFI) 14-202 Volume 1 (2015) concerning intelligence training, the Air Force ISR Agency (AFISRA) supplement to AFI 14-202 Volume 1 (2013) specifying ISR intelligence training, AFISRA Instruction 14-153 Volume 1 (2013) concerning AF DCGS intelligence training and the 480th ISR Wing supplement to AFISRA Instruction 14-153 Volume 1 (2013), which further refines the
document for DCGS units, all of which are subordinate to the 480th ISR Wing. The stated objective of AFI 14-202 Volume 1 (2015) and its AFISRA supplement (2013) is "to develop and maintain a high state of readiness… by ensuring all personnel performing intelligence duties attain and maintain the qualifications and currencies needed to support their unit's mission effectively…" (U.S. Department of the Air Force 2015b, 4). The publications, having stated the impact of training on mission effectiveness, discuss types of training, the flow of training, individual training and currency requirements, and mission readiness levels, as well as the grading rubric for training tasks, all data critical to identifying analyst/production performance metrics and effectiveness indicators. The same holds true for AFISRA Instruction 14-153 Volume 1, Air Force DCGS Intelligence Training (2013) and its 480th ISR Wing Supplement (2013). Naturally more applicable to DCGS training than the aforementioned AFI and AFISRA supplements, these publications not only concur with the previous publications, but additionally provide guidance for the assessment of training programs and outline mission training elements, more so in the 480th ISR Wing Supplement (Air Force ISR Agency 2013b, 7-31). The Training Task List (TTL) defines training requirements for skill set development as they support the operational duties necessary for each position. Those duties, for some positions, include the creation, quality control and dissemination of data, information and textual/graphic intelligence products (Air Force ISR Agency 2013b, 17; 28-29). These training publications are detailed to the point of specifying exact passing grades for training tasks, a quantitative standard of great interest to MOP research. AFI 14-202 Volume 2 (2015) and the 480th ISR Wing Supplement to the AFISRA Supplement to AFI 14-202 Volume 2 (2014) are the Air Force and DCGS doctrinal publications covering the intelligence unit Commander's Standardization/Evaluation Program. The purpose
of these publications is "to provide commanders a tool to validate mission readiness and the effectiveness of intelligence personnel, including documentation of individual member qualifications and capabilities specific to their duty position" (U.S. Department of the Air Force 2014, 7). The publications, having stated the importance of standardization and evaluation in measuring personnel and mission effectiveness, discuss how the Standardization/Evaluation Program evaluates intelligence personnel on their knowledge of the duty position and applicable procedures, wherein the task (mission performance) phase includes evaluating a demonstrated task, like that of production (U.S. Department of the Air Force 2014, 38). Because the evaluation profile used to fulfill the task phase requisites must incorporate all appropriate requirements and allow accurate measurement of the proficiency of the evaluatee, as with training, the quantitative data collected during evaluations must be considered when defining MOP indicators for FMV analysis-based production (U.S. Department of the Air Force 2014, 44). In addition to serving as a baseline for DCGS mission assessment and providing insight into performance- and effectiveness-affecting variables as discussed in their respective sections below, AFI 14-153 Volume 3 (2013), "AF DCGS Operations Procedures," and its 480th ISR Supplement (2013) define the DCGS mission and provide guidance and requirements for the execution of DCGS operations. Aligning well with this research is the specifically stated DCGS program objective to "enhance Air Force DCGS mission effectiveness" (U.S. Department of the Air Force 2014, 6). Moreover, when researching elements that may affect analyst performance during production efforts, knowing which analysts conduct the production is imperative. AFI 14-153 Volume 3 (2013) breaks down mission execution roles and responsibilities by mission crew position. Of most interest are the positions of the Geospatial Analyst (GA), responsible for the creation of FMV analysis-based products, and that of the Multi-Source Analyst (MSA),
responsible for the creation of multi-intelligence report products (U.S. Department of the Air Force 2014, 13-18). Because final products are created and disseminated during the mission execution phase of DCGS Operations, the DCGS Operations publications above are perhaps of more research value than either the Training or the Standardization/Evaluation Air Force publications with respect to contextualizing DCGS operations for assessment. Publications concerning the DCGS are limited, especially those published academically and not as military doctrine. Two peripheral sources concerning the DCGS are Brown's 2009 "Operating the Distributed Common Ground System" and Langley's 2012 "Occupational Burnout and Retention of Distributed Common Ground System (DCGS) Intelligence Personnel." Brown's article focuses on the human factors of the AN/GSQ SENTINEL (DCGS) Weapons System and how the human factor influences net-centric operations (Brown 2009, 2). The tone of the article is that of a story, full of details that experience may corroborate but that the referenced sources did not. Beyond this thesis' scope and questionable in the support of its details, the article will not be incorporated in this research beyond noting that nothing in it contradicted any of the other referenced sources. Though much more professional in tone and with all points very well supported, Langley's 2012 report similarly fell beyond the scope of this research but, importantly, also did not contradict any information presented in related research. In formulating a background knowledge of DCGS operations, military doctrinal publications must be relied upon exclusively.

Mission Assessment

Before MOE and MOP can be valuable to mission assessment, the desired end state must first be defined and mission success criteria developed. Mission success criteria describe the standards for determining mission accomplishment and should be applied to all
phases of all operations, including FMV analysis-based production (U.S. Joint Chiefs of Staff 2011b, IV-10). The initial set of success criteria determined during mission analysis, usually in the mission planning phase, becomes the basis for assessment. If the mission is unambiguous and limited in time and scope, mission success criteria could be readily identifiable and linked directly to the mission statement (U.S. Joint Chiefs of Staff 2011b, IV-10). The Air Force DCGS mission statement states that the "Air Force DCGS provides actionable, multi-discipline intelligence derived from multiple ISR platforms" (U.S. Air Force ISR Agency 2013d, 5). That the mission statement does not detail which attributes comprise actionable intelligence complicates the mission success criteria and requires mission analysis to indicate several potential success criteria, analyzed via MOE and MOP for the desired effect and task respectively (U.S. Joint Chiefs of Staff 2011b, IV-10). Research into mission assessment for intelligence production, specifically production derived from FMV, is significant not only because requirements and precedents exist for intelligence product metric collection and assessment, but also because of the impact such products have on the operating environment and the extent to which those products are poised to evolve over the next few decades. Requirements, precedents and assessment guidance are outlined across several Air Force and Joint doctrinal and exploratory military publications. The importance of ISR, and of the DCGS, as well as a look at future operations, is explored in the Air Force publication Air Force ISR 2023 (2013) and in a corroborating journal article, A Culminating Point for Air Force Intelligence Surveillance and Reconnaissance, published in the Air & Space Power Journal by Kimminau (2012). In sum, the literature reviewed in this section provides an overview of existing military mission assessment and lays the groundwork for the increasing importance of intelligence production task (MOP) and effect (MOE) assessment.
The military publications reviewed which outline requirements or precedents for intelligence product assessment can be divided into Air Force publications and Joint publications. Air Force publications include AFI 14-201, Intelligence Production and Applications, the aforementioned AFI 14-153 Volume 3, Air Force Distributed Common Ground System (AF DCGS) Operations Procedures, along with its 480th ISR Wing Supplement (2014), and the 480th ISR Wing Supplement to AFISRA Instruction 14-121 (2013), Geospatial Intelligence Exploitation and Dissemination Quality Control Program Management. AFI 14-201 applies to all Air Force organizations from whom intelligence production is requested, a category into which the DCGS, as a Production Center, falls; however, the AFI is directed more towards those intelligence agencies producing "stock products," which the DCGS usually does not (U.S. Department of the Air Force 2002, 20). The AFI does, however, reveal an intelligence production precedent in requiring Headquarters (HQ), Director of ISR to conduct annual reviews of production processes and metrics and to measure customer (requirement) satisfaction (U.S. Department of the Air Force 2002, 6-20). AFI 14-153 Volume 3 (2013) and its 480th ISR Wing Supplement (2013) deal directly with AF DCGS Operations and include instructions for mission assessment, most usefully outlining assessment roles and responsibilities, albeit vaguely. While every analyst is responsible for conducting mission management activities during mission execution, the party responsible for documenting mission highlights and deficiencies, keeping mission statistics, performing trend analysis and quality control, and overall assessing mission effectiveness is the Department of Operations Mission Management (U.S. Department of the Air Force 2014, 39). Mission management efforts are further discussed in AFISRA Instruction 14-121 (2013), which details the quality control and documentation process for DCGS products, a process which must include the identification and
analysis of trends by types of errors, flights or work centers (U.S. Air Force ISR Agency 2013e, 7). All three Air Force publications set a standard requiring intelligence producers to conduct mission analysis; none of the three, however, mentions MOPs, MOEs or possible metrics, or specifically suggests how such mission analysis should be conducted. While the Air Force requires mission analysis towards greater effectiveness, the responsibility appears to fall to individual organizations and units to conduct mission assessment, in accordance with the broad guidance set forth by Air Force doctrine, as they see fit. How DCGS FMV units conduct mission analysis and how they may do so better is central to this research; thus, AFI 14-153 Volume 3 is an invaluable publication that sets the baseline for existing DCGS mission assessment guidance. Joint Publications (JP) 3-0 (2011) and 5-0 (2011), both concerning Joint (multi-service) operations, discuss mission assessment as well as how MOPs and MOEs may be used as assessment criteria, albeit with an exclusive focus on the assessment of combat operations. JP 5-0, in its direction of assessment processes and measures, defines evaluation, assessment, MOE and MOP, sets the criteria for assessment, and discusses the value of MOPs and MOEs at every level of operations (U.S. Joint Chiefs of Staff 2011b, xvii-D-10). JP 3-0, though less in depth when addressing mission assessment than JP 5-0, similarly defines MOE and MOP, lending continuity between the documents, and further discusses assessment at the strategic, operational and tactical levels. Of most concern to this research is assessment at the tactical level, the assessment level about which JP 3-0 and JP 5-0 diverge. In JP 5-0, tactical-level assessment can be conducted using MOPs to evaluate task accomplishment, a discussion which reflects the impact of assessment on operations and, more so, makes the case for comprehensive, integrated assessment plans (U.S. Joint Chiefs of Staff 2011b, D-6). In JP 3-0, tactical-level assessment is described as using both MOPs and MOEs to evaluate task accomplishment, where
the rest of the paragraph aligns with JP 5-0 almost verbatim (U.S. Joint Chiefs of Staff 2011a, II-10). Both JPs define MOE and MOP in exactly the same way, where MOEs are a tool with which to conduct effects assessment and MOPs are a tool with which to conduct task assessment; JP 3-0, however, later suggests that MOE and MOP are both utilized for task assessment, contradicting its earlier guidance that MOE is a discrete tool concerning effects assessment (U.S. Joint Chiefs of Staff 2011a, GL-13; U.S. Joint Chiefs of Staff 2011b, III-45). The construct of MOP for task assessment and MOE for effect assessment is corroborated by every other relevant source, and so JP 3-0's contradiction will be considered an oversight. In both Air Force and Joint Publications, a strong emphasis is placed on the conduct of mission assessment, the latter going so far as to incorporate MOPs and MOEs specifically. The importance of mission assessment is apparent in published literature, not only through military requirements, precedent and guidance, but also because of the impact of intelligence production on operations and the importance of assessment in quickly evolving mission sets, like that of ISR. Sentinel Force 2000+ (1997), a quick reference document for the intelligence career field, defines the Air Force Intelligence vision, identifies intelligence consumers and highlights the importance of intelligence in Air Force operations: with respect to intelligence, "this role is designed [to] build the foundation for information superiority and mission success by delivering on-time, tailored intelligence" (U.S. Department of the Air Force 1997, 2). The publication additionally designates seven Goal Focus Areas of Air Force Intelligence designed to assure U.S. information dominance. One of the Focus Areas is Products and Services; the corresponding goal is to "focus on customer needs…modernize and continuously improve our products and services to see, shape and dominate the operating environment" (U.S. Department
of the Air Force 1997, 3). Intelligence is critical to information superiority and mission success. Production is a critical component of intelligence, and an Air Force goal for almost two decades has been product improvement. The objective for improvement exists, and yet, how can intelligence product improvement be assessed and measured? Air Force ISR 2023: Delivering Decision Advantage (2013) accounts for the importance of intelligence, specifically ISR, to military operations and discusses the projected evolution of ISR over the next decade. ISR provides commanders with the knowledge they need to prevent surprise, make decisions, command forces and employ weapons; it enables them to maintain decision advantage, protect friendly forces, hold targets and apply deliberate, discriminate kinetic and non-kinetic combat power (U.S. Department of the Air Force 2013a, 6). The challenge for ISR is to maintain those competencies required to provide commanders with the knowledge they need while reconstituting and refocusing ISR toward future operations, all under the constraints of an austere fiscal environment (U.S. Department of the Air Force 2013a, 2-4). While covering ISR capabilities, the normalization of operations and the strengthening of collaborative partnerships, the report focuses on the revolution of analysis and exploitation, a topic well within the scope of this research. "The most important and challenging part of our analysis and exploitation revolution is the need to shift to a new model of intelligence analysis and production" (U.S. Department of the Air Force 2013a, 13). With an actively evolving mission set where intelligence production is on the brink of great change, any requirement for tactical mission assessment becomes all the more important; otherwise, how can positive or negative changes in production or products be assessed relative to the baseline? A mission subset as critical as ISR production, especially when poised to rapidly evolve, requires established, repeatable assessment measures.
Closely paralleling Air Force ISR 2023 (2013) with respect to the future of the ISR Enterprise is Kimminau's 2012 Air & Space Power Journal article, A Culminating Point for Air Force Intelligence Surveillance and Reconnaissance. Kimminau (2012) concurs with the imperative for ISR: "success in war depends on superior information. ISR underpins every mission that the DOD executes" (Kimminau 2012, 117). The article further discusses the future of ISR through 2030. While Kimminau focuses less on an analytical and production revolution, he does elaborate on several of the projected evolution areas that overlap with Air Force ISR 2023 (2013). Though not as useful to this research without a discussion of analysis and production, Kimminau's article serves best in corroborating Air Force ISR 2023 (2013) in its view of ISR as dynamic, revolutionary and ever more important.

Variable Identification and Control

Perhaps the most challenging aspect of researching MOPs and MOEs in non-combat, micro-tactical mission operations is the identification and control of variables. Production is just one element of DCGS operations, albeit an extremely interconnected element dependent on human analytical training and preferences, the quality of received data, management style, the status of communications, mission parameters and potentially several hundred other variables. One significant research challenge is in determining which variables to hold constant and which variables directly affect the task of production (MOPs) and/or product effectiveness (MOEs). As discussed in the DCGS Operations section above, several Air Force doctrinal publications provide guidance on DCGS training, evaluation, mission execution and mission management. Those same publications provide insight into which variables may affect each respective aspect of mission operations. Another publication, AFI 14-132, Geospatial-Intelligence (2012), offers insight into additional mission variables, not through references to
DCGS operations, but rather to the field of Geospatial-Intelligence (GEOINT) itself. GEOINT encompasses all imagery, Imagery Intelligence (IMINT) and geospatial information, a category into which FMV exploitation and production falls (U.S. Department of the Air Force 2012a, 3). In fact, a prerequisite to qualify as a GA, the position which creates FMV analysis-based products, is to be in the Geospatial Intelligence Analyst career field, Air Force Specialty Code (AFSC) 1N1X1 (Air Force ISR Agency 2013b, 24). AFI 14-132 (2012) mentions the DCGS as a key exploiter and producer of Air Force GEOINT, wherein the DCGS as a GEOINT production center must report up the chain of command an analysis of the quality and timeliness of GEOINT products, revealing timeliness and quality to be significant variables in GEOINT product effectiveness (U.S. Department of the Air Force 2012a, 14). The AFI additionally discusses GEOINT baseline and core functions, revealing the types of operations GEOINT supports and the types of GEOINT exploited for that support, variables necessary to consider contextually. For example, would the timeliness variable have the same impact on performance and effectiveness across all mission types supported, or do some mission types necessarily value timeliness more or less than others? The Raytheon Company's 2001 Establishing System Measures of Effectiveness by John Green corroborates the assessment that timeliness factors into product effectiveness. In addition to addressing the imperative for performance analysis and developing definitions for system MOPs and MOEs, the report most importantly weighs in on timeliness as an assessment variable. "Time is often used… [and] is attractive as an effectiveness measure" (Green 2001, 4). Green (2001) goes on to say that "time is the independent variable and outcomes of processes occur with respect to time" (Green 2001, 4). Similarly, in this research, the only dependent variable is effectiveness, where all other uncontrolled variables, to include time, are considered
independent. Green (2001) not only nods to the common practice of using time to measure effectiveness, but highlights the logic behind time as an independent rather than dependent variable. The most comprehensive categorization of intelligence mission variables can be found in a 2003 Central Intelligence Agency publication by Rob Johnson, Developing a Taxonomy of Intelligence Analysis Variables. Johnson (2003) provides a classification of 136 variables potentially affecting intelligence analysis, broken down into four exclusive categories: systemic variables, systematic variables, idiosyncratic variables and communicative variables. The list does not suggest that any of the variables affect human performance or product effectiveness; rather, it serves identification purposes so that experimental methods may isolate and control some variables in order to further isolate those variables affecting analysis (Johnson 2003, 3-4). FMV products are dependent on FMV analysis, hence FMV analysis-based production. The focus of this research is on the effectiveness of products and also on analyst performance in producing said products; thus, many of those variables identified as potentially affecting analysis will also potentially affect production. The list focuses more on cognitive and situational-type variables such as analytic methodology, decision strategies, collaboration, etc., rather than on variables like timeliness, quality, precision, etc., which are the focus of this research; however, the idea of controlling some variables in order to assess what impact others may have on the dependent variable, here effectiveness, is a goal of this research. RAND's 1995 publication, A Method for Measuring the Value of Scout/Reconnaissance, by Veit and Callero explores the topic of variable identification and control, the most relevant piece of which concerned variable definitions. The case study clearly defined two variables of interest to FMV analysis-based production: timeliness and precision. Timeliness was defined as
the "time between information collection and its availability for use by the situation assessor" and precision was defined as "the level of detail about enemy weapons systems reported" (Veit and Callero 1995, xv). The timeliness definition is applicable verbatim; the precision definition is geared exclusively towards scout/reconnaissance but can be easily amended to fit analysis and production precision. Most interestingly, Veit and Callero (1995) introduced the concept of variable factor levels. To better understand the relationship between variables, several factor levels can be assigned to a variable in order to understand the degree of impact. For example, the precision variable can be further specified by categorically establishing three different levels of detail. Factor levels are further discussed in Analysis and Findings. While useful for its variable definitions and factor level methodology, Veit and Callero (1995) was found to be more valuable in its proposed methodology for deriving MOEs. The most significant publication concerning potential variables for the FMV production mission subset was JP 2-0 (2013), Joint Intelligence. Though the variables discussed were not strictly identified as variables or put directly into the context of mission assessment, JP 2-0 introduced the concept of Attributes of Intelligence Excellence, a list of variables thought to be critical in the production of effective products. The Attributes of Intelligence Excellence are anticipatory, timely, accurate, usable, complete, relevant, objective and available (U.S. Joint Chiefs of Staff 2013, III-33). JP 2-0 additionally acknowledged that the effectiveness of intelligence operations is assessed by devising metrics and indicators associated with those attributes (U.S. Joint Chiefs of Staff 2013, I-22). Over the course of this research, along with precision, each attribute listed will be assessed as a potential MOP, MOE or MOP/MOE indicator.

Measuring Performance and Effectiveness
    26 “There is nomore critical juncture in implementing [a] successful assessment…than the moment of methods selection,” (Johnson et al. 1993, 153). Joint Publication 3-0 (2011), Joint Operations, and Joint Publication 5-0 (2011), Joint Operations Planning, both discuss the importance of conducting assessment at every phase of military operations, both at the task assessment level (MOPs) and effects assessment level (MOEs). Much attention has been paid to defining MOP and MOE for Combat Operations, Stability Operations, Reconnaissance and even Intelligence Simulations; however, publically available data assessing the niche operational field of FMV analysis-based production is lacking. No “one size fits all” methodology exists for determining MOPs and MOEs, nor does any existing methodology adequately apply to FMV analysis-based production assessment problem set. Several publications do detail MOP and/or MOE derivation methodologies, portions of which do relate to this problem set and aggregately those publications enable the foundation of a new theoretical framework. The Center for Army Lessons Learned 2010 Handbook 10-41, Assessments and Measures of Effectiveness in Stability Operations outlines the Tactics, Techniques and Procedures (TTPs) for conducting mission assessment and for determining MOEs for stability operations. The handbook details the necessity for assessment and clearly distinguishes between task-based (doing things right) and effect-based (doing the right thing) and delineates when MOPs and MOEs are appropriately applied. The handbook provides insight into the Army’s Stability Operations Assessment Process, including how task and effect-based assessment support deficiency analysis. “The process of assessment and measuring effectiveness of military operations, capacity building programs, and other actions is crucial in identifying and implementing the necessary adjustments to meet intermediate and long term goals in stability operations,” (Center for Army Lessons Learned 2010, 1). All such guidance is highly relevant to
the assessment process herein proposed for FMV analysis-based production, even though Stability Operations-specific assessment is beyond the scope of this research. Greitzer's 2005 Methodology, Metrics and Measures for Testing and Evaluation of Intelligence Analysis Tools is a case study involving a practical framework for assessing intelligence analysis tools based on almost exclusively quantitative assessment. While the case study's framework did not align with this research's mixed-method theoretical assessment approach, the case study did identify several potentially replicable performance measures, both qualitative (customer satisfaction, etc.) and quantitative (accuracy, etc.), all of which pertain specifically to intelligence analysis. Such performance measures are pertinent to the discussion of MOPs across any intelligence function and are especially pertinent to the analysis of FMV analysis-based production. The case study also defined and distinguished between measurement and metrics, definitions which were adopted for the FMV analysis-based production assessment. The highly applicable discussion of quantitative and qualitative performance measures, as well as of metrics, heavily influenced the theoretical framework of this research. Bullock's 2006 Theory of Effectiveness Measurement dissertation appears, by its title, applicable to FMV analysis-based production effectiveness measurement, especially in defining effectiveness measurement as "the difference, or conceptual distance from a given system state to some reference system state (e.g. desired end-state)"; however, the foundation of Bullock's effectiveness measurement was applied mathematics, with all relationships between variables explained as mathematically stated relational systems (Bullock 2006, iv-12). The Bullock (2006) study approached the problem of effectiveness measurement quantitatively, a method which may have applied to quantifiable FMV analysis-based production variables, but which did not provide an applicable framework for unquantifiable (qualitative and quasi-
quantitative) variables. Chapter Seven of Measures of Effectiveness for the Information-Age Army, "MOEs for Stability and Support Operations" (Darilek et al. 2001), is a case study revolving around MOEs for non-combat operations, a category applicable to intelligence support functions (when assessed independently of the operation they are supporting). The case study theoretically examined how Dominant Maneuver in Humanitarian Assistance might be assessed and how MOE might be derived. The method of theoretical examination perfectly aligned with the proposed theoretical examination of FMV analysis-based production MOE, although the specifics of assessing Humanitarian Assistance were too disparate for the analytical framework to be duplicated. As with the ISR mission, Darilek et al. (2001) emphasized the changing nature of Assistance Operations, partially to highlight the necessity of assessment. Of all the reviewed publications, Darilek et al. (2001) had the methodology which most closely applied to the FMV analysis-based production mission set, most probably because it was the only theoretical case study to focus on non-combat operations. Evaluating the effectiveness of any intelligence analysis-related problem set presents several methodological challenges (Greitzer and Allwein 2005, 1). Because intelligence production cannot avoid the human elements of analyst preference and technique, assessing analyst performance (was the product created correctly) and product effectiveness (did the product meet its objective) is far more subjective than assessing traditional combat operations, such as whether a projectile hit and incapacitated its intended target or whether or not a route was successfully cleared. Coupled with the difficulties associated with valuing intelligence in a strictly quantitative, i.e., numerical, fashion, the pre-proposed MOP/MOE assessment frameworks above cannot be wholly adopted. Rather, this unique problem set required the
development of a new model for determining MOP and MOE, one which examines the relationship between the two as well as the effect of all variables when examined in the FMV production operating environment. The model proposed does include effectiveness criteria (variable factors) adapted from A Method for Measuring the Value of Scout/Reconnaissance (Veit and Callero 1995) and a flow of assessment loosely based upon Chapter Seven of Measures of Effectiveness for the Information-Age Army, "MOEs for Stability and Support Operations" (Darilek et al. 2001). The overarching hypothesis in this research, that MOP and MOE are identifiable, measurable and can be utilized to evaluate FMV analysis-based products in the context of circumstances defined by mission variables, is dependent on several sub-hypotheses:

• Sub-Hypothesis 1: At least one MOE and one MOP can be identified for DCGS FMV analysis-based products.
• Sub-Hypothesis 2: For each DCGS FMV production MOE and MOP, at least one indicator can be defined based on variable and objective analysis.
• Sub-Hypothesis 3: A discrete, measurable metric can be assigned to each identified MOE and MOP.

Figure 1 demonstrates the relationship between MOP, MOE, indicators and factor levels; a short illustrative sketch of these relationships follows Figure 1. The outline below illustrates the theoretical framework via which the above hypotheses will be tested. Using a top-down approach, the research will:

1. Identify MOP and MOE: which independent variables impact product quality or effectiveness and meet MOE/MOP criteria (repeatable, measurable, relevant and responsive)?
2. Identify MOP and MOE factor levels: what factor levels are required to assess degree of contribution to operational effectiveness?
3. Identify performance and effectiveness indicators: what items of information provide insight into MOP and MOE analysis?
4. Identify MOP and MOE metrics: how will MOE and MOP be measured?
Figure 1: FMV Analysis-Based Production Mission Assessment
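To make the Figure 1 relationships concrete, they can be restated as a minimal data-structure sketch. The sketch below is illustrative only: the measure names, indicator strings and factor levels are hypothetical placeholders standing in for whatever a practical application would designate, and this thesis itself proposes no software.

    from dataclasses import dataclass, field

    @dataclass
    class Measure:
        name: str            # e.g., "timeliness" (an MOE) or "accuracy" (an MOP)
        kind: str            # "MOE" (effect assessment) or "MOP" (task assessment)
        metric: str          # discrete, measurable unit, e.g., "minutes" or "percent"
        factor_levels: list  # ordered levels describing degree of impact
        indicators: list = field(default_factory=list)  # items of information informing the measure

    # Sub-Hypothesis 1: at least one MOE and one MOP exist.
    timeliness = Measure("timeliness", "MOE", "minutes",
                         ["very timely", "timely", "not timely"])
    accuracy = Measure("accuracy", "MOP", "percent",
                       ["accurate", "marginally accurate", "inaccurate"])

    # Sub-Hypothesis 2: each measure carries at least one indicator, and a
    # single indicator (here a hypothetical one) may inform several measures.
    customization = "level of product customization requested"
    timeliness.indicators.append(customization)
    accuracy.indicators.append("training/evaluation task grades")

    # Sub-Hypothesis 3: each measure carries a discrete, measurable metric.
    for m in (timeliness, accuracy):
        print(f"{m.kind} {m.name}: metric in {m.metric}, "
              f"levels {m.factor_levels}, indicators {m.indicators}")

The sketch captures the many-to-many relationship between indicators and measures described earlier: an indicator is attached to a measure rather than owned by it, so the same item of information can inform both an MOP and an MOE.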
SECTION III
METHODOLOGY

In support of determining FMV analysis-based production MOP and MOE, this case study's theoretical analysis will consider how multiple independent variables affect one dependent effectiveness variable, a precedent set by multiple studies of organizational performance in which performance is defined as a dependent variable and the studies seek to determine variables that produce variations in, or effects on, performance (March and Sutton 1998, 698). In accordance with methodological precedent discussed in Case Studies and Theory Development in the Social Sciences by George and Bennett (2005), the research objective focuses not on outcomes of the dependent effectiveness variable, i.e., the impact of product effectiveness on ISR operations, but on the importance and impact of independent variables, i.e., on how certain predefined variables affect product effectiveness (George and Bennett 2005, 80-81). Due to environmental constraints, including the highly sensitive and classified nature of FMV production, this study will be limited to secondary over primary research and theoretical over practical analysis. Rather than testing the effects of mission variables via the observation of production analysts or surveying the effectiveness of specific FMV product types, the primary research objective in this case study is to explore and explain the relationship between multiple independent variables and a single dependent effectiveness variable. Independent variables of theoretical interest include timeliness, accuracy and usability.

Research Design Overview

The research problem was addressed via both archival and analytic research strategies (Buckley, Buckley, and Chiang 1976, 15). The archival strategy involved quantitative and
qualitative research methods whereby research was conducted in the secondary domain exclusively through the collection and content analysis of publicly available data, primarily journal articles and military publications. The archival research design heavily considers George and Bennett's 2005 "structured, focused" research design method (George and Bennett 2005, 73-75). The research objective, to derive FMV analysis-based production MOE and MOP, can best be described as "Plausibility Probe" research, wherein the study explores an original theory and hypotheses to determine whether more comprehensive, operational, primary research and testing is warranted. Theory development, not operational testing, is the research design focus (George and Bennett 2005, 73-75). The lack of previously existing theories, hypotheses and research concerning niche ISR mission effectiveness necessitated the modification of the initial Plausibility Probe research in order to build upon and develop new operational variables and intelligence variable definitions (George and Bennett 2005, 70-71). The uniqueness of the problem set additionally required the use of internal logic. As previously discussed, the highly classified nature of intelligence operations, coupled with the originality of the niche problem set, made observational research, empirical research and source meta-analysis impossible. Because of information and personnel access restrictions, hypothesis testing was necessarily relegated to theoretical exploration and the analytic research strategy was required to satisfactorily answer the research question. The analytic research method made use primarily of internal logic, both deductive and inductive in nature. Deductive research involved broad source collection and content analysis in order to determine all data relevant to the problem set and to draw original conclusions from multiple disparate data sets. For example, extensive research was conducted on both MOE and MOP determination methods and how the military conducts operational
assessment in order to determine method overlap and to develop refined assessment methodologies where overlap failed to occur. Inductive research involved taking each relevant aspect of the problem set and further collecting source material relevant to that subtopic in order to comprehensively understand how each subtopic relates to the overarching problem set. For example, once the independent variables were identified, each was extensively researched in order to better define the variables within the context of the intelligence mission and to determine any known relationships to the dependent variable, here product effectiveness. By utilizing the analytic research strategy, the comparability of this research to previously existing studies will be limited, especially with respect to variable analysis. The research design is further outlined in the following sections: Collection Methodology, Specification of Operational Variables and Assessment Methodology.

Collection Methodology

The research collection methodology for this thesis involves a two-pronged content analysis approach. Deriving a theory from a set of data and then testing it on the same data is thought by some to be an illegitimate research practice (George and Bennett 2005, 76). Thus, two sets of data will be reviewed: literature concerning FMV processing, exploitation and dissemination (PED) operations (theoretical testing data) and literature discussing MOE and MOP evaluation case studies (methodological theory data). Testing data will be collected almost exclusively from military publications, all obtained either through the U.S. Air Force e-Publishing document repository, the U.S. Army Electronic Publication repository, or the U.S. Department of Defense (DOD) Joint Electronic Library. Because the American Public University System Digital Library yielded few results related to this highly specialized research topic, theoretical data had to be collected from known academic defense journals, primarily the Air University Press Air and Space Power Journal digital
archives, known defense research corporations, primarily RAND Corporation online research publications, and the Defense Technical Information Center digital repository. Due to the extremely limited amount of publicly available and relevant data in either of the above categories, case selection bias as well as competing/contradictory analysis was naturally mitigated. Key to testing the stated hypothesis and sub-hypotheses is the identification and analysis of operational variables which may affect product effectiveness (MOEs); of effectiveness indicators, those elements of the mission related to MOEs; of MOPs; and of quantitative/qualitative metrics for both MOEs and MOPs. These tasks were accomplished through an in-depth content analysis of the many Air Force doctrinal publications detailing Geospatial Intelligence (GEOINT) and DCGS operations (training, evaluation and mission execution). For those operational variables identified, additional publications relevant to each variable were consulted for further examination of related content. The second content analysis approach includes a close review of previously published theses, dissertations, journal articles and technical reports concerning the derivation of MOEs and MOPs, with specific attention paid to those addressing analysis, intelligence analysis, military operations and/or theoretical MOE/MOP assessments. All methodologically applicable portions of previously conducted MOE/MOP case studies will be incorporated into the analysis of the FMV analysis-based production mission subset.

Specification of Operational Variables

As previously mentioned, the single dependent variable in this case study will be DCGS FMV analysis-based product effectiveness. The execution of the FMV PED mission involves almost immeasurable and innumerable systemic, systematic, idiosyncratic and communicative
variables, all of which could arguably impact mission effectiveness (Johnson 2003, 4). Given that the micro-tactical nature of FMV production necessarily narrowed the scope of this research and that research resources were limited to publicly available data, most variables will be held constant and many more will remain unknown. This study will be limited to the examination of only those variables which most greatly affect product effectiveness and analyst performance; all other variables, such as leadership management style, communications and systems status, etc., will be held as fixed and non-impacting. To further specify the research objective, the focus of this thesis is not to identify every variable which may impact effectiveness, but rather to show that any impacting variables (MOEs) can be identified at all. As described in the Research Design Overview section above, this Plausibility Probe research is intended to initially address the research question of whether or not MOE and MOP can be derived for FMV analysis-based products; given the volume of DCGS FMV PED operational variables, the research is thus designed to avoid any impact that negated (fixed/non-impacting) variables may have on the validity of the analysis. The research design further aims to isolate the impacts on product effectiveness by assessing the influence of variance in the independent variables. Variable factor levels were developed for each independent variable as a sensitive method of describing variable variances (George and Bennett 2005, 80-81). Should this theoretical assessment framework ever be practically applied, classified mission data, site-specific objectives and local TTPs can be incorporated into the assessment process to derive additional MOEs, should those designated in this research fail to comprehensively reflect requirements for product effectiveness.
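By way of illustration only, the variable specification above can be restated as a short sketch. The variable names and fixed values below are hypothetical placeholders, not designations drawn from doctrine or mission data:

    # Hypothetical sketch: the independent variables under study vary, while
    # all other operational variables are held fixed and non-impacting so
    # they cannot confound the effectiveness analysis.
    HELD_CONSTANT = {
        "network_connectivity": "fully functional",
        "classification": "derivative of the tasked mission",
        "leadership_management_style": "fixed",
        "systems_status": "nominal",
    }
    INDEPENDENT_VARIABLES = ("timeliness", "accuracy", "usability")
    DEPENDENT_VARIABLE = "product_effectiveness"

    def isolate_variance(observed: dict) -> dict:
        """Retain only the variance in the independent variables; everything
        else is treated as constant for the theoretical examination."""
        return {k: v for k, v in observed.items() if k in INDEPENDENT_VARIABLES}

    # A hypothetical observed production event:
    event = {"timeliness": "timely", "accuracy": 99.2,
             "usability": "usable", "systems_status": "nominal"}
    print(isolate_variance(event))
    # -> {'timeliness': 'timely', 'accuracy': 99.2, 'usability': 'usable'}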
Based on an in-depth review of available literature, the independent variables identified as impacting DCGS FMV analysis-based product effectiveness (MOEs) include product timeliness, product accuracy and product usability. The independent variables identified as reflective of analyst performance (MOPs) also include product timeliness and product accuracy. The Analysis and Findings section of the thesis will revolve predominantly around an examination of how those independent variables, or potential MOEs/MOPs, impact the overall effectiveness of the product and reflect the performance of the production analysts.

Assessment Methodology

Following the data collection, all prior research critical to understanding DCGS operations and intelligence performance variables was analyzed. A prerequisite to any effectiveness analysis is the identification of an objective against which to measure effectiveness. The primary phase of analysis was thus an investigation into the objective of DCGS FMV analysis-based production. Subsequently, variable analysis commenced. Impact analysis of each independent variable was not practically conducted; rather, the relationship between each independent variable (MOE) and the dependent variable (product effectiveness) was theoretically examined. In précis, the research examined how product timeliness, accuracy and usability affect DCGS FMV analysis-based product effectiveness. A discussion of MOPs followed, focusing on the relationship between product accuracy and product effectiveness. The hypothesis, that MOE for FMV analysis-based production can be identified, was tested through the examined relationship between the independent and dependent variables. To further refine and quantify/qualify each MOE and MOP, factor levels were determined for each independent variable. For example, for the independent variable (MOE) "accuracy," those factor levels would be: accurate
(100%), marginally accurate (98-99.9%), or inaccurate (<98%). In defining variable factor levels, not only can the independent variables be assessed in terms of whether or not they impact product effectiveness, but the degree of impact can be assessed as well. Although theoretical, one follow-on objective of the study is a practical application of the assessment framework. Such a practical application will require not only the identification of MOE and MOP but also metrics and indicators for each identified MOE and MOP. Following MOE identification and metric determination, a secondary analysis was conducted to determine effectiveness and performance indicators for each MOE and MOP, i.e., whether a correlative relationship could be demonstrated between certain mission execution elements and each MOE/MOP. Lastly, should this research become operationally incorporated, because of the subjective nature of intelligence, the case study examines potential biases, both of the production analyst and of the intelligence consumer.
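As a purely illustrative aside, the accuracy factor levels given above lend themselves to a direct, repeatable mapping. The short routine below is a minimal sketch of that mapping; the function name and the treatment of boundary values are assumptions for illustration, not specifications of this framework:

    def accuracy_factor_level(percent_accurate: float) -> str:
        """Map a measured product accuracy percentage to its factor level:
        accurate (100%), marginally accurate (98-99.9%), inaccurate (<98%)."""
        if percent_accurate >= 100.0:
            return "accurate"
        if percent_accurate >= 98.0:
            return "marginally accurate"
        return "inaccurate"

    for sample in (100.0, 99.9, 98.4, 97.5):
        print(sample, "->", accuracy_factor_level(sample))
    # 100.0 -> accurate; 99.9 -> marginally accurate;
    # 98.4 -> marginally accurate; 97.5 -> inaccurate

Because the mapping is deterministic, any two assessors given the same measured accuracy will assign the same factor level, which is precisely the repeatability the MOE/MOP criteria demand.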
SECTION IV
ANALYSIS AND FINDINGS

Because traditional military assessment measures and operational MOEs are not effective in accurately assessing the niche combat mission support element of FMV production, the challenge is in determining what attributes make the intelligence provided by FMV analysis-based products "actionable." According to Joint Publication 2-0, Joint Intelligence, the effectiveness of intelligence operations is assessed by devising metrics and indicators associated with the Attributes of Intelligence Excellence (U.S. Joint Chiefs of Staff 2013, I-22). It is here proposed that, so long as they meet the MOE and MOP criteria, MOEs, MOPs and MOE/MOP indicators can be derived from the established list of Attributes of Intelligence Excellence: anticipatory, timely, accurate, usable, complete, relevant, objective and available (U.S. Joint Chiefs of Staff 2013, III-33). The focus of this research is FMV analysis-based production MOEs; however, because aggregate analyst performance can impact the overall effectiveness of the products, MOPs will also be discussed as they relate to MOEs. Excessive numbers of MOEs or MOPs become unmanageable and risk the cost of collection efforts outweighing the value and benefit of assessment (Center for Army Lessons Learned 2010, 8). To maximize the assessment benefit, some attributes can be combined; complete can be considered an accuracy indicator and relevant can be considered a usability indicator. Anticipatory, objective and available each fail to meet MOE criteria. As further discussed in their individual sections below, of the eight Attributes of Intelligence Excellence, which can be said to comprise FMV product success criteria, only timely, accurate (encompassing completeness) and usable (encompassing relevance) succeed as effective MOE. Concerning the anticipatory attribute: to be effective, intelligence must anticipate the
informational needs of the customer. Anticipating the intelligence needs of the customer requires that the intelligence staff identify and fully understand the current tasked mission, the customer's intent, all relevant aspects of the operational environment and all possible friendly and adversary courses of action. Most importantly, anticipation requires the aggressive involvement of intelligence personnel in operational planning at the earliest possible time (U.S. Joint Chiefs of Staff 2013, II-7). Anticipatory products certainly may be more actionable, and thus more effective, than non-anticipatory products, and those FMV analysts responsible for production, through training, experience and an in-depth knowledge of the operational environment, are most likely capable of providing anticipatory intelligence. Anticipatory intelligence, however, fails to meet the requirements for an FMV analysis-based production MOE. Because MOE must be measurable to be effective, the first complication with the anticipatory attribute is in designating a metric. Unlike timeliness, which can be measured in seconds, minutes or hours, or accuracy, which can be measured on a percentage scale of one to 100, determining a standardized unit of measurement for the degree to which a product is anticipatory is not currently feasible. Additionally, anticipatory, or predictive, intelligence is generally used to assess threat capabilities and to emphasize and identify the most dangerous adversary Course of Action (COA) and the most likely adversary COA; such analysis is not typically conducted during near real-time operations by the PED Crew, but by intelligence staff tasked to those determinations (U.S. Department of the Army 2012, 2-2). Further complicating the issue of measurement is the issue of replicability. Even if a quantitative or qualitative method of measuring anticipation were devised, the same metric may not be applicable to the same product type under different mission tasking circumstances. DCGS intelligence is derived from multiple ISR platforms, may be analyzed by active duty, Air National Guard or Air Force Reserve forces and is provided to
COCOMs, Component Numbered Air Forces, national command authorities and any number of their subordinates across the globe (U.S. Air Force ISR Agency 2014, 6). Depending on exploitation requirements, analysts may be tasked to exploit FMV in support of several discrete mission sets including situational awareness, high-value targeting, activity monitoring, etc. (U.S. Joint Chiefs of Staff 2012, III-33). The nature of different customers and mission sets may impact the degree to which anticipatory intelligence is even possible. Consider, for example, the case of dynamic retasking, where a higher priority collection requirement arises and the tasked asset is immediately retasked to support it (U.S. Department of the Air Force 2007, 15). If an asset is dynamically retasked and products are immediately requested thereafter, the PED crew would be operating under severely limited knowledge of the new OE, and anticipating a new customer's production expectations beyond expressed product requirements would be very difficult, if not infeasible. The intent of this research is to identify FMV analysis-based product MOE which are repeatable, measurable, relevant, responsive and standardized across product and mission types. The attribute anticipatory is neither easily measurable nor repeatable given the number of mission variables impacting the feasibility of analysts making anticipatory assessments. A well-produced product disseminated in support of a dynamically retasked mission may be perfectly effective in achieving its goal of actionable intelligence despite not being anticipatory. While it might be more effective if it were anticipatory, the failure of that attribute to meet MOE criteria negates it as an appropriate MOE. Due to the decisive and consequential impact of intelligence on operations and the reliance of planning and operations decisions on intelligence, it is important for intelligence analysts to maintain objectivity and independence in developing assessments. Intelligence, to the greatest extent possible, must be vigilant in guarding against biases. In particular, intelligence
analysts should both recognize each adversary as unique and realize the possible biases associated with their assessment type (U.S. Joint Chiefs of Staff 2013, II-8). As with the anticipatory attribute, the objective attribute presents problems with both measurability and replicability. Also comparable to the anticipatory attribute, objectivity is highly important to an effective product. Because not only each mission but each production analyst is unique, so too are every analyst's assessment biases, each of which would undermine product objectivity. Further complicating objectivity as a viable MOE is the difficulty in determining a standard metric by which to measure objectiveness and a baseline against which to compare the objectivity of products. Both anticipating the customer's needs and remaining objective during analysis and production should be trained, heavily reinforced and corrected prior to dissemination as required. Biased analysis and production degrade product accuracy and effectiveness through the misinterpretation, underestimation or overestimation of events, the adversary, or the consequences of friendly force action. Objectivity does impact the effectiveness of DCGS FMV analysis-based products; however, the inability of the objective attribute to be measured in a standardized, replicable fashion disqualifies the attribute as a potential MOE. ISR-derived information must be readily accessible to be effective, hence the availability attribute. Availability is a function not only of timeliness and usability, discussed in their respective sections below, but also of the appropriate security classification, interoperability and connectivity (U.S. Joint Chiefs of Staff 2013, II-8). Naturally, both ISR personnel and the consumer should have the appropriate clearances to access and use the information (U.S. Department of the Air Force 2007, 15). Product dissemination to the customer is necessarily dependent on full network connectivity. A product would be completely ineffective if it were never received by the customer, or if it were classified above what the customer could
access; however, throughout the theoretical exploration of this research problem, full product availability will be assumed. During the discussion of specific MOE below, network connectivity will be held constant at fully functional, and classification will be assumed to be the derivative classification of the tasked mission, ensuring both analyst and customer access.

Measures of Effectiveness

Dismissing anticipatory, objective and available as attributes suitable for MOE selection, the applicable attributes of timely, accurate, usable, complete, and relevant remain. It is against those attribute criteria, either as MOE or as MOE indicators (see Figure 2), that the effectiveness of DCGS FMV analysis-based products in this thesis will be measured.

[Figure 2: Measures of Effectiveness]

Timeliness

Commanders have always required timely intelligence to enable operations. Because intelligence is a continuous process that directly supports the operations process, operations and intelligence are inextricably linked (U.S. Department of the Army 2012, v). Timely intelligence is essential to prevent surprise by the adversary, conduct defensive operations, seize the initiative, and utilize forces effectively (U.S. Department of the Air Force 2007, 5). To be effective, the commander's decision and execution cycles must be consistently faster than the adversary's; thus, the success of operations to some extent hinges upon timely intelligence (U.S.
Joint Chiefs of Staff 2013, xv). The responsive nature of Air Force ISR assets is an essential enabler of timeliness (U.S. Department of the Air Force 2007, 5). In the current environment of rapidly improving technologies, having the capability to provide timely information is vital (U.S. Department of the Air Force 1999, 40). The dynamic nature of ISR can provide information to aid in a commander's decision-making cycle and constantly improve the commander's view of the OE, that is, if the products are received in time to plan and execute operations as required. Because timeliness is so critical to intelligence effectiveness, as technology evolves, every effort should be made to streamline and automate processes to reduce timelines from tasking through product dissemination (U.S. Department of the Air Force 2007, 5).

Timeliness is vital to actionable intelligence; the question is, does 'late' intelligence equate to ineffective intelligence? According to the concept of Latest Time Information of Value (LTIOV), there comes a time when intelligence is no longer useful. LTIOV is the time by which information must be delivered to the requestor in order to provide decision makers with timely intelligence (U.S. Department of the Army 1994, Glossary-7). The implication is that information received by the end user after the LTIOV is no longer of value and, thus, no longer effective. To be effective at all, intelligence must be available when the commander requires it (U.S. Joint Chiefs of Staff 2013, II-7). All efforts must be made to ensure that the personnel and organizations that need access to required intelligence will have it in a timely manner (U.S. Joint Chiefs of Staff 2013, II-7). The DCGS is one intelligence dissemination system capable of providing near real-time situational awareness and threat and target status (U.S. Department of the Air Force 1999, 40). In the fast-paced world of near real-time intelligence assessment, production time is measured not
in days or hours, but in minutes. Consequently, the most appropriate metric for product timeliness is the production time, from tasking to dissemination availability, in minutes. The GA and MSA PED Crew positions are not only responsible for the creation and dissemination of intelligence products, but also for ensuring their production meets the DCGS Enterprise timeline (time limit) for that product (U.S. Department of the Air Force 2013a, 29). For this discussion of timeliness as an MOE, it will be assumed that the established DCGS Enterprise product timelines are congruent with product intelligence-of-value timelines; i.e., so long as an analyst successfully produces a product within its timeline, that product will be considered timely by the consumer. The DCGS Enterprise product timeline will serve as the baseline against which products will be compared.

In assessing the degree of product timeliness, the three assigned factor levels, outlined in Figure 3, will be:
1. The product was completed early (very timely),
2. The product was completed on time (timely), or
3. The product was completed late (not timely).

[Figure 3: Timeliness Factor Levels]

In this way, the effectiveness of a product may be at least partially assessed based upon the number of minutes it took to produce the product, measured against the established timeline for that type of product. To be relevant, intelligence in ISR must be tailored to the warfighter's needs (U.S. Department of the Air Force 1999, 10). The GA and MSA are both trained to produce their respective products within applicable timelines; however, if the end user should request a highly tailored product, it may push production time past the established
timeline. Product customization may then be considered an indicator for the timeliness MOE. When assessing product effectiveness, it is not single products that determine effectiveness, but the aggregate of products within a genre. For example, it may be determined that 90 percent of Product A products are produced within Factor Level 1, early. To elevate the analyst baseline, the timeline may be adjusted to bring the remaining ten percent of products up to par and increase product effectiveness through product timeliness. For assessment standardization across the enterprise, the designation 'early' must be specified. Here, to be considered very timely, a product must be completed in no more than half of the maximum allotted production time. For example, if the timeline is ten minutes and a product is completed in five, that product would be considered very timely. On the other hand, as ISR evolves and products are used to support ever more dynamic operations, if product type B is habitually requested at a high level of customization, resulting in 80 percent of products falling into Factor Level 3, not timely, the DCGS may want to consider adjusting that product's timeline, refining the procedure for producing that product, or introducing a more applicable product. Products whose vast majority falls into Factor Level 2, timely, may be considered effective, at least so far as reaching the customer while still of value is concerned.
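Because the factor levels above reduce to simple threshold comparisons, the logic can be illustrated programmatically. The following sketch is purely illustrative: the function name and the ten-minute timeline in the example are assumptions of this discussion, not an existing DCGS tool, and actual DCGS Enterprise product timelines are not publicly specified.

```python
# A minimal sketch of the timeliness factor levels, assuming a hypothetical
# classify_timeliness helper. The ten-minute timeline is illustrative only;
# actual DCGS Enterprise product timelines are not publicly available.

def classify_timeliness(production_minutes: float, timeline_minutes: float) -> int:
    """Return the timeliness factor level for a single product.

    Factor Level 1: completed early (very timely) -- within half of the
                    maximum allotted production time.
    Factor Level 2: completed on time (timely).
    Factor Level 3: completed late (not timely).
    """
    if production_minutes <= 0.5 * timeline_minutes:
        return 1  # very timely
    if production_minutes <= timeline_minutes:
        return 2  # timely
    return 3      # not timely

# Example: a ten-minute timeline, as in the discussion above.
assert classify_timeliness(5, 10) == 1   # completed in five minutes: very timely
assert classify_timeliness(9, 10) == 2   # completed in nine minutes: timely
assert classify_timeliness(12, 10) == 3  # completed in twelve minutes: not timely
```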
Accuracy

Intelligence production analysts should continuously strive to achieve the highest possible level of accuracy in their products. The accuracy of an intelligence product is paramount to the ability of that intelligence analyst and their parent unit to attain and maintain credibility with intelligence consumers (U.S. Joint Chiefs of Staff 2013, II-6). To achieve the highest standards of excellence, intelligence products must be factually correct, relay the situation as it actually exists, and provide an understanding of the operational environment based on the rational judgment of available information (U.S. Department of the Air Force 2007, 4). Especially for geospatial intelligence products, as FMV analysis-based products are, geopositional accuracy suitable for geolocation or weapons employment is a crucial product requirement (U.S. Department of the Air Force 1999, 10). Intelligence accuracy additionally involves the appropriate classification labels, proper grammar, and product completeness.

For a product to be complete, it must convey all of the necessary components (U.S. Department of the Army 2012, 2-2). For example, if DCGS standardized Product A requires the inclusion of four different graphics and only three were produced, the product would be incomplete and thus inaccurate. The necessary components of a product also include all of the significant aspects of the operating environment that may close intelligence gaps or impact mission accomplishment (U.S. Joint Chiefs of Staff 2013, II-8). Complete intelligence answers all of the customer's questions to the greatest extent possible. For example, if a product is requested to confirm or deny the presence of rotary- and fixed-wing aircraft at an airport, and the production analyst annotates and assesses the fixed-wing aircraft but neglects to annotate and assess the rotary-wing aircraft, then the product is incomplete and thus inaccurate. Similarly, if a production analyst fails to provide coordinates when required, the product would also be incomplete and inaccurate.

Incompleteness may be the variety of inaccuracy that most impacts the effectiveness of an intelligence product. While grammar errors, errors in coordinates, and even errors in classification, as critical as those inaccuracies may be, are most likely the result of analyst mistakes or oversight, incomplete products may hint at an inability to produce the product, access certain information, or understand production requirements. For example, if a product required analysts to obtain a certain type of imagery and analysts did not possess that skillset, one would expect to see a corresponding trend in incomplete products. Similarly, if a standardized product was modified
and analysts were not difference trained, one may expect to see a trend of analysts failing to input the newly required information. In this way, product completeness can serve as an indicator for the accuracy MOE. If a product is found to be incomplete, it is likely, without looking any further, that the product will already be too inaccurate to be effective. If a product is found to be complete, however, even one with grammar errors, it may still be found effective despite not being perfectly accurate.

When DCGS products are accounted for and quality controlled, they are measured against a baseline checklist which, when used, assigns the product an accuracy score on a scale of zero to 100 percent (U.S. Air Force ISR Agency 2013e, 5). The metric for the accuracy MOE will also be a percentage point system. The baseline checklist is designed to identify both major and minor product errors. A major error is one that causes, or should have caused, the product to be reissued. Major errors may include erroneous classification, incomplete and erroneous information, inaccurate geolocation data, or improper distribution (U.S. Air Force ISR Agency 2013e, 5). The checklist has been designed with a numeric point system for each item evaluated and is weighted so that any one major error will result in a failing product, as will an accumulation of minor errors; any product which has more than two percentage points deducted will be considered a failing product (U.S. Air Force ISR Agency 2013e, 5). Accordingly, 98 percent is the standard against which the quality of intelligence products should be continuously evaluated.

Using a scale of zero to 100 percent, with anything below 98 percent resulting in a failing product, the accuracy MOE will be broken down into three different factor levels (Figure 4):
1. An accurate product with no errors, resulting in 100 percent,
2. A marginally accurate product with only minor errors, resulting in 98-99.9 percent, or
3. An inaccurate product with major errors or several minor errors, resulting in below 98 percent.
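The checklist logic described above can likewise be sketched in a few lines. The per-error point deductions below are assumed values, chosen only so that any single major error, or an accumulation of more than two points of minor errors, produces a failing score; the actual checklist weights in AFISRAI 14-121 are not reproduced here.

```python
# A sketch of the zero-to-100 percent accuracy scoring described above.
# The deduction values are illustrative assumptions, not the actual
# AFISRAI 14-121 checklist weights.

MAJOR_ERROR_DEDUCTION = 3.0  # assumed: any one major error alone drops below 98
MINOR_ERROR_DEDUCTION = 0.5  # assumed: minor errors accumulate toward failure

def accuracy_score(major_errors: int, minor_errors: int) -> float:
    """Return a product accuracy score on a zero-to-100 percent scale."""
    deduction = (major_errors * MAJOR_ERROR_DEDUCTION
                 + minor_errors * MINOR_ERROR_DEDUCTION)
    return max(0.0, 100.0 - deduction)

def classify_accuracy(score: float) -> int:
    """Map a score to the three accuracy factor levels.

    Factor Level 1: accurate, no errors (100 percent).
    Factor Level 2: marginally accurate, minor errors only (98-99.9 percent).
    Factor Level 3: inaccurate, failing product (below 98 percent).
    """
    if score >= 100.0:
        return 1
    if score >= 98.0:
        return 2
    return 3

# One major error, or more than two points of accumulated minor errors,
# yields a failing (Factor Level 3) product.
assert classify_accuracy(accuracy_score(major_errors=1, minor_errors=0)) == 3
assert classify_accuracy(accuracy_score(major_errors=0, minor_errors=2)) == 2
assert classify_accuracy(accuracy_score(major_errors=0, minor_errors=5)) == 3
```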
[Figure 4: Accuracy Factor Levels]

Any passing product meeting the scores in Factor Level 1 or Factor Level 2 may be considered effective, whereas any failing product, falling into Factor Level 3, may be considered ineffective. As discussed in the Timeliness section above, the effectiveness assessment of a product is not based on individual products, but on the aggregate of products of the same genre. If, for example, Product A is habitually found to be inaccurate, as a product it would not be considered effective. The problem may be that parts of the product are operationally outdated and information required by standard production is no longer available. Or, in the case of a new product, analysts may not have adequate training to satisfy product requests accurately. Whatever the cause is determined to be, accuracy as a measure of effectiveness would provide the commander and staffs insight into when products need to be updated or amended, or when analysts need further training, in order for a product to be effective.

Usability

Usability is an assessment attribute that overlaps with all other intelligence product MOE and MOE indicators; timeliness, product customization, accuracy, and completeness are all, at least to some degree, prerequisites for a usable product. Beyond those attributes, usability may be understood as interpretability, relevance, precision and confidence level; in that respect usability may serve as an additional, discrete MOE.

Intelligence must be provided to the customer in a format suitable for immediate
comprehension (U.S. Joint Chiefs of Staff 2013, II-7). Products must be in the correct data file specifications for both display and archiving (U.S. Department of the Army 2012, 2-1). Providing usable products also requires production analysts to understand how to deliver the intelligence to the customer in context. Customers operate under mission, operational, and time constraints that shape their intelligence requirements and determine how much time they have to review the intelligence products provided, especially considering the near real-time operational tempo of ISR operations. Customers must be able to quickly apply the received intelligence and may not have sufficient time to analyze complex intelligence products; therefore, to be usable, the product must be easily interpreted (U.S. Joint Chiefs of Staff 2013, II-7). To be easily understandable, a function of interpretability, product text should be direct, and approved acronyms and brevity terms should be used as much as possible (U.S. Joint Chiefs of Staff 2013, II-8). The GA and MSA are both trained to review their products to ensure they meet established reporting guidance (U.S. Air Force ISR Agency 2013b, 29). It is probable that the DCGS Enterprise has established reporting guidance in order to make products more easily interpretable, at least with respect to the standardization of data file types.

Intelligence must be relevant to the planning and execution of the operation at hand and aid the commander in the accomplishment of the mission (U.S. Joint Chiefs of Staff 2013, II-8). To be effective, ISR products must be responsive to the commander's or decision maker's needs. Intelligence products must enable strategic, operational, and tactical users to better understand the operational environment systematically, spatially, and temporally, allowing them to orient themselves to the current and predicted situation with the goal of enabling decisive action (U.S. Department of the Air Force 2007, 4). Products must contribute to the customer's understanding of the adversary and other significant aspects of the OE, but also not burden the commander with
extraneous intelligence that is of minimal or no importance to the current mission. To produce relevant intelligence, the PED Crew must remain cognizant of the customer's intent, and the intelligence products must support and satisfy the customer's requirements (U.S. Joint Chiefs of Staff 2013, II-8). The customer's requirements are presented to the PED Crew in the form of Essential Elements of Information (EEIs). EEIs are the most critical information requirements needed by the customer to assist in reaching a decision (U.S. Joint Chiefs of Staff 2013, I-7). For the intelligence product to be relevant, and thus effective, the product must satisfy the tasked EEIs.

Related to relevance, intelligence products must provide the required level of detail and complexity to answer the requirements; the products must be precise. Precise threat location, tracking, and target capabilities are especially essential for success during mission execution (U.S. Joint Chiefs of Staff 2013, I-25). Precision can be defined as the level of detail reported and can be understood by categorizing the concept into three levels: detection, classification and recognition (Veit and Callero 1995, 11). The detection level, where only the location of objects can be discerned without discrimination, can be considered low precision. The classification level, where discrimination between objects can be discerned, such as the difference between tracked and wheeled vehicles, can be considered moderate precision. The recognition level, where discrimination can occur between similar objects, such as between types of tracked vehicles, can be considered highly precise (Veit and Callero 1995, 11). During practical application, recognition levels can be amended to better apply to specific mission sets. If the objective of the product is to confirm or deny the presence of a certain object at a certain location, a certain degree of precision may be required. If the product is not precise enough, the product may not be useful to the consumer, and thus ineffective.
Production analysts use confidence levels to distinguish between what is known and supported by the facts of the situation and what are untested assumptions (U.S. Joint Chiefs of Staff 2013, A-1). High confidence assessments are well corroborated by information from proven sources, include minimal assumptions, and utilize strong logical inferences and methods, with no or minor intelligence gaps. Medium confidence assessments are partially corroborated by information from good sources, include several assumptions, and utilize a mix of strong and weak inferences and methods, with minimal intelligence gaps. Low confidence assessments are uncorroborated by information from good or marginal sources, include many assumptions, and utilize mostly weak logical inferences and minimal methods application, with glaring intelligence gaps (U.S. Joint Chiefs of Staff 2013, A-1). A customer's determination of appropriate actions may rest on knowing whether intelligence is fact or assumption, knowledge conveyed via the use of confidence levels (U.S. Joint Chiefs of Staff 2013, A-1). In this regard, low confidence levels equate to data uncertainty (Greitzer 2005, 7). Depending on the operational environment, that uncertainty may lead the intelligence consumer to disregard the provided intelligence, undermining not only the overall usability of the product but consequently the product's effectiveness.

Of interpretability, relevance, precision and confidence level, both precision and confidence level may serve as indicators for the usability MOE. The primary distinction between which usability elements can serve as indicators and which cannot is whether or not the element can be internally assessed. Determining how relevant or how interpretable a product is can ultimately only be decided by the customer that requested the product. The precision and confidence of the intelligence provided in the product, however, can be readily assessed by mission managers and quality controllers within the DCGS.
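Because precision and confidence are the two internally assessable usability elements, they could be recorded as simple enumerations and screened automatically. The sketch below is illustrative only; the enumeration and function names are assumptions, not an existing DCGS capability.

```python
# A sketch encoding the precision and confidence levels discussed above as
# internally assessable usability indicators. All names are illustrative.

from enum import Enum

class Precision(Enum):
    DETECTION = 1       # low precision: object located, not discriminated
    CLASSIFICATION = 2  # moderate: e.g., tracked vs. wheeled vehicles
    RECOGNITION = 3     # high: discrimination between similar objects

class Confidence(Enum):
    LOW = 1     # uncorroborated, many assumptions, glaring gaps
    MEDIUM = 2  # partially corroborated, several assumptions
    HIGH = 3    # well corroborated, minimal assumptions, strong inferences

def flags_usability_concern(precision: Precision, confidence: Confidence) -> bool:
    """Flag a product whose internal indicators suggest degraded usability.

    Low precision or low confidence may indicate that the product is ill
    suited to the tasked EEIs or that analysts need additional training.
    """
    return precision is Precision.DETECTION or confidence is Confidence.LOW

assert flags_usability_concern(Precision.DETECTION, Confidence.HIGH)
assert not flags_usability_concern(Precision.RECOGNITION, Confidence.MEDIUM)
```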
Given the negative impact of low confidence assessments on overall product usability, products with consistently low confidence assessments may be indicative of undermined product effectiveness, due either to a lack of analyst confidence and training or to product incompatibility with the assigned task. For example, if the purpose of Product C is to identify certain objects, and Product C is disseminated primarily with low confidence ratings, that trend could indicate inadequate analyst training in source corroboration and assessment methodologies, or the product may be ill suited to the task and new production procedures may be required. For the same reasons an analyst could not make high confidence assessments, an analyst's assessment may have been lacking in its level of detail; thus low precision may also be indicative of product ineffectiveness.

Because the ultimate determination of product usability must be subjectively made by the end user, the metric for product usability can only be customer satisfaction, based qualitatively on customer feedback. Feedback is the basis for learning, adaptation and subsequent adjustment (U.S. Joint Chiefs of Staff 2011a, xxv). Feedback is a requirement for ensuring that any actions taken are moving the situation in a favorable direction (Bullock 2006, 2). The Air Force doctrinally supports the necessity of consumer feedback. As published in AFI 14-201, "Air Force customers will provide timely response to production center requests for clarification, negotiation and customer feedback" (U.S. Department of the Air Force 2002, 5). Air Force production centers are also instructed to "measure customer (requirement) satisfaction" (U.S. Department of the Air Force 2002, 5). Production centers are additionally tasked to participate in operations-oriented, end-user/producer technical exchange meetings and working groups, as applicable to the unit mission, in order to ensure the relevance of products and to actively collaborate with end users to better ensure responses meet intelligence needs (U.S. Department
of the Air Force 2012a, 20-21). Feedback is critical to effectiveness assessment, and though the Air Force doctrinally supports the provision of feedback by intelligence product consumers, the frequency, quality and content of feedback provided to the DCGS PED Crew are unclear. While internally assessed product confidence level and precision may contribute to usability estimates, the greatest emphasis for whether or not a product was effective must be placed on customer satisfaction, and customer satisfaction can only be measured through customer feedback. Customer satisfaction with a type of product could be determined by way of a survey, grading rubric, or open-ended comments. Whatever feedback mechanisms are utilized, to effectively contribute to mission assessment they must be standardized. The amount of time a customer can allocate to the provision of feedback, when compared to mission essential tasks, must be considered; and yet, the importance of feedback to effective intelligence must not be underestimated.

As previously discussed, product effectiveness within the scope of this research does not apply to the effectiveness of a single product, but to the aggregate of products of a certain type. The research objective is not to determine whether all individual products are effective, but rather to determine whether or not Product A is an effective type of intelligence product. Thus, customer feedback is not required after the receipt of every product. Rather, assuming repeat DCGS customers, only periodic feedback is needed to ensure that established products remain effective across the evolution of the ISR mission and that newly introduced DCGS products are in fact effective in providing actionable intelligence when and where it is required.

The customer satisfaction metric can be divided into three factor levels (Figure 5):
1. The product completely satisfied the requirements and the customer was highly satisfied,
2. The product mostly satisfied the requirements and the customer was satisfied, or
3. The product did not satisfy the requirements and the customer was dissatisfied.

[Figure 5: Customer Satisfaction Factor Levels]
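A standardized feedback mechanism might capture these factor levels directly in a structured record, making periodic customer feedback aggregable alongside the timeliness and accuracy data. The sketch below is one hypothetical format; the class and field names are illustrative assumptions, not an established DCGS feedback instrument.

```python
# A sketch of a standardized feedback record for the customer satisfaction
# metric. All names are illustrative; any fielded mechanism would follow
# locally published feedback guidance.

from dataclasses import dataclass

SATISFACTION_LEVELS = {
    1: "requirements completely satisfied; customer highly satisfied",
    2: "requirements mostly satisfied; customer satisfied",
    3: "requirements not satisfied; customer dissatisfied",
}

@dataclass
class FeedbackRecord:
    product_type: str        # e.g., "Still Image"
    mission_set: str         # e.g., "Pattern of Life"
    satisfaction_level: int  # one of the three factor levels above
    comments: str = ""       # optional open-ended remarks

record = FeedbackRecord("Still Image", "Pattern of Life", 2)
print(SATISFACTION_LEVELS[record.satisfaction_level])
```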
Measures of Performance

A predominant component of this research is the theoretical exploration of DOD-established Intelligence Attributes of Excellence toward the identification of FMV analysis-based production MOE. Because product quality is inexorably linked to product effectiveness, the identification of MOP is relevant to the product effectiveness discussion. For FMV analysis-based production, MOP should provide insight into a production analyst's ability to correctly perform the task of producing DCGS FMV intelligence products. In the niche intelligence mission subset of FMV analysis and production, two of the Intelligence Attributes of Excellence previously identified as MOE also serve as MOP: product timeliness and accuracy.

It is vital here, once again, to distinguish between task assessment and effectiveness assessment. Though the metrics and factor levels for MOP are the same as those of two of the MOE attributes, the original data inputs and findings differ; despite overlapping MOE and MOP, the respective analyses are discrete. As Figure 6 demonstrates below, MOP involves the timeliness and accuracy assessment of a single product in order to determine how well the analyst performed the production task. MOE involves the aggregate timeliness and accuracy assessment of all products of a certain type in order to determine how effective the product itself is in delivering actionable intelligence to the consumer.
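The distinction between the two analyses can be made concrete with a small illustration: the same factor-level records, grouped by product type, yield an MOE trend, while grouped by analyst they yield an MOP trend. The record structure, helper names and sample values below are hypothetical.

```python
# A sketch of the task-versus-effectiveness distinction: MOP examines single
# products per analyst, while MOE aggregates factor levels across all
# products of one type. Fields and sample values are illustrative.

from collections import Counter
from typing import NamedTuple

class ProductRecord(NamedTuple):
    product_type: str
    analyst: str
    timeliness_level: int  # 1 = very timely, 2 = timely, 3 = not timely

def moe_timeliness_profile(records, product_type):
    """Effectiveness view: factor-level distribution for one product type."""
    levels = [r.timeliness_level for r in records if r.product_type == product_type]
    counts = Counter(levels)
    total = len(levels)
    return {level: counts.get(level, 0) / total for level in (1, 2, 3)}

def mop_late_rate(records, analyst):
    """Task view: how often one analyst produced late, across product types."""
    own = [r for r in records if r.analyst == analyst]
    return sum(r.timeliness_level == 3 for r in own) / len(own)

records = [
    ProductRecord("Still Image", "analyst_a", 1),
    ProductRecord("Still Image", "analyst_b", 3),
    ProductRecord("Still Image", "analyst_a", 2),
    ProductRecord("Still Image", "analyst_b", 3),
]
print(moe_timeliness_profile(records, "Still Image"))  # product-type trend
print(mop_late_rate(records, "analyst_b"))             # analyst trend
```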
[Figure 6: Task and Effectiveness Assessment]

Products falling into the 'not timely' factor level during effectiveness analysis should cause the DCGS to consider adjusting that product's timeline, refining the procedure for producing that product, or introducing a more applicable product. Conversely, during task analysis, a single analyst who regularly fails to produce tasked products within the allotted time does not reflect ineffective products, but rather poor analyst performance, a result with a completely separate mitigation COA. While usability might also be used to assess analyst performance, given consumer time constraints, feedback on every product is infeasible.
Consequently, no standardized, repeatable data set can be provided to assess performance through product usability; therefore, usability does not meet all of the previously established criteria to be considered an MOP. As shown in Figure 7 below, as a byproduct of the differences between task performance and product effectiveness analysis, the timeliness and accuracy performance indicators differ from their respective effectiveness indicators. Proposed FMV analysis-based production MOP indicators include training grades, qualification scores and examination scores.

[Figure 7: Measures of Performance]

In order to participate in production functions during an FMV mission, analysts must be qualified for a production position on the AN/GSQ-272 SENTINEL weapon system (U.S. Department of the Air Force 2015b, 4). In order to develop and maintain a high state of readiness, mission managers ensure that all personnel performing intelligence duties attain and maintain the qualifications and currencies needed to support their unit's mission effectively and to the minimum standards set by each community (U.S. Department of the Air Force 2015b, 4). Each analyst is additionally responsible for maintaining the qualifications and currencies associated with their assigned positions (U.S. Department of the Air Force 2015b, 9-11). When assessing performance measures and indicators, only qualified analysts should be considered in the data set, as unqualified analysts do not contribute to production metrics.

The performance and progress of DCGS trainees through each training item is documented on an Intelligence Gradesheet, which provides a standardized depiction of how well a trainee is progressing through individual training events. The Intelligence Gradesheet
additionally provides trainers and supervisors a means to improve trainee performance on specific tasks through rapid written feedback on the trainee's abilities during each event (U.S. Department of the Air Force 2015b, 15). Through the course of training, how well a trainee completes production tasks is extensively documented with predefined quantitative metrics. "Grade 1" is given when performance is correct, efficient, skillful and without hesitation. "Grade 2" is given when performance is essentially correct and the analyst recognizes and corrects errors. "Grade 3" is given when performance indicates a lack of ability or knowledge (U.S. Department of the Air Force 2015b, 23). Training grades may serve as an MOP indicator because training, to some degree, indicates future analytic and production performance. Documentation of Grade 1 on production tasks correlates to a well-trained analyst with a demonstrated product proficiency who can be expected to perform accurate and timely production during mission execution. Documented training grades of 2 or 3 may indicate analysts who struggle with production and who may produce untimely, inaccurate or unusable products during missions.

While every analyst responsible for production during live missions has successfully passed the qualification test for their position and has demonstrated at least a baseline proficiency in every mission category, qualification scores can still be useful as a second MOP indicator. Qualifications in part measure how well analysts can apply their training to the performance of mission tasks, including production. When an evaluation occurs, the same grading criteria are used as during the training process, both to ensure continuity and to ensure that all personnel are trained to the minimum standards required to pass an evaluation (U.S. Department of the Air Force 2015a, 23). Following an evaluation, an evaluatee will receive one of three scores for every evaluation area; the evaluation area of greatest interest
to this research is the Intelligence Products area (U.S. Department of the Air Force 2015a, 70). A score of "Q" means the individual is Qualified. "Q" is the desired level of performance, where the individual demonstrated a satisfactory knowledge of all required information, performed intelligence duties within prescribed tolerances and accomplished the assigned mission. A score of "Q-" means the individual is Qualified with Discrepancies. "Q-" indicates that the individual is qualified to perform the assigned task but requires debriefing or additional training as determined by the evaluator. A score of "U" means the individual is Unqualified (U.S. Department of the Air Force 2015a, 23). Because only qualified individuals may produce DCGS FMV analysis-based products, only the performance of qualified individuals is relevant to this research; the "U" score will not be considered. Evaluation scores may serve as MOP indicators because those individuals who receive a "Q" score in the Intelligence Products evaluation area can reasonably be expected to reproduce their evaluation results and produce products at the desired level of performance during a mission. Similarly, those analysts who received a "Q-" in the Intelligence Products evaluation area may exhibit a propensity toward subpar performance during mission execution. A "Q-" means that the evaluatee "created and disseminated/or submitted data, information and/or intelligence products using the appropriate methods with minor deviations. Did not always meet applicable timelines. Deviations, omissions, or discrepancies did not jeopardize mission accomplishment, safety or security" (U.S. Department of the Air Force 2015a, 70).

In addition to being evaluated, the DCGS PED crew must also pass periodic written intelligence examinations on knowledge requisites in order to maintain their currency. Examinations typically comprise open or closed book written examinations for mission-specific and Area of Responsibility (AOR) certifications (U.S. Department of the Air Force
2015b, 14). Intelligence examinations measure an analyst's knowledge of procedures, threats and other information deemed essential for effective intelligence operations. The minimum passing grade for examinations is 85 percent (U.S. Department of the Air Force 2015b, 13-14). Analysts failing to achieve the examination requirement of at least 85 percent will not be able to participate in production activities during a mission and therefore fall outside the scope of this research; however, analysts who receive scores of 85 to 90 percent only marginally possess the requisite knowledge to perform their mission position duties. Poor and marginal testing performance on examinations can often indicate areas requiring increased training emphasis (U.S. Department of the Air Force 2015b, 14). Scores within the 85 to 90 percent range may indicate analysts who will struggle with the performance of mission tasks.

As discussed in the Measures of Effectiveness subsection above, timeliness and accuracy, with respective metrics of minutes and percent, meet all of the criteria to be effective MOE/MOP. During effects assessment, timeliness and accuracy apply to product types, with trend and pattern analysis focused on discrete genres of products. During task assessment, timeliness and accuracy apply to individual products, with trend and pattern analysis focused instead on analysts or PED crews/flights. Though timeliness and accuracy serve as both MOEs and MOPs when conducting FMV analysis-based production mission assessment, their indicators differ. Whereas MOEs have unique product effectiveness indicators for timeliness and accuracy, the same MOP indicators may be assigned to both timeliness and accuracy.
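Taken together, the three proposed MOP indicators reduce to threshold rules over documented scores, as the following sketch illustrates. The thresholds follow the grading scales described above, but the screening function itself is an assumption of this discussion rather than established Air Force procedure.

```python
# A sketch of screening MOP indicators for analysts who may warrant
# increased training emphasis. Thresholds follow the grading scales
# described in the text; the rule itself is an illustrative assumption.

def flags_training_emphasis(training_grade: int,
                            eval_score: str,
                            exam_percent: float) -> bool:
    """Flag an analyst whose indicators suggest possible production risk.

    training_grade: 1 (correct and efficient), 2 (corrects own errors),
                    3 (lacks ability or knowledge).
    eval_score:     "Q" (qualified) or "Q-" (qualified with discrepancies);
                    "U" (unqualified) analysts fall outside the data set.
    exam_percent:   must be at least 85 to perform mission production;
                    85 to 90 percent is treated as marginal.
    """
    if eval_score == "U" or exam_percent < 85:
        raise ValueError("unqualified analysts are excluded from MOP data")
    return training_grade >= 2 or eval_score == "Q-" or exam_percent <= 90

assert not flags_training_emphasis(1, "Q", 96)  # no indicator of concern
assert flags_training_emphasis(2, "Q", 95)      # marginal training grade
assert flags_training_emphasis(1, "Q-", 98)     # qualified with discrepancies
assert flags_training_emphasis(1, "Q", 88)      # marginal examination score
```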
Theoretical Application

In this section, the proposed MOEs of timeliness, accuracy, and usability will be theoretically applied to the DCGS FMV mission set in order to better demonstrate how they may be used as assessment tools in the operational environment. This theoretical case study will consider a Still Image product, here defined as an annotated snapshot of FMV imagery capable of being disseminated to the end user in near real-time, when produced in support of the Pattern of Life (POL) mission set: the analysis and geospatial referencing of a target's or individual's day-to-day interactions with their environment (Mason, Foss, and Lam 2011, 6). In support of POL mission subsets, the Still Image product may be used to graphically display significant activity or objects of interest meeting EEI requirements or as requested by the customer. For example, if POL analysis is being conducted on a Compound of Interest (COI) and one EEI is to report any vehicle traffic, the arrival of a sedan outside the COI would be deemed significant activity, and the PED Crew would create an annotated Still Image of the vehicle detailing its appearance, its location relative to the COI, any interaction between the vehicle/driver/passenger and the COI, etc., and then immediately disseminate the product to the customer so that the end users may determine whether, based on that significant information, new COAs or FMV requirements need to be devised. For Still Images produced in support of POL missions to be considered actionable intelligence, the goal of DCGS production, they must enable the customer/end user to make COA decisions.

To determine whether Still Imagery products are effective in POL missions, product data must be collected and assessed. How, and in what context, the product data is assessed is critical to the assessment outcome. That being said, it is the proposed MOE that provide an assessment framework for determining whether or not Still Imagery products are effective in satisfying certain EEIs during POL missions. For this case study, it will be assumed that over the course of mission assessment, no negative performance indicators were identified and all PED Crew production analysts, through MOP analysis, were found to perform at or above standard production performance levels. Holding MOP constant, for the Still Image product to be baseline effective,
it must then only be timely, accurate and usable. Product timelines (time limits) are set by the DCGS Enterprise, and each product has a single timeline regardless of tasked mission set. Among the simpler of DCGS products, Still Images will have an assigned timeline of "X" minutes (versus hours). If Still Image products are completed within "X" minutes and reach the customer/end user while still of value, then those products were not ineffective. However, if Still Image products are habitually disseminated after "X" minutes, then the product should be considered ineffective, and either the product will need to be amended, analysts will require additional training, or both. Similarly, the product may be ineffective if the product timeline does not fit the situation. If most Still Images produced in support of POL missions are of vehicles, and vehicles usually arrive at and depart compounds within "Y" minutes, where Y<X, then even when products are disseminated in "X" minutes, the intelligence is too late to be actionable; that is, if the products habitually arrive after the vehicles of interest have already departed the COIs, then the intelligence is no longer actionable, as customers do not have the option of redirecting the FMV asset from the COI to follow the vehicle should they determine, based on the graphic, that tracking the vehicle is a higher priority.

In addition to products, information is passed in real time to the customer. Usually retasking decisions can be made instantaneously; however, graphic products allow customers to view data in a more easily comprehensible, often more in-depth manner, and so the situation may exist where products are the deciding factor in customer decision making. If customer feedback reveals that on-time products are still deemed situationally late, then new, more quickly produced products may need to be introduced, and analysts would need to be trained to default to those new products in certain situations. For example, when significant vehicle activity is assessed while conducting POL, analysts may be trained to produce a "clean," unannotated Still Image
instead of the DCGS standard Still Image to ensure the product is timely, and thus, effective. Without assessing product timeliness within the context of MOE, however, such analyses may not be operationally conducted. Establishing timeliness as an MOE directs mission managers to collect the appropriate product data, here whether or not products are meeting their timelines, and to assess that data, along with timeliness-focused customer feedback, to ensure that DCGS Enterprise-directed timelines are appropriate for the tasked OE.

As with timeliness, establishing accuracy as an FMV analysis-based production MOE directs both the collection of product data and its assessment to ensure that products are meeting their objectives. DCGS site product accuracy data collection is dictated by Air Force ISR Agency Instruction 14-121, Geospatial Intelligence Exploitation and Dissemination Quality Control Program Management. Quality Control requires the evaluation of FMV analysis-based product accuracy on a percentage scale. Beyond sending quarterly reports to higher headquarters, how each site utilizes the collected accuracy data is up to that site (U.S. Air Force ISR Agency 2013e, 4). With accuracy as an established product MOE, mission managers will be expected at least to conduct product accuracy trend analysis by error type, product type and PED Crew. For example, if Still Images in support of POL missions are habitually missing georeference information across PED Crews, additional training may be required, the product template may need to be updated to call attention to the required field, or both. If only one PED Crew is neglecting the georeference input field, then mission manager time and effort may be spared in template updates and full training sessions by conducting continuation training with only the deficient crew. If incorrect or misleading intelligence is acted upon, the negative repercussions could be quite serious. Thus, for intelligence to be effective, it must be accurate, and that includes even the simplest of micro-tactical FMV products. Designating accuracy as an MOE is
critical to directing product assessment toward increased product accuracy and efficiently utilized mission management resources.

Above all, the customer/end user must find the product usable. Even a perfectly accurate, on-time product is ineffective if the customer does not utilize it; such a product also wastes analyst and PED Crew efforts which could be better directed toward activities the customer does find useful. For example, periodic customer feedback may reveal low usability ratings on Still Imagery disseminated in support of POL missions despite high timeliness and accuracy ratings. Further discussions with the customer may expose the customer's exclusive reliance on real-time, PED Crew-disseminated intelligence. The customer may make all POL mission decisions in real time prior to receiving even the timeliest of Still Image products. Whereas the customer may find Still Images invaluable to conducting post-strike Battle Damage Assessments, the nature of the POL mission may negate the value of Still Images altogether. In this hypothetical case, PED Crew directives may be amended from automatically producing Still Image products when significant activity is observed during POL missions to producing Still Image products only when requested by the customer, thereby more efficiently executing the DCGS FMV mission. Without establishing usability as an MOE, however, established customer feedback requirements may not facilitate such back and forth between mission managers and the customer as in the above example, and such mission inefficiencies caused by ineffective products may never be recognized.

As discussed in the Conclusion below, MOEs themselves do not comprise mission assessment. MOEs are assessment tools. In identifying MOEs, an assessment framework is provided to focus data collection efforts and analysis in a way that can efficiently determine whether products are meeting their objective of providing actionable intelligence.
SECTION V

CONCLUSION

Well-devised MOEs and MOPs are critical to helping commanders and their staffs understand the links between tasks, end states, and lines of effort (Center for Army Lessons Learned 2010, 8). The research presented in this paper theoretically establishes that both MOEs and MOPs can be identified for DCGS FMV analysis-based production. MOEs and MOPs contribute to mission assessment, but they do not comprise the assessment themselves. Supporting data must be collected and organized into presentable formats for trend and pattern analysis (Center for Army Lessons Learned 2010, 9). MOE analysis involves product type trend analysis in order to determine if the products made a difference in the OE and if the product objective, the provision of actionable intelligence, was met. MOP analysis involves individual and PED crew/flight trend analysis in order to identify performance deficiencies. Effective mission assessment will result either in findings supporting the continuation of current operations, actions, programs and resource allocations, or in findings which require reprioritization, redirection or adjustments to increase goal attainment and enhance success (Center for Army Lessons Learned 2010, 9).

With the exception of customer satisfaction, for which, based on publicly available information, the establishment and standardization of DCGS feedback mechanisms are unclear, data collection supporting mission assessment is doctrinally supported for DCGS operations. All DCGS units are already required to send a monthly product Quality Control Status and Trends Analysis Report to higher headquarters; that quality control mandatorily includes the evaluation of product accuracy and timeliness (U.S. Air Force ISR Agency 2013b, 4-6).

Individual site support staffs are primarily responsible for mission assessment. Each
    65 DCGS node’s Departmentof Operations Mission Management is charged with the establishment and maintenance of mission statistics and the trends analysis program (U.S. Air Force ISR Agency 2013d, 30). Mission Management monitors program effectiveness and provides feedback to all levels in order to improve both the quality control process and the overall production efforts of all analysts (U.S. Air Force ISR Agency 2013b; 7). They publish trends for feedback to intelligence personnel at least quarterly, initiate actions to determine the root cause of any negative trends and provide quarterly trends to squadron commander (U.S. Air Force ISR Agency 2013d, 30). They are responsible for ensuring individuals are properly trained and equipped to meet their mission requirements by emphasizing a stringent, multi-layered pre- and post-release quality control process for FMV analysis-based products (U.S. Air Force ISR Agency 2013b; 4). Most specifically, Mission Management is tasked with identification and analysis of trends by types of errors, flights or work centers (U.S. Air Force ISR Agency 2013b; 7). Each DCGS node’s Commander’s Standardization/Evaluation Program is responsible for trend analysis data. Standardization/Evaluation is tasked with conducting trend analysis at least annually including trend identification for unit evaluation and examination results (U.S. Department of the Air Force 2015a, 10). Each DCGS node’s Department of Operations Training is tasked with monitoring training programs to ensure training objectives are met and convening a semi-annual Training Review Panel in order to discuss training progression, changes to requirements and any positive or negative trends (U.S. Air Force ISR Agency 2013c, 10-11). At the higher headquarters level, the 480th ISR Wing, responsible for operating the AF DCGS Enterprise, monitors, tracks, and provides feedback on products and other analyses including the quality and timeliness of FMV products and analyses (U.S Department of the Air Force 2012a, 14).
The requirement for assessment, mission monitoring and trend analysis, to include roles and responsibilities, is doctrinally addressed for the DCGS Enterprise. It is the responsibility of the commander to ensure that those responsible for conducting assessment are adequately resourced, as they must be in order to be effective (U.S. Joint Chiefs of Staff 2011b, D-9). Unit commanders must detail the procedures for mission crews to create, collect, compile and forward the mission data (timeliness, accuracy and usability metrics) necessary for the unit commander, Director of Operations, and mission backshops (support staffs) to assess mission effectiveness and report to other organizations as necessary (U.S. Air Force ISR Agency 2013d, 39). The commander and staffs should ensure that resource requirements for data collection efforts and analysis are built into the mission strategy and monitored to ensure compliance (U.S. Joint Chiefs of Staff 2011b, D-9).

Collecting mission data and recognizing which Attributes of Intelligence Excellence may serve as MOE and MOP is not enough to conduct mission assessment. MOE and MOP trends and patterns must be derived from the collected mission metrics, and when analyst production performance or product effectiveness goals and objectives are not being met, deficiency analysis must be conducted (Heuer 1999, 161). Deficiency analysis is used if the task assessment and effects assessment indicate failures of analysts to produce things right (MOP) or of products to achieve the right things (MOE) (Heuer 1999, 161).

For mission assessment to be accurate, however, data collection bias must be mitigated to the greatest extent possible. Bias is one of the major sources of analytical failure and may greatly affect the intelligence analyst and the subsequent intelligence product, in part because it is difficult to recognize bias when it occurs (Achananuparp et al. 2005, 30). Research into the psychology of intelligence analysis conducted by the Central Intelligence Agency has yielded several
concerning conclusions that may impact the validity of mission data. The analyst, the consumer, and the mission assessor, the one who evaluates analytical performance, all exercise hindsight. They take their current state of knowledge and compare it with what they or others did, or could, or should have known before the current knowledge was received (Heuer 1999, 161). In general, analysts normally overestimate the accuracy of their past judgments, and intelligence consumers normally underestimate how much they learned from intelligence reports. When consumers of intelligence evaluate the quality of an intelligence product, they assess how much they learned from the product that they did not previously know. During this assessment, there is a consistent tendency for most consumers to underestimate the contribution made by new information and, thus, to undervalue the intelligence product (Heuer 1999, 161). Unfortunately, such biases cannot easily be overcome. To even attempt to mitigate evaluation biases, both analysts and consumers should be educated on bias tendencies and should actively consider their objectivity when assessing their own performance or the effectiveness of a product.

To meet the DCGS Enterprise objective of delivering actionable intelligence to consumers, intelligence products must be timely, accurate and usable. Especially as the ISR mission continues to evolve, assessing the effectiveness of products will be vital to informing the commander's decisions concerning whether to maintain or change the existing production strategy. As discussed in this research, the application of MOE in mission analysis allows for production effectiveness assessments, and MOE as well as MOP have been demonstrably identified for the DCGS Enterprise FMV analysis-based production mission.

The research and theoretical process for FMV analysis-based production MOP and MOE explored in this paper may be extended with easy applicability beyond the FMV mission subset across DCGS Enterprise mission functions. This research may further prove useful in its
framework for other operationally focused and product-oriented sections of the Intelligence Community, especially GEOINT production centers. In customizing the assessment process to best align with their mission set, units should consider whether or not to apply a weighted approach to the assessment attributes. In this research, usability held elevated importance over timeliness and accuracy, which were not ranked in order of importance; however, units may find that their mission set emphasizes the importance of one attribute over the others.

Units may also find that mission assessment should be conducted differently for different mission sets. FMV is a subset of GEOINT, a mission which enables area familiarization and monitoring, feasibility assessments, target development, weapons effectiveness, collateral damage estimates, the formulation of courses of action and estimated consequences of execution, as well as strategic, operational and tactical planning for combat, peacekeeping, and humanitarian assistance/disaster relief missions (U.S. Department of the Air Force 2012a, 3-4). Recognizing and balancing the subtle differences relative to timeliness and accuracy is one of the critical art forms for good intelligence (U.S. Joint Chiefs of Staff 2013, II-7). Less time-sensitive missions, such as target development, may naturally hold accuracy above timeliness in overall product effectiveness. In time-sensitive missions, such as targeting, personnel recovery or threat warnings, dissemination immediacy is critical for decision makers (U.S. Joint Chiefs of Staff 2013, III-33). Usually, the need to balance timeliness and accuracy should favor timeliness; any errors in products could and should be followed up on later (U.S. Joint Chiefs of Staff 2013, II-7). Each unit would need to decide for itself whether or not to consider weighted MOE or MOP in their individual assessment frameworks, and if they choose to weight the attributes, guidance should be locally published to ensure assessment continuity.
This research is, as previously discussed, a theoretical exploration of how product effectiveness may be assessed based on established DOD Attributes of Intelligence Excellence. The research goal was not to comprehensively address every possible MOE or MOP for assessing DCGS FMV analysis-based production, but rather to demonstrate that MOE and MOP can be identified at all, and to present several potential options for both. Should this research be practically applied, each DCGS node or intelligence production center should consider its OE and mission specifics to determine the addition or subtraction of MOE and MOP as relevant.
References

Achananuparp, Palakorn, Park, S. Joon, Proctor, Jason M., Wania, Christine, and Zhou, Nan. 2005. An Analysis of Tools that Support the Cognitive Processes of Intelligence Analysis. Pennsylvania State University.

Air University Press. "Air and Space Power Journal." Accessed June 25, 2015. http://www.airpower.maxwell.af.mil/.

Alberts, David S., Garstka, John J., and Stein, Frederick P. 2000. Developing and Leveraging Information Superiority. Washington, D.C.: C4ISR Cooperative Research Program.

Brown, Jason M. 2009. "Operating the Distributed Common Ground System: A Look at the Human Factor in Net-Centric Operations." Air & Space Power Journal 23, no. 4: 51-57. International Security & Counter Terrorism Reference Center, EBSCOhost (accessed May 16, 2015).

Buckley, John, Buckley, Marlene, and Chiang, Hung-Fu. 1976. Research Methodology and Business Decisions. New York: National Association of Accountants.

Bullock, Richard K. 2006. "Theory of Effectiveness Measurement." Ph.D. diss., Air Force Institute of Technology.

Center for Army Lessons Learned. 2010. Assessments and Measures of Effectiveness in Stability Operations: Tactics, Techniques, and Procedures. Handbook 10-41. Washington, D.C.

Darilek, Richard E., Walter L. Perry, Jerome Bracken, John Gordon, and Brian Nichiporuk. 2001. "MOEs for Stability and Support Operations." In Measures of Effectiveness for the Information-Age Army. Santa Monica: RAND Corporation.

George, Alexander L., and Bennett, Andrew. 2005. Case Studies and Theory Development in the Social Sciences. Cambridge: Massachusetts Institute of Technology Press.

Green, John M. 2001. Establishing System Measures of Effectiveness. San Diego: Raytheon Systems Corporation.

Greitzer, Frank L., and Allwein, Kelcy. 2005. "Metrics and Measures for Intelligence Analysis Task Difficulty." In Panel Session, 2005 International Conference on Intelligence Analysis Methods and Tools. Vienna, VA.

Greitzer, Frank L. 2005. "Methodology, Metrics, and Measures for Testing and Evaluation of Intelligence Analysis Tools." Richland: Battelle Memorial Institute Pacific Northwest Division.

Heuer, Richards J. 1999. "Hindsight Biases in Evaluation of Intelligence Reporting." In Psychology of Intelligence Analysis. Washington, D.C.: Center for the Study of Intelligence.
Johnson, Rob. 2003. Developing a Taxonomy of Intelligence Analysis Variables. Central Intelligence Agency Center for the Study of Intelligence. Washington, D.C.

Johnson, R., McCormick, R. D., Prus, J. S., and Rogers, J. S. 1993. "Assessment Options for the College Major." In Making a Difference: Outcomes of a Decade of Assessment in Higher Education. San Francisco: Jossey-Bass.

Kimminau, Jon. 2012. "A Culminating Point for Air Force Intelligence, Surveillance and Reconnaissance." Air & Space Power Journal 26, no. 6: 113-129.

Langley, John K. 2012. "Occupational Burnout and Retention of Air Force Distributed Common Ground System (DCGS)." Ph.D. diss., Pardee RAND Graduate School.

March, James, and Sutton, Robert. 1997. "Organizational Performance as a Dependent Variable." Organization Science 8, no. 6 (November-December). Accessed May 3, 2015. http://www.wiggo.com/mgmt8510/readings/readings3/march1997orgsci.pdf.

Mason, Tony, Foss, Suzanne, and Lam, Vinh. 2011. Using ArcGIS for Intelligence Analysis. Esri Federal User Conference.

RAND Corporation. "Research." Accessed June 25, 2015. http://www.rand.org/research.html.

Report of the DNI Authorities Hearing before the Senate Select Committee on Intelligence. 2008. By J. Michael McConnell, Director of National Intelligence. Washington, D.C.: Government Printing Office.

Sink, D. Scott. 1991. "The Role of Measurement in Achieving World Class Quality and Productivity Management." Industrial Engineering 23, no. 6: 23-28.

Smith, Neill, and Clark, Thea. 2006. "A Framework to Model and Measure System Effectiveness." Australian Government Department of Defence. 11th ICCRTS.

Sproles, Noel. 1997. "Identifying Success through Measures." PHALANX 30, no. 4: 16-31.

Veit, Clairice T., and Monti D. Callero. 1995. "A Method for Measuring the Value of Scout/Reconnaissance." No. RAND-MR-476-A. Santa Monica: RAND Arroyo Center.

U.S. Air Force ISR Agency. 2013. Air Force Distributed Common Ground System (AF DCGS) Evaluation Criteria. AFISRA Instruction 14-153, Volume 2. Washington, D.C.

U.S. Air Force ISR Agency. 2013. Air Force Distributed Common Ground System (AF DCGS) Intelligence Training. AFISRA Instruction 14-153, Volume 1, 480 ISR WG Supplement. Washington, D.C.

U.S. Air Force ISR Agency. 2013. Air Force Distributed Common Ground System (AF DCGS) Intelligence Training. AFISRA Instruction 14-153, Volume 1. Washington, D.C.
U.S. Air Force ISR Agency. 2013. Air Force Distributed Common Ground System (AF DCGS) Operations Procedures. AFISRA Instruction 14-153 Volume 3. Washington, D.C.
U.S. Air Force ISR Agency. 2013. Geospatial Intelligence Exploitation and Dissemination Quality Control Program Management. AFISRA Instruction 14-121, 480th ISR Wing Supplement. Washington, D.C.
U.S. Air Force ISR Agency. 2014. Air Force Distributed Common Ground System (AF DCGS) Operations Procedures. AFISRA Instruction 14-153 Volume 3, 480th ISR Wing Supplement. Washington, D.C.
U.S. Air Force. "Air Force Distributed Common Ground System." Accessed June 27, 2015. http://www.af.mil/AboutUs/FactSheets/Display/tabid/224/Article/104525/air-force-distributed-common-ground-system.aspx.
U.S. Department of Defense. "Defense Technical Information Center." Accessed June 25, 2015. http://www.dtic.mil/dtic/.
U.S. Department of Defense. "Joint Electronic Library." Accessed June 25, 2015. http://www.dtic.mil/doctrine/.
U.S. Department of the Air Force. 1997. Sentinel Force 2000+. Air Force Intelligence Force Development Guide. Washington, D.C.
U.S. Department of the Air Force. 1999. Intelligence, Surveillance and Reconnaissance Operations. Air Force Doctrine Document 2-5.2. Washington, D.C.
U.S. Department of the Air Force. 2002. Intelligence Production and Applications. Air Force Instruction 14-201. Washington, D.C.
U.S. Department of the Air Force. 2004. Intelligence, Surveillance, and Reconnaissance (ISR) Planning, Resources, and Operations. Air Force Policy Directive 14-1. Washington, D.C.
U.S. Department of the Air Force. 2007. Intelligence, Surveillance and Reconnaissance Operations. Air Force Doctrine Document 2-9. Washington, D.C.
U.S. Department of the Air Force. 2012. Geospatial-Intelligence (GEOINT). Air Force Instruction 14-132. Washington, D.C.
U.S. Department of the Air Force. 2012. Global Integrated Intelligence, Surveillance, & Reconnaissance Operations. Air Force Doctrine Document 2-0. Washington, D.C.
U.S. Department of the Air Force. 2013. Air Force ISR 2023: Delivering decision advantage. Washington, D.C.
U.S. Department of the Air Force. 2013. Intelligence Training. Air Force Instruction 14-202 Volume 1, AFISRA Supplement. Washington, D.C.
U.S. Department of the Air Force. 2014. Intelligence Standardization/Evaluation Program. Air Force Instruction 14-202 Volume 2, 480th ISR Wing Supplement. Washington, D.C.
U.S. Department of the Air Force. 2015. Intelligence Standardization/Evaluation Program. Air Force Instruction 14-202 Volume 2. Washington, D.C.
U.S. Department of the Air Force. 2015. Intelligence Training. Air Force Instruction 14-202 Volume 1. Washington, D.C.
U.S. Department of the Air Force. "Air Force e-Publishing." Accessed June 25, 2015. http://www.e-publishing.af.mil/.
U.S. Department of the Army. "Army Electronic Publications." Accessed June 25, 2015. http://armypubs.army.mil/.
U.S. Department of the Army. 1994. Intelligence Preparation of the Battlefield. Field Manual 34-130. Washington, D.C.
U.S. Department of the Army. 2012. Intelligence. Army Doctrine and Training Publication 2-0. Washington, D.C.
U.S. Joint Chiefs of Staff. 2010. Department of Defense Dictionary of Military and Associated Terms. Joint Publication 1-02. Washington, D.C.
U.S. Joint Chiefs of Staff. 2011. Joint Operations. Joint Publication 3-0. Washington, D.C.
U.S. Joint Chiefs of Staff. 2011. Joint Operation Planning. Joint Publication 5-0. Washington, D.C.
U.S. Joint Chiefs of Staff. 2012. Joint and National Intelligence Support to Military Operations. Joint Publication 2-01. Washington, D.C.
U.S. Joint Chiefs of Staff. 2013. Joint Intelligence. Joint Publication 2-0. Washington, D.C.
Veit, Clairice T., and Callero, Monti D. 1995. "A method for measuring the value of scout/reconnaissance." Santa Monica: RAND Arroyo Center. No. RAND-MR-476-A.