AN ASSESSMENT FRAMEWORK FOR FULL MOTION VIDEO ANALYSIS-
BASED PRODUCTION: DEFINING INTELLIGENCE PRODUCT MEASURES OF
EFFECTIVENESS AND PERFORMANCE
A Master Thesis
Submitted to the Faculty
of
American Public University System
by
Tori Hendrix Duckworth
In Partial Fulfillment of the
Requirement for the Degree
of
Master of Arts
July 2015
American Public University System
Charles Town, WV
The author hereby grants the American Public University System the right to display
these contents for educational purposes.
The author assumes total responsibility for meeting the requirements set by United States
copyright law for the inclusion of any materials that are not the author’s creation or in the
public domain.
© Copyright 2015 by Tori Hendrix Duckworth
All rights reserved.
DEDICATION
I dedicate this thesis to my husband, Phillip Duckworth. Without his patience, support,
and barista skills the completion of this thesis would never have been possible.
ACKNOWLEDGMENTS
I wish to thank Professor Christian Shuler of the American Public University System,
both for his dedication to my academic progression and for his mentorship on this project. I
would also like to thank Dr. Roger Travis of the University of Connecticut for investing the
time required to develop my formal research and writing techniques early in my academic life.
Thanks also to Mr. Michael Bailey, Mr. William Elkins, and Mr. Jeremy Jarrell, Mr.
Linn Wisner, Mr. Bill Majors, Mr. Mikel Gault, Mr. Daniel Thomas and Mr. Donray Kirkland
for their personal and professional investments toward the continuous improvement of
intelligence analysis and production. The above gentlemen have always provided a sounding
board, a support network and a sanity check for my intelligence research and analysis endeavors,
this thesis proving no exception.
ABSTRACT OF THE THESIS
AN ASSESSMENT FRAMEWORK FOR FULL MOTION VIDEO ANALYSIS-
BASED PRODUCTION: DEFINING INTELLIGENCE PRODUCT MEASURES OF
EFFECTIVENESS AND PERFORMANCE
by
Tori Hendrix Duckworth
American Public University System, July 5, 2015
Charles Town, West Virginia
Professor Christian Shuler, Thesis Professor
This paper discusses an original mixed-method analytic approach to assessing and
measuring analyst performance and product effectiveness in Air Force Distributed Common
Ground Station (DCGS) Full Motion Video (FMV) analysis-based production. The method
remains consistent with Joint Publication doctrine on Measures of Effectiveness (MOE) and
Measures of Performance (MOP) and considers established methodologies for assessment in
Combat and Stability Operations, but refines the processes down to the micro-tactical level in
order to theoretically assess intelligence support functions and how MOP and MOE can be
determined for intelligence products. The research focuses specifically on those products derived
from FMV analysis produced per customer request in support of real-time contingency
operations. The outcome is an assessment process, based on Department of Defense Attributes of
Intelligence Excellence. The assessment framework can not only be used to assess FMV
analysis-based production, but prove a valuable assessment tool in other production-heavy
sectors of the Intelligence Community. The next phase of research will be to validate the
developed method through the observation and analysis of its practical application.
TABLE OF CONTENTS
I. INTRODUCTION
II. LITERATURE REVIEW
    DCGS Operations
    Mission Assessment
    Variable Identification and Control
    Measuring Performance and Effectiveness
III. METHODOLOGY
    Research Design Overview
    Collection Methodology
    Specification of Operational Variables
    Assessment Methodology
IV. ANALYSIS AND FINDINGS
    Measures of Effectiveness
    Measures of Performance
    Theoretical Application
V. CONCLUSION
LIST OF FIGURES
1. FMV Analysis-Based Production Mission Assessment
2. Measures of Effectiveness
3. Timeliness Factor Levels
4. Accuracy Factor Levels
5. Customer Satisfaction Factor Levels
6. Task and Effectiveness Assessment
7. Measures of Performance
SECTION I
INTRODUCTION
Assessment is an essential tool for those leaders and decision makers responsible for the
successful execution of military operations, actions and programs. The process of operationally
assessing military operations is crucial to the identification and implementation of those
adjustments required to meet intermediate and long term goals (Center for Army Lessons
Learned 2010, 1). Assessment supports the mission by charting progress in evolving situations.
Assessment allows for the determination of event impacts as they relate to overall mission
accomplishment and provides data points and trends for judgments about progress in monitored
mission areas (Center for Army Lessons Learned 2010, 4). In this way, assessment assists
military commanders in determining where opportunities exist and where course corrections,
modifications and adjustments must be made (U.S. Joint Chiefs of Staff 2011b, III-44). Without
operational assessment, both the commander and his staff would be hard-pressed to keep pace
with quickly evolving situations, and the commander's decision-making cycle could lack focus
due to a lack of understanding about the dynamic environment (Center for Army Lessons
Learned 2010, 4). Assessments are the commander's keystone to maintaining accurate situational
understanding and are vital to appropriately revising the commander's visualization and operational approach
(U.S. Joint Chiefs of Staff 2011b, III-44). From major strategic operations to the smallest of
tactical missions, assessment is a crucial aspect of mission management; the Air Force
Intelligence mission and its associated mission subsets are no exception.
Much as assessment helps commanders and staffs understand mission progress, intelligence
helps commanders and staffs understand the operational environment. Intelligence identifies
enemy capabilities and vulnerabilities, projects probable intentions and actions, and is a critical
aspect of both the planning process and the execution of operations. Intelligence provides
analyses that help commanders decide which forces or munitions to deploy and how to employ
them in a manner that accomplishes mission objectives (U.S Joint Chiefs of Staff 2013, I-18).
Most importantly, intelligence is a prerequisite for achieving information superiority, the state
that is attained when a competitive advantage is derived from the ability to exploit a superior
information position (Alberts, Garstka, and Stein 2000, 34). The role of intelligence in Air Force
operations is to provide warfighters, policy makers, and the acquisition community
comprehensive intelligence across the conflict spectrum. This role is designed to build the
foundation for information superiority and mission success by delivering on-time, tailored
intelligence (U.S Department of the Air Force 1997, 1). Key to operating in today’s information
dominated environment and getting the right information to the right people in order for them to
make the best possible decisions is the intelligence mission subset Intelligence, Surveillance and
Reconnaissance (ISR) (U.S. Department of the Air Force 2012b, 1).
The U.S Joint Chiefs of Staff Joint Publication 1-02, Department of Defense (DOD)
Dictionary of Military and Associated Terms defines ISR as “an activity that synchronizes and
integrates the planning and operation of sensors, assets, processing, exploitation, and
dissemination systems in direct support of current and future operations,” (U.S. Joint Chiefs of
Staff 2010, X). ISR is a strategic resource that enables commanders both to make
decisions and to execute those decisions more rapidly and effectively than the adversary (U.S. Joint
Chiefs of Staff 2011a, III-3). The fundamental job of U.S Air Force Airmen supporting the ISR
mission is to analyze, inform and provide commanders at every level with the knowledge
required to prevent surprise, make decisions, command forces and employ weapons (U.S
Department of the Air Force 2013a, 6). In addition to enabling information superiority, ISR is
critical to the maintenance of U.S. decision advantage, the ability to prevent strategic surprise,
understand emerging threats and track known threats, while adapting to the changing world
(Report of the DNI Authorities Hearing 2008, 1).
The joint ISR mission consists of multiple separate elements, spread across uniformed
services, component commands and intelligence specialties; each element in its own right must
be successful for the integrated ISR whole to support operations as required (U.S. Department of
the Air Force 2012b, 1). One such ISR element is Full-Motion Video (FMV) Analysis and
Production. FMV analysis and production is a tactical intelligence function. Tactical intelligence
is used by commanders, planners, and operators for planning and conducting engagements, and
special missions. Relevant, accurate, and timely tactical intelligence allows tactical units to
achieve positional and informational advantage over their adversaries (U.S. Joint Chiefs of Staff
2013, I-25). Tactical intelligence addresses the threat across the range of military operations by
identifying and assessing the adversary’s capabilities, intentions, and vulnerabilities, as well as
describing the physical environment. Tactical intelligence seeks to identify when, where, and in
what strength the adversary will conduct their tactical level operations. Physical identification of
the adversary and their operational networks, which FMV analysis and production can facilitate,
allows for enhanced situational awareness, targeting, and watchlisting to track, hinder, and/or
prevent insurgent movements at the regional, national, or international levels (U.S. Joint Chiefs
of Staff 2013, I-25). FMV analysis and production is conducted, in part, by the Air Force
Distributed Common Ground System (DCGS) Enterprise.
The Air Force DCGS, also referred to as the AN/GSQ-272 SENTINEL weapon system,
is the Air Force’s primary ISR collection, processing, exploitation, analysis and dissemination
system. The DCGS functions as a distributed ISR operations capability which provides
worldwide, near real-time intelligence support simultaneously to multiple theaters of operation
through various reachback communication architectures. ISR provides the capability to collect
raw data from many different intelligence sensors across the terrestrial, aerial, and space layers
of the intelligence enterprise (U.S. Department of the Army 2012b, 4-12). The Air Force
currently produces intelligence from data collected by a variety of sensors on the U-2, RQ-4
Global Hawk, MQ-1 Predator, MQ-9 Reaper, MC-12 and other ISR platforms (Air Force
Distributed Common Ground System, 2014). ISR synchronization involves the employment of
assigned and allocable platforms and sensors, to include FMV sensors, against specified
collection targets; once the raw data is collected, it is routed to the appropriate processing and
exploitation system, such as a DCGS node, so that it may be converted via exploitation and
analysis into usable information and disseminated to the customer in a timely fashion (U.S Joint
Chiefs of Staff 2013, I-11). The analysis and production portion of the intelligence process
involves integrating, evaluating, analyzing, and interpreting information into a finished
intelligence product. Properly formatted intelligence products are then disseminated to the
requester, who integrates the intelligence into the decision-making and planning processes (U.S.
Joint Chiefs of Staff 2012, III-33). The usable information disseminated to the customer is the
FMV analysis-based product. In the DCGS, FMV analysis-based products are produced by the
Geospatial Analyst (GA) and Multi-Source Analyst (MSA) mission crew positions (U.S. Air Force ISR Agency 2013b, 59).
The above process is routinely condensed into the acronym PED,
Processing, Exploitation and Dissemination. Every intelligence discipline and complementary
intelligence capability is different, but each conducts PED activities to support timely and
effective intelligence operations. PED activities facilitate timely, relevant, usable, and tailored
intelligence (U.S. Department of the Army 2012, 4-13). PED activities are prioritized and
focused on intelligence processing, analysis, and assessment to quickly support specific
intelligence collection requirements and facilitate improved intelligence operations. PED began
as a processing and analytical support structure for unique systems and capabilities like FMV
derived from unmanned aircraft systems (U.S Department of the Army 2012, 4-12).
Intelligence is essential to maintaining aerospace power, information dominance and
decision advantage; accordingly, the Air Force provides premier intelligence products and
services to all its aerospace customers including warfighters, policy makers and acquisition
managers (U.S Department of the Air Force 2004, 1). Intelligence is produced and provided to
combat forces even at the lowest level via the integration of data collected by the various ISR
platforms with Processing, Exploitation and Dissemination (PED) crews. The PED crews are
comprised of intelligence specialists capable of robust, multi-intelligence analysis and
production (Air Force Distributed Common Ground System, 2014). DCGS mission sets depend
greatly on which type of sensor the PED crew is tasked to exploit; the scope of this research will
focus exclusively on Full Motion Video (FMV) analysis-based production, that is, products
based on the exploitation and analysis of Full Motion Video sensors.
Air Force ISR, with capabilities such as satellites and unmanned aerial vehicles, is a mission
set ever more integral to global operational success. The President’s 2012 Defense Strategic
Guidance directs the US military to begin transitioning from current operations to future
challenge preparation; as that transition into what will likely be a highly volatile, unpredictable
future occurs, Air Force ISR will be fundamental to ensuring U.S. and Allied force freedom of
action (U.S Department of the Air Force 2013a, 2013). Further complicating such a transition is
the reality of the currently austere fiscal environment. Balancing intelligence and remote sensing
capabilities across multiple Areas of Responsibility (AOR) with limited resources will demand
the development of more efficient ISR mission management and execution processes.
Intelligence organizations do recognize the need to continually improve their process
performance and the analytical arm of ISR Operations, the DCGS, is no exception. Having a
significant role in the maximization of ISR capabilities, the DCGS Enterprise should take every
step to ensure system-wide PED efficiency, an objective that requires an established mission
assessment framework.
Assessment is an essential tool that allows mission managers to monitor the performance of
tactical actions and to determine whether the desired effects are created in order to support goal
achievement. Assessment entails two distinct tasks: continuously monitoring the situation and
the progress of the operations and evaluating the operation against Measures of Effectiveness
(MOEs) and Measures of Performance (MOPs) to determine progress relative to the mission,
objectives, and end states (U.S. Joint Chiefs of Staff 2011b, III-45). All assessments should
answer two questions: Are things being done right? Are the right things being done? Those two
questions are perhaps best answered by the MOP and MOE assessment criteria. It is by
monitoring available information and using MOP and MOE as assessment tools during planning,
preparation, and mission execution that commanders and staffs determine progress toward
completing tasks, achieving objectives and attaining the military end state (U.S Joint Chiefs of
Staff 2011b, D-10). It is accepted that effectiveness is a measure associated with the problem
domain (what is to be achieved) and that performance measures are associated with the solution
domain (how the problem is to be solved) (Smith and Clark 2006, N-2). In other words, MOEs
help determine if a task is achieving its intended results (if the task is the right thing to do) and
MOPs help determine if a task is completed properly (if the task is done right).
More specifically, MOEs concern how well a system tracks against its purpose (Sproles 1997,
17). MOEs are criteria used to assess changes in system behavior, capability, or operational
environment and are tied to measuring the attainment of an end state, achievement of an
objective, or creation of an effect. MOEs are based on effects assessment, the process of
determining whether an action, program, or campaign is productive and can achieve the desired
results in operations (Center for Army Lessons Learned 2010, 6). Should the desired result be
achieved, then the outcome may be deemed effective; however, simply achieving the result may
not measure the degree of effectiveness. For example, within the context of computer programming,
an effective program is one which produces the correct outcome, based on various constraints.
Given that the desired result was achieved, it is generally accepted that a program which is faster
and uses fewer resources is more effective (Smith and Clark 2006, N-2).
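To make the programming analogy concrete, the following minimal Python sketch (with hypothetical functions, not drawn from Smith and Clark) profiles two implementations that both produce the correct result; under the definition above, the faster, less resource-intensive implementation is the more effective one.

```python
import time
import tracemalloc

# Hypothetical functions: both produce the correct result (effective),
# but they differ in runtime and resource use (degree of effectiveness).

def sum_squares_loop(n):
    total = 0
    for i in range(n):
        total += i * i
    return total

def sum_squares_formula(n):
    # Closed form for the sum of i^2 over i in [0, n-1].
    m = n - 1
    return m * (m + 1) * (2 * m + 1) // 6

def profile(func, n=100_000):
    # Measure runtime and peak traced memory for one call.
    tracemalloc.start()
    start = time.perf_counter()
    result = func(n)
    elapsed = time.perf_counter() - start
    _, peak = tracemalloc.get_traced_memory()
    tracemalloc.stop()
    return result, elapsed, peak

for f in (sum_squares_loop, sum_squares_formula):
    result, elapsed, peak = profile(f)
    print(f"{f.__name__}: result={result}, seconds={elapsed:.6f}, peak_bytes={peak}")
```

Both calls return the same (correct) result; the closed-form version completes in far less time, illustrating a greater degree of effectiveness despite an identical outcome.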
MOPs are a discrete assessment tool based on task assessment. MOPs are criteria used to
assess actions that are tied to measuring task accomplishment and essentially confirm or deny
that a task was performed properly. Agencies, organizations, and military units will usually
perform a task assessment, based on established MOPs, to ensure work is performed to at least a
minimum standard (Center for Army Lessons Learned 2010, 5).
When discussing product effectiveness, it is important to distinguish between how
effectively the product was produced and how effective the produced product was in achieving
its objective. Task assessment does have an effect on effects assessment, as any task not
performed to standard will have a negative impact on effectiveness (Center for Army Lessons
Learned 2010, 5). Thus, both effectiveness and consequently performance are significant in this
study. It is the objective of this research to demonstrate that analyst performance can be assessed
using MOPs and that product effectiveness can be measured, at least in part, by the identification
of repeatable MOEs. To provide accurate assessments of performance and effectiveness, any
MOPs and MOEs identified as assessment standards must be measurable, relevant and
responsive.
Whether conducted explicitly or implicitly, measurement is the mechanism for extracting
information from empirical observation (Bullock 2006, 17). Assessment measures must have
qualitative or quantitative standards, or metrics, against which they can be measured. To
effectively measure change, whether positive or negative, a baseline measurement should be
established prior to execution in order to facilitate accurate assessment (U.S Joint Chiefs of Staff
2011b, D-10). MOEs and MOPs measure a distinct aspect of an operation and thus require
distinct and discrete qualitative or quantitative metrics against which to be measured (Center for
Army Lessons Learned 2010, 8). Determining and applying qualitative and quantitative metrics
is doubly difficult for intelligence support operations like FMV analysis-based production,
because few traditional warfighting metrics apply (Darilek et al. 2001, 101). Determining
effectiveness is not as straightforward as assessing the number of tanks destroyed by an
offensive and weighing that against the objective number of tanks destroyed. While metrics can
be defined, they are generally more one-sided. The focus is centered more on the ability of the
analyst to operate in an operational environment than on how well an analyst operates relative to
an enemy force (Darilek et al. 2001, 101). Obtaining assessment insight is dependent on having
feasible implementation methods, as well as reliable models, for approaching the task of
measurement (Sink 1991, 25). Each MOP and MOE discussed in this paper will be assigned
either a qualitative or a quantitative metric and each metric will be discrete.
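As a minimal illustration of the baseline principle above, the following Python sketch (the timing data and variable names are hypothetical) establishes a baseline for a quantitative timeliness metric prior to a change and then measures change against that baseline.

```python
from statistics import mean

# Hypothetical data: request-to-dissemination times in minutes, measured
# before (baseline) and after a process change. Per the guidance above, the
# baseline is established prior to execution so change can be measured.

baseline_minutes = [12.0, 9.5, 14.0, 11.2, 10.8]
post_change_minutes = [8.9, 10.1, 7.5, 9.8, 8.4]

baseline = mean(baseline_minutes)
observed = mean(post_change_minutes)
delta = observed - baseline

print(f"baseline mean: {baseline:.1f} min")
print(f"observed mean: {observed:.1f} min")
print(f"change: {delta:+.1f} min (negative indicates improved timeliness)")
```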
In addition to being measurable, MOPs and MOEs must also be relevant to tactical,
operational, or strategic objectives as applicable as well as to the operational environment and
the desired end state (Center for Army Lessons Learned 2010, 8). The relevancy criterion
prevents the collection and analysis of information that is of no value and helps ensure efficiency
by eliminating redundant and futile efforts (U.S Joint Chiefs of Staff 2011b, D-9). The relevancy
of each MOP and MOE presented in this paper will be demonstrated through a discussion of its
relationship to either the performance or effectiveness of FMV analysis-based production.
Lastly, MOP and MOE must be responsive. Both MOP and MOE must identify changes
quickly and accurately enough to allow commanders and staffs to respond effectively (Center for
Army Lessons Learned 2010, 9). When considering the time required for an action to produce a
desired result within the operational environment, the designated MOP and MOE should respond
accordingly; the result of action implementation should be capable of being measured in a timely
manner (U.S Joint Chiefs of Staff 2011b, D-10). DCGS FMV analysis-based production occurs
in near real-time. As such, any proposed production MOP or MOE should be readily derived
from available data or mission observation.
MOEs and MOPs are simply assessment criteria; in and of themselves they do not
represent assessment. MOEs and MOPs require relevant information in the form of indicators for
evaluation (U.S Joint Chiefs of Staff 2011b, D-3). In the context of assessments, an indicator is
an item of information that provides insight into MOP or MOE. A single indicator may inform
multiple MOPs and MOEs, and MOPs and MOEs may comprise several observable and
collectable indicators (U.S. Joint Chiefs of Staff 2011b, D-4). For MOPs, indicators should
consider the capability required to perform assigned tasks. Indicators used to perform MOE
analysis should inform changes to the operational environment. Indicators provide evidence that
a certain condition exists or certain results have or have not been attained, and enable decision
makers to direct changes to ongoing operations to ensure the mission remains focused on the end
state (U.S. Joint Chiefs of Staff 2011b, D-4). Indicators inform commanders and staff on the
current status of operational elements which may be used to predict performance and efficiency.
Air Force production centers produce and provide those intelligence products based on
validated/approved customer requirements (U.S. Department of the Air Force 2012b, 4).
Tailoring, adapting a product to ensure it meets the customer’s format and content requests, helps
to ensure the relevancy of the intelligence product. For example, if a customer requests a highly
tailored product versus a standard baseline product, such a request may indicate an increased
production time potentially impacting the timeliness, and thus the effectiveness, of the product.
In this way, the level of customization typically requested by customers for a certain product
type would serve as an indicator for the timeliness MOE, which is further discussed in the MOE
subsection of Section IV, Analysis and Findings.
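The indicator relationships described above can be sketched minimally in Python; the indicator names, measure names, and values below are hypothetical, chosen only to show a single indicator informing multiple measures, with the requested tailoring level feeding the timeliness MOE.

```python
# Hypothetical indicator and measure names, for illustration only.
# One indicator may inform multiple measures, and one measure may draw
# on several indicators, as described above.

indicators = {
    "requested_tailoring_level": "high",   # highly tailored vs. standard baseline product
    "request_to_dissemination_min": 13.5,  # observed production time
}

# Which indicators inform which measures.
measures = {
    "MOE: timeliness": ["requested_tailoring_level", "request_to_dissemination_min"],
    "MOP: task performed to standard": ["request_to_dissemination_min"],
}

for measure, informing in measures.items():
    values = {name: indicators[name] for name in informing}
    print(f"{measure} <- {values}")
```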
Well-devised MOPs and MOEs, supported by effective information management, help
commanders and staffs understand the linkage between specific tasks, the desired effects, the
commander’s objectives and end state (U.S Joint Chiefs of Staff 2011b, D-10). Among the most
important analytical tasks in the intelligence mission set is effectiveness and performance
analysis, and yet, DCGS FMV operations have no enterprise-wide framework for mission
assessment in which clear production MOPs or MOEs are defined. The highly dynamic nature of
intelligence analysis, where even basic tasks are further complicated by analyst discretion and
technique, makes the systematic evaluation of DCGS product effectiveness highly problematic,
yet not impossible.
The question then arises: what repeatable data can be identified and how can the FMV
analysis-based production mission be assessed in order to derive MOPs and MOEs? This paper
proposes an assessment framework specifically identified to enable the quantification and
qualification of DCGS analyst production performance and product effectiveness. The demands
of the modern operational environment require that intelligence products disseminated to the
customer are anticipatory, timely, accurate, usable, complete, relevant, objective, and available
(U.S Joint Chiefs of Staff 2013, III-33). Also known as Attributes of Intelligence Excellence, it is
from those intelligence product requirements that performance and product effectiveness
variables will be derived.
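For reference, a minimal Python sketch listing the Attributes of Intelligence Excellence named above; the attribute list comes from JP 2-0, while the bookkeeping loop itself is purely illustrative.

```python
# The DOD Attributes of Intelligence Excellence (U.S. Joint Chiefs of Staff
# 2013, III-33), from which this study's performance and effectiveness
# variables are derived.

ATTRIBUTES = [
    "anticipatory", "timely", "accurate", "usable",
    "complete", "relevant", "objective", "available",
]

for attribute in ATTRIBUTES:
    # Each attribute is examined in Section IV as a potential MOP, MOE,
    # or MOP/MOE indicator.
    print(f"candidate assessment variable: {attribute}")
```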
Given their importance to industries, organizations and institutions, the topics of MOE, MOP
and operational assessment are very well researched and sound methodologies for deriving MOE
and MOP exist even for most major military operation types. From the global strategic ISR
mission down, FMV analysis-based production is micro-tactical in nature. As discussed in
Section II, Literature Review, though the provision of products is a basic intelligence function,
analytic methodologies for the performance of product effectiveness analysis have not been
refined for such a niche subset of the intelligence mission. Expanded upon in Section III,
Methodology, this paper presents a theoretical process for identifying both MOP and MOE for
DCGS FMV analysis-based production. Following an outline of the assessment methodology,
Section IV, Analysis and Findings, will demonstrate that repeatable, measurable DCGS FMV
analysis-based production MOPs and MOEs can be identified.
SECTION II
LITERATURE REVIEW
No shortage of literature exists concerning the importance of MOEs and MOPs and how
MOEs and MOPs may be assessed, especially with respect to military operations. While
assessment is required at every level of military operations, strategic, operational and tactical, a
majority of the relevant literature available focuses either on operational, or combat tactical
assessments. FMV analysis-based production, an operational function of the DCGS Enterprise, is
not a combat, but rather a support function, and when compared to operational functions such as
Village Stability Operations, may be better classified as micro-tactical (when assessed
independent of the operation being supported). Thus, for such a small-scale, niche function, no
predominant school of thought exists on the matter, i.e., no prior research has attempted to
answer this research question and no theoretical framework exists which can be applied wholly
to the problem set. At least, no prior research exists on the research topic that is publically
available. Due to the highly sensitive nature of DCGS Operations, any amount of relevant
literature may exist on classified networks; however, only publically available publications will
be utilized in this research. Many open source publications exist which are critical to
understanding how FMV-analysis based production MOP and MOE may be derived. Most are
not academic or peer-reviewed research, but rather are military doctrinal publications. Prior
existing research concerning the aggregately developed theoretical framework of this study
also include military doctrinal publications, but more so include research-based case studies.
Publications critical to understanding the nature of the problem set, considered
background information, are detailed in the sections DCGS Operations and Variable Identification
and Control. To assess production as it is conducted during DCGS Operations, the operating
environment (OE) must be understood to the greatest degree possible. Given the volume of
possible variables and the challenges associated with illuminating their relationships, an
understanding of operational variables affecting analyst production, performance and product
effectiveness is similarly critical. Publications relating to the theoretical framework for deriving
FMV-analysis based MOPs and MOEs are discussed in the section Measuring Performance and
Effectiveness. The uniqueness of the problem set precludes the application of any pre-proposed
theoretical framework in its entirety. Rather, the problem set in question required the
development of a new model for determining MOPs and MOEs, one which examines the
relationship between the two assessment tools as well as the effect of some critically impacting
variables on each.
DCGS Operations
One prerequisite for identifying DCGS mission assessment metrics and deriving FMV
analysis-based production MOPs and MOEs is a thorough understanding of DCGS mission
operations. For this research, DCGS mission operations focus on the production aspect of
operations and include production training, evaluation and execution; whereas executing
production may most predominantly determine product effectiveness, how well the production
analyst was trained, and how well they performed during their evaluation prove useful as MOP
indicators.
The four most pertinent publications to DCGS training are Air Force Instruction (AFI)
14-202 Volume 1 (2015) concerning intelligence training, the Air Force ISR Agency (AFISRA)
supplement to AFI 14-202 Volume 1 (2013) specifying ISR intelligence training, AFISRA
Instruction 14-153 Volume 1 (2013) concerning AF DCGS intelligence training and the 480th
ISR Wing supplement to AFISRA Instruction 14-153 Volume 1 (2013) which further refines the
document for DCGS units, all of which are subordinate to the 480th
ISR Wing. The stated
objective of AFI 14-202 Volume 1 (2015) and its AFISRA supplement (2013) is “to develop and
maintain a high state of readiness… by ensuring all personnel performing intelligence duties
attain and maintain the qualifications and currencies needed to support their unit’s mission
effectively…” (U.S Department of the Air Force 2015b, 4). The publications, having stated the
impact of training on mission effectiveness, discuss types of training, the flow of training,
individual training and currency requirements, mission readiness levels as well as the grading
rubric for training tasks, all data critical to identifying analyst/production performance metrics
and effectiveness indicators. The same holds true for AFISRA Instruction 14-153 Volume 1, Air
Force DCGS Intelligence Training (2013) and its 480th
ISR Wing Supplement (2013). Naturally
more applicable to DCGS training than the aforementioned AFI and AFISRA supplements, the
publications not only concur with the previous publications, but additionally provide guidance
for the assessment of training programs and outline mission training elements, more so in the
480th
ISR Wing Supplement (Air Force ISR Agency 2013b, 7-31). The Training Task List
(TTL) defines training requirements for skill set development as they support the operational
duties necessary for each position. Those duties, for some positions, include the creation, quality
control and dissemination of data, information and textual/graphic intelligence products (Air
Force ISR Agency 2013b, 17; 28-29). These training publications are detailed to the point of
specifying exact passing grades for training tasks, a quantitative standard of great interest to MOP
research.
AFI 14-202 Volume 2 (2015) and the 480th
ISR Wing Supplement to the AFISRA
Supplement to AFI 14-202 Volume 2 (2014) are the Air Force and DCGS doctrinal publications
covering the Intelligence unit Commander’s Standardization/Evaluation Program. The purpose
of these publications is “to provide commanders a tool to validate mission readiness and the
effectiveness of intelligence personnel, including documentation of individual member
qualifications and capabilities specific to their duty position,” (U.S Department of the Air Force
2014, 7). The publications, having stated the importance of standardization and evaluation in
measuring personnel and mission effectiveness, discuss how the Standardization/Evaluation
Program evaluates intelligence personnel on their knowledge of the duty position and applicable
procedures wherein the task (mission performance) phase includes evaluating a demonstrated
task, like that of production (U.S Department of the Air Force 2014, 38). Because the evaluation
profile used to fulfill the task phase requisites must incorporate all appropriate requirements and
allow accurate measurements of the proficiency of the evaluatee, as with training, the
quantitative data collected during evaluations must be considered when defining MOP indicators
for FMV-analysis based production (U.S Department of the Air Force 2014, 44).
In addition to serving as a baseline for DCGS mission assessment and providing insight
into performance- and effectiveness-affecting variables as discussed in their respective sections
below, AFI 14-153 Volume 3 (2013), “AF DCGS Operations Procedures” and its 480th ISR
Supplement (2013) define the DCGS mission and provide guidance and requirements for the
execution of DCGS operations. Aligning well with this research is the specifically stated DCGS
program objective to “enhance Air Force DCGS mission effectiveness,” (U.S Department of the
Air Force 2014, 6). Moreover, when researching elements that may affect analyst performance
during production efforts, knowing which analysts conduct the production is imperative. AFI 14-
153 Volume 3 (2013) breaks down mission execution roles and responsibilities by mission crew
position. Of most interest are the positions of the Geospatial Analyst (GA), responsible for the
creation of FMV-analysis based products, and that of the Multi-Source Analyst (MSA),
responsible for the creation of multi-intelligence report products (U.S Department of the Air
Force 2014, 13-18). Because final products are created and disseminated during the mission
execution phase of DCGS Operations, the DCGS Operations publications above are perhaps of
more research value than either the Training or the Standardization/Evaluations Air Force
publications with respect to contextualizing DCGS operations for assessment.
Publications concerning the DCGS are limited, especially those published academically
and not as military doctrine. Two peripheral sources concerning the DCGS are Brown’s 2009
“Operating the Distributed Common Ground System,” and Langley’s 2012
“Occupational Burnout and Retention of Distributed Common Ground System (DCGS)
Intelligence Personnel.” Brown’s article focuses on the human factors of the AN/GSQ-272
SENTINEL (DCGS) weapon system and how the human factor influences net-centric
operations (Brown 2009, 2). The tone of the article is that of a story, full of details that
experience may corroborate but that the referenced sources did not. Beyond this thesis’ scope
and questionable in supporting its details, the article will not be incorporated in this research
beyond mentioning that nothing in the article contradicted any of the other referenced sources.
Though much more professional in tone and with all points very well supported, Langley’s 2012
report similarly fell beyond the scope of this research, but importantly, also did not contradict
any information presented in related research. In formulating a background knowledge of DCGS
operations, military doctrinal publications must exclusively be relied upon.
Mission Assessment
Before MOE and MOP can be valuable to mission assessment, the desired end
state must first be defined and mission success criteria developed. Mission success criteria
describe the standards for determining mission accomplishment and should be applied to all
phases of all operations, including FMV analysis-based production (U.S. Joint Chiefs of Staff
2011b, IV-10). The initial set of success criteria determined during mission analysis, usually in
the mission planning phase, becomes the basis for assessment. If the mission is unambiguous and
limited in time and scope, mission success criteria could be readily identifiable and linked
directly to the mission statement (U.S. Joint Chiefs of Staff 2011b, IV-10). The Air Force DCGS
mission statement states that the “Air Force DCGS provides actionable, multi-discipline
intelligence derived from multiple ISR platforms” (U.S Air Force ISR Agency 2013d, 5). That
the mission statement does not detail which attributes comprise actionable intelligence
complicates the mission success criteria and requires mission analysis to indicate several
potential success criteria, analyzed via MOE and MOP, for the desired effect and task
respectively (U.S. Joint Chiefs of Staff 2011b, IV-10).
Research into mission assessment for intelligence production, specifically production
derived from FMV, is significant not only because requirements and precedents exist for
intelligence product metric collection and assessment, but also because of the impact such
products have on the operating environment and the extent to which those products are poised
to evolve over the next few decades. Requirements, precedents and assessment guidance are
outlined across several Air Force and Joint doctrinal and exploratory military publications. The
importance of ISR, and of the DCGS, as well as a look at future operations is explored in the
Air Force publication Air Force ISR 2023 (2013) and in a corroborating journal article, A
Culminating Point for Air Force Intelligence Surveillance and Reconnaissance, published in the
Air & Space Power Journal by Kimminau (2012). In sum, the literature reviewed in this section
provides an overview of existing military mission assessment and lays the groundwork for the
increasing importance of intelligence production task (MOP) and effect (MOE) assessment.
The military publications reviewed which outline requirements or precedents for
intelligence product assessment can be divided into Air Force publications and Joint
publications. Air Force publications include AFI 14-201, Intelligence Production and
Applications, the aforementioned AFI 14-153 Volume 3, Air Force Distributed Common
Ground System (AF DCGS) Operations Procedures, along with its 480th
ISR Wing Supplement
(2014) and the 480th ISR Wing Supplement to AFISRA Instruction 14-121 (2013), Geospatial
Intelligence Exploitation and Dissemination Quality Control Program Management. AFI 14-201
applies to all Air Force organizations from which intelligence production is requested, a
category into which the DCGS, as a Production Center, falls; however, the AFI is directed more
towards those intelligence agencies producing “stock products,” which the DCGS most usually
does not (U.S Department of the Air Force 2002, 20). The AFI does, however, reveal an
intelligence production precedent in requiring Headquarters (HQ), Director of ISR to conduct
annual reviews of production processes and metrics and to measure customer (requirement)
satisfaction (U.S Department of the Air Force 2002, 6-20). AFI 14-153 Volume 3 (2013) and its
480th
ISR Wing Supplement (2013) deal directly with AF DCGS Operations and includes
instructions for mission assessment, most usefully outlining assessment roles and
responsibilities, albeit vaguely. While every analyst is responsible for conducting mission
management activities during mission execution, the party responsible for documenting mission
highlights and deficiencies, keeping mission statistics, performing trend analysis and quality
control, and overall assessing mission effectiveness is the Department of Operations Mission
Management (U.S Department of the Air Force 2014, 39). Mission management efforts are
further discussed in AFISRA Instruction 14-121 (2013), which details the quality control and
documentation process for DCGS products, a process which must include the identification and
analysis of trends by types of errors, flights or work centers (U.S. Air Force ISR Agency 2013e,
7). All three Air Force publications set a standard requiring intelligence producers to conduct
mission analysis; none of the three, however, mentions MOPs, MOEs, or possible metrics, or
specifically suggests how such mission analysis should be conducted. While the Air Force
requires mission analysis towards greater effectiveness, the responsibility appears to fall to
individual organizations and units to conduct mission assessment in accordance with the broad
guidance set forth by Air Force doctrine as they see fit. How DCGS FMV units conduct mission
analysis and how they may do so better is central to this research, thus, AFI 14-153 Volume 3 is
an invaluable publication that sets the baseline for existing DCGS mission assessment guidance.
Joint Publications (JP) 3-0 (2011) and 5-0 (2011), both concerning Joint (multi-service)
operations, discuss mission assessment as well as how MOPs and MOEs may be used as
assessment criteria, albeit with an exclusive focus on the assessment of combat operations. JP 5-
0, in its direction of assessment processes and measures, defines evaluation, assessment, MOE
and MOP, sets the criteria for assessment, and discusses the value of MOPs and MOEs at every
level of operations (U.S. Joint Chiefs of Staff 2011b, xvii-D-10). JP 3-0, though less in-depth
when addressing mission assessment than JP 5-0, similarly defines MOE and MOP, lending
continuity between the documents, and further discusses assessment at the strategic,
operational and tactical levels. Of most concern to this research is assessment at the tactical
level, the assessment level about which JP 3-0 and JP 5-0 diverge. In JP 5-0, tactical level
assessment can be conducted using MOPs to evaluate task accomplishment, a discussion which
reflects the impact of assessment on operations and more so makes the case for comprehensive,
integrated assessment plans (U.S Joint Chiefs of Staff 2011b, D-6). In JP 3-0, tactical-level
assessment is described as using both MOPs and MOEs to evaluate task accomplishment, where
the rest of the paragraph aligns with JP 5-0 almost verbatim (U.S. Joint Chiefs of Staff 2011a, II-
10). Both JPs define MOE and MOP in exactly the same way, where MOEs are a tool with
which to conduct effects assessment and MOPs are a tool with which to conduct task assessment;
JP 3-0, however, later suggests that MOE and MOP are both utilized for task assessment,
contradicting its earlier guidance about MOE being a discrete tool concerning
effects assessment (U.S Joint Chiefs of Staff 2011a, GL-13; U.S Joint Chiefs of Staff 2011b,
III-45). The use of MOPs for task assessment and MOEs for effects assessment is corroborated
by every other relevant source, and so JP 3-0’s contradiction will be considered an oversight.
In both Air Force and Joint Publications, a strong emphasis is placed on the conduct
of mission assessment, the latter going so far as to incorporate MOPs and MOEs specifically.
The importance of mission assessment is apparent in published literature, not only by military
requirements, precedent and guidance, but also because of the impact of intelligence production
on operations and the importance of assessment in quickly evolving mission sets, like that of
ISR.
Sentinel Force 2000+ (1997), a quick reference document for the intelligence career
field, defines the Air Force Intelligence vision, identifies intelligence consumers and highlights
the importance of intelligence in Air Force operations: with respect to intelligence, “this role is
designed to build the foundation for information superiority and mission success by delivering on-
time, tailored intelligence,” (U.S Department of the Air Force 1997, 2). The publication
additionally designates seven Goal Focus Areas of Air Force Intelligence designed to assure
U.S. information dominance. One of the Focus Areas is Products and Services; the
corresponding goal is to “focus on customer needs…modernize and continuously improve our
products and services to see, shape and dominate the operating environment,” (U.S Department
of the Air Force 1997, 3). Intelligence is critical to information superiority and mission success.
Production is a critical component of intelligence and an Air Force goal for almost two decades
has been product improvement. The objective for improvement exists, and yet, how can
intelligence product improvement be assessed/measured?
Air Force ISR 2023: Delivering Decision Advantage (2013) accounts for both the
importance of intelligence, specifically ISR, to military operations and discusses the projected
evolution of ISR over the next decade. ISR provides commanders with the knowledge they need
to prevent surprise, make decisions, command forces and employ weapons; it enables them to
maintain decision advantage, protect friendly forces, hold targets and apply deliberate,
discriminate kinetic and non-kinetic combat power (U.S Department of Air Force 2013a, 6).
The challenge for ISR is to maintain those competencies required to provide commanders with
the knowledge they need while reconstituting and refocusing ISR toward future operations, all
under the constraints of an austere fiscal environment (U.S Department of Air Force 2013a, 2-
4). While covering ISR capabilities, the normalization of operations and the strengthening
of collaborative partnerships, the report focuses on the revolution of analysis and exploitation, a
topic well within the scope of this research. “The most important and challenging part of our
analysis and exploitation revolution is the need to shift to a new model of intelligence analysis
and production,” (U.S Department of Air Force 2013a, 13). With an actively evolving mission
set where intelligence production is on the brink of great change, any requirement for tactical
mission assessment becomes all the more important; otherwise, how can positive or negative
changes in production or products be assessed relative to the baseline? A mission subset as
critical as ISR production, especially when poised to rapidly evolve, requires established,
repeatable, assessment measures.
Closely paralleling Air Force ISR 2023 (2013) with respect to the future of the ISR
Enterprise is Kimminau’s 2012 Air & Space Power Journal article, A Culminating Point for Air
Force Intelligence Surveillance and Reconnaissance. Kimminau (2012) concurs with the
imperative for ISR: “success in war depends on superior information. ISR underpins every
mission that the DOD executes,” (Kimminau 2012, 117). The article further discusses the future
of ISR through 2030. While Kimminau focuses less on an analytical and production revolution,
he does elaborate on several of the projected evolution areas that do overlap with Air Force ISR
2023 (2013). Though not as useful to this research without a discussion of analysis and production,
Kimminau’s article serves best in corroborating Air Force ISR 2023 (2013) in its view of ISR as
dynamic, revolutionary, and ever more important.
Variable Identification and Control
Perhaps the most challenging aspect of researching MOPs and MOEs in non-combat
micro-tactical mission operations is the identification and control of variables. Production is just
one element of DCGS operations, albeit an extremely interconnected element dependent on
human analytical training and preferences, the quality of received data, management style, the
status of communications, mission parameters and potentially several hundred other variables.
One significant research challenge is in determining which variables to hold constant and which
variables directly affect the task of production (MOPs) and/or product effectiveness (MOEs).
As discussed in the DCGS Operations section above, several Air Force doctrinal
publications provide guidance on DCGS training, evaluation, and mission execution and
mission management. Those same publications provide insight into which variables may affect
each respective aspect of mission operations. Another publication, AFI 14-132, Geospatial-
Intelligence (2012), offers insight into additional mission variables, not through references to
DCGS operations, but rather to the field of Geospatial-Intelligence (GEOINT) itself. GEOINT
encompasses all imagery, Imagery Intelligence (IMINT) and geospatial information, a category
into which FMV exploitation and production falls (U.S Department of the Air Force 2012a, 3).
In fact, a prerequisite to qualify as a GA, the position which creates FMV-analysis based
products, is to be in the Geospatial Intelligence Analyst career field, Air Force Specialty Code
(AFSC) 1N1X1 (Air Force ISR Agency 2013b, 24). AFI 14-132 (2012) mentions the DCGS as
a key exploiter and producer of Air Force GEOINT, wherein the DCGS, as a GEOINT
production center, must report up the chain of command an analysis of the quality and timeliness
of GEOINT products, revealing timeliness and quality to be significant variables in GEOINT
product effectiveness (U.S Department of the Air Force 2012a, 14). The AFI additionally
discusses GEOINT baseline and core functions, revealing the types of operations GEOINT
supports and the types of GEOINT exploited for that support, variables that must be
considered contextually. For example, would the timeliness variable have the same impact on
performance and effectiveness across all mission types supported, or, do some mission types
necessarily value timeliness more or less than others?
The Raytheon Company’s 2001 Establishing System Measures of Effectiveness by John
Green corroborates the assessment that timeliness factors into product effectiveness. In addition
to addressing the imperative for performance analysis and developing definitions for system
MOPs and MOEs, the report most importantly weighs in on timeliness as an assessment
variable. “Time is often used… [and] is attractive as an effectiveness measure,” (Green 2001,
4). Green (2001) goes on to say that “time is the independent variable and outcomes of processes
occur with respect to time,” (Green 2001, 4). Similarly, in this research, the only dependent
variable is effectiveness where all other uncontrolled variables, to include time, are considered
independent. Green (2001) not only nods to the common practice of using time to measure
effectiveness, but highlights the logic behind time as an independent rather than dependent
variable.
The most comprehensive categorization of intelligence mission variables can be found in
a 2003 Central Intelligence Agency publication by Rob Johnson, Developing a Taxonomy of
Intelligence Analysis Variables. Johnson (2003) provides a classification of 136 variables
potentially affecting intelligence analysis broken down into four exclusive categories: systemic
variables, systematic variables, idiosyncratic variables and communicative variables. The list
does not suggest that any of the variables affect human performance or product effectiveness,
rather, it serves identification purposes so that experimental methods may isolate and control
some variables in order to further isolate those variables affecting analysis (Johnson 2003, 3-4).
FMV products are dependent on FMV analysis, hence FMV-analysis based production. The
focus of this research is on the effectiveness of products and also on analyst performance in
producing said products, thus, many of those variables identified as potentially affecting
analysis will also potentially affect production. The list focuses more on cognitive and
situational-type variables such as analytic methodology, decision strategies, collaboration, etc.
rather than variables like timeliness, quality, precision, etc. which are the focus of this research;
however, the idea of controlling some variables in order to assess what impact others may have
on the dependent variable, here effectiveness, is a goal of this research.
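That variable-control goal can be sketched with a minimal, hypothetical Python toy model (not drawn from Johnson 2003): two variables are held constant while a single independent variable is varied, so its impact on the dependent variable, effectiveness, can be observed in isolation.

```python
import random

# Hypothetical toy model: hold quality and collaboration constant
# (controlled variables) and vary only timeliness (independent variable)
# to observe its impact on the dependent variable, effectiveness.

random.seed(1)

def simulate_effectiveness(timeliness_min, quality=0.9, collaboration=0.8):
    """Effectiveness decays as production time grows; the other factors
    are held constant so timeliness is the only variable changing."""
    noise = random.gauss(0, 0.02)  # residual, uncontrolled variation
    return max(0.0, quality * collaboration - 0.01 * timeliness_min + noise)

for t in (5, 10, 20, 40):
    print(f"timeliness={t:>2} min -> effectiveness={simulate_effectiveness(t):.2f}")
```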
RAND’s 1995 publication, A Method for Measuring the Value of Scout/Reconnaissance,
by Veit and Callero explores the topic of variable identification and control, the most relevant
piece of which concerned variable definitions. The case study clearly defined two variables of
interest to FMV analysis-based production: timeliness and precision. Timeliness was defined as
the “time between information collection and its availability for use by the situation assessor”
and precision was defined as “the level of detail about enemy weapons systems reported,” (Veit
and Callero 1995, xv). The timeliness definition is applicable verbatim; the precision definition
is geared exclusively towards scout/reconnaissance but can be easily amended to fit analysis
and production precision. Most interestingly, Veit and Callero (1995) introduced the concept of
variable factor levels. To better understand the relationship between variables, several factor
levels can be assigned to a variable in order to understand the degree of impact. For example,
the precision variable can be further specified by categorically establishing three different levels
of detail. Factor levels are further discussed in Analysis and Findings. While useful for its
variable definitions and factor level methodology, Veit and Callero (1995) was found to be
more valuable in its proposed methodology for deriving MOEs.
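The factor-level concept lends itself to a minimal Python sketch; the three precision levels below are hypothetical stand-ins for the levels Veit and Callero (1995) describe, shown only to illustrate assigning discrete factor levels to a variable and comparing an observed level against a required one.

```python
from enum import Enum

# Hypothetical factor levels for the precision variable; the point is that
# a variable can be given discrete levels so the degree of its impact can
# be examined, rather than treating it as merely present or absent.

class PrecisionLevel(Enum):
    GENERAL = 1    # e.g., activity observed, no identification
    CATEGORY = 2   # e.g., type of vehicle or equipment identified
    SPECIFIC = 3   # e.g., specific system and its status identified

def meets_requirement(observed: PrecisionLevel, required: PrecisionLevel) -> bool:
    """A product satisfies the precision requirement when its level of
    detail meets or exceeds the level requested."""
    return observed.value >= required.value

print(meets_requirement(PrecisionLevel.CATEGORY, PrecisionLevel.GENERAL))   # True
print(meets_requirement(PrecisionLevel.GENERAL, PrecisionLevel.SPECIFIC))   # False
```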
The most significant publication concerning potential variables for the FMV production
mission subset was JP 2-0 (2013), Joint Intelligence. Though the variables discussed were not
strictly identified as variables or put directly into the context of mission assessment, JP 2-0
introduced the concept of Attributes of Intelligence Excellence, a list of variables thought to be
critical in the production of effective products. Attributes of Intelligence Excellence include
anticipatory, timely, accurate, usable, complete, relevant, objective and available (U.S. Joint
Chiefs of Staff 2013, III-33). JP 2-0 additionally acknowledged that the effectiveness of
intelligence operations is assessed by devising metrics and indicators associated with those
attributes (U.S. Joint Chiefs of Staff 2013, I-22). Over the course of this research, along with
precision, each attribute listed will be assessed as a potential MOP, MOE or MOP/MOE
indicator.
Measuring Performance and Effectiveness
“There is no more critical juncture in implementing [a] successful assessment…than the
moment of methods selection,” (Johnson et al. 1993, 153). Joint Publication 3-0 (2011), Joint
Operations, and Joint Publication 5-0 (2011), Joint Operation Planning, both discuss the
importance of conducting assessment at every phase of military operations, both at the task
assessment level (MOPs) and effects assessment level (MOEs). Much attention has been paid to
defining MOP and MOE for Combat Operations, Stability Operations, Reconnaissance and even
Intelligence Simulations; however, publicly available data assessing the niche operational field
of FMV analysis-based production is lacking. No “one size fits all” methodology exists for
determining MOPs and MOEs, nor does any existing methodology adequately apply to the FMV
analysis-based production assessment problem set. Several publications do detail MOP and/or
MOE derivation methodologies, portions of which do relate to this problem set, and in aggregate
those publications enable the foundation of a new theoretical framework.
The Center for Army Lessons Learned 2010 Handbook 10-41, Assessments and
Measures of Effectiveness in Stability Operations outlines the Tactics, Techniques and
Procedures (TTPs) for conducting mission assessment and for determining MOEs for stability
operations. The handbook details the necessity for assessment, clearly distinguishes between task-based assessment (doing things right) and effect-based assessment (doing the right thing), and delineates when MOPs and MOEs are appropriately applied. The handbook provides insight into the Army's
Stability Operations Assessment Process, including how task and effect-based assessment
support deficiency analysis. “The process of assessment and measuring effectiveness of military
operations, capacity building programs, and other actions is crucial in identifying and
implementing the necessary adjustments to meet intermediate and long term goals in stability
operations," (Center for Army Lessons Learned 2010, 1). All such guidance is highly relevant to the assessment process herein proposed for FMV analysis-based production, although Stability Operations-specific assessment is beyond the scope of this research.
Greitzer’s 2005 Methodology, Metrics and Measures for Testing and Evaluation of
Intelligence Analysis Tools is a case study involving a practical framework for assessing
intelligence analysis tools based on almost exclusively quantitative assessment. While the case
study’s framework did not align with this research’s mixed-method theoretical assessment
approach, the case study did identify several potentially replicable performance measures, both
qualitative (customer satisfaction, etc.) and quantitative (accuracy, etc.) all of which pertain
specifically to intelligence analysis. Such performance measures are pertinent to the discussion
of MOPs across any intelligence function and are especially pertinent to the analysis of FMV
analysis-based production. The case study also defined and distinguished between measurement
and metrics, definitions which were adopted for the FMV-analysis based production assessment.
The highly applicable discussion of quantitative and qualitative performance measures, as well
as of metrics, heavily influenced the theoretical framework of this research.
Bullock's 2006 Theory of Effectiveness Measurement dissertation appears, by its title, applicable to FMV analysis-based production effectiveness measurement, especially when
defining effectiveness measurement as “the difference, or conceptual distance from a given
system state to some reference system state (e.g. desired end-state);” however, the foundation of
Bullock’s effectiveness measurement was applied mathematics with all relationships between
variables explained as mathematically stated relational systems (Bullock 2006, iv-12). The
Bullock (2006) study approached the problem of effectiveness measurement quantitatively, a
method which may have applied to quantifiable FMV-analysis based production variables, but
which did not provide an applicable framework for unquantifiable (qualitative and quasi-
28
quantitative) variables.
Chapter Seven of Measures of Effectiveness for the Information-Age Army, “MOEs for
Stability and Support Operations” (Darilek et al. 2001) is a case study revolving around MOEs
for non-combat operations, a category applicable to intelligence support functions (when
assessed independently of the operation they are supporting). The case study theoretically
examined how Dominant Maneuver in Humanitarian Assistance might be assessed and how
MOE might be derived. The method of theoretical examination perfectly aligned with the
proposed theoretical examination of FMV-analysis based production MOE, although the
specifics of assessing Humanitarian Assistance were too disparate for the analytical framework
to be duplicated. As with the ISR mission, Darelik et al. (2001) emphasized the changing nature
of Assistance Operations, partially to highlight the necessity of assessment. Of all the reviewed
publications, Darelik et al. (2001), had the methodology which most closely applied to the FMV-
analysis based production mission set, most probably because it was the only theoretical case
study to focus on non-combat operations.
Evaluating the effectiveness of any intelligence analysis-related problem set presents
several methodological challenges (Greitzer and Allwein 2005, 1). Because intelligence
production cannot avoid the human elements of analyst preference and technique, assessing analyst performance (was the product created correctly) and product effectiveness (did the product meet its objective) is far more subjective than assessing traditional combat operations, such as whether a projectile hit and incapacitated its intended target or whether a route was successfully cleared. Coupled with the difficulties associated with valuing intelligence in a strictly quantitative, i.e., numerical, fashion, the previously proposed MOP/MOE assessment frameworks above cannot be wholly adopted. Rather, this unique problem set required the development of a new model for determining MOP and MOE, one which examines the
relationship between the two as well as the effect of all variables when examined in the FMV
production operating environment. The proposed model does include effectiveness criteria (variable factor levels) adapted from A Method for Measuring the Value of Scout/Reconnaissance (Veit and Callero 1995) and a flow of assessment loosely based upon Chapter Seven of Measures of Effectiveness for the Information-Age Army, "MOEs for Stability and Support Operations" (Darilek et al. 2001).
The overarching hypothesis in this research, that MOP and MOE are identifiable,
measurable and can be utilized to evaluate FMV analysis-based products in the context of
circumstances defined by mission variables, is dependent on several sub-hypotheses:
• Sub-Hypothesis 1: At least one MOE and one MOP can be identified for DCGS FMV analysis-based products
• Sub-Hypothesis 2: For each DCGS FMV production MOE and MOP, at least one indicator can be defined based on variable and objective analysis
• Sub-Hypothesis 3: A discrete, measurable metric can be assigned to each identified MOE and MOP
Figure 1 demonstrates the relationship between MOP, MOE, indicators and factor levels. The outline below illustrates the theoretical framework via which the above hypotheses will be tested. Using a top-down approach, the research will:
1. Identify MOP and MOE: which independent variables impact product quality or
effectiveness and meet MOE/MOP criteria (repeatable, measurable, relevant and
responsive)?
2. Identify MOP and MOE factor levels: what factor levels are required to assess degree of
contribution to operational effectiveness?
3. Identify performance and effectiveness indicators: what items of information provide
insight into MOP and MOE analysis?
4. Identify MOP and MOE metrics: how will MOE and MOP be measured?
Figure 1: FMV Analysis-Based Production Mission Assessment
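To make the hierarchy in Figure 1 and the four-step outline above concrete, the sketch below models the framework as a simple data structure. It is a minimal illustration only: the Python representation, field names and example values are assumptions introduced here for clarity, not DCGS doctrine, although the sample factor levels and indicators mirror those derived later in this thesis.

```python
# Minimal, hypothetical sketch of the Figure 1 assessment hierarchy.
# Field names and example values are illustrative only.
from dataclasses import dataclass

@dataclass
class Measure:
    name: str                 # the MOE or MOP, e.g. "timeliness"
    kind: str                 # "MOE" or "MOP"
    metric: str               # how the measure is measured (step 4)
    factor_levels: list[str]  # degrees of contribution (step 2)
    indicators: list[str]     # items of information informing analysis (step 3)

framework = [
    Measure("timeliness", "MOE", "production time in minutes",
            ["very timely", "timely", "not timely"],
            ["product customization"]),
    Measure("accuracy", "MOE", "checklist score, 0-100 percent",
            ["accurate", "marginally accurate", "inaccurate"],
            ["completeness"]),
]

for m in framework:
    print(f"{m.kind} {m.name}: metric = {m.metric}; levels = {m.factor_levels}")
```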
SECTION III
METHODOLOGY
In support of determining FMV analysis-based production MOP and MOE, this case study's theoretical analysis will consider how multiple independent variables affect one dependent effectiveness variable, a precedent set by multiple studies of organizational performance in which performance is defined as a dependent variable and the studies seek to determine variables that produce variations in or effects on performance (March and Sutton 1998, 698). In accordance with methodological precedent discussed in Case Studies and Theory Development in the Social Sciences by George and Bennett (2005), the research objective focuses not on outcomes of the dependent effectiveness variable, i.e., the impact of product effectiveness on ISR operations, but on the importance and impact of independent variables, i.e., on how certain predefined variables affect product effectiveness (George and Bennett 2005, 80-81).
Due to environmental constraints, including the highly sensitive and classified nature of FMV production, this study will be limited to secondary rather than primary research and to theoretical rather than practical analysis. Rather than testing the effects of mission variables via the observation of
production analysts or surveying the effectiveness of specific FMV product types, the primary
research objective in this case study is to explore and explain the relationship between multiple
independent variables and a single dependent effectiveness variable. Independent variables of
theoretical interest include timeliness, accuracy, and usability.
Research Design Overview
The research problem was addressed via both archival and analytic research strategies
(Buckley, Buckley, and Chiang 1976, 15). The archival strategy involved quantitative and
qualitative research methods whereby research was conducted in the secondary domain exclusively through the collection and content analysis of publicly available data, primarily journal articles and military publications. The archival research design heavily considers George and Bennett's (2005) "structured, focused" research design method (George and Bennett 2005, 73-75). The research objective, to derive FMV analysis-based production MOE and MOP, can best be described as "Plausibility Probe" research, wherein the study explores an original theory and hypotheses to determine whether more comprehensive, operational, primary research and testing is warranted. Theory development, not operational testing, is the focus of the research design (George and Bennett 2005, 73-75).
The lack of previously existing theories, hypotheses and research concerning niche ISR
mission effectiveness necessitated the modification of the initial, Plausibility Probe research in
order to build upon and develop new operational variables and intelligence variable definitions
(George and Bennett 2005, 70-71). The uniqueness of the problem set additionally required the
use of internal logic. As previously discussed, the highly classified nature of intelligence
operations coupled with the originality of the niche problem set made observational research, empirical research and source meta-analysis impossible. Because of information and personnel access restrictions, hypothesis testing was necessarily relegated to theoretical exploration, and the analytical research strategy was required to satisfactorily answer the research question.
The analytic research method made use primarily of internal logic, both deductive and inductive in nature. Deductive research involved broad source collection and content
analysis in order to determine all data relevant to the problem set and to draw original
conclusions from multiple disparate data sets. For example, extensive research was conducted on
both MOE and MOP determination methods and how the military conducts operational
assessment in order to determine method overlap and to develop refined assessment
methodologies where overlap failed to occur. Inductive research involved taking each relevant aspect of the problem set and collecting further source material on that subtopic in order to better and more comprehensively understand how each subtopic relates to the overarching problem set.
For example, once the independent variables were identified, each was extensively researched in
order to better define the variables within the context of the intelligence mission and to
determine any known relationships to the dependent variable, here product effectiveness. By
utilizing the analytic research strategy, the comparability of this research to previously existing
studies will be limited, especially with respect to variable analysis. The research design is further
outlined in the following sections: Collection Methodology, Specification of Operational Variables and Assessment Methodology.
Collection Methodology
The research collection methodology for this thesis involves a two-pronged content
analysis approach. Deriving a theory from a set of data and then testing it on the same data is thought by some to be an illegitimate research practice (George and Bennett 2005, 76). Thus,
two sets of data will be reviewed: literature concerning FMV PED operations (theoretical testing
data) and literature discussing MOE and MOP evaluation case studies (methodological theory
data). Testing data will be collected almost exclusively from military publications, all obtained
either through the U.S. Air Force e-Publishing document repository, the U.S. Army Electronic Publication repository, or the U.S. Department of Defense (DOD) Joint Electronic Library.
Because the American Public University System Digital Library yielded few results related to
this highly specialized research topic, theoretical data had to be collected from known academic
defense journals, primarily the Air University Press Air and Space Power Journal digital
archives, known defense research corporations, primarily RAND Corporation online research
publications, and the Defense Technical Information Center digital repository. Due to the
extremely limited amount of publicly available and relevant data in either of the above
categories, case selection bias as well as competing/contradictory analysis was naturally
mitigated.
Key to testing the stated hypothesis and sub-hypotheses is the identification and analysis of operational variables which may affect product effectiveness (MOEs); effectiveness indicators, i.e., those elements of the mission related to MOEs; MOPs; and quantitative/qualitative metrics for both MOEs and MOPs. These tasks were accomplished through an in-depth content
analysis of the many Air Force doctrinal publications detailing Geospatial Intelligence
(GEOINT) and DCGS operations (training, evaluation, and mission execution). For those
operational variables identified, additional publications relevant to each variable were consulted
for further examination of related content.
The second content analysis approach includes a close review of previously published
theses, dissertations, journal articles and technical reports concerning the derivation of MOEs
and MOPs, with specific attention paid to those addressing analysis, intelligence analysis,
military operations and/or theoretical MOE/MOP assessments. All methodologically applicable
portions of previously conducted MOE/MOP case studies will be incorporated into the analysis
of the FMV analysis-based production mission subset.
Specification of Operational Variables
As previously mentioned, the single dependent variable in this case study will be DCGS FMV analysis-based product effectiveness. The execution of the FMV PED mission involves almost immeasurable and innumerable systemic, systematic, idiosyncratic and communicative variables, all of which could arguably impact mission effectiveness (Johnson 2003, 4). Given
that the micro-tactical nature of FMV production necessarily narrowed the scope of this research and that research resources were limited to publicly available data, most variables will be held
constant and many more will remain unknown. This study will be limited to the examination of
only those variables which most greatly affect product effectiveness and analyst performance; all
other variables, such as leadership management style, communications and systems status, etc.
will be held as fixed and non-impacting. To further specify the research objective, the focus of
this thesis is not to identify every variable which may impact effectiveness, but rather to show
that any impacting variables (MOEs) can be identified at all. As described in the Research
Design Overview section above, this Plausibility Probe research is intended to initially address the research question of whether or not MOE and MOP can be derived for FMV analysis-based products; given the volume of DCGS FMV PED operational variables, the research is thus designed to avoid any impact that negated (fixed/non-impacting) variables may have on the validity of the analysis.
The research design further aims to isolate impacts on product effectiveness by assessing the influence of variance in the independent variables.
developed for each independent variable as a sensitive method of describing variable variances
(George and Bennett 2005, 80-81).
Should this theoretical assessment framework ever be practically applied, classified
mission data, site specific objectives, and local TTPs can be incorporated into the assessment
process to derive additional MOEs should those designated in this research fail to
comprehensively reflect requirements for product effectiveness.
Based on an in-depth review of available literature, those independent variables identified as impacting DCGS FMV analysis-based product effectiveness (MOEs) include: product
timeliness, product accuracy, and product usability. Those independent variables identified as
reflective of analyst performance (MOPs) also include product timeliness and product accuracy.
The Analysis and Findings section of the thesis will revolve predominantly around an
examination of how those independent variables, or potential MOEs/MOPs, impact the overall
effectiveness of the product and reflect the performance of the production analysts.
Assessment Methodology
Following the data collection, all prior existing research critical to understanding DCGS operations, along with research concerning intelligence performance variables, was analyzed. A prerequisite to any effectiveness analysis is the identification of an objective against which to measure effectiveness. The primary phase of analysis was thus an investigation into the objective of DCGS FMV analysis-based production.
Subsequently, variable analysis commenced. Impact analysis of each independent variable was not practically conducted; rather, the relationship between each independent variable (MOE) and the dependent variable (product effectiveness) was theoretically examined. In précis, the research examined how product timeliness, accuracy and usability affect DCGS FMV analysis-based product effectiveness. A discussion of MOPs followed, focusing on the relationship between product accuracy and product effectiveness. The hypothesis, that MOE for FMV analysis-based production can be identified, was tested through the examined relationship between the independent and dependent variables. To further refine and quantify/qualify each MOE and MOP, variable factor levels were determined for each independent variable. For example, for the independent variable (MOE) "accuracy," those factor levels would be: accurate
(100%), marginally accurate (98-99.9%), or inaccurate (<98%). In defining variable factor levels, not only can the independent variables be assessed in terms of whether or not they impact product effectiveness, but the degree of impact can be assessed as well.
Although this study is theoretical, one follow-on objective is the practical application of the assessment framework. Such a practical application will require not only the identification of MOE and MOP but also metrics and indicators for each identified MOE and MOP. Following MOE identification and metric determination, a secondary analysis was conducted to determine effectiveness and performance indicators for each MOE and MOP, i.e., whether a correlative relationship could be demonstrated between certain mission execution elements and each MOE/MOP. Lastly, because of the subjective nature of intelligence, the case study examines potential biases, both of the production analyst and of the intelligence consumer, should this research become operationally incorporated.
SECTION IV
ANALYSIS AND FINDINGS
Because traditional military assessment measures and operational MOEs are not effective
in accurately assessing the niche combat mission support element of FMV production, the
challenge is in determining what attributes make the intelligence provided by FMV analysis-
based products “actionable”. According to Joint Publication 2-0, Joint Intelligence, the
effectiveness of intelligence operations is assessed by devising metrics and indicators associated
with Attributes of Intelligence Excellence (U.S. Joint Chiefs of Staff 2013, I-22). It is here
proposed that so long as they meet the MOE and MOP criteria, MOEs, MOPs and MOE/MOP
indicators can be derived from the established list of Attributes of Intelligence Excellence:
anticipatory, timely, accurate, usable, complete, relevant, objective and available (U.S. Joint
Chiefs of Staff 2013, III-33). The focus of this research is FMV analysis-based production MOEs; however, because aggregate analyst performance can impact the overall effectiveness of
the products, MOPs will also be discussed as they relate to MOEs.
Excessive numbers of MOEs or MOPs become unmanageable and risk the cost of collection efforts outweighing the value and benefit of assessment (Center for Army Lessons Learned 2010, 8). To maximize the assessment benefit, some attributes can be combined; complete can be considered an accuracy indicator and relevant can be considered a usability indicator. Anticipatory, objective and available each fail to meet MOE criteria. As further discussed in their individual sections below, of the eight Attributes of Intelligence Excellence, which can be said to comprise FMV product success criteria, only timely, accurate (encompassing completeness) and usable (encompassing relevance) succeed as effective MOE.
Concerning the anticipatory attribute, to be effective, intelligence must anticipate the
informational needs of the customer. Anticipating the intelligence needs of the customer requires
that the intelligence staff identify and fully understand the current tasked mission, the customer’s
intent, all relevant aspects of the operational environment and all possible friendly and adversary
courses of action. Most importantly, anticipation requires the aggressive involvement of intelligence personnel in operational planning at the earliest possible time (U.S. Joint Chiefs of Staff 2013, II-7). Anticipatory products certainly may be more actionable, and thus more effective, than non-anticipatory products, and those FMV analysts responsible for production are, through training, experience and an in-depth knowledge of the operational environment, most likely capable of providing anticipatory intelligence. Anticipatory intelligence, however, fails to
meet requirements for an FMV analysis-based production MOE. Because MOE must be measurable to be effective, the first complication with the anticipatory attribute is in designating a metric. Unlike timeliness, which can be measured in seconds, minutes or hours, or accuracy, which can be measured on a percentage scale of zero to 100, determining a standardized unit of measurement for the degree to which a product is anticipatory is not currently feasible.
Additionally, anticipatory, or predictive, intelligence is generally used to assess threat
capabilities and emphasize and identify the most dangerous adversary Course of Action (COA)
and the most likely adversary COA; such analysis is not typically conducted during near real-
time operations by the PED Crew, but by intelligence staff tasked to those determinations (U.S.
Department of the Army 2012, 2-2). Further complicating the issue of measurement is the issue of replicability. Even if a quantitative or qualitative method of measuring anticipation were devised, the same metric may not be applicable to the same product type under different mission tasking circumstances. DCGS intelligence is derived from multiple ISR platforms, may be analyzed by active duty, Air National Guard or Air Force Reserve forces and is provided to
COCOMs, Component Numbered Air Forces, national command authorities and any number of
their subordinates across the globe (U.S. Air Force ISR Agency 2014, 6). Depending on
exploitation requirements, analysts may be tasked to exploit FMV in support of several discrete
mission sets including situational awareness, high-value targeting, activity monitoring, etc. (U.S.
Joint Chiefs of Staff 2012, III-33). The nature of different customers and mission sets may
impact the degree to which anticipatory intelligence is even possible. Consider, for example, the case of
dynamic retasking, where a higher priority collection requirement arises and the tasked asset is
immediately retasked to support (U.S. Department of the Air Force 2007, 15). If an asset is
dynamically retasked and products are immediately requested thereafter, the PED crew would be
operating under severely limited knowledge of the new OE, and anticipating a new customer's
production expectations beyond expressed product requirements would be very difficult if not
infeasible. The intent of this research is to identify FMV analysis-based product MOE which are
repeatable, measurable, relevant, responsive and standardized across product and mission types.
The attribute anticipatory is neither easily measurable nor repeatable given the number of mission variables impacting the feasibility of analysts making anticipatory assessments. A well-
produced product disseminated in support of a dynamically retasked mission may be perfectly
effective in achieving its goal of actionable intelligence despite not being anticipatory. While it
may be more effective if it were anticipatory, the failure of that attribute to meet MOE criteria disqualifies it as an appropriate MOE.
Due to the decisive and consequential impact of intelligence on operations and the
reliance of planning and operations decisions on intelligence, it is important for intelligence
analysts to maintain objectivity and independence in developing assessments. Intelligence, to the
greatest extent possible, must be vigilant in guarding against biases. In particular, intelligence
analysts should both recognize each adversary as unique, and realize the possible biases
associated with their assessment type (U.S. Joint Chiefs of Staff 2013, II-8). As with the
anticipatory attribute, the objective attribute presents problems with both measurability and
replicability. Also comparable to the anticipatory attribute, objectivity is highly important to an
effective product. Because not only each mission but each production analyst is unique, so too are every analyst's assessment biases, each of which would undermine product objectivity. Further complicating objectivity as a viable MOE is the difficulty in determining a standard metric
by which to measure objectiveness and a baseline against which to compare the objectivity of
products. Both anticipating the customer’s needs and remaining objective during analysis and
production should be trained, heavily reinforced and corrected prior to dissemination as required.
Biased analysis and production degrade product accuracy and effectiveness through the
misinterpretation, underestimation or overestimation of events, the adversary, or the
consequences of friendly force action. Objectivity does impact the effectiveness of DCGS FMV
analysis-based products; however, the inability of the objective attribute to be measured in a
standardized, replicable fashion disqualifies the attribute as a potential MOE.
ISR-derived information must be readily accessible to be effective, hence the
availability attribute. Availability is a function of not only timeliness and usability, discussed in
their respective sections below, but also the appropriate security classification, interoperability,
and connectivity (U.S. Joint Chiefs of Staff 2013, II-8). Naturally both ISR personnel and the
consumer should have the appropriate clearances to access and use the information (U.S.
Department of the Air Force 2007, 15). Product dissemination to the customer is necessarily
dependent on full network connectivity. A product would be completely ineffective if it were never received by the customer or if it were classified above what the customer could access; however, throughout the theoretical exploration of this research problem, full product availability will be assumed. During the discussion of specific MOE below, network connectivity will be held constant at fully functional, and classification will be assumed to be the derivative classification of the tasked mission, ensuring both analyst and customer access.
Measures of Effectiveness
Dismissing anticipatory, objective and available as attributes suitable to MOE selection, the applicable attributes of timely, accurate, usable, complete, and relevant remain. It is against those attribute criteria, either as MOE or as MOE indicators (see Figure 2), that the effectiveness of DCGS FMV analysis-based products in this thesis will be measured.
Figure 2: Measures of Effectiveness
Timeliness
Commanders have always required timely intelligence to enable operations. Because
intelligence is a continuous process that directly supports the operations process, operations and
intelligence are inextricably linked (U.S. Department of the Army 2012, v). Timely intelligence
is essential to prevent surprise by the adversary, conduct defensive operations, seize the initiative, and utilize forces effectively (U.S. Department of the Air Force 2007, 5). To be effective, the commander's decision and execution cycles must be consistently faster than the adversary's; thus, the success of operations to some extent hinges upon timely intelligence (U.S. Joint Chiefs of Staff 2013, xv).
The responsive nature of Air Force ISR assets is an essential enabler of timeliness (U.S.
Department of the Air Force 2007, 5). In the current environment of rapidly improving
technologies, having the capability to provide timely information is vital (U.S. Department of the
Air Force 1999, 40). The dynamic nature of ISR can provide information to aid in a
commander's decision-making cycle and constantly improve the commander's view of the OE,
that is, if the products are received in time to plan and execute operations as required. Because
timeliness is so critical to intelligence effectiveness, as technology evolves, every effort should
be made to streamline and automate processes to reduce timelines from tasking through product
dissemination (U.S. Department of the Air Force 2007, 5).
Timeliness is vital to actionable intelligence; the question is, does ‘late’ intelligence
equate to ineffective intelligence? According to the concept of LTIOV (Latest Time Information of
Value), there comes a time when intelligence is no longer useful. LTIOV is the time by which
information must be delivered to the requestor in order to provide decision makers with timely
intelligence (U.S. Department of the Army 1994, Glossary-7). The implication is that
information received by the end user after the LTIOV is no longer of value, and thus, no longer
effective. To be effective at all, intelligence must be available when the commander requires it
(U.S. Joint Chiefs of Staff 2013, II-7).
All efforts must be made to ensure that the personnel and organizations that need access
to required intelligence will have it in a timely manner (U.S. Joint Chiefs of Staff 2013, II-7).
The DCGS is one intelligence dissemination system capable of providing near real-time
situational awareness and threat and target status (U.S. Department of the Air Force 1999, 40). In
the fast-paced world of near real-time intelligence assessment, production time is measured not in days or hours, but in minutes. Consequently, the most appropriate metric for product timeliness is the production time, from tasking to dissemination availability, in minutes.
The GA and MSA PED Crew positions are not only responsible for the creation and
dissemination of intelligence products, but also for ensuring their production meets the DCGS
Enterprise timeline (time limit) for that product (U.S. Department of the Air Force 2013a, 29).
For this discussion of timeliness as an MOE, it will be assumed that the established DCGS
Enterprise product timelines are congruent with product intelligence of value timelines, i.e., so
long as an analyst successfully produces a product within its timeline, that product will be
considered timely by the consumer. The DCGS Enterprise product timeline will serve as the
baseline against which products will be compared. In assessing the degree of product timeliness, the three assigned factor levels, outlined in Figure 3, will be: (1) the product was completed early (very timely); (2) the product was completed on time (timely); or (3) the product was completed late (not timely). In this way, the effectiveness of a product may be at least partially assessed based upon the number of minutes it took to produce the product, measured against the established timeline for that type of product.
Figure 3: Timeliness Factor Levels
To be relevant, ISR intelligence must be tailored to the warfighter's needs (U.S. Department of the Air Force 1999, 10). The GA and MSA are both trained to produce their respective products within applicable timelines; however, if the end user should request a highly tailored product, it may push production time past the established timeline. Product customization may then be considered an indicator for the timeliness MOE.
When assessing product effectiveness, it is not single products that determine
effectiveness, but the aggregate of products within a genre. For example, it may be determined
that 90 percent of Product A items are produced within Factor Level 1, early. To elevate the analyst
baseline, the timeline may be adjusted to bring the remaining ten percent of products up to par
and increase product effectiveness through product timeliness. For assessment standardization
across the enterprise, the designation ‘early’ must be specified. Here, to be considered very
timely, the product must be completed 50 percent earlier than the maximum allotted production
time. For example, if the timeline is ten minutes, and a product is completed in five, that product
would be considered very timely. On the other hand, as ISR evolves and products are used to
support ever more dynamic operations, if product type B is habitually requested at a high level of
customization resulting in 80 percent of products falling into Factor Level 3, not timely, the
DCGS may want to consider adjusting that product’s timeline, refining the procedure for
producing that product, or introducing a more applicable product. Products whose vast majority
falls into Factor Level 2, timely, may be considered effective, at least, so far as reaching the
customer when it is still of value is concerned.
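As a minimal sketch of how the timeliness metric and factor levels described above might be computed in practice, the hypothetical Python functions below apply the 50-percent "very timely" rule and the aggregate, genre-level view of effectiveness; the function names and data shapes are illustrative assumptions, not DCGS procedure.

```python
# Hypothetical sketch of the timeliness factor levels (Figure 3).
# The 50-percent "very timely" rule and level names follow the text above;
# everything else (names, structure) is illustrative only.
from collections import Counter

def timeliness_level(production_minutes: float, timeline_minutes: float) -> str:
    """Classify one product against its DCGS Enterprise timeline."""
    if production_minutes <= 0.5 * timeline_minutes:
        return "very timely"      # Factor Level 1: completed early
    if production_minutes <= timeline_minutes:
        return "timely"           # Factor Level 2: completed on time
    return "not timely"           # Factor Level 3: completed late

def genre_timeliness(products: list[tuple[float, float]]) -> dict[str, float]:
    """Aggregate the share of each factor level across a product genre."""
    counts = Counter(timeliness_level(p, t) for p, t in products)
    total = sum(counts.values())
    return {level: counts[level] / total for level in counts}

# Example: against a ten-minute timeline, a five-minute product is "very timely".
print(timeliness_level(5, 10))                        # very timely
print(genre_timeliness([(5, 10), (9, 10), (12, 10)]))
```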
Accuracy
Intelligence production analysts should continuously strive to achieve the highest
possible level of accuracy in their products. The accuracy of an intelligence product is paramount to the ability of that intelligence analyst and their parent unit to attain and maintain credibility with intelligence consumers (U.S. Joint Chiefs of Staff 2013, II-6). To achieve the
highest standards of excellence, intelligence products must be factually correct, relay the
situation as it actually exists, and provide an understanding of the operational environment based
on the rational judgment of available information (U.S. Department of the Air Force 2007, 4).
Especially for geospatial intelligence products, as FMV analysis-based products are,
geopositional accuracy, suitable for geolocation or weapons employment, is a crucial product
requirement (U.S. Department of the Air Force 1999, 10). Intelligence accuracy additionally
involves the appropriate classification labels, proper grammar, and product completeness.
For a product to be complete, it must convey all of the necessary components (U.S.
Department of the Army 2012, 2-2). For example, if DCGS standardized product A requires the
inclusion of four different graphics and only three were produced, the product would be
incomplete and thus inaccurate. The necessary components of a product also include all of the
significant aspects of the operating environment that may close intelligence gaps or impact
mission accomplishment (U.S. Joint Chiefs of Staff 2013, II-8). Complete intelligence answers
all of the customer’s questions to the greatest extent possible. For example, if a product is
requested to confirm or deny the presence of rotary- and fixed-wing aircraft at an airport, and the
production analyst annotates and assesses the fixed-wing aircraft, but neglects to annotate and
assess the rotary-wing aircraft, then the product is incomplete and thus inaccurate. Similarly, if a
production analyst fails to provide coordinates when required, the product would also be
incomplete and inaccurate.
Incompleteness may be the variety of inaccuracy that most impacts the effectiveness of
an intelligence product. While grammar errors, errors in coordinates and even errors in classification, as critical as those inaccuracies may be, are most likely the result of analyst mistakes or oversight,
incomplete products may hint at an inability to produce the product, access certain information
or understand production requirements. For example, if a product required analysts to obtain a
certain type of imagery and analysts did not possess that skillset, one would expect to see a
corresponding trend in incomplete products. Similarly, if a standardized product was modified
and analysts were not difference trained, one may expect to see a trend in analysts failing to input
the newly required information. In this way, product completeness can serve as an indicator for the accuracy MOE. If a product is found to be incomplete, without looking any further it is likely that the product will already be too inaccurate to be effective. If a product is found to be
complete, however, even with grammar errors, a product may still be found effective, despite not
being perfectly accurate.
When DCGS products are accounted for and quality controlled, they are measured against a baseline checklist which, when used, assigns the product an accuracy score on a scale of zero to 100 percent (U.S. Air Force ISR Agency 2013e, 5). The metric for the accuracy MOE will also be a percentage point system. The baseline checklist is designed to identify both major and minor product errors. A major error is one that causes, or should have caused, the product to be reissued. Major errors may include erroneous classification, incomplete and erroneous information, inaccurate geolocation data, or improper distribution (U.S. Air Force ISR Agency 2013e, 5). The checklist has been designed with a numeric point system for each item evaluated and is weighted so that any one major error will result in a failing product, as will an accumulation of minor errors. Any product which has more than two percentage points deducted will be considered a failing product (U.S. Air Force ISR Agency 2013e, 5). Accordingly, 98 percent is the standard against which the quality of intelligence products should be continuously evaluated.
Using a scale of zero to 100 percent, with anything below 98 percent resulting in a failing product, the accuracy MOE will be broken down into three different factor levels (Figure 4): (1) an accurate product with no errors, resulting in 100%; (2) a marginally accurate product with only minor errors, resulting in 98-99.9%; or (3) an inaccurate product with major errors or several minor errors, resulting in below 98%.
Figure 4: Accuracy Factor Levels
Any passing product meeting the scores in Factor Level 1 or Factor Level 2 may be
considered effective, where any failing products, falling into Factor Level 3, may be considered
ineffective. As discussed in the Timeliness section above, the effectiveness assessment of a
product is not based on individual products, but on the aggregate of products of the same genre.
If, for example, Product A is habitually found to be inaccurate, as a product it would not be
considered effective. The problem may be that parts of the product are operationally outdated and information required by standard production is no longer available. Or, in the case of a new
product, analysts may not have adequate training to satisfy product requests accurately.
Whatever the cause is determined to be, accuracy as a measure of effectiveness would provide
the commander and staffs insight into when products need to be updated, amended, or when
analysts need further training in order for a product to be effective.
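The checklist scoring logic described in this section can be illustrated with a short, hypothetical sketch. Only the 98-percent pass threshold, the three factor levels, and the rule that any single major error fails the product come from the text above; the specific deduction handling and the choice to zero out a score on a major error are illustrative assumptions, not the actual DCGS baseline checklist.

```python
# Hypothetical sketch of the accuracy scoring and factor levels (Figure 4).
# Thresholds follow the text above; the deduction mechanics are assumed.

def accuracy_score(major_errors: int, minor_points_deducted: float) -> float:
    """Return a checklist-style score on a 0-100 percent scale."""
    if major_errors > 0:
        return 0.0  # one way to encode "any major error fails the product"
    return max(0.0, 100.0 - minor_points_deducted)

def accuracy_level(score: float) -> str:
    """Map a checklist score to its factor level."""
    if score == 100.0:
        return "accurate"              # Factor Level 1: no errors
    if score >= 98.0:
        return "marginally accurate"   # Factor Level 2: minor errors only
    return "inaccurate"                # Factor Level 3: failing product

# Example: 1.5 points of minor deductions still passes; 2.5 points fails.
print(accuracy_level(accuracy_score(0, 1.5)))  # marginally accurate
print(accuracy_level(accuracy_score(0, 2.5)))  # inaccurate
```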
Usability
Usability is an assessment attribute that overlaps with all other intelligence product MOE
and MOE indicators; timeliness, product customization, accuracy, and completeness are all, at least to some degree, prerequisites for a usable product. Beyond those attributes, usability may
be understood as interpretability, relevance, precision and confidence level; in that respect
usability may serve as an additional, discrete MOE.
Intelligence must be provided to the customer in a format suitable for immediate
Hendrix_Capstone_2015
Hendrix_Capstone_2015
Hendrix_Capstone_2015
Hendrix_Capstone_2015
Hendrix_Capstone_2015
Hendrix_Capstone_2015
Hendrix_Capstone_2015
Hendrix_Capstone_2015
Hendrix_Capstone_2015
Hendrix_Capstone_2015
Hendrix_Capstone_2015
Hendrix_Capstone_2015
Hendrix_Capstone_2015
Hendrix_Capstone_2015
Hendrix_Capstone_2015
Hendrix_Capstone_2015
Hendrix_Capstone_2015
Hendrix_Capstone_2015
Hendrix_Capstone_2015
Hendrix_Capstone_2015
Hendrix_Capstone_2015
Hendrix_Capstone_2015
Hendrix_Capstone_2015
Hendrix_Capstone_2015
Hendrix_Capstone_2015

More Related Content

Viewers also liked

257. edificio libre de humo y tabaco
257. edificio libre de humo y tabaco257. edificio libre de humo y tabaco
257. edificio libre de humo y tabacodec-admin
 
Potencial de lasredes sociales en el ámbito educativo
Potencial de lasredes sociales en el  ámbito educativoPotencial de lasredes sociales en el  ámbito educativo
Potencial de lasredes sociales en el ámbito educativobeadek
 
Routes Tips (a world of routes in your pocket)
Routes Tips (a world of routes in your pocket)Routes Tips (a world of routes in your pocket)
Routes Tips (a world of routes in your pocket)Julia Sidorova
 
186.club ecologico semillitas de san jose
186.club ecologico semillitas de san jose186.club ecologico semillitas de san jose
186.club ecologico semillitas de san josedec-admin
 
463. el entorno
463. el entorno463. el entorno
463. el entornodec-admin
 
Bringing smart board into sus classroom
Bringing smart board into sus classroomBringing smart board into sus classroom
Bringing smart board into sus classroomjiang luan
 
78. luchemos por el ambiente
78. luchemos por el ambiente78. luchemos por el ambiente
78. luchemos por el ambientedec-admin
 
Higiene y seguridad industrial
Higiene y seguridad industrialHigiene y seguridad industrial
Higiene y seguridad industrialcarolina herrera
 
Grupo No. 4 de Exposision de Habilitación Modulo II
Grupo No. 4 de Exposision de Habilitación Modulo II Grupo No. 4 de Exposision de Habilitación Modulo II
Grupo No. 4 de Exposision de Habilitación Modulo II drzaberkis1
 

Viewers also liked (12)

257. edificio libre de humo y tabaco
257. edificio libre de humo y tabaco257. edificio libre de humo y tabaco
257. edificio libre de humo y tabaco
 
Cibercrimen en-el-peru-diapos-ely
Cibercrimen en-el-peru-diapos-elyCibercrimen en-el-peru-diapos-ely
Cibercrimen en-el-peru-diapos-ely
 
Lecture 7
Lecture 7Lecture 7
Lecture 7
 
Potencial de lasredes sociales en el ámbito educativo
Potencial de lasredes sociales en el  ámbito educativoPotencial de lasredes sociales en el  ámbito educativo
Potencial de lasredes sociales en el ámbito educativo
 
Routes Tips (a world of routes in your pocket)
Routes Tips (a world of routes in your pocket)Routes Tips (a world of routes in your pocket)
Routes Tips (a world of routes in your pocket)
 
186.club ecologico semillitas de san jose
186.club ecologico semillitas de san jose186.club ecologico semillitas de san jose
186.club ecologico semillitas de san jose
 
463. el entorno
463. el entorno463. el entorno
463. el entorno
 
Bringing smart board into sus classroom
Bringing smart board into sus classroomBringing smart board into sus classroom
Bringing smart board into sus classroom
 
AWSome Day Intro
AWSome Day IntroAWSome Day Intro
AWSome Day Intro
 
78. luchemos por el ambiente
78. luchemos por el ambiente78. luchemos por el ambiente
78. luchemos por el ambiente
 
Higiene y seguridad industrial
Higiene y seguridad industrialHigiene y seguridad industrial
Higiene y seguridad industrial
 
Grupo No. 4 de Exposision de Habilitación Modulo II
Grupo No. 4 de Exposision de Habilitación Modulo II Grupo No. 4 de Exposision de Habilitación Modulo II
Grupo No. 4 de Exposision de Habilitación Modulo II
 

Similar to Hendrix_Capstone_2015

Approved for public release; distribution is unlimited. Sy.docx
Approved for public release; distribution is unlimited. Sy.docxApproved for public release; distribution is unlimited. Sy.docx
Approved for public release; distribution is unlimited. Sy.docxjewisonantone
 
Approved for public release; distribution is unlimited. Sy.docx
Approved for public release; distribution is unlimited. Sy.docxApproved for public release; distribution is unlimited. Sy.docx
Approved for public release; distribution is unlimited. Sy.docxfestockton
 
Adfa Itee Presentation
Adfa Itee PresentationAdfa Itee Presentation
Adfa Itee Presentationsladepb
 
The intelligence community has not always learned the lessons of i.docx
The intelligence community has not always learned the lessons of i.docxThe intelligence community has not always learned the lessons of i.docx
The intelligence community has not always learned the lessons of i.docxcherry686017
 
Vigneault A 490 Research Paper MRAP (1)
Vigneault A 490 Research Paper MRAP (1)Vigneault A 490 Research Paper MRAP (1)
Vigneault A 490 Research Paper MRAP (1)Adrian Vigneault
 
INTL304 – Intelligence CollectionStrategic and Tactical Intelligen.docx
INTL304 – Intelligence CollectionStrategic and Tactical Intelligen.docxINTL304 – Intelligence CollectionStrategic and Tactical Intelligen.docx
INTL304 – Intelligence CollectionStrategic and Tactical Intelligen.docxdoylymaura
 
A frame work for sustainability indicators at epa
A frame work for sustainability indicators at epaA frame work for sustainability indicators at epa
A frame work for sustainability indicators at epaChinmai Hemani
 
Air Force Officer Evaluation System Project
Air Force Officer Evaluation System ProjectAir Force Officer Evaluation System Project
Air Force Officer Evaluation System ProjectZaara Jensen
 
Brocas e Avaliação de Orientação Exercício
Brocas e Avaliação de Orientação ExercícioBrocas e Avaliação de Orientação Exercício
Brocas e Avaliação de Orientação ExercícioFIRE SUL PROTECTION
 
Information warfare and information operations
Information warfare and information operationsInformation warfare and information operations
Information warfare and information operationsClifford Stone
 
Agile and earned value management
Agile and earned value management  Agile and earned value management
Agile and earned value management Dr Ezzat Mansour
 
Employing Intelligence Naval Post Graduate School
Employing Intelligence Naval Post Graduate SchoolEmploying Intelligence Naval Post Graduate School
Employing Intelligence Naval Post Graduate Schoolxstokes825
 
USING ADAPTIVE MODELING TO VALIDATE CRE
USING ADAPTIVE MODELING TO VALIDATE CREUSING ADAPTIVE MODELING TO VALIDATE CRE
USING ADAPTIVE MODELING TO VALIDATE CREJames Rollins
 
Systems level feasibility analysis
Systems level feasibility analysisSystems level feasibility analysis
Systems level feasibility analysisClifford Stone
 
Enlightening Society On The Alert
Enlightening Society On The AlertEnlightening Society On The Alert
Enlightening Society On The Alertsmita gupta
 
Operations research-an-introduction
Operations research-an-introductionOperations research-an-introduction
Operations research-an-introductionManoj Bhambu
 

Similar to Hendrix_Capstone_2015 (20)

ICCRTS_MMFOntology
ICCRTS_MMFOntologyICCRTS_MMFOntology
ICCRTS_MMFOntology
 
Approved for public release; distribution is unlimited. Sy.docx
Approved for public release; distribution is unlimited. Sy.docxApproved for public release; distribution is unlimited. Sy.docx
Approved for public release; distribution is unlimited. Sy.docx
 
Approved for public release; distribution is unlimited. Sy.docx
Approved for public release; distribution is unlimited. Sy.docxApproved for public release; distribution is unlimited. Sy.docx
Approved for public release; distribution is unlimited. Sy.docx
 
Adfa Itee Presentation
Adfa Itee PresentationAdfa Itee Presentation
Adfa Itee Presentation
 
The intelligence community has not always learned the lessons of i.docx
The intelligence community has not always learned the lessons of i.docxThe intelligence community has not always learned the lessons of i.docx
The intelligence community has not always learned the lessons of i.docx
 
Vigneault A 490 Research Paper MRAP (1)
Vigneault A 490 Research Paper MRAP (1)Vigneault A 490 Research Paper MRAP (1)
Vigneault A 490 Research Paper MRAP (1)
 
05 0638
05 063805 0638
05 0638
 
INTL304 – Intelligence CollectionStrategic and Tactical Intelligen.docx
INTL304 – Intelligence CollectionStrategic and Tactical Intelligen.docxINTL304 – Intelligence CollectionStrategic and Tactical Intelligen.docx
INTL304 – Intelligence CollectionStrategic and Tactical Intelligen.docx
 
A frame work for sustainability indicators at epa
A frame work for sustainability indicators at epaA frame work for sustainability indicators at epa
A frame work for sustainability indicators at epa
 
Air Force Officer Evaluation System Project
Air Force Officer Evaluation System ProjectAir Force Officer Evaluation System Project
Air Force Officer Evaluation System Project
 
Siek 2016v3 (003)
Siek 2016v3 (003)Siek 2016v3 (003)
Siek 2016v3 (003)
 
Brocas e Avaliação de Orientação Exercício
Brocas e Avaliação de Orientação ExercícioBrocas e Avaliação de Orientação Exercício
Brocas e Avaliação de Orientação Exercício
 
Information warfare and information operations
Information warfare and information operationsInformation warfare and information operations
Information warfare and information operations
 
Agile and earned value management
Agile and earned value management  Agile and earned value management
Agile and earned value management
 
Vol1ch05
Vol1ch05Vol1ch05
Vol1ch05
 
Employing Intelligence Naval Post Graduate School
Employing Intelligence Naval Post Graduate SchoolEmploying Intelligence Naval Post Graduate School
Employing Intelligence Naval Post Graduate School
 
USING ADAPTIVE MODELING TO VALIDATE CRE
USING ADAPTIVE MODELING TO VALIDATE CREUSING ADAPTIVE MODELING TO VALIDATE CRE
USING ADAPTIVE MODELING TO VALIDATE CRE
 
Systems level feasibility analysis
Systems level feasibility analysisSystems level feasibility analysis
Systems level feasibility analysis
 
Enlightening Society On The Alert
Enlightening Society On The AlertEnlightening Society On The Alert
Enlightening Society On The Alert
 
Operations research-an-introduction
Operations research-an-introductionOperations research-an-introduction
Operations research-an-introduction
 

Hendrix_Capstone_2015

  • 1. AN ASSESSMENT FRAMEWORK FOR FULL MOTION VIDEO ANALYSIS- BASED PRODUCTION: DEFINING INTELLIGENCE PRODUCT MEASURES OF EFFECTIVENESS AND PERFORMANCE A Master Thesis Submitted to the Faculty of American Public University System by Tori Hendrix Duckworth In Partial Fulfillment of the Requirement for the Degree of Master of Arts July 2015 American Public University System Charles Town, WV
  • 2. i The author hereby grants the American Public University System the right to display these contents for educational purposes. The author assumes total responsibility for meeting the requirements set by United States copyright law for the inclusion of any materials that are not the author’s creation or in the public domain. © Copyright 2015 by Tori Hendrix Duckworth All rights reserved.
  • 3. ii DEDICATION I dedicate this thesis to my husband, Phillip Duckworth. Without his patience, support, and barista skills the completion of this thesis would never have been possible.
  • 4. iii ACKNOWLEDGMENTS I wish to thank Professor Christian Shuler of the American Public University System, both for his dedication to my academic progression and for his mentorship on this project. I would also like to thank Dr. Roger Travis of the University of Connecticut for investing the time required to develop my formal research and writing techniques early in my academic life. Thanks also to Mr. Michael Bailey, Mr. William Elkins, and Mr. Jeremy Jarrell, Mr. Linn Wisner, Mr. Bill Majors, Mr. Mikel Gault, Mr. Daniel Thomas and Mr. Donray Kirkland for their personal and professional investments toward the continuous improvement of intelligence analysis and production. The above gentlemen have always provided a sounding board, a support network and a sanity check for my intelligence research and analysis endeavors, this thesis proving no exception.
  • 5. iv ABSTRACT OF THE THESIS AN ASSESSMENT FRAMEWORK FOR FULL MOTION VIDEO ANALYSIS- BASED PRODUCTION: DEFINING INTELLIGENCE PRODUCT MEASURES OF EFFECTIVENESS AND PERFORMANCE by Tori Hendrix Duckworth American Public University System, July 5, 2015 Charles Town, West Virginia Professor Christian Shuler, Thesis Professor This paper discusses an original mixed-method analytic approach to assessing and measuring analyst performance and product effectiveness in Air Force Distributed Common Ground Station (DCGS) Full Motion Video (FMV) analysis-based production. The method remains consistent with Joint Publication doctrine on Measures of Effectiveness (MOE) and Measures of Performance (MOP) and considers established methodologies for assessment in Combat and Stability Operations, but refines the processes down to the micro-tactical level in order to theoretically assess intelligence support functions and how MOP and MOE can be determined for intelligence products. The research focuses specifically on those products derived from FMV analysis produced per customer request in support of real-time contingency operations. The outcome is an assessment process, based on Department of Defense Attributes of Intelligence Excellence. The assessment framework can not only be used to assess FMV analysis-based production, but prove a valuable assessment tool in other production-heavy sectors of the Intelligence Community. The next phase of research will be to validate the developed method through the observation and analysis of its practical application.
  • 6. v TABLE OF CONTENTS SECTION PAGE I. INTRODUCTION ........................................................................................................ 1 II. LITERATURE REVIEW ........................................................................................... 12 DCGS Operations .................................................................................................... 13 Mission Assessment................................................................................................. 16 Variable Identification and Control ......................................................................... 22 Measuring Performance and Effectiveness.............................................................. 25 III. METHODOLOGY ..................................................................................................... 31 Research Design Overview...................................................................................... 31 Collection Methodology .......................................................................................... 33 Specification of Operational Variables.................................................................... 34 Assessment Methodology........................................................................................ 36 IV. ANALYSIS AND FINDINGS ................................................................................... 38 Measures of Effectiveness ....................................................................................... 42 Measures of Performance ........................................................................................ 54 Theoretical Application ........................................................................................... 59 V. CONCLUSION........................................................................................................... 64
  • 7. vi LIST OF FIGURES FIGURE PAGE 1. FMV Analysis-Based Production Mission Assessment………………………………..29 2. Measures of Effectiveness.……………………………………………………………..42 3. Timeliness Factor Levels……………………………………………………………….44 4. Accuracy Factor Levels..………………………………………………………………48 5. Customer Satisfaction Factor Levels…………………………………………………...54 6. Task and Effectiveness Assessment…………………………………………………....55 7. Measures of Performance……………………………………………………………....56
SECTION I

INTRODUCTION

Assessment is an essential tool for those leaders and decision makers responsible for the successful execution of military operations, actions and programs. The process of operationally assessing military operations is crucial to the identification and implementation of those adjustments required to meet intermediate and long term goals (Center for Army Lessons Learned 2010, 1). Assessment supports the mission by charting progress in evolving situations. Assessment allows for the determination of event impacts as they relate to overall mission accomplishment and provides data points and trends for judgments about progress in monitored mission areas (Center for Army Lessons Learned 2010, 4). In this way, assessment assists military commanders in determining where opportunities exist and where course corrections, modifications and adjustments must be made (U.S. Joint Chiefs of Staff 2011b, III-44). Without operational assessment, both the commander and his staff would struggle to keep pace with quickly evolving situations, and the commander's decision-making cycle could lack focus due to a lack of understanding of the dynamic environment (Center for Army Lessons Learned 2010, 4). Assessments are the commander's keystone to maintaining accurate situational understanding and are vital to appropriately revising their visualization and operational approach (U.S. Joint Chiefs of Staff 2011b, III-44). From major strategic operations to the smallest of tactical missions, assessment is a crucial aspect of mission management; the Air Force Intelligence mission and its associated mission subsets are no exception. Much as assessment helps commanders and staffs understand mission progress, intelligence helps commanders and staffs understand the operational environment. Intelligence identifies enemy capabilities and vulnerabilities, projects probable intentions and actions, and is a critical
aspect of both the planning process and the execution of operations. Intelligence provides analyses that help commanders decide which forces or munitions to deploy and how to employ them in a manner that accomplishes mission objectives (U.S. Joint Chiefs of Staff 2013, I-18). Most importantly, intelligence is a prerequisite for achieving information superiority, the state that is attained when a competitive advantage is derived from the ability to exploit a superior information position (Alberts, Garstka, and Stein 2000, 34). The role of intelligence in Air Force operations is to provide warfighters, policy makers, and the acquisition community comprehensive intelligence across the conflict spectrum. This role is designed to build the foundation for information superiority and mission success by delivering on-time, tailored intelligence (U.S. Department of the Air Force 1997, 1). Key to operating in today's information-dominated environment and getting the right information to the right people so that they can make the best possible decisions is the intelligence mission subset Intelligence, Surveillance and Reconnaissance (ISR) (U.S. Department of the Air Force 2012b, 1). The U.S. Joint Chiefs of Staff Joint Publication 1-02, Department of Defense (DOD) Dictionary of Military and Associated Terms defines ISR as "an activity that synchronizes and integrates the planning and operation of sensors, assets, processing, exploitation, and dissemination systems in direct support of current and future operations," (U.S. Joint Chiefs of Staff 2010, X). ISR is a strategic resource that enables commanders both to make decisions and to execute those decisions more rapidly and effectively than the adversary (U.S. Joint Chiefs of Staff 2011a, III-3). The fundamental job of U.S. Air Force Airmen supporting the ISR mission is to analyze, inform and provide commanders at every level with the knowledge required to prevent surprise, make decisions, command forces and employ weapons (U.S. Department of the Air Force 2013a, 6). In addition to enabling information superiority, ISR is
critical to the maintenance of U.S. decision advantage, the ability to prevent strategic surprise, understand emerging threats and track known threats, while adapting to the changing world (Report of the DNI Authorities Hearing 2008, 1). The joint ISR mission consists of multiple separate elements, spread across uniformed services, component commands and intelligence specialties; each element in its own right must be successful for the integrated ISR whole to support operations as required (U.S. Department of the Air Force 2012b, 1). One such ISR element is Full-Motion Video (FMV) Analysis and Production. FMV analysis and production is a tactical intelligence function. Tactical intelligence is used by commanders, planners, and operators for planning and conducting engagements and special missions. Relevant, accurate, and timely tactical intelligence allows tactical units to achieve positional and informational advantage over their adversaries (U.S. Joint Chiefs of Staff 2013, I-25). Tactical intelligence addresses the threat across the range of military operations by identifying and assessing the adversary's capabilities, intentions, and vulnerabilities, as well as describing the physical environment. Tactical intelligence seeks to identify when, where, and in what strength the adversary will conduct their tactical level operations. Physical identification of the adversary and their operational networks, which FMV analysis and production can facilitate, allows for enhanced situational awareness, targeting, and watchlisting to track, hinder, and/or prevent insurgent movements at the regional, national, or international levels (U.S. Joint Chiefs of Staff 2013, I-25). FMV analysis and production is conducted, in part, by the Air Force Distributed Common Ground System (DCGS) Enterprise. The Air Force DCGS, also referred to as the AN/GSQ-272 SENTINEL weapon system, is the Air Force's primary ISR collection, processing, exploitation, analysis and dissemination system. The DCGS functions as a distributed ISR operations capability which provides
worldwide, near real-time intelligence support simultaneously to multiple theaters of operation through various reachback communication architectures. ISR provides the capability to collect raw data from many different intelligence sensors across the terrestrial, aerial, and space layers of the intelligence enterprise (U.S. Department of the Army 2012b, 4-12). The Air Force DCGS currently produces intelligence from data collected by a variety of sensors on the U-2, RQ-4 Global Hawk, MQ-1 Predator, MQ-9 Reaper, MC-12 and other ISR platforms (Air Force Distributed Common Ground System, 2014). ISR synchronization involves the employment of assigned and allocable platforms and sensors, to include FMV sensors, against specified collection targets; once the raw data is collected, it is routed to the appropriate processing and exploitation system, such as a DCGS node, so that it may be converted via exploitation and analysis into usable information and disseminated to the customer in a timely fashion (U.S. Joint Chiefs of Staff 2013, I-11). The analysis and production portion of the intelligence process involves integrating, evaluating, analyzing, and interpreting information into a finished intelligence product. Properly formatted intelligence products are then disseminated to the requester, who integrates the intelligence into the decision-making and planning processes (U.S. Joint Chiefs of Staff 2012, III-33). The usable information disseminated to the customer is the FMV analysis-based product. In the DCGS, FMV analysis-based products are produced by the Geospatial Analyst (GA) and Multi-Source Analyst (MSA) mission crew positions (U.S. Air Force ISR Agency 2013b, 59). The above processes are routinely combined under the acronym PED: Processing, Exploitation and Dissemination. Every intelligence discipline and complementary intelligence capability is different, but each conducts PED activities to support timely and effective intelligence operations. PED activities facilitate timely, relevant, usable, and tailored intelligence (U.S. Department of the Army 2012, 4-13).
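The PED flow just described can be pictured as a three-stage pipeline from raw collection to disseminated product. A minimal sketch follows; every type and function name here is hypothetical and illustrative only, drawn from the doctrinal description above rather than from any actual DCGS software:

```python
from dataclasses import dataclass

# Simplified, hypothetical model of the PED flow: raw sensor data is
# processed, exploited into usable information, and disseminated to the
# requesting customer. Names are illustrative, not real DCGS interfaces.

@dataclass
class RawCollection:
    sensor: str          # e.g., an FMV sensor on an ISR platform
    payload: bytes       # raw collected data

@dataclass
class IntelProduct:
    summary: str         # analyst-derived usable information
    customer: str        # the original requester

def process(raw: RawCollection) -> bytes:
    """Route and condition raw data for exploitation."""
    return raw.payload

def exploit(conditioned: bytes) -> str:
    """Convert conditioned data into usable information via analysis."""
    return f"analyzed {len(conditioned)} bytes of FMV data"

def disseminate(info: str, customer: str) -> IntelProduct:
    """Deliver the finished product to the requester."""
    return IntelProduct(summary=info, customer=customer)

if __name__ == "__main__":
    raw = RawCollection(sensor="FMV", payload=b"\x00" * 1024)
    product = disseminate(exploit(process(raw)), customer="requesting unit")
    print(product)
```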
PED activities are prioritized and focused on intelligence processing, analysis, and assessment to quickly support specific intelligence collection requirements and facilitate improved intelligence operations. PED began as a processing and analytical support structure for unique systems and capabilities like FMV derived from unmanned aircraft systems (U.S. Department of the Army 2012, 4-12). Intelligence is essential to maintaining aerospace power, information dominance and decision advantage; accordingly, the Air Force provides premier intelligence products and services to all its aerospace customers including warfighters, policy makers and acquisition managers (U.S. Department of the Air Force 2004, 1). Intelligence is produced and provided to combat forces even at the lowest level via the integration of data collected by the various ISR platforms with Processing, Exploitation and Dissemination (PED) crews. The PED crews are comprised of intelligence specialists capable of robust, multi-intelligence analysis and production (Air Force Distributed Common Ground System, 2014). DCGS mission sets depend greatly on which type of sensor the PED crew is tasked to exploit; the scope of this research will focus exclusively on Full Motion Video (FMV) analysis-based production, that is, products based on the exploitation and analysis of Full Motion Video sensors. Air Force ISR, with capabilities such as satellites and unmanned aerial vehicles, is a mission set ever more integral to global operational success. The President's 2012 Defense Strategic Guidance directs the US military to begin transitioning from current operations to future challenge preparation; as that transition into what will likely be a highly volatile, unpredictable future occurs, Air Force ISR will be fundamental to ensuring U.S. and Allied force freedom of action (U.S. Department of the Air Force 2013a). Further complicating such a transition is the reality of the currently austere fiscal environment. Balancing intelligence and remote sensing capabilities across multiple Areas of Responsibility (AOR) with limited resources will demand
the development of more efficient ISR mission management and execution processes. Intelligence organizations do recognize the need to continually improve their process performance, and the analytical arm of ISR Operations, the DCGS, is no exception. Having a significant role in the maximization of ISR capabilities, the DCGS Enterprise should take every step to ensure system-wide PED efficiency, an objective that requires an established mission assessment framework. Assessment is an essential tool that allows mission managers to monitor the performance of tactical actions and to determine whether the desired effects are created in order to support goal achievement. Assessment entails two distinct tasks: continuously monitoring the situation and the progress of the operations, and evaluating the operation against Measures of Effectiveness (MOEs) and Measures of Performance (MOPs) to determine progress relative to the mission, objectives, and end states (U.S. Joint Chiefs of Staff 2011b, III-45). All assessments should answer two questions: Are things being done right? Are the right things being done? Those two questions are perhaps best answered by the MOP and MOE assessment criteria. It is by monitoring available information and using MOP and MOE as assessment tools during planning, preparation, and mission execution that commanders and staffs determine progress toward completing tasks, achieving objectives and attaining the military end state (U.S. Joint Chiefs of Staff 2011b, D-10). It is accepted that effectiveness is a measure associated with the problem domain (what is to be achieved) and that performance measures are associated with the solution domain (how is the problem to be solved) (Smith and Clark 2006, N-2). In other words, MOEs help determine whether a task is achieving its intended results (whether the task is the right thing to do) and MOPs help determine whether a task is completed properly (whether the task is done right). More specifically, MOEs concern how well a system tracks against its purpose (Sproles 1997, 17).
MOEs are criteria used to assess changes in system behavior, capability, or operational environment and are tied to measuring the attainment of an end state, achievement of an objective, or creation of an effect. MOEs are based on effects assessment, the process of determining whether an action, program, or campaign is productive and can achieve the desired results in operations (Center for Army Lessons Learned 2010, 6). Should the desired result be achieved, then the outcome may be deemed effective; however, simply achieving the result may not measure the degree of effectiveness. For example, within the context of computer programming, an effective program is one which produces the correct outcome, based on various constraints. Given that the desired result was achieved, it is generally accepted that a program which is faster and uses fewer resources is more effective (Smith and Clark 2006, N-2).
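To make the programming analogy concrete, consider a minimal sketch in which two implementations both produce the correct outcome (both are "effective"), but one exhibits a greater degree of effectiveness by running faster. The function names and workload are invented for illustration:

```python
import timeit

# Two implementations that both produce the correct outcome; by the
# logic above, the faster, lighter one is the more effective program.

def sum_of_squares_loop(n: int) -> int:
    total = 0
    for i in range(n):
        total += i * i
    return total

def sum_of_squares_formula(n: int) -> int:
    # Closed form for 0^2 + 1^2 + ... + (n-1)^2
    m = n - 1
    return m * (m + 1) * (2 * m + 1) // 6

n = 100_000
# Both achieve the desired result: the correctness criterion is met.
assert sum_of_squares_loop(n) == sum_of_squares_formula(n)

loop_time = timeit.timeit(lambda: sum_of_squares_loop(n), number=20)
formula_time = timeit.timeit(lambda: sum_of_squares_formula(n), number=20)
print(f"loop:    {loop_time:.4f}s")
print(f"formula: {formula_time:.4f}s  (greater degree of effectiveness)")
```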
MOPs are a discrete assessment tool based on task assessment. MOPs are criteria used to assess actions that are tied to measuring task accomplishment and essentially confirm or deny that a task was performed properly. Agencies, organizations, and military units will usually perform a task assessment, based on established MOPs, to ensure work is performed to at least a minimum standard (Center for Army Lessons Learned 2010, 5). When discussing product effectiveness, it is important to distinguish between how effectively the product was produced and how effective the produced product was in achieving its objective. Task assessment does have an effect on effects assessment, as any task not performed to standard will have a negative impact on effectiveness (Center for Army Lessons Learned 2010, 5). Thus, both effectiveness and, consequently, performance are significant in this study. It is the objective of this research to demonstrate that analyst performance can be assessed using MOPs and that product effectiveness can be measured, at least in part, by the identification of repeatable MOEs. To provide accurate assessments of performance and effectiveness, any identified MOPs and MOEs used as assessment standards must be measurable, relevant and responsive. Whether conducted explicitly or implicitly, measurement is the mechanism for extracting information from empirical observation (Bullock 2006, 17). Assessment measures must have qualitative or quantitative standards, or metrics, against which they can be measured. To effectively measure change, whether positive or negative, a baseline measurement should be established prior to execution in order to facilitate accurate assessment (U.S. Joint Chiefs of Staff 2011b, D-10). MOEs and MOPs measure a distinct aspect of an operation and thus require distinct and discrete qualitative or quantitative metrics against which to be measured (Center for Army Lessons Learned 2010, 8). Determining and applying qualitative and quantitative metrics is doubly difficult for intelligence support operations like FMV analysis-based production, because few traditional warfighting metrics apply (Darilek et al. 2001, 101). Determining effectiveness is not as straightforward as assessing the number of tanks destroyed by an offensive and weighing that against the objective number of tanks destroyed. While metrics can be defined, they are generally more one-sided. The focus is centered more on the ability of the analyst to operate in an operational environment than on how well an analyst operates relative to an enemy force (Darilek et al. 2001, 101). Obtaining assessment insight is dependent on having feasible implementation methods, as well as reliable models, for approaching the task of measurement (Sink 1991, 25). Each MOP and MOE discussed in this paper will be assigned either a qualitative or a quantitative metric, and each metric will be discrete. In addition to being measurable, MOPs and MOEs must also be relevant to tactical, operational, or strategic objectives as applicable, as well as to the operational environment and the desired end state (Center for Army Lessons Learned 2010, 8). The relevancy criterion
prevents the collection and analysis of information that is of no value and helps ensure efficiency by eliminating redundant and futile efforts (U.S. Joint Chiefs of Staff 2011b, D-9). The relevancy of each MOP and MOE presented in this paper will be demonstrated through a discussion of its relationship to either the performance or effectiveness of FMV analysis-based production. Lastly, MOP and MOE must be responsive. Both MOP and MOE must identify changes quickly and accurately enough to allow commanders and staffs to respond effectively (Center for Army Lessons Learned 2010, 9). When considering the time required for an action to produce a desired result within the operational environment, the designated MOP and MOE should respond accordingly; the result of action implementation should be capable of being measured in a timely manner (U.S. Joint Chiefs of Staff 2011b, D-10). DCGS FMV analysis-based production occurs in near real-time. As such, any proposed production MOP or MOE should be readily derived from available data or mission observation. MOEs and MOPs are simply assessment criteria; in and of themselves they do not represent assessment. MOEs and MOPs require relevant information in the form of indicators for evaluation (U.S. Joint Chiefs of Staff 2011b, D-3). In the context of assessments, an indicator is an item of information that provides insight into an MOP or MOE. A single indicator may inform multiple MOPs and MOEs, and a single MOP or MOE may be made up of several observable and collectable indicators (U.S. Joint Chiefs of Staff 2011b, D-4). For MOPs, indicators should consider the capability required to perform assigned tasks. Indicators used to perform MOE analysis should inform changes to the operational environment. Indicators provide evidence that a certain condition exists or certain results have or have not been attained, and enable decision makers to direct changes to ongoing operations to ensure the mission remains focused on the end state (U.S. Joint Chiefs of Staff 2011b, D-4). Indicators inform commanders and staff on the current status of operational elements, which may be used to predict performance and efficiency.
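The many-to-many relationship between indicators and measures described above can be sketched as a simple mapping. All indicator and measure names below are hypothetical placeholders, chosen to foreshadow the tailoring example that follows:

```python
# Hypothetical sketch: a single indicator may inform multiple measures,
# and a single measure may draw on several indicators.

indicators = {
    "requested_customization_level": "high",  # informs the timeliness MOE
    "production_minutes": 42,                 # informs timeliness MOE and task MOP
    "qc_errors_found": 0,                     # informs the accuracy MOE
}

measures = {
    "MOE: timeliness": ["requested_customization_level", "production_minutes"],
    "MOE: accuracy": ["qc_errors_found"],
    "MOP: production task completed": ["production_minutes"],
}

# Gather the evidence each measure needs for evaluation.
for measure, needed in measures.items():
    evidence = {name: indicators[name] for name in needed}
    print(f"{measure} <- {evidence}")
```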
Air Force production centers produce and provide those intelligence products based on validated/approved customer requirements (U.S. Department of the Air Force 2012b, 4). Tailoring, adapting a product to ensure it meets the customer's format and content requests, helps to ensure the relevancy of the intelligence product. For example, if a customer requests a highly tailored product versus a standard baseline product, such a request may indicate an increased production time, potentially impacting the timeliness, and thus the effectiveness, of the product. In this way, the level of customization typically requested by customers for a certain product type would serve as an indicator for the timeliness MOE, which is further discussed in the MOE subsection of Section IV, Analysis and Findings. Well-devised MOPs and MOEs, supported by effective information management, help commanders and staffs understand the linkage between specific tasks, the desired effects, and the commander's objectives and end state (U.S. Joint Chiefs of Staff 2011b, D-10). Among the most important analytical tasks in the intelligence mission set is effectiveness and performance analysis, and yet DCGS FMV operations have no enterprise-wide framework for mission assessment in which clear production MOPs or MOEs are defined. The highly dynamic nature of intelligence analysis, where even basic tasks are further complicated by analyst discretion and technique, makes the systematic evaluation of DCGS product effectiveness highly problematic, but not impossible. The question then arises: what repeatable data can be identified, and how can the FMV analysis-based production mission be assessed in order to derive MOPs and MOEs? This paper proposes an assessment framework specifically designed to enable the quantification and qualification of DCGS analyst production performance and product effectiveness. The demands
of the modern operational environment require that intelligence products disseminated to the customer be anticipatory, timely, accurate, usable, complete, relevant, objective, and available (U.S. Joint Chiefs of Staff 2013, III-33). Known as the Attributes of Intelligence Excellence, it is from those intelligence product requirements that performance and product effectiveness variables will be derived. Given their importance to industries, organizations and institutions, the topics of MOE, MOP and operational assessment are very well researched, and sound methodologies for deriving MOE and MOP exist even for most major military operation types. From the global strategic ISR mission down, FMV analysis-based production is micro-tactical in nature. As discussed in Section II, Literature Review, though the provision of products is a basic intelligence function, analytic methodologies for product effectiveness analysis have not been refined for such a niche subset of the intelligence mission. Expanded upon in Section III, Methodology, this paper presents a theoretical process for identifying both MOP and MOE for DCGS FMV analysis-based production. Following an outline of the assessment methodology, Section IV, Analysis and Findings, will demonstrate that repeatable, measurable DCGS FMV analysis-based production measures can be identified.
SECTION II

LITERATURE REVIEW

No shortage of literature exists concerning the importance of MOEs and MOPs and how MOEs and MOPs may be assessed, especially with respect to military operations. While assessment is required at every level of military operations (strategic, operational and tactical), a majority of the relevant literature available focuses either on operational or combat tactical assessments. FMV analysis-based production, an operational function of the DCGS Enterprise, is not a combat but rather a support function, and when compared to operational functions such as Village Stability Operations, may be better classified as micro-tactical (when assessed independent of the operation being supported). Thus, for such a small-scale, niche function, no predominant school of thought exists on the matter; i.e., no prior research has attempted to answer this research question and no theoretical framework exists which can be applied wholly to the problem set. At least, no prior research exists on the research topic that is publicly available. Due to the highly sensitive nature of DCGS Operations, any amount of relevant literature may exist on classified networks; however, only publicly available publications will be utilized in this research. Many open source publications exist which are critical to understanding how FMV analysis-based production MOP and MOE may be derived. Most are not academic or peer-reviewed research, but rather are military doctrinal publications. Prior existing research concerning the aggregately developed theoretical framework of this study also includes military doctrinal publications, but more so includes research-based case studies. Publications critical to understanding the nature of the problem set, considered background information, are detailed in the sections DCGS Operations and Variable Identification and Control. To assess production as it is conducted during DCGS Operations, the operating
environment (OE) must be understood to the greatest degree possible. Given the volume of possible variables and the challenges associated with illuminating their relationships, an understanding of the operational variables affecting analyst production, performance and product effectiveness is similarly critical. Publications relating to the theoretical framework for deriving FMV analysis-based MOPs and MOEs are discussed in the section Measuring Performance and Effectiveness. The uniqueness of the problem set precludes the application of any pre-proposed theoretical framework in its entirety. Rather, the problem set in question required the development of a new model for determining MOPs and MOEs, one which examines the relationship between the two assessment tools as well as the effect of some critically impacting variables on each.

DCGS Operations

One prerequisite for identifying DCGS mission assessment metrics and deriving FMV analysis-based production MOPs and MOEs is a thorough understanding of DCGS mission operations. DCGS mission operations for this research focus on the production aspect of operations and include production training, evaluation and execution; whereas executing production may most predominantly determine product effectiveness, how well the production analyst was trained and how well they performed during their evaluation prove useful as MOP indicators. The four most pertinent publications to DCGS training are Air Force Instruction (AFI) 14-202 Volume 1 (2015) concerning intelligence training, the Air Force ISR Agency (AFISRA) supplement to AFI 14-202 Volume 1 (2013) specifying ISR intelligence training, AFISRA Instruction 14-153 Volume 1 (2013) concerning AF DCGS intelligence training and the 480th ISR Wing supplement to AFISRA Instruction 14-153 Volume 1 (2013) which further refines the
document for DCGS units, all of which are subordinate to the 480th ISR Wing. The stated objective of AFI 14-202 Volume 1 (2015) and its AFISRA supplement (2013) is "to develop and maintain a high state of readiness… by ensuring all personnel performing intelligence duties attain and maintain the qualifications and currencies needed to support their unit's mission effectively…" (U.S. Department of the Air Force 2015b, 4). The publications, having stated the impact of training on mission effectiveness, discuss types of training, the flow of training, individual training and currency requirements, mission readiness levels, as well as the grading rubric for training tasks, all data critical to identifying analyst/production performance metrics and effectiveness indicators. The same holds true for AFISRA Instruction 14-153 Volume 1, Air Force DCGS Intelligence Training (2013) and its 480th ISR Wing Supplement (2013). Naturally more applicable to DCGS training than the aforementioned AFI and AFISRA supplements, these publications not only concur with the previous publications, but additionally provide guidance for the assessment of training programs and outline mission training elements, more so in the 480th ISR Wing Supplement (Air Force ISR Agency 2013b, 7-31). The Training Task List (TTL) defines training requirements for skill set development as they support the operational duties necessary for each position. Those duties, for some positions, include the creation, quality control and dissemination of data, information and textual/graphic intelligence products (Air Force ISR Agency 2013b, 17; 28-29). These training publications are detailed to the point of specifying exact passing grades for training tasks, a quantitative standard of great interest to MOP research. AFI 14-202 Volume 2 (2015) and the 480th ISR Wing Supplement to the AFISRA Supplement to AFI 14-202 Volume 2 (2014) are the Air Force and DCGS doctrinal publications covering the Intelligence unit Commander's Standardization/Evaluation Program. The purpose
of these publications is "to provide commanders a tool to validate mission readiness and the effectiveness of intelligence personnel, including documentation of individual member qualifications and capabilities specific to their duty position," (U.S. Department of the Air Force 2014, 7). The publications, having stated the importance of standardization and evaluation in measuring personnel and mission effectiveness, discuss how the Standardization/Evaluation Program evaluates intelligence personnel on their knowledge of the duty position and applicable procedures, wherein the task (mission performance) phase includes evaluating a demonstrated task, like that of production (U.S. Department of the Air Force 2014, 38). Because the evaluation profile used to fulfill the task phase requisites must incorporate all appropriate requirements and allow accurate measurement of the proficiency of the evaluatee, as with training, the quantitative data collected during evaluations must be considered when defining MOP indicators for FMV analysis-based production (U.S. Department of the Air Force 2014, 44). In addition to serving as a baseline for DCGS mission assessment and providing insight into performance- and effectiveness-affecting variables as discussed in their respective sections below, AFI 14-153 Volume 3 (2013), "AF DCGS Operations Procedures," and its 480th ISR Supplement (2013) define the DCGS mission and provide guidance and requirements for the execution of DCGS operations. Aligning well with this research is the specifically stated DCGS program objective to "enhance Air Force DCGS mission effectiveness," (U.S. Department of the Air Force 2014, 6). Moreover, when researching elements that may affect analyst performance during production efforts, knowing which analysts conduct the production is imperative. AFI 14-153 Volume 3 (2013) breaks down mission execution roles and responsibilities by mission crew position. Of most interest are the positions of the Geospatial Analyst (GA), responsible for the creation of FMV analysis-based products, and that of the Multi-Source Analyst (MSA),
responsible for the creation of multi-intelligence report products (U.S. Department of the Air Force 2014, 13-18). Because final products are created and disseminated during the mission execution phase of DCGS Operations, the DCGS Operations publications above are perhaps of more research value than either the Training or the Standardization/Evaluation Air Force publications with respect to contextualizing DCGS operations for assessment. Publications concerning the DCGS are limited, especially those published academically and not as military doctrine. Two peripheral sources concerning the DCGS are Brown's 2009 "Operating the Distributed Common Ground System" and Langley's 2012 "Occupational Burnout and Retention of Distributed Common Ground System (DCGS) Intelligence Personnel." Brown's article focuses on the human factors of the AN/GSQ SENTINEL (DCGS) Weapons System and how the human factor influences net-centric operations (Brown 2009, 2). The tone of the article is that of a story, full of details that experience may corroborate but that the referenced sources did not. Beyond this thesis' scope and questionable in supporting its details, the article will not be incorporated in this research beyond mentioning that nothing in the article contradicted any of the other referenced sources. Though much more professional in tone and with all points very well supported, Langley's 2012 report similarly fell beyond the scope of this research but, importantly, also did not contradict any information presented in related research. In formulating a background knowledge of DCGS operations, military doctrinal publications must exclusively be relied upon.

Mission Assessment

Before MOE and MOP can be valuable to mission assessment, the desired end state must first be defined and mission success criteria developed. Mission success criteria describe the standards for determining mission accomplishment and should be applied to all
phases of all operations, including FMV analysis-based production (U.S. Joint Chiefs of Staff 2011b, IV-10). The initial set of success criteria determined during mission analysis, usually in the mission planning phase, becomes the basis for assessment. If the mission is unambiguous and limited in time and scope, mission success criteria could be readily identifiable and linked directly to the mission statement (U.S. Joint Chiefs of Staff 2011b, IV-10). The Air Force DCGS mission statement states that the "Air Force DCGS provides actionable, multi-discipline intelligence derived from multiple ISR platforms" (U.S. Air Force ISR Agency 2013d, 5). That the mission statement does not detail which attributes comprise actionable intelligence complicates the mission success criteria and requires mission analysis to indicate several potential success criteria, analyzed via MOE and MOP, for the desired effect and task respectively (U.S. Joint Chiefs of Staff 2011b, IV-10). Research into mission assessment for intelligence production, specifically production derived from FMV, is significant not only because requirements and precedents exist for intelligence product metric collection and assessment, but also because of the impact such products have on the operating environment and the extent to which those products are poised to evolve over the next few decades. Requirements, precedents and assessment guidance are outlined across several Air Force and Joint doctrinal and exploratory military publications. The importance of ISR, and of the DCGS, as well as a look at future operations, is explored in the Air Force publication Air Force ISR 2023 (2013) and in a corroborating journal article, A Culminating Point for Air Force Intelligence Surveillance and Reconnaissance, published in the Air & Space Power Journal by Kimminau (2012). In sum, the literature reviewed in this section provides an overview of existing military mission assessment and lays the groundwork for the increasing importance of intelligence production task (MOP) and effect (MOE) assessment.
The military publications reviewed which outline requirements or precedents for intelligence product assessment can be divided into Air Force publications and Joint publications. Air Force publications include AFI 14-201, Intelligence Production and Applications, the aforementioned AFI 14-153 Volume 3, Air Force Distributed Common Ground System (AF DCGS) Operations Procedures, along with its 480th ISR Wing Supplement (2014), and the 480th ISR Wing Supplement to AFISRA Instruction 14-121 (2013), Geospatial Intelligence Exploitation and Dissemination Quality Control Program Management. AFI 14-201 applies to all Air Force organizations from whom intelligence production is requested, a category into which the DCGS, as a Production Center, falls; however, the AFI is directed more towards those intelligence agencies producing "stock products," which the DCGS usually does not produce (U.S. Department of the Air Force 2002, 20). The AFI does, however, reveal an intelligence production precedent in requiring Headquarters (HQ), Director of ISR to conduct annual reviews of production processes and metrics and to measure customer (requirement) satisfaction (U.S. Department of the Air Force 2002, 6-20). AFI 14-153 Volume 3 (2013) and its 480th ISR Wing Supplement (2013) deal directly with AF DCGS Operations and include instructions for mission assessment, most usefully outlining assessment roles and responsibilities, albeit vaguely. While every analyst is responsible for conducting mission management activities during mission execution, the party responsible for documenting mission highlights and deficiencies, keeping mission statistics, performing trend analysis and quality control, and assessing overall mission effectiveness is the Department of Operations Mission Management (U.S. Department of the Air Force 2014, 39). Mission management efforts are further discussed in AFISRA Instruction 14-121 (2013), which details the quality control and documentation process for DCGS products, a process which must include the identification and
analysis of trends by types of errors, flights or work centers (U.S. Air Force ISR Agency 2013e, 7). All three Air Force publications set a standard requiring intelligence producers to conduct mission analysis; none of the three, however, mentions MOPs, MOEs, possible metrics, or specifically suggests how such mission analysis should be conducted. While the Air Force requires mission analysis towards greater effectiveness, the responsibility appears to fall to individual organizations and units to conduct mission assessment as they see fit, in accordance with the broad guidance set forth by Air Force doctrine. How DCGS FMV units conduct mission analysis and how they may do so better is central to this research; thus, AFI 14-153 Volume 3 is an invaluable publication that sets the baseline for existing DCGS mission assessment guidance. Joint Publications (JP) 3-0 (2011) and 5-0 (2011), both concerning Joint (multi-service) operations, discuss mission assessment as well as how MOPs and MOEs may be used as assessment criteria, albeit with an exclusive focus on the assessment of combat operations. JP 5-0, in its direction of assessment processes and measures, defines evaluation, assessment, MOE and MOP, sets the criteria for assessment, and discusses the value of MOPs and MOEs at every level of operations (U.S. Joint Chiefs of Staff 2011b, xvii-D-10). JP 3-0, though less in-depth in addressing mission assessment than JP 5-0, similarly defines MOE and MOP, lending continuity between the documents, and further discusses assessment at the strategic, operational and tactical levels. Of most concern to this research is assessment at the tactical level, the assessment level about which JP 3-0 and JP 5-0 diverge. In JP 5-0, tactical-level assessment can be conducted using MOPs to evaluate task accomplishment, a discussion which reflects the impact of assessment on operations and, more so, makes the case for comprehensive, integrated assessment plans (U.S. Joint Chiefs of Staff 2011b, D-6). In JP 3-0, tactical-level assessment is described as using both MOPs and MOEs to evaluate task accomplishment, where
the rest of the paragraph aligns with JP 5-0 almost verbatim (U.S. Joint Chiefs of Staff 2011a, II-10). Both JPs define MOE and MOP in exactly the same way, where MOEs are a tool with which to conduct effects assessment and MOPs are a tool with which to conduct task assessment; JP 3-0, however, later suggests that MOE and MOP are both utilized for task assessment, leaving out and contradicting its earlier guidance about MOE being a discrete tool concerning effects assessment (U.S. Joint Chiefs of Staff 2011a, GL-13; U.S. Joint Chiefs of Staff 2011b, III-45). The MOP-for-task and MOE-for-effect assessment is corroborated by every other relevant source, and so JP 3-0's contradiction will be considered an oversight. In both Air Force and Joint Publications, a strong emphasis is placed on the conduct of mission assessment, the latter going so far as to incorporate MOPs and MOEs specifically. The importance of mission assessment is apparent in published literature, not only by military requirements, precedent and guidance, but also because of the impact of intelligence production on operations and the importance of assessment in quickly evolving mission sets, like that of ISR. Sentinel Force 2000+ (1997), a quick reference document for the intelligence career field, defines the Air Force Intelligence vision, identifies intelligence consumers and highlights the importance of intelligence in Air Force operations: with respect to intelligence, "this role is designed [to] build the foundation for information superiority and mission success by delivering on-time, tailored intelligence," (U.S. Department of the Air Force 1997, 2). The publication additionally designates seven Goal Focus Areas of Air Force Intelligence designed to assure U.S. information dominance. One of the Focus Areas is Products and Services; the corresponding goal is to "focus on customer needs…modernize and continuously improve our products and services to see, shape and dominate the operating environment," (U.S. Department
of the Air Force 1997, 3). Intelligence is critical to information superiority and mission success. Production is a critical component of intelligence, and an Air Force goal for almost two decades has been product improvement. The objective for improvement exists, and yet, how can intelligence product improvement be assessed and measured? Air Force ISR 2023: Delivering Decision Advantage (2013) accounts for the importance of intelligence, specifically ISR, to military operations and discusses the projected evolution of ISR over the next decade. ISR provides commanders with the knowledge they need to prevent surprise, make decisions, command forces and employ weapons; it enables them to maintain decision advantage, protect friendly forces, hold targets and apply deliberate, discriminate kinetic and non-kinetic combat power (U.S. Department of the Air Force 2013a, 6). The challenge for ISR is to maintain those competencies required to provide commanders with the knowledge they need while reconstituting and refocusing ISR toward future operations, all under the constraints of an austere fiscal environment (U.S. Department of the Air Force 2013a, 2-4). While addressing ISR capabilities, the normalization of operations and the strengthening of collaborative partnerships, the report focuses on the revolution of analysis and exploitation, a topic well within the scope of this research. "The most important and challenging part of our analysis and exploitation revolution is the need to shift to a new model of intelligence analysis and production," (U.S. Department of the Air Force 2013a, 13). With an actively evolving mission set where intelligence production is on the brink of great change, any requirement for tactical mission assessment becomes all the more important; otherwise, how can positive or negative changes in production or products be assessed relative to the baseline? A mission subset as critical as ISR production, especially when poised to rapidly evolve, requires established, repeatable assessment measures.
Closely paralleling Air Force ISR 2023 (2013) with respect to the future of the ISR Enterprise is Kimminau's 2012 Air & Space Power Journal article, A Culminating Point for Air Force Intelligence Surveillance and Reconnaissance. Kimminau (2012) concurs with the imperative for ISR: "success in war depends on superior information. ISR underpins every mission that the DOD executes," (Kimminau 2012, 117). The article further discusses the future of ISR through 2030. While Kimminau focuses less on an analytical and production revolution, he does elaborate on several of the projected evolution areas that overlap with Air Force ISR 2023 (2013). Though not as useful to this research without a discussion of analysis and production, Kimminau's article serves best in corroborating Air Force ISR 2023 (2013) in its view of ISR as dynamic, revolutionary, and ever more important.

Variable Identification and Control

Perhaps the most challenging aspect of researching MOPs and MOEs in non-combat micro-tactical mission operations is the identification and control of variables. Production is just one element of DCGS operations, albeit an extremely interconnected element dependent on human analytical training and preferences, the quality of received data, management style, the status of communications, mission parameters and potentially several hundred other variables. One significant research challenge is in determining which variables to hold constant and which variables directly affect the task of production (MOPs) and/or product effectiveness (MOEs). As discussed in the DCGS Operations section above, several Air Force doctrinal publications provide guidance on DCGS training, evaluation, mission execution and mission management. Those same publications provide insight into which variables may affect each respective aspect of mission operations. Another publication, AFI 14-132, Geospatial-Intelligence (2012), offers insight into additional mission variables, not through references to
DCGS operations, but rather to the field of Geospatial-Intelligence (GEOINT) itself. GEOINT encompasses all imagery, Imagery Intelligence (IMINT) and geospatial information, a category into which FMV exploitation and production falls (U.S. Department of the Air Force 2012a, 3). In fact, a prerequisite to qualify as a GA, the position which creates FMV analysis-based products, is to be in the Geospatial Intelligence Analyst career field, Air Force Specialty Code (AFSC) 1N1X1 (Air Force ISR Agency 2013b, 24). AFI 14-132 (2012) mentions the DCGS as a key exploiter and producer of Air Force GEOINT, wherein the DCGS as a GEOINT production center must report up the chain of command an analysis of the quality and timeliness of GEOINT products, revealing timeliness and quality to be significant variables in GEOINT product effectiveness (U.S. Department of the Air Force 2012a, 14). The AFI additionally discusses GEOINT baseline and core functions, revealing the types of operations GEOINT supports and the types of GEOINT exploited for that support, variables that must be considered in context. For example, would the timeliness variable have the same impact on performance and effectiveness across all supported mission types, or do some mission types necessarily value timeliness more or less than others? The Raytheon Company's 2001 Establishing System Measures of Effectiveness by John Green corroborates the assessment that timeliness factors into product effectiveness. In addition to addressing the imperative for performance analysis and developing definitions for system MOPs and MOEs, the report most importantly weighs in on timeliness as an assessment variable. "Time is often used… [and] is attractive as an effectiveness measure," (Green 2001, 4). Green (2001) goes on to say that "time is the independent variable and outcomes of processes occur with respect to time," (Green 2001, 4). Similarly, in this research, the only dependent variable is effectiveness, where all other uncontrolled variables, to include time, are considered
independent. Green (2001) not only nods to the common practice of using time to measure effectiveness, but highlights the logic behind time as an independent rather than dependent variable. The most comprehensive categorization of intelligence mission variables can be found in a 2003 Central Intelligence Agency publication by Rob Johnson, Developing a Taxonomy of Intelligence Analysis Variables. Johnson (2003) provides a classification of 136 variables potentially affecting intelligence analysis, broken down into four exclusive categories: systemic variables, systematic variables, idiosyncratic variables and communicative variables. The list does not suggest that any of the variables affect human performance or product effectiveness; rather, it serves identification purposes so that experimental methods may isolate and control some variables in order to further isolate those variables affecting analysis (Johnson 2003, 3-4). FMV products are dependent on FMV analysis, hence FMV analysis-based production. The focus of this research is on the effectiveness of products and also on analyst performance in producing said products; thus, many of those variables identified as potentially affecting analysis will also potentially affect production. The list focuses more on cognitive and situational-type variables such as analytic methodology, decision strategies, collaboration, etc. rather than variables like timeliness, quality, precision, etc. which are the focus of this research; however, the idea of controlling some variables in order to assess what impact others may have on the dependent variable, here effectiveness, is a goal of this research. RAND's 1995 publication, A Method for Measuring the Value of Scout/Reconnaissance, by Veit and Callero explores the topic of variable identification and control, the most relevant piece of which concerned variable definitions. The case study well defined two variables of interest to FMV analysis-based production: timeliness and precision. Timeliness was defined as the "time between information collection and its availability for use by the situation assessor" and precision was defined as "the level of detail about enemy weapons systems reported," (Veit and Callero 1995, xv).
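Applying the Veit and Callero timeliness definition is mechanically simple: it is the interval between collection and availability. A minimal sketch, with fabricated timestamps used purely for illustration:

```python
from datetime import datetime, timedelta

# Timeliness per Veit and Callero (1995): the time between information
# collection and its availability for use. Timestamps are invented.

collected = datetime(2015, 7, 5, 14, 0, 0)    # moment of FMV collection
available = datetime(2015, 7, 5, 14, 7, 30)   # product disseminated to user

timeliness: timedelta = available - collected
print(f"timeliness = {timeliness.total_seconds() / 60:.1f} minutes")
```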
The timeliness definition is applicable verbatim; the precision definition is geared exclusively towards scout/reconnaissance but can be easily amended to fit analysis and production precision. Most interestingly, Veit and Callero (1995) introduced the concept of variable factor levels. To better understand the relationship between variables, several factor levels can be assigned to a variable in order to understand the degree of impact. For example, the precision variable can be further specified by categorically establishing three different levels of detail. Factor levels are further discussed in Analysis and Findings.
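One possible encoding of factor levels follows, assuming (as the text suggests for precision) three categorical levels of detail. The level names and the impact mapping are hypothetical, not drawn from the Veit and Callero study:

```python
from enum import Enum

# Hypothetical factor levels for the precision variable: three
# categorical levels of detail, each with an assumed degree of impact.

class PrecisionLevel(Enum):
    GENERAL = 1      # e.g., activity observed, no detail
    SPECIFIC = 2     # e.g., vehicle type identified
    EXACTING = 3     # e.g., individual weapon system identified

def degree_of_impact(level: PrecisionLevel) -> str:
    """Map a factor level to its assumed contribution to effectiveness."""
    return {1: "low", 2: "moderate", 3: "high"}[level.value]

print(degree_of_impact(PrecisionLevel.SPECIFIC))  # -> moderate
```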
While useful for its variable definitions and factor level methodology, Veit and Callero (1995) was found to be more valuable in its proposed methodology for deriving MOEs. The most significant publication concerning potential variables for the FMV production mission subset was JP 2-0 (2013), Joint Intelligence. Though the variables discussed were not strictly identified as variables or put directly into the context of mission assessment, JP 2-0 introduced the concept of Attributes of Intelligence Excellence, a list of variables thought to be critical in the production of effective products. The Attributes of Intelligence Excellence are anticipatory, timely, accurate, usable, complete, relevant, objective and available (U.S. Joint Chiefs of Staff 2013, III-33). JP 2-0 additionally acknowledged that the effectiveness of intelligence operations is assessed by devising metrics and indicators associated with those attributes (U.S. Joint Chiefs of Staff 2013, I-22). Over the course of this research, along with precision, each attribute listed will be assessed as a potential MOP, MOE or MOP/MOE indicator.

Measuring Performance and Effectiveness

"There is no more critical juncture in implementing [a] successful assessment…than the moment of methods selection," (Johnson et al. 1993, 153). Joint Publication 3-0 (2011), Joint Operations, and Joint Publication 5-0 (2011), Joint Operations Planning, both discuss the importance of conducting assessment at every phase of military operations, both at the task assessment level (MOPs) and the effects assessment level (MOEs). Much attention has been paid to defining MOP and MOE for Combat Operations, Stability Operations, Reconnaissance and even Intelligence Simulations; however, publicly available data assessing the niche operational field of FMV analysis-based production is lacking. No "one size fits all" methodology exists for determining MOPs and MOEs, nor does any existing methodology adequately apply to the FMV analysis-based production assessment problem set. Several publications do detail MOP and/or MOE derivation methodologies, portions of which relate to this problem set, and aggregately those publications enable the foundation of a new theoretical framework. The Center for Army Lessons Learned 2010 Handbook 10-41, Assessments and Measures of Effectiveness in Stability Operations, outlines the Tactics, Techniques and Procedures (TTPs) for conducting mission assessment and for determining MOEs for stability operations. The handbook details the necessity for assessment, clearly distinguishes between task-based assessment (doing things right) and effect-based assessment (doing the right thing) and delineates when MOPs and MOEs are appropriately applied. The handbook provides insight into the Army's Stability Operations Assessment Process, including how task- and effect-based assessment support deficiency analysis. "The process of assessment and measuring effectiveness of military operations, capacity building programs, and other actions is crucial in identifying and implementing the necessary adjustments to meet intermediate and long term goals in stability operations," (Center for Army Lessons Learned 2010, 1). All such guidance is highly relevant to
the assessment process herein proposed for FMV analysis-based production, even though Stability Operations-specific assessment is beyond the scope of this research. Greitzer's 2005 Methodology, Metrics and Measures for Testing and Evaluation of Intelligence Analysis Tools is a case study involving a practical framework for assessing intelligence analysis tools based on almost exclusively quantitative assessment. While the case study's framework did not align with this research's mixed-method theoretical assessment approach, the case study did identify several potentially replicable performance measures, both qualitative (customer satisfaction, etc.) and quantitative (accuracy, etc.), all of which pertain specifically to intelligence analysis. Such performance measures are pertinent to the discussion of MOPs across any intelligence function and are especially pertinent to the analysis of FMV analysis-based production. The case study also defined and distinguished between measurement and metrics, definitions which were adopted for the FMV analysis-based production assessment. The highly applicable discussion of quantitative and qualitative performance measures, as well as of metrics, heavily influenced the theoretical framework of this research. Bullock's 2006 Theory of Effectiveness Measurement dissertation appears, by its title, applicable to FMV analysis-based production effectiveness measurement, especially when defining effectiveness measurement as "the difference, or conceptual distance from a given system state to some reference system state (e.g. desired end-state);" however, the foundation of Bullock's effectiveness measurement was applied mathematics, with all relationships between variables explained as mathematically stated relational systems (Bullock 2006, iv-12). The Bullock (2006) study approached the problem of effectiveness measurement quantitatively, a method which may have applied to quantifiable FMV analysis-based production variables, but which did not provide an applicable framework for unquantifiable (qualitative and quasi-
quantitative) variables. Chapter Seven of Measures of Effectiveness for the Information-Age Army, "MOEs for Stability and Support Operations" (Darilek et al. 2001), is a case study revolving around MOEs for non-combat operations, a category applicable to intelligence support functions (when assessed independently of the operation they are supporting). The case study theoretically examined how Dominant Maneuver in Humanitarian Assistance might be assessed and how MOE might be derived. The method of theoretical examination aligned well with the proposed theoretical examination of FMV analysis-based production MOE, although the specifics of assessing Humanitarian Assistance were too disparate for the analytical framework to be duplicated. As with the ISR mission, Darilek et al. (2001) emphasized the changing nature of Assistance Operations, partially to highlight the necessity of assessment. Of all the reviewed publications, Darilek et al. (2001) had the methodology which most closely applied to the FMV analysis-based production mission set, most probably because it was the only theoretical case study to focus on non-combat operations. Evaluating the effectiveness of any intelligence analysis-related problem set presents several methodological challenges (Greitzer and Allwein 2005, 1). Because intelligence production cannot avoid the human elements of analyst preference and technique, assessing analyst performance (was the product created correctly) and product effectiveness (did the product meet its objective) is far more subjective than assessing traditional combat operations, such as whether a projectile hit and incapacitated its intended target or whether a route was successfully cleared. Coupled with the difficulties associated with valuing intelligence in a strictly quantitative, i.e., numerical fashion, the pre-proposed MOP/MOE assessment frameworks above cannot be wholly adopted. Rather, this unique problem set required the
development of a new model for determining MOP and MOE, one which examines the relationship between the two as well as the effect of all variables when examined in the FMV production operating environment. The model proposed does include effectiveness criteria (variable factors) adapted from A Method for Measuring the Value of Scout/Reconnaissance (Veit and Callero 1995) and a flow of assessment loosely based upon Chapter Seven of Measures of Effectiveness for the Information-Age Army, "MOEs for Stability and Support Operations" (Darilek et al. 2001). The overarching hypothesis in this research, that MOP and MOE are identifiable, measurable and can be utilized to evaluate FMV analysis-based products in the context of circumstances defined by mission variables, is dependent on several sub-hypotheses:

• Sub-Hypothesis 1: At least one MOE and one MOP can be identified for DCGS FMV analysis-based products
• Sub-Hypothesis 2: For each DCGS FMV production MOE and MOP, at least one indicator can be defined based on variable and objective analysis
• Sub-Hypothesis 3: A discrete, measurable metric can be assigned to each identified MOE and MOP

Figure 1 demonstrates the relationship between MOP, MOE, indicators and factor levels. The outline below illustrates the theoretical framework via which the above hypotheses will be tested; a sketch of these relationships follows the figure. Using a top-down approach, the research will:

1. Identify MOP and MOE: which independent variables impact product quality or effectiveness and meet MOE/MOP criteria (repeatable, measurable, relevant and responsive)?
2. Identify MOP and MOE factor levels: what factor levels are required to assess degree of contribution to operational effectiveness?
3. Identify performance and effectiveness indicators: what items of information provide insight into MOP and MOE analysis?
4. Identify MOP and MOE metrics: how will MOE and MOP be measured?
Figure 1: FMV Analysis-Based Production Mission Assessment
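As a minimal, illustrative sketch of the framework depicted in Figure 1, the Python structures below model a measure (MOE or MOP) together with its metric, indicators, and factor levels. Every class, field, and value name here is a hypothetical convenience for exposition, not an element of DCGS doctrine or tooling.

from dataclasses import dataclass, field

@dataclass
class FactorLevel:
    name: str         # e.g., "very timely", "timely", "not timely"
    description: str  # what qualifies a product for this level

@dataclass
class Measure:
    kind: str         # "MOE" or "MOP"
    name: str         # e.g., "timeliness"
    metric: str       # unit of measurement, e.g., minutes or percent
    indicators: list[str] = field(default_factory=list)
    factor_levels: list[FactorLevel] = field(default_factory=list)

    def meets_criteria(self, repeatable: bool, measurable: bool,
                       relevant: bool, responsive: bool) -> bool:
        # Step 1 of the outline: a candidate variable qualifies as an
        # MOE/MOP only if all four criteria hold.
        return all((repeatable, measurable, relevant, responsive))

# Hypothetical instantiation of the timeliness MOE examined later:
timeliness = Measure(
    kind="MOE",
    name="timeliness",
    metric="production time in minutes, tasking to dissemination",
    indicators=["product customization"],
    factor_levels=[
        FactorLevel("very timely", "product completed early"),
        FactorLevel("timely", "product completed on time"),
        FactorLevel("not timely", "product completed late"),
    ],
)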
SECTION III
METHODOLOGY

In support of determining FMV analysis-based production MOP and MOE, this case study's theoretical analysis will consider how multiple independent variables affect one dependent effectiveness variable, a precedent set by multiple organizational performance studies in which performance is defined as a dependent variable and the studies seek to determine which variables produce variations in, or effects on, performance (March and Sutton 1998, 698). In accordance with methodological precedent discussed in Case Studies and Theory Development in the Social Sciences by George and Bennett (2005), the research objective focuses not on outcomes of the dependent effectiveness variable, i.e., the impact of product effectiveness on ISR operations, but on the importance and impact of independent variables, i.e., on how certain predefined variables affect product effectiveness (George and Bennett 2005, 80-81). Due to environmental constraints, including the highly sensitive and classified nature of FMV production, this study will be limited to secondary rather than primary research and theoretical rather than practical analysis. Rather than testing the effects of mission variables via the observation of production analysts or surveying the effectiveness of specific FMV product types, the primary research objective in this case study is to explore and explain the relationship between multiple independent variables and a single dependent effectiveness variable. Independent variables of theoretical interest include timeliness, accuracy, and usability.

Research Design Overview

The research problem was addressed via both archival and analytic research strategies (Buckley, Buckley, and Chiang 1976, 15). The archival strategy involved quantitative and
qualitative research methods whereby research was conducted in the secondary domain exclusively through the collection and content analysis of publicly available data, primarily journal articles and military publications. The archival research design heavily considers George and Bennett's (2005) "structured, focused" research design method (George and Bennett 2005, 73-75). The research objective, to derive FMV analysis-based production MOE and MOP, can best be described as Plausibility Probe research, wherein the study explores an original theory and hypotheses to determine whether more comprehensive, operational, primary research and testing is warranted. Theory development, not operational testing, is the research design focus (George and Bennett 2005, 73-75). The lack of previously existing theories, hypotheses and research concerning niche ISR mission effectiveness necessitated the modification of the initial Plausibility Probe research in order to build upon and develop new operational variables and intelligence variable definitions (George and Bennett 2005, 70-71). The uniqueness of the problem set additionally required the use of internal logic. As previously discussed, the highly classified nature of intelligence operations coupled with the originality of the niche problem set made observational research, empirical research and source meta-analysis impossible. Because of information and personnel access restrictions, hypothesis testing was necessarily relegated to theoretical exploration, and the analytic research strategy was required to satisfactorily answer the research question. The analytic research method made use primarily of internal logic, both deductive and inductive in nature. Deductive research involved broad source collection and content analysis in order to determine all data relevant to the problem set and to draw original conclusions from multiple disparate data sets. For example, extensive research was conducted on both MOE and MOP determination methods and how the military conducts operational
assessment in order to determine method overlap and to develop refined assessment methodologies where overlap failed to occur. Inductive research involved isolating each relevant aspect of the problem set and collecting further source material relevant to that subtopic in order to comprehensively understand how each subtopic relates to the overarching problem set. For example, once the independent variables were identified, each was extensively researched in order to better define the variables within the context of the intelligence mission and to determine any known relationships to the dependent variable, here product effectiveness. By utilizing the analytic research strategy, the comparability of this research to previously existing studies will be limited, especially with respect to variable analysis. The research design is further outlined in the following sections: Collection Methodology, Specification of Operational Variables, and Assessment Methodology.

Collection Methodology

The research collection methodology for this thesis involves a two-pronged content analysis approach. Deriving a theory from a set of data and then testing it on the same data is thought by some to be an illegitimate research practice (George and Bennett 2005, 76). Thus, two sets of data will be reviewed: literature concerning FMV PED operations (theoretical testing data) and literature discussing MOE and MOP evaluation case studies (methodological theory data). Testing data will be collected almost exclusively from military publications, all obtained either through the U.S. Air Force e-Publishing document repository, the U.S. Army Electronic Publication repository, or the U.S. Department of Defense (DOD) Joint Electronic Library. Because the American Public University System Digital Library yielded few results related to this highly specialized research topic, theoretical data had to be collected from known academic defense journals, primarily the Air University Press Air and Space Power Journal digital
archives, known defense research corporations, primarily RAND Corporation online research publications, and the Defense Technical Information Center digital repository. Due to the extremely limited amount of publicly available and relevant data in either of the above categories, case selection bias as well as competing/contradictory analysis was naturally mitigated. Key to testing the stated hypothesis and sub-hypotheses is the identification and analysis of operational variables which may affect product effectiveness (MOEs); effectiveness indicators, i.e., those elements of the mission related to MOEs; MOPs; and quantitative/qualitative metrics for both MOEs and MOPs. These tasks were accomplished through an in-depth content analysis of the many Air Force doctrinal publications detailing Geospatial Intelligence (GEOINT) and DCGS operations (training, evaluation, and mission execution). For those operational variables identified, additional publications relevant to each variable were consulted for further examination of related content. The second content analysis approach includes a close review of previously published theses, dissertations, journal articles and technical reports concerning the derivation of MOEs and MOPs, with specific attention paid to those addressing analysis, intelligence analysis, military operations and/or theoretical MOE/MOP assessments. All methodologically applicable portions of previously conducted MOE/MOP case studies will be incorporated into the analysis of the FMV analysis-based production mission subset.

Specification of Operational Variables

As previously mentioned, the single dependent variable in this case study will be DCGS FMV analysis-based product effectiveness. The execution of the FMV PED mission involves almost immeasurable and innumerable systemic, systematic, idiosyncratic, and communicative
variables, all of which could arguably impact mission effectiveness (Johnson 2003, 4). Given that the micro-tactical nature of FMV production necessarily narrowed the scope of this research and that research resources were limited to publicly available data, most variables will be held constant and many more will remain unknown. This study will be limited to the examination of only those variables which most greatly affect product effectiveness and analyst performance; all other variables, such as leadership management style, communications and systems status, etc., will be held as fixed and non-impacting. To further specify the research objective, the focus of this thesis is not to identify every variable which may impact effectiveness, but rather to show that any impacting variables (MOEs) can be identified at all. As described in the Research Design Overview section above, this Plausibility Probe research is intended to initially address the research question of whether MOE and MOP can be derived for FMV analysis-based products; given the volume of DCGS FMV PED operational variables, the research is thus designed to avoid any impact negated (fixed/non-impacting) variables may have on the validity of analysis. The research design further aims to isolate the impacts on product effectiveness by assessing the influence of variance in the independent variables. Variable factor levels were developed for each independent variable as a sensitive method of describing variable variances (George and Bennett 2005, 80-81). Should this theoretical assessment framework ever be practically applied, classified mission data, site-specific objectives, and local TTPs can be incorporated into the assessment process to derive additional MOEs, should those designated in this research fail to comprehensively reflect requirements for product effectiveness.
Based on an in-depth review of available literature, those independent variables identified as impacting DCGS FMV analysis-based product effectiveness (MOEs) include product timeliness, product accuracy, and product usability. Those independent variables identified as reflective of analyst performance (MOPs) also include product timeliness and product accuracy. The Analysis and Findings section of the thesis will revolve predominantly around an examination of how those independent variables, or potential MOEs/MOPs, impact the overall effectiveness of the product and reflect the performance of the production analysts.

Assessment Methodology

Following the data collection, all previously existing research critical to understanding DCGS operations, along with research concerning intelligence performance variables, was analyzed. A prerequisite to any effectiveness analysis is the identification of an objective against which to measure effectiveness. The primary phase of analysis was thus an investigation into the objective of DCGS FMV analysis-based productions. Subsequently, variable analysis commenced. Impact analysis of each independent variable was not practically conducted; rather, the relationship between each independent variable (MOE) and the dependent variable (product effectiveness) was theoretically examined. In précis, the research examined how product timeliness, accuracy and usability affect DCGS FMV analysis-based product effectiveness. A discussion of MOPs followed, focusing on the relationship between product accuracy and product effectiveness. The hypothesis, that MOE for FMV analysis-based production can be identified, was tested through the examined relationship between the independent and dependent variables. To further refine and quantify/qualify each MOE and MOP, variable factor levels were determined for each independent variable. For example, for the independent variable (MOE) "accuracy," those factor levels would be: accurate
(100%), marginally accurate (98-99.9%), or inaccurate (<98%). In defining variable factor levels, not only can the independent variables be assessed in terms of whether they impact product effectiveness, but the degree of impact can be assessed as well (a minimal sketch of this mapping follows at the end of this section). Although this study is theoretical, one follow-on objective is a practical application of the assessment framework. Such a practical application will require not only the identification of MOE and MOP but also metrics and indicators for each identified MOE and MOP. Following MOE identification and metric determination, a secondary analysis was conducted to determine effectiveness and performance indicators for each MOE and MOP, i.e., to determine whether a correlative relationship could be demonstrated between certain mission execution elements and each MOE/MOP. Lastly, because of the subjective nature of intelligence, the case study examines potential biases, both of the production analyst and of the intelligence consumer, should this research become operationally incorporated.
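To make the factor-level construct concrete, the minimal sketch below maps an accuracy score to the three factor levels just named; the function name and thresholds simply restate the example above and are not part of any DCGS standard.

def accuracy_factor_level(score_percent: float) -> str:
    # Map a zero-to-100 accuracy score to one of the three factor levels.
    if score_percent >= 100.0:
        return "accurate"             # no errors
    if score_percent >= 98.0:
        return "marginally accurate"  # minor errors only
    return "inaccurate"               # major or accumulated minor errors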
SECTION IV
ANALYSIS AND FINDINGS

Because traditional military assessment measures and operational MOEs are not effective in accurately assessing the niche combat mission support element of FMV production, the challenge is in determining what attributes make the intelligence provided by FMV analysis-based products "actionable." According to Joint Publication 2-0, Joint Intelligence, the effectiveness of intelligence operations is assessed by devising metrics and indicators associated with Attributes of Intelligence Excellence (U.S. Joint Chiefs of Staff 2013, I-22). It is here proposed that, so long as they meet the MOE and MOP criteria, MOEs, MOPs and MOE/MOP indicators can be derived from the established list of Attributes of Intelligence Excellence: anticipatory, timely, accurate, usable, complete, relevant, objective and available (U.S. Joint Chiefs of Staff 2013, III-33). The focus of this research is FMV analysis-based production MOEs; however, because aggregate analyst performance can impact the overall effectiveness of the products, MOPs will also be discussed as they relate to MOEs. Excessive numbers of MOEs or MOPs become unmanageable and risk the cost of collection efforts outweighing the value and benefit of assessment (Center for Army Lessons Learned 2010, 8). To maximize the assessment benefit, some attributes can be combined: complete can be considered an accuracy indicator, and relevant can be considered a usability indicator. Anticipatory, objective and available each fail to meet MOE criteria. As discussed in their individual sections below, of the eight Attributes of Intelligence Excellence, which can be said to comprise FMV product success criteria, only timely, accurate (encompassing completeness), and usable (encompassing relevance) succeed as effective MOE; the sketch following this paragraph summarizes that consolidation.
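The structure below compactly restates the attribute consolidation just argued; the dictionary and its labels are expository only and carry no doctrinal standing.

# Illustrative summary of how each Attribute of Intelligence Excellence is
# dispositioned in this analysis (names and labels are expository only).
ATTRIBUTE_DISPOSITION = {
    "timely":       ("MOE", None),
    "accurate":     ("MOE", None),
    "usable":       ("MOE", None),
    "complete":     ("indicator", "accurate"),  # folded into accuracy
    "relevant":     ("indicator", "usable"),    # folded into usability
    "anticipatory": ("excluded", "no feasible metric; not repeatable"),
    "objective":    ("excluded", "no standardized, replicable metric"),
    "available":    ("excluded", "held constant / assumed full"),
}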
Concerning the anticipatory attribute: to be effective, intelligence must anticipate the informational needs of the customer. Anticipating the intelligence needs of the customer requires that the intelligence staff identify and fully understand the current tasked mission, the customer's intent, all relevant aspects of the operational environment, and all possible friendly and adversary courses of action. Most importantly, anticipation requires the aggressive involvement of intelligence personnel in operational planning at the earliest possible time (U.S. Joint Chiefs of Staff 2013, II-7). Anticipatory products certainly may be more actionable, and thus more effective, than non-anticipatory products, and those FMV analysts responsible for production are, through training, experience, and an in-depth knowledge of the operational environment, most likely capable of providing anticipatory intelligence. Anticipatory intelligence, however, fails to meet the requirements for an FMV analysis-based production MOE. Because MOE must be measurable to be effective, the first complication with the anticipatory attribute is in designating a metric. Unlike timeliness, which can be measured in seconds, minutes or hours, or accuracy, which can be measured on a percentage scale of zero to 100, determining a standardized unit of measurement for the degree to which a product is anticipatory is not currently feasible. Additionally, anticipatory, or predictive, intelligence is generally used to assess threat capabilities and to emphasize and identify the most dangerous and the most likely adversary Courses of Action (COAs); such analysis is not typically conducted during near real-time operations by the PED Crew, but by intelligence staff tasked with those determinations (U.S. Department of the Army 2012, 2-2). Further complicating the issue of measurement is the issue of replicability. Even if a quantitative or qualitative method of measuring anticipation were devised, the same metric may not be applicable to the same product type under different mission tasking circumstances. DCGS intelligence is derived from multiple ISR platforms, may be analyzed by active-duty, Air National Guard or Air Force Reserve forces, and is provided to
COCOMs, Component Numbered Air Forces, national command authorities and any number of their subordinates across the globe (U.S. Air Force ISR Agency 2014, 6). Depending on exploitation requirements, analysts may be tasked to exploit FMV in support of several discrete mission sets including situational awareness, high-value targeting, activity monitoring, etc. (U.S. Joint Chiefs of Staff 2012, III-33). The nature of different customers and mission sets may impact the degree to which anticipatory intelligence is even possible. Consider, for example, the case of dynamic retasking, in which a higher-priority collection requirement arises and the tasked asset is immediately retasked to support it (U.S. Department of the Air Force 2007, 15). If an asset is dynamically retasked and products are immediately requested thereafter, the PED crew would be operating under severely limited knowledge of the new OE, and anticipating a new customer's production expectations beyond expressed product requirements would be very difficult if not infeasible. The intent of this research is to identify FMV analysis-based product MOE which are repeatable, measurable, relevant, responsive and standardized across product and mission types. The anticipatory attribute is neither easily measurable nor repeatable given the number of mission variables impacting the feasibility of analysts making anticipatory assessments. A well-produced product disseminated in support of a dynamically retasked mission may be perfectly effective in achieving its goal of actionable intelligence despite not being anticipatory. While such a product might be more effective if it were anticipatory, the failure of that attribute to meet MOE criteria disqualifies it as an appropriate MOE. Due to the decisive and consequential impact of intelligence on operations and the reliance of planning and operations decisions on intelligence, it is important for intelligence analysts to maintain objectivity and independence in developing assessments. Intelligence personnel must, to the greatest extent possible, be vigilant in guarding against biases. In particular, intelligence
analysts should both recognize each adversary as unique and realize the possible biases associated with their assessment type (U.S. Joint Chiefs of Staff 2013, II-8). As with the anticipatory attribute, the objective attribute presents problems with both measurability and replicability. Also like the anticipatory attribute, objectivity is highly important to an effective product. Because each mission and each production analyst is unique, so too are each analyst's assessment biases, any of which would undermine product objectivity. Further undermining objectivity as a viable MOE is the difficulty of determining a standard metric by which to measure objectivity and a baseline against which to compare the objectivity of products. Both anticipating the customer's needs and remaining objective during analysis and production should be trained, heavily reinforced, and corrected prior to dissemination as required. Biased analysis and production degrade product accuracy and effectiveness through the misinterpretation, underestimation or overestimation of events, the adversary, or the consequences of friendly force action. Objectivity does impact the effectiveness of DCGS FMV analysis-based products; however, the inability of the objective attribute to be measured in a standardized, replicable fashion disqualifies the attribute as a potential MOE. ISR-derived information must be readily accessible to be effective, hence the availability attribute. Availability is a function not only of timeliness and usability, discussed in their respective sections below, but also of appropriate security classification, interoperability, and connectivity (U.S. Joint Chiefs of Staff 2013, II-8). Naturally, both ISR personnel and the consumer should have the appropriate clearances to access and use the information (U.S. Department of the Air Force 2007, 15). Product dissemination to the customer is necessarily dependent on full network connectivity.
A product would be completely ineffective if it were never received by the customer or if it were classified above the customer's access; however, throughout the theoretical exploration of this research problem, full product availability will be assumed. During the discussion of specific MOE below, network connectivity will be held constant at fully functional, and classification will be assumed to be the derivative classification of the tasked mission, ensuring both analyst and customer access.

Measures of Effectiveness

Dismissing anticipatory, objective and available as attributes suitable for MOE selection, the applicable attributes of timely, accurate, usable, complete, and relevant remain. It is against those attribute criteria, either as MOE or as MOE indicators (see Figure 2), that the effectiveness of DCGS FMV analysis-based products will be measured in this thesis.

Figure 2: Measures of Effectiveness

Timeliness

Commanders have always required timely intelligence to enable operations. Because intelligence is a continuous process that directly supports the operations process, operations and intelligence are inextricably linked (U.S. Department of the Army 2012, v). Timely intelligence is essential to prevent surprise by the adversary, conduct defensive operations, seize the initiative, and utilize forces effectively (U.S. Department of the Air Force 2007, 5). To be effective, the commander's decision and execution cycles must be consistently faster than the adversary's; thus, the success of operations to some extent hinges upon timely intelligence (U.S.
Joint Chiefs of Staff 2013, xv). The responsive nature of Air Force ISR assets is an essential enabler of timeliness (U.S. Department of the Air Force 2007, 5). In the current environment of rapidly improving technologies, having the capability to provide timely information is vital (U.S. Department of the Air Force 1999, 40). The dynamic nature of ISR can provide information to aid a commander's decision-making cycle and constantly improve the commander's view of the OE, that is, if the products are received in time to plan and execute operations as required. Because timeliness is so critical to intelligence effectiveness, as technology evolves, every effort should be made to streamline and automate processes to reduce timelines from tasking through product dissemination (U.S. Department of the Air Force 2007, 5). Timeliness is vital to actionable intelligence; the question is, does "late" intelligence equate to ineffective intelligence? According to the concept of Latest Time Information is of Value (LTIOV), there comes a time when intelligence is no longer useful. LTIOV is the time by which information must be delivered to the requestor in order to provide decision makers with timely intelligence (U.S. Department of the Army 1994, Glossary-7). The implication is that information received by the end user after the LTIOV is no longer of value, and thus no longer effective. To be effective at all, intelligence must be available when the commander requires it (U.S. Joint Chiefs of Staff 2013, II-7). All efforts must be made to ensure that the personnel and organizations that need access to required intelligence will have it in a timely manner (U.S. Joint Chiefs of Staff 2013, II-7). The DCGS is one intelligence dissemination system capable of providing near real-time situational awareness and threat and target status (U.S. Department of the Air Force 1999, 40).
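As a trivial sketch of the LTIOV implication above, assuming hypothetical timestamps purely for illustration:

from datetime import datetime

def is_of_value(delivered_at: datetime, ltiov: datetime) -> bool:
    # Information delivered at or before the latest time information is of
    # value still supports the decision it was requested for; later does not.
    return delivered_at <= ltiov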
In the fast-paced world of near real-time intelligence assessment, production time is measured not in days or hours, but in minutes. Consequently, the most appropriate metric for product timeliness is the production time, from tasking to dissemination availability, in minutes. The GA and MSA PED Crew positions are not only responsible for the creation and dissemination of intelligence products, but also for ensuring their production meets the DCGS Enterprise timeline (time limit) for that product (U.S. Department of the Air Force 2013a, 29). For this discussion of timeliness as an MOE, it will be assumed that the established DCGS Enterprise product timelines are congruent with product intelligence-of-value timelines, i.e., so long as an analyst successfully produces a product within its timeline, that product will be considered timely by the consumer. The DCGS Enterprise product timeline will serve as the baseline against which products will be compared. In assessing the degree of product timeliness, the three assigned factor levels, outlined in Figure 3, will be:

1. The product was completed early (very timely),
2. The product was completed on time (timely), or
3. The product was completed late (not timely).

In this way, the effectiveness of a product may be at least partially assessed based upon the number of minutes it took to produce the product, measured against the established timeline for that type of product.

Figure 3: Timeliness Factor Levels
To be relevant, ISR intelligence must be tailored to the warfighter's needs (U.S. Department of the Air Force 1999, 10). The GA and MSA are both trained to produce their respective products within applicable timelines; however, if the end user should request a highly tailored product, it may push production time past the established timeline. Product customization may then be considered an indicator for the timeliness MOE. When assessing product effectiveness, it is not single products that determine effectiveness, but the aggregate of products within a genre. For example, it may be determined that 90 percent of Product A instances are produced within Factor Level 1, early. To elevate the analyst baseline, the timeline may be adjusted to bring the remaining ten percent of products up to par and increase product effectiveness through product timeliness. For assessment standardization across the enterprise, the designation "early" must be specified. Here, to be considered very timely, the product must be completed 50 percent earlier than the maximum allotted production time. For example, if the timeline is ten minutes and a product is completed in five, that product would be considered very timely. On the other hand, as ISR evolves and products are used to support ever more dynamic operations, if product type B is habitually requested at a high level of customization, resulting in 80 percent of products falling into Factor Level 3, not timely, the DCGS may want to consider adjusting that product's timeline, refining the procedure for producing that product, or introducing a more applicable product. Products the vast majority of which fall into Factor Level 2, timely, may be considered effective, at least so far as reaching the customer while the product is still of value is concerned. The sketch below operationalizes these factor levels and the aggregate, per-genre view.
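The following is a minimal illustration only: the 50-percent "very timely" threshold restates the rule given in this section, while the function names and the list-of-production-times input are hypothetical conveniences, not DCGS tooling.

from collections import Counter

def timeliness_factor_level(minutes_taken: float, timeline_minutes: float) -> int:
    # Assign a factor level against the established product timeline.
    if minutes_taken <= 0.5 * timeline_minutes:
        return 1  # very timely: completed 50 percent earlier than allotted
    if minutes_taken <= timeline_minutes:
        return 2  # timely: completed within the timeline
    return 3      # not timely: completed late

def genre_timeliness_profile(production_minutes, timeline_minutes):
    # Share of products at each factor level for one product genre.
    # A genre dominated by level 1 may justify tightening the timeline;
    # one dominated by level 3 may justify revising the timeline or product.
    if not production_minutes:
        return {1: 0.0, 2: 0.0, 3: 0.0}
    counts = Counter(timeliness_factor_level(m, timeline_minutes)
                     for m in production_minutes)
    total = len(production_minutes)
    return {level: counts.get(level, 0) / total for level in (1, 2, 3)}

# Example from the text: with a ten-minute timeline, a product completed
# in five minutes is very timely.
assert timeliness_factor_level(5, 10) == 1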
Accuracy

Intelligence production analysts should continuously strive to achieve the highest possible level of accuracy in their products. The accuracy of an intelligence product is paramount to the ability of that intelligence analyst and their parent unit to attain and maintain credibility with intelligence consumers (U.S. Joint Chiefs of Staff 2013, II-6). To achieve the highest standards of excellence, intelligence products must be factually correct, relay the situation as it actually exists, and provide an understanding of the operational environment based on the rational judgment of available information (U.S. Department of the Air Force 2007, 4). Especially for geospatial intelligence products, as FMV analysis-based products are, geopositional accuracy suitable for geolocation or weapons employment is a crucial product requirement (U.S. Department of the Air Force 1999, 10). Intelligence accuracy additionally involves the appropriate classification labels, proper grammar, and product completeness. For a product to be complete, it must convey all of the necessary components (U.S. Department of the Army 2012, 2-2). For example, if DCGS standardized product A requires the inclusion of four different graphics and only three were produced, the product would be incomplete and thus inaccurate. The necessary components of a product also include all of the significant aspects of the operating environment that may close intelligence gaps or impact mission accomplishment (U.S. Joint Chiefs of Staff 2013, II-8). Complete intelligence answers all of the customer's questions to the greatest extent possible. For example, if a product is requested to confirm or deny the presence of rotary- and fixed-wing aircraft at an airport, and the production analyst annotates and assesses the fixed-wing aircraft but neglects to annotate and assess the rotary-wing aircraft, then the product is incomplete and thus inaccurate. Similarly, if a production analyst fails to provide coordinates when required, the product would also be incomplete and inaccurate. Incompleteness may be the variety of inaccuracy that most impacts the effectiveness of an intelligence product. While grammar errors, errors in coordinates and even errors in classification, as critical as those inaccuracies may be, are most likely the result of analyst mistakes or oversight, incomplete products may hint at an inability to produce the product, access certain information or understand production requirements. For example, if a product required analysts to obtain a certain type of imagery and analysts did not possess that skillset, one would expect to see a corresponding trend in incomplete products. Similarly, if a standardized product was modified
and analysts were not difference trained, one may expect to see a trend of analysts failing to input the newly required information. In this way, product completeness can serve as an indicator for the accuracy MOE. If a product is found to be incomplete, it is likely, without looking any further, that the product will already be too inaccurate to be effective. If a product is found to be complete, however, even one with grammar errors, it may still be found effective despite not being perfectly accurate. When DCGS products are accounted for and quality controlled, they are measured against a baseline checklist which, when used, assigns the product an accuracy score on a scale of zero to 100 percent (U.S. Air Force ISR Agency 2013e, 5). The metric for the accuracy MOE will likewise be a percentage point system. The baseline checklist is designed to identify both major and minor product errors. A major error is one that causes, or should have caused, the product to be reissued. Major errors may include erroneous classification, incomplete or erroneous information, inaccurate geolocation data, or improper distribution (U.S. Air Force ISR Agency 2013e, 5). The checklist has been designed with a numeric point system for each item evaluated and is weighted so that any one major error will result in a failing product, as will an accumulation of minor errors; any product which has more than two percentage points deducted will be considered a failing product (U.S. Air Force ISR Agency 2013e, 5). Accordingly, 98 percent is the standard against which the quality of intelligence products should be continuously evaluated. Using a scale of zero to 100 percent, with anything below 98 percent resulting in a failing product, the accuracy MOE will be broken down into three factor levels (Figure 4):

1. An accurate product with no errors, scoring 100%;
2. A marginally accurate product with only minor errors, scoring 98-99.9%; or
3. An inaccurate product with major errors or several minor errors, scoring below 98%.

Figure 4: Accuracy Factor Levels

A minimal sketch of this scoring scheme follows.
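The sketch assumes, purely for illustration, that any single major error fails the product outright and that each minor error deducts a fixed half percentage point; the actual checklist weights are not reproduced here.

def checklist_score(major_errors: int, minor_errors: int,
                    minor_deduction: float = 0.5) -> float:
    # Return an accuracy score on a zero-to-100 scale (illustrative weights).
    if major_errors > 0:
        return 97.0  # any single major error drops the product below passing
    return max(0.0, 100.0 - minor_deduction * minor_errors)

def is_passing(score: float) -> bool:
    # More than two percentage points deducted (score below 98) fails.
    return score >= 98.0

# With these assumed weights, four minor errors score 98.0 and pass;
# five minor errors score 97.5 and fail.
assert is_passing(checklist_score(0, 4))
assert not is_passing(checklist_score(0, 5))

The resulting score can then be mapped onto the three accuracy factor levels with a function like accuracy_factor_level sketched in the Methodology section.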
Any passing product meeting the scores in Factor Level 1 or Factor Level 2 may be considered effective, whereas any failing product, falling into Factor Level 3, may be considered ineffective. As discussed in the Timeliness section above, the effectiveness assessment of a product is based not on individual products, but on the aggregate of products of the same genre. If, for example, Product A is habitually found to be inaccurate, it would not be considered effective as a product. The problem may be that parts of the product are operationally outdated and information required by standard production is no longer available. Or, in the case of a new product, analysts may not have adequate training to satisfy product requests accurately. Whatever the cause is determined to be, accuracy as a measure of effectiveness would provide the commander and staffs insight into when products need to be updated or amended, or when analysts need further training, in order for a product to be effective.

Usability

Usability is an assessment attribute that overlaps with all other intelligence product MOE and MOE indicators; timeliness, product customization, accuracy, and completeness are all, at least to some degree, prerequisites for a usable product. Beyond those attributes, usability may be understood as interpretability, relevance, precision and confidence level; in that respect, usability may serve as an additional, discrete MOE. Intelligence must be provided to the customer in a format suitable for immediate