This document provides an abstract for a methodology to forecast terrorism using indicators. It identifies 68 indicators of terrorism and employs analytic techniques to safeguard against common warning failures. The full research provides: 1) an explanation of how to forecast terrorism using the indicators, 2) an evaluation of how the system addresses past warning failures, and 3) recommendations for implementing the system. The methodology could be applied to other intelligence topics by changing the indicators. The full research is available for purchase, with author royalties donated to non-profit counterterrorism organizations.
The Admiralty scale is a system for rating the reliability of an information source and the credibility of the information itself, so that intelligence staff can make sound decisions based on what an investigation has gathered. It uses a two-character notation: a letter grades source reliability, and a digit grades the confidence and accuracy of the information. The system is used by the militaries and national security intelligence services of the NATO member nations and by the AUSCANZUKUS partners, and analysts more generally use it to evaluate and validate the information they collect. Applying it involves four stages before information can be treated as reliable and credible: evaluation, reliability, credibility, and reporting (Beesly, 1989).
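The two-character notation can be sketched in code. The letter grades (A-F) for source reliability and digit grades (1-6) for information credibility below follow the commonly published NATO scheme; the helper function names are our own:

```python
# Sketch of the Admiralty (NATO) grading scheme: a letter (A-F) rates
# source reliability and a digit (1-6) rates information credibility.
# The combined two-character code (e.g. "B2") is attached to each report.

SOURCE_RELIABILITY = {
    "A": "Completely reliable",
    "B": "Usually reliable",
    "C": "Fairly reliable",
    "D": "Not usually reliable",
    "E": "Unreliable",
    "F": "Reliability cannot be judged",
}

INFO_CREDIBILITY = {
    "1": "Confirmed by other sources",
    "2": "Probably true",
    "3": "Possibly true",
    "4": "Doubtful",
    "5": "Improbable",
    "6": "Truth cannot be judged",
}

def grade(source: str, info: str) -> str:
    """Return a combined Admiralty code such as 'B2', validating both parts."""
    if source not in SOURCE_RELIABILITY or info not in INFO_CREDIBILITY:
        raise ValueError(f"invalid Admiralty code: {source}{info}")
    return f"{source}{info}"

def describe(code: str) -> str:
    """Expand a two-character code into its plain-language meaning."""
    source, info = code[0], code[1]
    return f"{SOURCE_RELIABILITY[source]}; {INFO_CREDIBILITY[info]}"
```

For example, `describe("B2")` expands to "Usually reliable; Probably true", separating how much the source is trusted from how believable this particular report is.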
The intelligence cycle is the set of processes used to turn information into useful input for decision-making; the related field of counter-intelligence is tasked with defeating the information-gathering efforts of others. The cycle is a basic model of collection and analysis, and, like all basic models, it does not capture the full complexity of real-world operations. Through the cycle's activities, information is collected and assembled, raw information is transformed into processed information, and the result is analyzed and made available to users.
DOI: 10.13140/RG.2.2.25665.81760
Attack graph based risk assessment and optimisation approach (IJNSA Journal)
Attack graphs are models that offer significant capabilities to analyse security in network systems. An attack graph allows the representation of vulnerabilities, exploits and conditions for each attack in a single unifying model. This paper proposes a methodology to explore the graph using a genetic algorithm (GA). Each attack path is considered as an independent attack scenario from the source of attack to the target. Many such paths form the individuals in the evolutionary GA solution. The population-based strategy of a GA provides a natural way of exploring a large number of possible attack paths to find the paths that are most important. Thus, unlike many other optimisation solutions, a range of solutions can be presented to a user of the methodology.
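The paper's own implementation is not reproduced here, but a minimal sketch of the idea, using an invented toy attack graph and a simple truncation-selection GA, might look like this:

```python
import random

# Hypothetical attack graph: node -> [(next_node, exploit_score), ...].
# Higher scores mean easier exploits; the GA searches for high-score
# source-to-target paths, i.e. the attack scenarios that matter most.
GRAPH = {
    "internet": [("web", 0.9), ("vpn", 0.4)],
    "web":      [("app", 0.7), ("db", 0.3)],
    "vpn":      [("app", 0.6)],
    "app":      [("db", 0.8)],
    "db":       [],
}

def random_path(src, dst, limit=10):
    """Random walk from src; returns a path ending at dst, or None."""
    path = [src]
    while path[-1] != dst and len(path) < limit:
        successors = GRAPH[path[-1]]
        if not successors:
            return None
        path.append(random.choice(successors)[0])
    return path if path[-1] == dst else None

def fitness(path):
    """Sum of exploit scores along the path (an assumed fitness measure)."""
    return sum(dict(GRAPH[a])[b] for a, b in zip(path, path[1:]))

def mutate(path, dst):
    """Cut the path at a random node and regrow the tail randomly."""
    cut = random.randrange(len(path) - 1)
    tail = random_path(path[cut], dst)
    return path[:cut] + tail if tail else path

def evolve(src="internet", dst="db", pop_size=20, generations=30):
    # Seed the population with valid source-to-target walks.
    pop = [p for p in (random_path(src, dst) for _ in range(pop_size * 3)) if p]
    pop = pop[:pop_size]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        survivors = pop[: pop_size // 2]          # truncation selection
        pop = survivors + [mutate(p, dst) for p in survivors]
    pop.sort(key=fitness, reverse=True)
    return pop  # a ranked range of attack paths, not just one optimum
```

Because `evolve` returns the whole ranked population rather than a single optimum, it reflects the point made above: the analyst sees a range of plausible attack scenarios, not only the single worst one.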
Combined Multi-Annual Analysis, Estimation, and Trends Assessment Method for ... (Iulian Popa)
The authors have drafted a multi-annual analysis, estimation, and trends assessment matrix for evaluating the cybersecurity environment in Romania. The matrix also serves for building worst-case and best-case scenarios for cybersecurity in Romania. As the proposal is a beta version, the work is intended for experts' consideration and debate only.
In summary, Mr. Nițu refines the initial, simplified method of analysis, estimation, and trends assessment established earlier by Mr. Popa. The indicators the authors propose here were chosen using a well-balanced impact-relevance methodology and are designed strictly for evaluation, feedback, and scenario-building purposes. The authors acknowledge that some of them may be subject to ongoing change.
Analysts work in the field of knowledge: intelligence is knowledge, and the problems it addresses are knowledge problems, so we need a concept of the work that is based on knowledge. We need a basic understanding of what we know and how we know it, what we do not know, and even what can and cannot be known. Analysis should provide a useful basis for conceptualizing intelligence functions, of which the most important are estimation and prediction. Intelligence itself, in its basic form, serves decision-making. Intelligence analysis applies individual and collective cognitive methods to assess data and test assumptions in a secret socio-cultural context.
DOI: 10.13140/RG.2.2.25298.40646
Analysis and Production
Information collected in the previous steps will be analyzed, validated, and fused into intelligence during the analysis process, then included in finished intelligence products. Analysis is defined by the ODNI as "the process by which information is transformed into intelligence; a systematic examination of information to identify significant facts, make judgments, and draw conclusions" (ODNI, 2013). This transformation occurs when All-Source Analysts use all of the single-source INTs to create a fused intelligence product.
Single Source INT
We learned about “Single Source INTs” in Module three and they are HUMINT, SIGINT, GEOINT, OSINT, and MASINT. They are referred to as single source because they are derived from a single type of INT. You may hear of a single source SIGINT analyst, who is an expert in signals intelligence collection, or a single source IMINT analyst, who is an expert in imagery intelligence collection.
All-Source INT
The All-Source Analyst is not an expert in the collection of the single sources but is instead an expert in a region or an intelligence function (terrorism, counter-drug, crime, etc.) and is the consumer of the raw single-source information from all the INTs. Analysts receive incoming information, evaluate it, test it against other information and against their personal knowledge and expertise, produce an assessment of the current status of a particular area under analysis, and then forecast future trends or outcomes. The analyst also develops requirements for the collection of new information (ODNI, 2013). The All-Source Analyst learns through experience to validate information using multiple INTs to confirm information collected through a single-source INT. For example, if information is collected from a HUMINT source, the All-Source Analyst will look for another HUMINT source to corroborate that information, or may validate the original HUMINT source through SIGINT sources.
Validation
Validation is also important for countering an adversary's attempts at deception; a well-organized adversary will release multiple pieces of information through various INTs to mislead All-Source Analysts. A good adversarial deception plan will not only allow deceptive pieces of information to be collected in an attempt to fool our intelligence analysts, but will also play into an analyst's biases. Common analyst biases are mirror imaging (assuming the adversary will act the same way Americans would), cry-wolf syndrome (an adversary conducts an action multiple times so that when it truly intends to act, our analysts do not take it seriously), and mission creep/new normal (an adversary slowly changes tactics so that our analysts do not suspect nefarious activity).
It is important for All-Source Analysts to be confident in their assessments, but they should be wary of overconfidence. Many think that the job of an intelligence analyst is to predict the future. In fact, many early analytic ...
How To Turbo-Charge Incident Response With Threat Intelligence (Resilient Systems)
Minutes, hours, days - each one counts when responding to a security incident. Yet most firms have a lot of room for improvement. According to the 2013 Verizon Data Breach Investigations Report, in 66% of cases (up from 56% last year), breaches remained undiscovered for years, and in 22% of cases, it took months to fully contain the incident.
This webinar will review the challenges firms face in trying to create a rapid and decisive incident response (IR) process. It will then highlight the crucial role that timely, contextual threat intelligence can play in turbo-charging incident response, particularly when tightly integrated with the broader IR discipline. Finally, it will reveal the power of this approach by demonstrating Co3's integrated threat intelligence capabilities including intel from industry-leader iSIGHT Partners.
Cyber Threat Intelligence is a process in which information from different sources is collected, then analyzed to identify and detect threats against any environment. The information collected could be evidence-based knowledge that could support the context, mechanism, indicators, or implications about an already existing threat against an environment, and/or the knowledge about an upcoming threat that could potentially affect the environment. Credit: Marlabs Inc
A Behavior Based Intrusion Detection System Using Machine Learning Algorithms (CSCJournals)
Humans are consistently referred to as the weakest link in information security. Human factors such as individual differences, cognitive abilities, and personality traits can affect behavior and play a significant role in information security. The purpose of this study is to identify, describe, and classify the human factors affecting information security, to develop a model to reduce the risk of insider misuse, and to assess the use and performance of the best-suited artificial intelligence techniques for detecting misuse. More specifically, this study provides a comprehensive view of human-related information security risks and threats, a classification of the human-related threats in information security, a methodology for reducing the risk of those threats by detecting insider misuse with a behavior-based intrusion detection system using machine learning algorithms, and a comparison of numerical experiments analyzing this approach. Using the machine learning algorithm with the best learning performance, the detection rates of the attack types defined in the organized five-dimensional human-threats taxonomy were determined. Lastly, the possible human factors affecting information security, as linked to the detection rates, were ranked in an evaluation of the taxonomy.
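As an illustration of the behavior-based detection idea (not the study's actual system), a toy nearest-neighbour classifier over invented user-behavior records could look like this:

```python
import math

# Toy behavior records: (login_hour, files_accessed, mb_transferred), label.
# Labels: 0 = normal use, 1 = insider misuse. Entirely made-up data for
# illustration; a real system would use far richer behavioral features.
TRAIN = [
    ((9, 12, 5),    0),
    ((10, 8, 3),    0),
    ((14, 15, 8),   0),
    ((2, 140, 900), 1),   # bulk exfiltration at 2 a.m.
    ((3, 90, 400),  1),
    ((23, 60, 250), 1),
]

def distance(a, b):
    """Euclidean distance between two behavior feature vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def classify(record, k=3):
    """k-nearest-neighbour majority vote over the labelled records."""
    neighbours = sorted(TRAIN, key=lambda t: distance(t[0], record))[:k]
    votes = sum(label for _, label in neighbours)
    return 1 if votes * 2 > k else 0
```

A record such as `(1, 120, 700)` (a 1 a.m. session moving 700 MB) lands nearest the misuse examples and is flagged, while `(10, 10, 4)` looks like routine daytime use. In practice feature scaling and a trained model would replace this hand-built sketch.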
Exploring the Psychological Mechanisms used in Ransomware Splash Screens (Jeremiah Grossman)
The present study examined a selection of 76 ransomware splash screens collected from a variety of sources. These splash screens were analysed according to surface information, including aspects of visual appearance, the use of language, cultural icons, payment and payment types. The results from the current study showed that, whilst there was a wide variation in the construction of ransomware splash screens, there was a good degree of commonality, particularly in terms of the structure and use of key aspects of social engineering used to elicit payment from the victims. There was the emergence of a sub-set of ransomware that, in the context of this report, was termed ‘Cuckoo’ ransomware. This type of attack often purported to be from an official source requesting payment for alleged transgressions.
On 4 October 2016, as part of the GDPR Workshop series, the Brussels Privacy Hub hosted a workshop on implementation of the EU GDPR and Privacy Impact Assessment. Trilateral delivered a joint presentation by Rowena Rodrigues and Julia Muraszkewicz exploring some of the challenges associated with DPIAs and EPIAs. The presentation was based on two of Trilateral's research projects: SATORI and iTRACK.
Project 4: Threat Analysis and Exploitation
Transcript (background):
You are part of a collaborative team that was created to address cyber threats and exploitation of US financial systems critical infrastructure. Your team has been assembled by the White House Cyber National Security Staff to provide situational awareness about a current network breach and cyber attack against several financial service institutions. Your team consists of four roles: a representative from the financial services sector, who has discovered the network breach and the cyber attacks, which include distributed denial of service (DDoS) attacks, web defacements, sensitive data exfiltration, and other attack vectors typical of this nation-state actor; a representative from law enforcement, who has provided additional evidence of network attacks found using network defense tools; a representative from the intelligence agency, who has identified the nation-state actor from numerous public and government-provided threat intelligence reports and will provide threat intelligence on the tools, techniques, and procedures of this actor; and a representative from the Department of Homeland Security, who will provide the risk, response, and recovery actions taken as a result of this cyber threat. Your team will have to provide education and security awareness to the financial services sector about the threats, vulnerabilities, risks, and the risk mitigation and remediation procedures to be implemented to maintain a robust security posture. Finally, your team will take the lessons learned from this cyber incident and share that knowledge with the rest of the cyber threat analysis community. At the end of the response to this cyber incident, your team will provide two deliverables: a Situational Analysis Report (SAR) to the White House Cyber National Security Staff, and an After Action Report with lessons learned to the cyber threat analyst community.
Step 2: Assessing Suspicious Activity
Your team is assembled and you have a plan. It's time to get to work. You have a suite of tools at your disposal from your work in Project 1, Project 2, and Project 3, which can be used together to create a full common operating picture of the cyber threats and vulnerabilities that are facing the US critical infrastructure.
To be completed by all team members: leverage your network security skills, using port scans, network scanning tools, and analysis of Wireshark files, to assess any suspicious network activity and network vulnerabilities.
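A minimal sketch of the port-scan step, using a plain TCP connect scan, could look like the following (the function name and port list are illustrative; a real assessment would use a dedicated tool such as nmap):

```python
import socket

def scan_ports(host, ports, timeout=0.5):
    """TCP connect scan: return the subset of ports accepting connections.

    connect_ex returns 0 when the three-way handshake succeeds, i.e. the
    port is open; any other value means closed, filtered, or timed out.
    """
    open_ports = []
    for port in ports:
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as sock:
            sock.settimeout(timeout)
            if sock.connect_ex((host, port)) == 0:
                open_ports.append(port)
    return open_ports
```

Running `scan_ports("127.0.0.1", [22, 80, 443])` against a lab host gives a quick open-port baseline that can then be cross-checked against Wireshark captures for the same network segment.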
Step 3: The Financial Sector
To be completed by the Financial Services Representative: Provide a description of the impact the threat would have on the financial services sector. These impact statements can include the loss of control of the systems, the loss of data integrity or confidentiality, exfiltration of data, or something else. Also provide impact assessments as a result of this security incident to the financial ...
Cyber Threat Intelligence (CTI) primarily focuses on analysing raw data gathered from recent and past events to monitor, detect and prevent threats to an organisation, shifting the focus from reactive to preventive intelligent security measures.
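One simple, concrete form of this is matching observed events against a feed of indicators of compromise (IOCs). The feed contents below are invented placeholders (documentation IP ranges and the SHA-256 of an empty file):

```python
# Hypothetical threat-intelligence feed: indicators of compromise (IOCs)
# distilled from recent and past incidents. Matching live events against
# the feed is one way CTI shifts detection from reactive to preventive.
IOC_FEED = {
    "ip": {"203.0.113.7", "198.51.100.23"},
    "domain": {"malicious.example"},
    "sha256": {"e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855"},
}

def match_event(event: dict) -> list:
    """Return (ioc_type, value) pairs from the event that appear in the feed."""
    hits = []
    for ioc_type, values in IOC_FEED.items():
        value = event.get(ioc_type)
        if value in values:
            hits.append((ioc_type, value))
    return hits
```

An event like `{"ip": "203.0.113.7", "domain": "benign.example"}` would be flagged on its IP alone, letting the connection be blocked before any damage occurs rather than investigated after the fact.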
Abstract
This forecasting methodology identifies 68 indicators of terrorism and employs proven analytic techniques in a systematic process that safeguards against 36 of the 42 common warning pitfalls that experts have identified throughout history. The complete version of this research provides: 1) a step-by-step explanation of how to forecast terrorism, 2) an evaluation of the forecasting system against the 42 common warning pitfalls that have caused warning failures in the past, and 3) recommendations for implementation. The associated CD has the website interface to this methodology to forecast terrorist attacks. This methodology could be applied to any intelligence topic (not just terrorism) by simply changing the list of indicators. The complete version of this research is available in Forecasting Terrorism: Indicators and Proven Analytic Techniques, Scarecrow Press, Inc., ISBN 0-8108-5017-6, for which 100% of the author royalties are being donated to the non-profit National Memorial Institute for the Prevention of Terrorism (www.mipt.org) and the Joint Military Intelligence College Foundation, which supports the Defense Intelligence Agency's Joint Military Intelligence College (www.dia.mil.Jmic).
1. Introduction: Correcting Misconcep-
tions
Important lessons have arisen from the study of intelligence warning failures, but some common misconceptions have prevented the Intelligence Community from recognizing and incorporating these lessons.

Analysis, Rather Than Collection, Is the Most Effective Way to Improve Warning. Efforts to improve warning normally focus on intelligence collection rather than analysis (Kam 1988). That trend has continued since September 11th (Anonymous intelligence source 2002). However, warning failures are rarely due to inadequate intelligence collection, are more frequently due to weak analysis, and are most often due to decision makers ignoring intelligence (Garst 2000). Decision makers, however, ignore intelligence largely because the analytical product is weak (Heymann 2000).

Hiring Smart People Does Not Necessarily Lead to Good Analysis. Studies show that, "frequently groups of smart, well-motivated people . . . agree . . . on the wrong solution. . . . They didn't fail because they were stupid. They failed because they followed a poor process in arriving at their decisions" (Russo and Schoemaker 1989).

A Systematic Process Is the Most Effective Way to Facilitate Good Analysis. The nonstructured approach has become the norm in the Intelligence Community. A key misunderstanding in the debate over intuition versus structured technique is the assumption that an analyst must choose one or the other (Folker 2000). In fact, intuition and structured technique can be used together in a systematic process. "Anything that is qualitative can be assigned meaningful numerical values. These values can then be manipulated to help us achieve greater insight into the meaning of the data and to help us examine specific hypotheses" (Trochim 2002). Not only is it possible to combine intuition and structure in a system; research shows the combination is more effective than intuition alone. "Doing something systematic is better in almost all cases than seat-of-the-pants prediction" (Russo and Schoemaker 1989). Moreover, decision makers have called on the Intelligence Community to use methodology. "The Rumsfeld Commission noted that, '. . . an expansion of the methodology used by the IC [Intelligence Community] is needed.' . . . Keeping chronologies, maintaining databases and arraying data are not fun or glamorous. These techniques are the heavy lifting of analysis, but this is what analysts are supposed to do. If decision makers only needed talking heads, those are readily available elsewhere" (McCarthy 1998).
Forecasting Terrorism: Indicators and Proven Analytic Techniques
Captain Sundri K. Khalsa, USAF
PO Box 5124, Alameda, California, 94501, United States
SundriKK@hotmail.com

Keywords: Terrorism, Predictive Analysis and Hypothesis Management, Multiple Competing Hypotheses, Question Answering, Structured Argumentation, Novel Intelligence from Massive Data, Knowledge Discovery and Dissemination, Information Sharing and Collaboration, Multi-INT/Fusion, All-Source Intelligence, Visualization, Fusion.
2. How to Forecast Terrorism: Abbreviated Step-By-Step Explanation of the Methodology
This forecasting system is based on indicators. The explanation of this methodology begins at the lowest level of indicators and then builds up to the big picture of countries within a region. The forecasting assessments of this methodology are maintained on a website display, which is available on the associated CD. Figure 1 shows a breakdown of the 3 primary types of warning picture views from the web homepage: 1) country list view, 2) target list view, and 3) indicator list view. Each potential-terrorist target is evaluated in a webpage hypothesis matrix based on the status of 68 indicators of terrorism. The indicators are updated in near-real-time with incoming raw intelligence reports/evidence. The methodology consists of 23 tasks and 6 phases of warning analysis, which are described very briefly in this paper. The 23 tasks include 14 daily tasks, 3 monthly tasks, 4 annual tasks, and 2 as-required tasks. The 14 daily tasks can be completed in 1 day because tasks have been automated wherever possible. Three types of analysts are required to operate this methodology: Raw Reporting Profilers, Indicator Specialists, and Senior Warning Officers.
2.1 Phase I: Define/Validate Key Elements of the Intelligence Problem (Using Indicators)
Task 1: Identify/Validate Indicators (Annually). Indicators are the building blocks of this forecasting system. Indicators are "those [collectable] things that would have to happen and those that would likely happen as [a] scenario unfolded" (McDevitt 2002). For a terrorist attack, those would be things like: terrorist travel, weapons movement, terrorist training, target surveillance, and tests of security. This research has identified 68 indicators of terrorism encompassing terrorist intentions, terrorist capability, and target vulnerability, which are the three components of risk. The indicators are also identified as either quantitative (information that can be counted) or qualitative (information that cannot be counted, such as terrorist training). Only 7 of the 68 terrorism indicators are quantitative. For security reasons, many indicators are not shown here.
In task 1 of this methodology, the leading counterterrorism experts meet on at least an annual basis to determine if indicators should be added to or removed from the list. A list of indicators should never be considered final and comprehensive. To determine if the list of indicators should be altered, analysts:
1. Review the raw intelligence reports filed under the Miscellaneous Indicators to determine if any kinds of significant terrorist activity have been overlooked.
2. Review U.S. collection capabilities to determine if the U.S. has gained or lost the capability to collect on any terrorist activities.
3. Review case studies of terrorist operations to identify changes in terrorist modus operandi and determine if terrorists are conducting any new activities against which U.S. intelligence can collect.
Task 2: Prioritize Indicators (Annually). Some indicators are more significant than others. For instance, among the indicators of terrorist intentions, weapons movement to the target area must take place before an attack, whereas an increase in terrorist propaganda is not a prerequisite. Therefore, weapons movement would carry a higher significance/priority than increased propaganda. On at least an annual basis, the leading counterterrorism experts determine if the priority of any of the indicators needs to be adjusted on a scale of 1 through 3, according to definitions provided in the complete explanation of this methodology.
Task 3: Develop/Validate Key Question Sets on Each Indicator (Annually). Since indicators are the foundation and building blocks of this methodology, the type of information required to assess the status of an indicator should be clearly defined and recorded. Thus, on at least an annual basis, the leading counterterrorism experts validate a list of key questions for each indicator. The question sets identify the key factors that experts have determined are necessary to assess the status of a given indicator. They also function as the list of prioritized Collection Requirements for intelligence collectors. The entire list of indicators and corresponding key question sets form an intelligence collection plan against terrorism.
Task 4: Prioritize Questions in Key Question Sets (Annually). The leading counterterrorism experts also prioritize the key questions on a scale of 1 through 3, with 1 being the most significant. These priorities affect both intelligence collection priorities and analysts' assessments.
2.2 Phase II: Consolidate Information (Using Master Database)
Task 5: Intelligence Community Master Database Receives All Raw Intelligence Reports from Intelligence Collectors (Daily). The daily process begins with the requirement that all fifteen member organizations of the Intelligence Community, and other U.S. government organizations that may have terrorism-related information, forward all their raw intelligence reports (on all intelligence topics, not just terrorism) to an Intelligence Community Master Database. The major benefit of hindsight investigations after intelligence warning failures is that they are the first time all the information has been consolidated.
2.3 Phase III: Sort Information (Using Hypothesis Matrices)
Task 6: Enter All Terrorism-Related Raw Intelligence Reports into Terrorism Forecasting Database Under Appropriate Indicators, Key Questions, Targets, Countries, Terrorist Groups, and Other Data Profile Elements (Daily). A large group of junior analysts, called Raw Reporting Profilers, reads through the Intelligence Community's incoming raw intelligence reports (an estimated 2,500 per day that are already marked as terrorism related) and enters all of them into a Terrorism Forecasting Database according to the indicators, key questions, targets, countries, terrorist groups, and other terrorism forecasting-specific data profile elements identified in this methodology.
Task 7: Terrorism Forecasting Database Creates Potential-Target Hypothesis Matrices with Raw Intelligence Reports Filed by Indicators (Daily). After analysts enter raw intelligence reports into the Terrorism Forecasting Database, the computer program automatically creates corresponding Potential-Target Hypothesis Matrix webpages and displays hyperlinks to the reports under the appropriate indicator(s) within the hypothesis matrices.
Task 8: Terrorism Forecasting Database Feeds Raw Intelligence Reports into Appropriate Indicator Key Questions, Answers, & Evidence Logs within the Hypothesis Matrices (Daily). The master database also feeds the raw reports into the appropriate Indicator Key Questions, Answers, & Evidence Logs within the Hypothesis Matrices, as shown in figure 2.
Task 9: Assess Raw Intelligence Reports' Information Validity (Daily). The computer program combines Source Credibility and Information Feasibility/Viability according to rules in utility matrix logic to determine a report's Information Validity [on a 5-level scale of: 1) "Almost Certainly Valid (~90%)," color coded red on the website, 2) "Probably Valid (~70%)," orange, 3) "Probably Not Valid (~30%)," yellow, 4) "Almost Certainly Not Valid (~10%)," gray, and 5) "Unknown Validity (or ~50%)," black]. Source Credibility and Information Feasibility/Viability are determined by analysts, who check a list of boxes for each in the database [on a 5-level scale of: 1) "Almost Certainly (~90%)," about 90 percent probability, 2) "Probably (~70%)," 3) "Probably Not (~30%)," 4) "Almost Certainly Not (~10%)," and 5) "Unknown (or ~50%)"].
2.4 Phase IV: Draw Conclusions (Using Intuitive and Structured Techniques)
Task 10: Assess Indicator Warning Levels (Daily). The computer program combines the Indicator Priority and an Indicator Activity Level to determine an Indicator Warning Level [on a 5-level scale of: 1) Critical (~90%), 2) Significant (~70%), 3) Minor (~30%), 4) Slight (~10%), and 5) Unknown (or ~50%)]. The Indicator Activity Level is determined by a second group of analysts using either rules in utility matrix logic (for the quantitative indicators) or the Indicator Key Questions, Answers, & Evidence Logs (for the qualitative indicators). These analysts are Indicator Specialists, the Counterterrorism Community's designated experts in determining an Indicator Activity Level.

From this point forward, all the warning level calculations are automated. The third group of analysts, Senior Warning Officers, is responsible for monitoring and approving all the warning levels that the computer application automatically produces and updates on the webpages.
Task 11: Assess Terrorist Intention Warning Level (Daily). Now that analysts have assessed an Indicator Warning Level for each of the 68 indicators of terrorist intentions, terrorist capability, and target vulnerability (the 3 components of risk), the computer can calculate a warning level for each of those 3 components. The computer calculates the Terrorist Intention Warning Level for a target [on a 5-level scale of: 1) Critical (~90%), 2) Significant (~70%), 3) Minor (~30%), 4) Slight (~10%), and 5) Unknown (or ~50%)] from the Indicator Warning Levels of the active terrorist intention indicators, using averages and rules in utility matrix logic.
Task 12: Assess Terrorist Capability Warning Level (Daily). The computer determines the Terrorist Capability Warning Level for a given country [on a 5-level scale of: 1) Critical (~90%), 2) Significant (~70%), 3) Minor (~30%), 4) Slight (~10%), and 5) Unknown (or ~50%)] by taking the highest of all the Indicator Warning Levels of the active terrorist capability (lethal agent/technique) indicators.
Task 13: Assess Target Vulnerability Warning Level (Daily). The computer program calculates a target's Vulnerability Warning Level [on a 5-level scale of: 1) Critical (~90%), 2) Significant (~70%), 3) Minor (~30%), 4) Slight (~10%), and 5) Unknown (or ~50%)] from the Indicator Warning Levels of the active target vulnerability indicators, using averages and rules in utility matrix logic.
Task 14: Assess Target Risk Warning Level (Daily). Now that analysts have a warning level for each of the 3 components of risk (terrorist intentions, terrorist capability, and target vulnerability), the computer can calculate a risk warning level for a given target. The computer program calculates the Target Risk Warning Level [on a 5-level scale of: 1) Critical (~90%), 2) Significant (~70%), 3) Minor (~30%), 4) Slight (~10%), and 5) Unknown (or ~50%)] by averaging the Terrorist Intention, Terrorist Capability, and Target Vulnerability Warning Levels.
Task 15: Assess Country Risk Warning Level (Daily). The computer program determines the Country Risk Warning Level by taking the highest Target Risk Warning Level in the country.
Task 16: Update/Study Trend Analysis of Indicator Warning Levels (Monthly). This explanation has now shown how the methodology determines the various warning levels, and that they are displayed in 3 primary views: indicator list, target list, and country list. There is also trend analysis for each view. The computer automatically captures the Indicator Warning Level as it stood for the majority of each month and plots it on a graph. Senior Warning Officers write analytical comments discussing warning failures and successes and how the methodology will be adjusted, if necessary.
Task 17: Update/Study Trend Analysis of Target Risk Warning Levels (Monthly). A target-oriented trend analysis is also maintained in the same manner.

Task 18: Update/Study Trend Analysis of Country Risk Warning Levels (Monthly). Finally, a country-oriented trend analysis is maintained in the same manner.
2.5 Phase V: Focus Collectors on Intelligence Gaps to Refine/Update Conclusions (Using Narratives that Describe What We Know, Think, and Need to Know)
Task 19: Write/Update Indicator Warning Narrative: What We Know, Think, & Need to Know (Daily). Thus far, the methodology has provided color-coded graphic representations of warning assessments, but narratives are necessary to explain the details behind each color-coded warning level. Narratives are provided for each of the 3 graphic views: indicators, targets, and countries. The key question set provides an outline of all the major points that the indicator narrative should address. The narrative begins with a description of what the analyst knows and thinks about the indicator, followed by a list of questions from the Indicator Key Questions, Answers, & Evidence Logs covering what the analyst does not know: the Intelligence Gaps for intelligence collectors.
Task 20: Write/Update Executive Summary for Target Warning Narrative: What We Know, Think, & Need to Know (Daily). The computer program combines the indicator narratives into a potential-target narrative. Senior Warning Officers write and maintain executive summaries for each potential-target narrative.
Task 21: Write/Update Executive Summary for Country Warning Narrative: What We Know, Think, & Need to Know (Daily). Finally, the computer program combines the potential-target executive summaries into a country narrative. Again, Senior Warning Officers are responsible for maintaining executive summaries for the country narratives.
2.6 Phase VI: Communicate Conclusions/Give Warning (Using Website Templates)
Task 22: Brief Decision Maker with Website Templates (As Required). The final task of a warning system is to convey the warning. Senior Warning Officers brief the warning levels to a decision maker using the website templates. The website templates are designed to convey the structured reasoning process behind each warning level.
Task 23: Rebrief Decision Maker with New Evidence in Website Templates (As Required). If a decision maker does not heed the warning and does not alter the security posture/FPCON after hearing the analyst's warning assessment, the analyst returns to the decision maker with new evidence to press the assessment until the decision maker heeds the warning. Former Secretary of State Colin Powell describes this requirement eloquently: "[Analysts must] recognize that more often than not, I will throw them out, saying 'na doesn't sound right, get outta here.' What I need from my I&W system at that point is, 'That old bastard, I'm going to prove him wrong.' And go back and accumulate more information, come back the next day and give me some more and get thrown out again. Constantly come back . . . and persuade me that I better start paying attention" (Powell 1991). This is in line with the DCI's statement that the purpose of intelligence is "not to observe and comment, but to warn and protect" (DCI Warning Committee 2002).
3. Conclusion
Rather than face the War on Terrorism with the traditional intuition-dominated approach, this methodology offers a systematic forecasting tool that:
- Guards against 36 of the 42 common warning pitfalls (roughly 86 percent) and, ultimately, improves the terrorism warning process.
- Coordinates analysts in a comprehensive, systematic effort.
- Automates many proven analytic techniques into a comprehensive system, which operates in near-real-time, saves time, saves manpower, and ensures accuracy in calculations and consistency in necessary, recurring judgments.
- Enables collection to feed analysis, and analysis to also feed collection, which is the way the intelligence cycle is supposed to work.
- Fuses interagency intelligence into a meaningful warning picture while still allowing for the compartmenting necessary to protect sensitive sources and methods.
- Provides a continuously updated analysis of competing hypotheses for each potential-terrorist target based on the status of the 68 indicators of terrorism.
- Is the first target-specific terrorism warning system; thus far, systems have only been country specific.
- Is the first terrorism warning system with built-in trend analysis.
- Combines threat (adversary intentions and adversary capability) with friendly vulnerability to determine risk, providing a truer risk assessment than typical intelligence analysis.
- Includes a CD that is the tool to implement this terrorism forecasting system.
Officials in the FBI and the Defense Intelligence Agency (DIA) characterized this terrorism forecasting system as "light-years ahead," "the bedrock for the evolving approach to terrorism analysis," and an "unprecedented forecasting model."
Declaration of Originality
This paper has not already been accepted by, and is not currently under review for, a journal or another conference, nor will it be submitted for such during IA's review period.
[Figure 1. The 3 Primary Warning Picture Views. From the website homepage, the analyst selects a region of the world to reach the Country List View, selects a country to reach the Target List View, and selects a potential target to reach the Indicator List Views, which show the indicators of Terrorist Intentions, Terrorist Capability, and Target Vulnerability.]
References
Anonymous intelligence source, a mid-level intelligence professional at a national intelligence organization who wishes to remain anonymous. Interview by author, 10 July 2002.

Colin Powell on I&W: Address to the Department of Defense Warning Working Group. Distributed by the Joint Military Intelligence College, Washington, D.C., 1991. Videocassette.

Donald Rumsfeld, press conference, quoted in Mary O. McCarthy, "The Mission to Warn: Disaster Looms," Defense Intelligence Journal 7, no. 2 (Fall 1998): 21.

Folker, Robert D., Jr., MSgt, USAF. Intelligence Analysis in Theater Joint Intelligence Centers: An Experiment in Applying Structured Methods. Joint Military Intelligence College Occasional Paper, no. 7. Washington, D.C.: Joint Military Intelligence College, January 2000.

Garst, Ronald D. "Fundamentals of Intelligence Analysis." 5-7 in Intelligence Analysis ANA 630, no. 1, edited by Joint Military Intelligence College. Washington, D.C.: Joint Military Intelligence College, 2000.

Heymann, Hans, Jr. "The Intelligence-Policy Relationship." 53-62 in Intelligence Analysis ANA 630, no. 1, edited by Joint Military Intelligence College. Washington, D.C.: Joint Military Intelligence College, 2000.
[Figure 2. Indicator Key Questions, Answers, & Evidence Log (in Hypothesis Matrix). In tasks 19, 20, and 21, a question answered "Unknown (or ~50%)" automatically appears as a Collection Request on the appropriate Warning Narrative: What We Know, Think, & Need to Know.]
Kam, Ephraim. Surprise Attack: The Victim's Perspective. Cambridge, MA: Harvard University Press, 1988.

McDevitt, James J. Summary of Indicator-Based-Methodology. Unpublished handout, n.p., n.d. Provided in January 2002 at the Joint Military Intelligence College.

National Warning Staff, DCI Warning Committee. "National Warning System." Handout provided in January 2002 at the Joint Military Intelligence College.

Russo, J. Edward, and Paul J. H. Schoemaker. Decision Traps: The Ten Barriers to Brilliant Decision-Making and How to Overcome Them. New York: Rockefeller Center, 1989.

Trochim, William M. K. "Qualitative Data." Cornell University: Research Methods Knowledge Base. 2002. trochim.human.cornell.edu/kb/qualdata.htm (31 May 2002).