Threat Image Projection and Computer Based Training
Are they effective training tools or just check the block training?
John D. Howell Security Consultant
Threat image projection (TIP) and x-ray computer-based training (CBT) have been around for a
while and are used to measure how effective x-ray operators are at detecting threats. In some
parts of the world they are used to certify x-ray operators and are actually "required" before a
security officer can work at a screening checkpoint. Many technical papers have been written
on the effectiveness of TIP and CBT, and all of them conclude that they are effective. However, when
you talk to the subject matter experts (SME's) in the field, you will find that they typically do not
agree on the overall effectiveness of TIP and CBT as training tools.
So, I asked an expert who runs the world’s largest and most comprehensive penetration testing
program and he had this to say about TIP and CBT:
TIP/CBT doesn’t accurately represent threats in the environment that an x-ray operator might
encounter during operational screening. Right now, it is similar to standardized tests in public
schools that are made for the administrators but not the students.
I reached out to SME's all over the world and the general response was the same across the
board. The response below best sums up what I got back from everyone I asked:
I would say that his thoughts are pretty much the same as what I think. TIP is an excellent
tool for screeners to find TIPs, and that is partly because of the objects that are being used to
make the TIPs. With CBT, I have yet to find a screener who has actually improved his/her
skills with CBT as it is currently being used. In my opinion the most valuable training is hands
on with real objects, to get a real understanding of what an IED is composed of and what the
different components look like separated, both in x-rays and live. Since I started using the
realistic IEDs, first in penetration tests and then after the test showing the threat to the screener, I
have seen more improvement than from years of CBT and TIP.
So why are TIP and CBT not considered "realistic" by the subject matter experts in the field? To
really answer that question, you must first understand how the TIP and CBT programs work and
how the threat images are created for each library. I make TIP libraries and have seen all
the x-ray vendors' TIP libraries. This includes the bag sets (clean bags) that are used for CBT
and the operator training systems (simulators) built into the x-rays. As a bomb technician, I know
improvised explosive devices (IED's), and being from America I know guns and knives. So,
when I look at these TIP and CBT programs, I must totally agree with the experts that
they are NOT realistic.
So, let’s break down what is wrong with TIP and CBT and detail why they are not considered
"realistic":
Computer Based Training Problems:
1. Fictional Threat Images (FTI's) are low quality and not realistic
2. Limited number of FTI categories relative to all the different types and configurations of threats
3. Limited number of FTI angles (typically 4) that do not represent real-world complexity
4. Bag sets have no false alarms on almost all CBT systems on the market
5. Bag sets are not categorized by the amount of clutter and how it can affect an FTI, nor do they
have the ability to alarm or not alarm based on FTI placement in the bag
6. No automatic explosive detection or "missed detections" for explosives on almost all CBT
systems on the market
7. Explosive detection "windows" not accurately represented in the CBT system based on end-user
settings (threat mass or size cut-offs)
8. No real quality standards or oversight for libraries
9. FTI difficulty levels poorly standardized in analytics and reporting
10. Bag sets not representative of the country or checkpoint type
11. Virtual keyboards are not the same as actual button pushing
12. Image and programmable key settings are generic and not matched to each end user's SOP
Threat Image Projection Problems:
1. FTI Quality/Realism (same as CBT)
2. Limited FTI Categories (same as CBT)
3. Limited FTI angles (same as CBT)
4. Unrealistic Automatic Detections or Missed Detections for explosives (same as CBT)
5. No real quality standards or oversight (same as CBT)
6. FTI difficulty poorly standardized (same as CBT)
FTI Quality/Realism is poor (Guns):
When you look at the gun images in a threat library, you will see that depending on where the
library was made (US vs. all others) the quality of the guns is much lower outside of the US. The
reason is obvious: getting access to guns overseas to use in a threat library is much more
challenging than in the US. What happens is the x-ray vendors and CBT companies will
use anything they can get, and you end up seeing many toy guns, BB guns, CO2 pellet guns, and
airsoft pistols in the threat libraries. In some of the worst cases these toy guns are
represented to the students as real guns, rather than the students being told what they actually are. Toy guns and
BB/pellet guns are an issue and do need to be located in a bag, but they DO NOT respond in x-ray
like real guns. The below image is from a technical paper on TIP/CBT in which the writers asked the
airport SME's to provide them "REALISTIC" threats to use, and what they were provided was a
toy gun.
When you look at toy/BB/pellet guns next to real guns, you can see that they look nothing alike
in x-ray: the density and Zeff of the metal in a real gun are much higher than in the toys. That fact
alone makes them very different in how they respond in an x-ray system and how they will be
seen by the operator. Toy guns need to be part of an FTI library, but they need to be in a separate
category because of how different they are from a real gun. This ties back into
the fact that the current categories are poorly defined based on how each type of threat responds
in the x-ray. The below image shows the difference between toy guns and a real gun.
The use of high-density automatic detection at checkpoints is becoming more common, and when
you look at how a real gun responds in x-ray vs. the toy guns there is literally no comparison.
Because real guns have a much higher density and Zeff, the high-density automatic detection will
respond much differently than for a toy gun. The high-density feature works by selecting the
maximum amount of absorption based on a percentage and a squared surface area of pixels. This means
that toy guns (BB guns, etc.) will not respond the same way in an x-ray system using this feature.
I have yet to find any CBT that incorporates this into the software and FTI's. As you can see
below, the high-density alert, when set correctly, is very effective at detecting a real gun threat, but
with toy guns it would not detect these items. This is another perfect example of why gun FTI's
are not realistic.
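The high-density logic described above can be sketched in a few lines. This is a minimal illustration of the idea, not any vendor's algorithm: the absorption threshold, window size, and pixel fraction are hypothetical values chosen for the example.

```python
import numpy as np

def high_density_alert(absorption, threshold=0.95, window=8, min_fraction=0.9):
    """Flag an image when any window x window patch of pixels has nearly
    all of its pixels above an absorption threshold -- a sketch of a
    'high-density alert' based on maximum absorption over a squared
    surface area of pixels."""
    h, w = absorption.shape
    for y in range(h - window + 1):
        for x in range(w - window + 1):
            patch = absorption[y:y + window, x:x + window]
            # Alarm when almost every pixel in the patch absorbs above threshold
            if (patch >= threshold).mean() >= min_fraction:
                return True
    return False
```

On this model, the steel slide of a real gun saturates absorption over a contiguous area and trips the alert, while a plastic or pot-metal toy never reaches the threshold, which is exactly why a toy-gun FTI cannot stand in for a real one.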
Guns can be presented to an x-ray operator in several different configurations, and each
configuration looks different. Your TIP and CBT gun threat images must cover all these different
configurations, or they are not realistic. Below are the different configurations in which a single
gun can be presented to an x-ray operator.
FTI Quality/Realism is poor (IED's):
As a bomb technician, when you start looking at the IED images that are being used in TIP and
CBT libraries, it is obvious that whoever is making them is not a bomb technician. You will
typically see massive numbers of IED FTI's that are the worst configurations of IED's I have ever
seen. They are not even remotely what the bad guys are using, nor are they even close to being
technically correct in terms of circuit design. The detonators are not x-ray correct and are never
inserted into the explosives, and the actual explosives are the worst simulations I have ever seen
when compared to real explosives. What is even worse is that the explosive simulants are not
density and Zeff correct, and there are no standards in place for explosive simulants. Below
are some IED FTI's that are perfect examples of the typical quality of images you find in a CBT
or TIP library.
In the below image the FTI is supposed to be a 1 lb TNT demolition block, but when you compare
it to a real one they look nothing alike. This type of issue is very common in IED FTI's. The real
TNT demo block also generated a "Red Box" automatic detection, and you can see from the color
that the simulated TNT has a much lower Zeff and density. It appears that whoever is making these
FTI's does not understand the density and Zeff ranges for explosives.
Another very common issue with IED FTI's is the blasting caps/detonators and their complete
lack of realism. Many libraries I have seen are just using empty tubes with a wire tied in a knot
inside of them, which looks nothing like a real detonator. You will also see entire libraries that
use maybe 2-3 different types of detonators. If you are familiar with
detonators and initiators, you understand that there are many different types on the
commercial market, and each type looks different in an x-ray. When you add the improvised
detonators that terrorists like to use, the number of different types can exceed 30 different
configurations. I have never seen a TIP or CBT IED threat library where the caps are
comprehensive and realistic. The below images are very common, and you see this all the time.
The detonators are typically never inserted into the explosives and are just stuck to the outside.
So why do you see this in just about every CBT and TIP threat library? My answer is that the
people making these do not understand how explosives work; otherwise they would want to make
them as realistic as possible. Either way, this is very unrealistic, especially when you consider that many
SOP's tell the operator that if they see a RED BOX (explosive auto detection) they should look
inside of the red box for a detonator.
One of the biggest realism issues with IED's is that TIP and CBT do not accurately capture how
explosives respond when automatic detection is being used by the site. Most CBT systems on the
market today do not even have explosive auto detection built in. The other
issue is that even when they do show auto detection of the FTI's, the software cannot take into account the
amount of clutter in the bag. Explosives surrounded by higher-Zeff materials typically will not
generate an automatic detection alarm, and TIP and CBT cannot currently simulate this level of
realism.
The next HUGE problem with IED's and the lack of realism in automatic explosive detection is
how "threat mass" detection algorithms are used. CBT vendors do not have any idea what these
ranges are, so anything they develop will not be able to simulate how threat mass affects
detection. I am not a fan of threat mass and think it is a bad concept, but if it is used and NOT
incorporated into TIP and CBT, they will never be realistic. The proof on this issue alone is
when you see high TIP and CBT scores but low penetration testing scores. When you dig
into it, you will find that the highest percentage of missed detections leads directly back to "threat mass".
One of the other major problems with CBT and TIP IED threat images is the fact that you
never see the low-density-range explosives. The entire world seems to think everything that is
explosive has a density of 1.4 g/cc and above (TNT, C-4, Semtex, etc.). I challenge any CBT and
TIP image developer to show me where they used any of the below in their library and verify to
me it was density and Zeff correct.
Homemade explosives (HME's): TATP, HMTD, PETN, AN, ANFO, AN/AL, urea, chlorates,
double-base smokeless powder, single-base smokeless powder, black powder, black powder
replacement, nitromethane, AN + NM, etc.
When you research all the different explosives that are on the market today along with the
HME's you will find that the clear majority fall into the 1.2 g/cc and below range. When you
look at what is being used in TIP and CBT libraries they are almost always high-density
explosive simulants.
The next issue with CBT and TIP IED threat images is the circuits that are being used. I took a
bunch of them and showed them to an electronics technician who specializes in terrorist IED
circuits. He worked at the U.S. EOD school and taught the electronics courses on terrorist
IED firing circuits. Below is a quote from him when I showed him many of the circuits you
find in TIP and CBT threat libraries.
"Yeah, they are all pretty much crap"
Jeff Jennings: CEO Improvised Electronics
As an Explosive Ordnance Disposal technician, I can tell you that, from what I have seen in TIP
and CBT libraries, whoever is making these has no clue about electronics, nor even a remote idea
of what the terrorists are using. This issue has a massive effect on the realism of TIP and CBT
programs, and until SME's start building the IED threats based on actual terrorist
tactics, techniques, and procedures (TTP's), you are going to have IED's that are not realistic in
TIP and CBT threat libraries.
The next issue with IED's in TIP and CBT libraries is that IED's can come in three very different
and distinct configurations. You never see IED's broken down into these categories in any
TIP or CBT library. These configurations have a direct effect on the learning process and on
teaching a person how to identify an IED threat in an x-ray image. If you do not have these IED
subcategories in your FTI's, you are not providing the end user a realistic view of the IED threat.
Breaking down complexity and difficulty in a submenu will also allow you to better track
performance based on how difficult the IED's are to identify. One-size-fits-all categories are
not effective, nor are they realistic.
FTI Quality/Realism is poor (Not Enough Angles):
A major issue with the realism of TIP and CBT threat libraries is that they do not have multiple
angles of each threat object. Most "might" have 2-4 angles for each threat (some only one), and
this is just not realistic. When you run a threat object through an x-ray machine at every possible
angle, it becomes obvious that 1-4 angles are not enough. If an operator is only trained on
threats at angles where the threat is easy to identify, the operator is not
going to be adequately prepared when presented with a threat at a hard-to-identify angle.
A bare minimum should be at least 8 angles for each threat object, and there are
recommendations for even more. If a CBT or TIP library has only a few angles for each
threat, that threat library is not realistic.
Even when you look at IED's, you can see that the angle at which you scan a
threat changes the overall complexity and difficulty of detecting it. Unless the
screener is exposed to all the different angles, they will not be properly prepared to identify the
threat in a bag. TIP and CBT software must incorporate all these angles into the library for each
threat, or the realism will suffer. This issue can also be tied directly back to sites that score high
on TIP and CBT but score low on penetration testing. The below image is an example of the
same IED x-rayed at 16 different angles. Each one of the images is different, and some of
them are drastically different. For a screener to learn how to identify a threat in an x-ray,
they must be exposed to and trained on all the different angles at which the threat can potentially be
presented to them.
Bag Sets for CBT Quality/Realism is Poor:
When I was working at the U.S. Marshals Service and we started having the Court Security
Officers use the Operator Training System (OTS), all the bags in the session bag files were
airport bags, and making it even worse, you could see that the bags were captured in Europe. The odd
power cords in the bags and numerous other items almost made using the system ineffective. The
Marshals deal with people coming to a courthouse, not somebody getting
on a plane, and most CBT systems do not have bag sets for the different types of entry points.
Even if they do have them, they never have any of the very common false alarms that a screener
would see at a checkpoint. The Marshals use the automated explosive detection and the high-
density alert on their systems. These auto-detection features will generate false alarms, and any
CBT or x-ray simulator program MUST have those false alarms built into the session bags on the
system. If these false alarms are not part of the system, the realism of the CBT software is very
low. We were able to fix this for the U.S. Marshals Service by taking bag file images from the
history files (online recording) of an x-ray unit that had been running at one of their courthouses.
Because the auto-detection features were turned on, all the false alarms were recorded in the
bags. We used these bags to replace the European airline-type bags, and the realism when using
the operator training system improved dramatically.
The number of TIP images in the current standard library size is too small:
Most of the current standards and models for the size of a TIP or CBT library are very small
once you look at how many different types of threats are out there and the number of
orientations required. A common number you will see is 1000 to 1500 threats in a library,
and if each of those threats is run at 4 different orientations, you are looking at a total of 6000
images. That might sound like a large number, but when you look at the numbers more
closely you will find that a 6000-image TIP library only covers a small segment of all the
different threats a screener could encounter. As a bomb technician, I put together a set of circuits
that covered all the different ways an IED could be set off. I was able to construct 125 different
IED circuits that were either electrically or non-electrically initiated. I then collected
commercial, military, and homemade explosive (HME) simulants and packaged them in 1/2, 1,
1.5, and 2-pound configurations. I ended up with over 200 different explosive combinations that
could be married up to the 125 different IED circuits.
If I were to attach each IED circuit to each different explosive once in a holistic configuration
and once in a component-based configuration, you would end up with 50,000 different explosive
and circuit combinations. Now we must add the total number of orientations for each IED, and
the minimum ideal number would be 8 orientations per threat object. That would be 400,000
different images for just 125 different IED circuits and 200 different explosives.
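The arithmetic behind those totals can be checked directly:

```python
# Counts taken from the text above
circuits = 125        # electrically and non-electrically initiated IED circuits
explosives = 200      # explosive simulant and weight combinations
configurations = 2    # holistic and component-based
orientations = 8      # minimum ideal number of angles per threat object

pairings = circuits * explosives * configurations
images = pairings * orientations
print(pairings)  # 50000
print(images)    # 400000
```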
When you compare 400,000 IED images to a standard 6000-image library, you can see that
realistically we are only exposing the x-ray operator to a very small sample of all the
potential threats. When you use a 6000-image library to evaluate a screener, you are only
evaluating them on that specific library and the threats in it. The proof of this is when
you see penetration testing scores that are much lower than the TIP and CBT scores: the
penetration test is exposing the screener to a threat they have never seen before.
How TIP and CBT are set up for collecting performance data is inefficient and not
realistic:
When you look at how TIP and CBT systems measure a screener's performance, the current
model needs to be improved. Among the issues I have discussed, one of the biggest
problems is how the systems categorize the difficulty of detecting threats in their many
different configurations. This complexity is not accurately broken down in the basic model used
for TIP and CBT scoring and the downloadable reports. It is possible to develop a more
comprehensive breakdown of the threats and bags based on varying factors of complexity. To
accomplish this, you have to create a standard model for how each threat category and bag
complexity will be measured. This will result in a report that provides a more detailed view
of each screener's performance based on the difficulty of the threat and how it was
presented. The one-size-fits-all approach currently being used is not an accurate assessment
of screener performance. This detailed breakdown will also help identify areas where follow-up
training needs to focus.
Categorizing session bags based on the amount of clutter
The amount of clutter in a bag plays a huge role in a screener's ability to detect a threat. The
amount of bag clutter can also play havoc with any auto detection features that are being used,
especially explosive detection. Bags must be measured for clutter and be given a level of
difficulty (e.g. LV 1-3). One method to accomplish this is by simply measuring the number of
pixels that are in the higher Zeff ranges (11 and up). The above image is an example of how this
could be accomplished, and each bag is given a level from 1 through 3 of difficulty.
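The pixel-counting approach described above could be sketched as follows. The fraction cut-offs separating the three levels are hypothetical illustration values, not an established standard.

```python
import numpy as np

def clutter_level(zeff_map, high_z=11, cutoffs=(0.05, 0.15)):
    """Grade a bag image 1-3 by the fraction of pixels whose effective
    atomic number (Zeff) is in the higher ranges (11 and up)."""
    fraction = (np.asarray(zeff_map) >= high_z).mean()
    if fraction < cutoffs[0]:
        return 1  # little high-Zeff material: easy
    if fraction < cutoffs[1]:
        return 2  # moderate clutter
    return 3      # heavy high-Zeff clutter: hard
```

A mostly organic bag (clothing, paper) would grade Level 1, while a bag packed with electronics and metal would grade Level 3, which also flags the bags most likely to defeat the explosive auto detection.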
Categorizing IED's based on the way they are constructed
The next issue is to better categorize the threats based on the level of difficulty they present
to a screener. In the example above, we have taken the category "IED's" and added
subcategories based solely on the level of difficulty they can give the operator. Level 1 is an IED
threat displayed in a component-based configuration, which would normally be easy for an x-
ray operator to find. Level 2 becomes more challenging, a holistic layout in which the IED
components are more difficult to identify. The last level is the most challenging for an x-ray
operator: an explosive device that has been hidden inside another object. These are called
concealed IED's and are the most challenging type of IED to identify in an x-ray.
Categorizing IED's/guns/knives/other threats based on the angle
at which they are presented
When a CBT or TIP library is created, there is currently no requirement or standard to categorize
a threat based on how difficult its angle is. As already shown, when you scan threats at more than one
angle, many of the angles are much more difficult to identify than
others. These differences need to be identified and categorized in the threat library and software.
The entire concept behind using TIP and CBT is to measure performance and identify the
effectiveness of training. You cannot do this accurately if you do not have a detailed breakdown
of what the screener is and is not detecting. In the example above, we have set three
levels of difficulty for each threat based on the complexity of the angle. Angles that allow easy
identification of the threat would be categorized as "easy", and as the angle becomes more
challenging, the hardest angle would be categorized as "hard".
Categorizing IED's (and guns) based on the presence or absence of an
autodetection
Many studies have proven the effectiveness of using automatic detection, but most
CBT systems do not incorporate this capability. When they do incorporate it, they typically have
everything alarm, which is not realistic. When a TIP or CBT threat image is captured, it is
normally done by placing the threat onto the belt of the x-ray and running it by itself. In this
configuration the x-ray has a higher chance of auto-detecting the threat because the
explosive is not being affected by bag clutter. The reality is that even though auto detection is
effective, it does not always work, and explosive threats can be missed by the system. When you
add threat mass to this equation, the number of potential missed detections increases. To make
TIP and CBT more realistic, each threat object should be scanned with the auto detection both on
and off. This allows you to capture the threat object in both configurations in which the operator
could potentially see the item in a bag. To better score performance, the presence of an auto
detection would make the threat an "easy" level category and the absence of any auto detection
would be categorized as "hard". Exposing your screeners to both scenarios is more realistic and
needs to be incorporated into any TIP and CBT program.
Examples of an improved scoring matrix for TIP and CBT
Once you have created a better process for categorizing the complexity of the threats and bags, the
combination of all of these factors can be used to give each scenario an overall complexity
score. Using the information we provided on establishing difficulty levels for threats based on
angles, bag clutter, auto detection, and construction techniques (holistic, component, concealed),
we are going to show how different bags with threats can be scored. This improved scoring
system will allow improved reporting on screener performance and provide a much better
assessment of the quality of your training.
In this first example, we have a bag that is low in clutter (Level 1 Easy) but the IED in the bag is
concealed inside of a curling iron (Level 3 hard). The IED inside of the curling iron is visible
(easy angle Level 1) but the screener would really have to know what they are looking for to
identify it as a threat. The IED explosive material did not auto detect but there are false alarms in
the bag (Level 3 hard) that could potentially distract the screener from the real threat. The
combination of all the different factors gives you a difficulty level of 7-8 intermediate for the
overall scenario.
In the next example the bag has a higher level of clutter and is rated as a Level 2 difficulty. The
IED is at a holistic configuration (Level 2) in the bag and at a hard angle (Level 3). Because none
of the high Zeff material is blocking the explosive material in the IED, the x-ray automatic
detection was able to “red box” the explosive and there are also no false alarms in the bag (Level
1 easy). The overall difficulty level for this scenario is 7-8 intermediate.
In the last example, the bag has a very high level of high Zeff materials that make trying to
identify shape challenging (Level 3 hard). The IED in this bag is in a holistic configuration
(Level 2) and at a hard angle (Level 3). The explosive did not “red box” and there is also a false
alarm in the bag (Level 3). The overall difficulty level for this bag is 11-12 master.
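The three worked examples above all follow the same pattern: four 1-3 difficulty factors summed into an overall band. A minimal sketch of that scoring follows; the "easy" and "advanced" labels for totals outside 7-8 and 11-12 are my assumption, since the examples only name "intermediate" and "master".

```python
def scenario_score(clutter, construction, angle, false_alarms):
    """Combine four 1-3 difficulty factors into a total and an overall band."""
    total = clutter + construction + angle + false_alarms
    if total <= 6:
        band = "easy"          # assumed label
    elif total <= 8:
        band = "intermediate"  # named in the first two examples
    elif total <= 10:
        band = "advanced"      # assumed label
    else:
        band = "master"        # named in the last example
    return total, band

# The three examples above: (clutter, construction, angle, false alarms)
print(scenario_score(1, 3, 1, 3))  # (8, 'intermediate')
print(scenario_score(2, 2, 3, 1))  # (8, 'intermediate')
print(scenario_score(3, 2, 3, 3))  # (11, 'master')
```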
By using a scoring system as described above, you would create a higher level of realism in
TIP and CBT training programs. You would also be able to generate much more detailed reports
that provide more information about a screener's capability to detect threats.
This information in turn would allow trainers to focus training efforts more
precisely based on what the testing has exposed.
The below image is the standard report download for TIP screener performance. This style of
report from a TIP program has been around for a very long time, and it provides many different
data points that can be examined. Depending on the agency, the most commonly used data points
are the overall detection percentage, based on the number of threats presented and detected, or the
d-prime number. The d-prime number goes beyond raw detection percentage by also accounting
for the number of false alarms, with the speed of the operator's decisions typically reported alongside it.
Both measurements are used to verify whether a screener is detecting threats at a level that is
acceptable within that agency/group.
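For reference, d-prime is the standard signal-detection sensitivity index, computed from the hit rate and the false-alarm rate. A minimal sketch; the +0.5 log-linear correction is one common convention for avoiding infinite z-scores when a rate is exactly 0% or 100%.

```python
from statistics import NormalDist

def d_prime(hits, misses, false_alarms, correct_rejections):
    """Sensitivity index d' = z(hit rate) - z(false-alarm rate)."""
    # Log-linear correction keeps the z-scores finite at 0% or 100%
    hit_rate = (hits + 0.5) / (hits + misses + 1)
    fa_rate = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1)
    z = NormalDist().inv_cdf
    return z(hit_rate) - z(fa_rate)

# A screener who detects 90 of 100 TIPs with 10 false alarms on 100 clean bags
print(round(d_prime(90, 10, 10, 90), 2))  # 2.52
```

A screener who alarms on everything scores a high detection percentage but a d' near zero, which is exactly why the false-alarm side of the report matters.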
The problem with these measurements is that the scoring is based solely on the quality of the TIP
library, and as we have already discussed, many of the libraries on the market today are not
realistic. If you took the scoring matrix we suggested earlier that classifies each threat by
level of difficulty, you could also score the library itself and give it an overall level of
difficulty. I would surmise that most of the libraries would be a Level 1 (easy), and I would be
surprised to see any threat library score higher.
Once you have updated the bag sets and libraries and assigned a level of difficulty to all aspects
of a threat library, the screener performance report could also be improved to break out each
screener's ability to detect threats by the difficulty and complexity of the threat. This would be a
much more efficient system and would provide the manager a more detailed assessment of their
screeners' ability to detect threats. It would also give your trainers a better view of what areas
they need to focus on to improve training.
There are many ways this could be accomplished, but the above is just a suggested model that
would provide a much more relevant picture of screener detection capability than what we are
using today. We know that a threat in a cluttered bag, at a hard angle, with no auto detection is
going to have a lower detection rate than a threat in an uncluttered bag. The problem is that we
do not track this data with the current systems we are using. TIP and CBT need a complete
rework, and new standards must be established to make the systems more realistic. Until the
agencies address this issue, TIP and CBT will be limited in their ability to train screeners to
detect threats.

Production management stage 1
Benedict Terry
 
Evaluation – media
Evaluation – mediaEvaluation – media
Evaluation – media
guestbbf2222a
 
Production management stage 2 2015
Production management stage 2 2015Production management stage 2 2015
Production management stage 2 2015
Benedict Terry
 
Evaluation – media
Evaluation – mediaEvaluation – media
Evaluation – media
guestbbf2222a
 
Production management stage 2 2015
Production management stage 2 2015Production management stage 2 2015
Production management stage 2 2015
Benedict Terry
 

Similar to Threat image projection (TIP) and computer based training (CBT) viable training or just check the block (20)

Production management stage 1
Production management stage 1Production management stage 1
Production management stage 1
 
Production management stage 1
Production management stage 1Production management stage 1
Production management stage 1
 
Production management stage 2 2015
Production management stage 2 2015Production management stage 2 2015
Production management stage 2 2015
 
Production management stage 1
Production management stage 1Production management stage 1
Production management stage 1
 
Evaluation – media
Evaluation – mediaEvaluation – media
Evaluation – media
 
Immutable Laws
Immutable LawsImmutable Laws
Immutable Laws
 
Production management stage 2 2015
Production management stage 2 2015Production management stage 2 2015
Production management stage 2 2015
 
Evaluation Task 1
Evaluation Task 1Evaluation Task 1
Evaluation Task 1
 
Evaluation – media
Evaluation – mediaEvaluation – media
Evaluation – media
 
Evaluation one media
Evaluation one mediaEvaluation one media
Evaluation one media
 
View Argumentative Essay Format Template The Late
View Argumentative Essay Format Template The LateView Argumentative Essay Format Template The Late
View Argumentative Essay Format Template The Late
 
FMP proposal
FMP proposalFMP proposal
FMP proposal
 
3. proposal sf 2017
3. proposal sf 20173. proposal sf 2017
3. proposal sf 2017
 
Evaluation one comparing
Evaluation one comparing Evaluation one comparing
Evaluation one comparing
 
Gametech orlandothefutureofvirtualworlds
Gametech orlandothefutureofvirtualworldsGametech orlandothefutureofvirtualworlds
Gametech orlandothefutureofvirtualworlds
 
AS Foundation Project Evaluation
AS Foundation Project EvaluationAS Foundation Project Evaluation
AS Foundation Project Evaluation
 
AS Foundation Project Evaluation
AS Foundation Project EvaluationAS Foundation Project Evaluation
AS Foundation Project Evaluation
 
Proposal
ProposalProposal
Proposal
 
How To Write A Transfer Essay
How To Write A Transfer EssayHow To Write A Transfer Essay
How To Write A Transfer Essay
 
Production management stage 2 2015
Production management stage 2 2015Production management stage 2 2015
Production management stage 2 2015
 

Recently uploaded

Hubble Asteroid Hunter III. Physical properties of newly found asteroids
Hubble Asteroid Hunter III. Physical properties of newly found asteroidsHubble Asteroid Hunter III. Physical properties of newly found asteroids
Hubble Asteroid Hunter III. Physical properties of newly found asteroids
Sérgio Sacani
 
Bacterial Identification and Classifications
Bacterial Identification and ClassificationsBacterial Identification and Classifications
Bacterial Identification and Classifications
Areesha Ahmad
 
Chemical Tests; flame test, positive and negative ions test Edexcel Internati...
Chemical Tests; flame test, positive and negative ions test Edexcel Internati...Chemical Tests; flame test, positive and negative ions test Edexcel Internati...
Chemical Tests; flame test, positive and negative ions test Edexcel Internati...
ssuser79fe74
 
Labelling Requirements and Label Claims for Dietary Supplements and Recommend...
Labelling Requirements and Label Claims for Dietary Supplements and Recommend...Labelling Requirements and Label Claims for Dietary Supplements and Recommend...
Labelling Requirements and Label Claims for Dietary Supplements and Recommend...
Lokesh Kothari
 
GUIDELINES ON SIMILAR BIOLOGICS Regulatory Requirements for Marketing Authori...
GUIDELINES ON SIMILAR BIOLOGICS Regulatory Requirements for Marketing Authori...GUIDELINES ON SIMILAR BIOLOGICS Regulatory Requirements for Marketing Authori...
GUIDELINES ON SIMILAR BIOLOGICS Regulatory Requirements for Marketing Authori...
Lokesh Kothari
 
Biopesticide (2).pptx .This slides helps to know the different types of biop...
Biopesticide (2).pptx  .This slides helps to know the different types of biop...Biopesticide (2).pptx  .This slides helps to know the different types of biop...
Biopesticide (2).pptx .This slides helps to know the different types of biop...
RohitNehra6
 
Presentation Vikram Lander by Vedansh Gupta.pptx
Presentation Vikram Lander by Vedansh Gupta.pptxPresentation Vikram Lander by Vedansh Gupta.pptx
Presentation Vikram Lander by Vedansh Gupta.pptx
gindu3009
 

Recently uploaded (20)

Animal Communication- Auditory and Visual.pptx
Animal Communication- Auditory and Visual.pptxAnimal Communication- Auditory and Visual.pptx
Animal Communication- Auditory and Visual.pptx
 
Botany 4th semester file By Sumit Kumar yadav.pdf
Botany 4th semester file By Sumit Kumar yadav.pdfBotany 4th semester file By Sumit Kumar yadav.pdf
Botany 4th semester file By Sumit Kumar yadav.pdf
 
COST ESTIMATION FOR A RESEARCH PROJECT.pptx
COST ESTIMATION FOR A RESEARCH PROJECT.pptxCOST ESTIMATION FOR A RESEARCH PROJECT.pptx
COST ESTIMATION FOR A RESEARCH PROJECT.pptx
 
Hubble Asteroid Hunter III. Physical properties of newly found asteroids
Hubble Asteroid Hunter III. Physical properties of newly found asteroidsHubble Asteroid Hunter III. Physical properties of newly found asteroids
Hubble Asteroid Hunter III. Physical properties of newly found asteroids
 
SAMASTIPUR CALL GIRL 7857803690 LOW PRICE ESCORT SERVICE
SAMASTIPUR CALL GIRL 7857803690  LOW PRICE  ESCORT SERVICESAMASTIPUR CALL GIRL 7857803690  LOW PRICE  ESCORT SERVICE
SAMASTIPUR CALL GIRL 7857803690 LOW PRICE ESCORT SERVICE
 
GBSN - Biochemistry (Unit 1)
GBSN - Biochemistry (Unit 1)GBSN - Biochemistry (Unit 1)
GBSN - Biochemistry (Unit 1)
 
Bacterial Identification and Classifications
Bacterial Identification and ClassificationsBacterial Identification and Classifications
Bacterial Identification and Classifications
 
Chemical Tests; flame test, positive and negative ions test Edexcel Internati...
Chemical Tests; flame test, positive and negative ions test Edexcel Internati...Chemical Tests; flame test, positive and negative ions test Edexcel Internati...
Chemical Tests; flame test, positive and negative ions test Edexcel Internati...
 
Vip profile Call Girls In Lonavala 9748763073 For Genuine Sex Service At Just...
Vip profile Call Girls In Lonavala 9748763073 For Genuine Sex Service At Just...Vip profile Call Girls In Lonavala 9748763073 For Genuine Sex Service At Just...
Vip profile Call Girls In Lonavala 9748763073 For Genuine Sex Service At Just...
 
Labelling Requirements and Label Claims for Dietary Supplements and Recommend...
Labelling Requirements and Label Claims for Dietary Supplements and Recommend...Labelling Requirements and Label Claims for Dietary Supplements and Recommend...
Labelling Requirements and Label Claims for Dietary Supplements and Recommend...
 
9654467111 Call Girls In Raj Nagar Delhi Short 1500 Night 6000
9654467111 Call Girls In Raj Nagar Delhi Short 1500 Night 60009654467111 Call Girls In Raj Nagar Delhi Short 1500 Night 6000
9654467111 Call Girls In Raj Nagar Delhi Short 1500 Night 6000
 
GUIDELINES ON SIMILAR BIOLOGICS Regulatory Requirements for Marketing Authori...
GUIDELINES ON SIMILAR BIOLOGICS Regulatory Requirements for Marketing Authori...GUIDELINES ON SIMILAR BIOLOGICS Regulatory Requirements for Marketing Authori...
GUIDELINES ON SIMILAR BIOLOGICS Regulatory Requirements for Marketing Authori...
 
Biopesticide (2).pptx .This slides helps to know the different types of biop...
Biopesticide (2).pptx  .This slides helps to know the different types of biop...Biopesticide (2).pptx  .This slides helps to know the different types of biop...
Biopesticide (2).pptx .This slides helps to know the different types of biop...
 
TEST BANK For Radiologic Science for Technologists, 12th Edition by Stewart C...
TEST BANK For Radiologic Science for Technologists, 12th Edition by Stewart C...TEST BANK For Radiologic Science for Technologists, 12th Edition by Stewart C...
TEST BANK For Radiologic Science for Technologists, 12th Edition by Stewart C...
 
Isotopic evidence of long-lived volcanism on Io
Isotopic evidence of long-lived volcanism on IoIsotopic evidence of long-lived volcanism on Io
Isotopic evidence of long-lived volcanism on Io
 
Presentation Vikram Lander by Vedansh Gupta.pptx
Presentation Vikram Lander by Vedansh Gupta.pptxPresentation Vikram Lander by Vedansh Gupta.pptx
Presentation Vikram Lander by Vedansh Gupta.pptx
 
❤Jammu Kashmir Call Girls 8617697112 Personal Whatsapp Number 💦✅.
❤Jammu Kashmir Call Girls 8617697112 Personal Whatsapp Number 💦✅.❤Jammu Kashmir Call Girls 8617697112 Personal Whatsapp Number 💦✅.
❤Jammu Kashmir Call Girls 8617697112 Personal Whatsapp Number 💦✅.
 
All-domain Anomaly Resolution Office U.S. Department of Defense (U) Case: “Eg...
All-domain Anomaly Resolution Office U.S. Department of Defense (U) Case: “Eg...All-domain Anomaly Resolution Office U.S. Department of Defense (U) Case: “Eg...
All-domain Anomaly Resolution Office U.S. Department of Defense (U) Case: “Eg...
 
Pulmonary drug delivery system M.pharm -2nd sem P'ceutics
Pulmonary drug delivery system M.pharm -2nd sem P'ceuticsPulmonary drug delivery system M.pharm -2nd sem P'ceutics
Pulmonary drug delivery system M.pharm -2nd sem P'ceutics
 
Nanoparticles synthesis and characterization​ ​
Nanoparticles synthesis and characterization​  ​Nanoparticles synthesis and characterization​  ​
Nanoparticles synthesis and characterization​ ​
 

Threat image projection (TIP) and computer based training (CBT) viable training or just check the block

  • 1. Threat Image Projection and Computer-Based Training
Are they effective training tools or just check-the-block training?
John D. Howell, Security Consultant

Threat image projection (TIP) and x-ray computer-based training (CBT) have been around for a while and are used to measure how effectively x-ray operators detect threats. In some parts of the world they are used to certify x-ray operators and are actually "required" before a security officer can work at a screening checkpoint. Many technical papers have been written on the effectiveness of TIP and CBT, and all of them conclude that they are effective. However, when you talk to the subject matter experts (SMEs) in the field, you will find that they typically do not agree on the overall effectiveness of TIP and CBT as training tools.

So, I asked an expert who runs the world's largest and most comprehensive penetration testing program, and he had this to say about TIP and CBT:

TIP/CBT doesn't accurately represent threats in the environment that an x-ray operator might encounter during operational screening. Right now, it is similar to standardized tests in public schools that are made for the administrators but not the students.

I reached out to SMEs all over the world, and the general response was the same across the board. The below best sums up what I got back from everybody I asked:

I would say that his thoughts are pretty much the same as what I think. TIP is an excellent tool for screeners to find TIPs, and that is partly because of the objects that are being used to make the TIPs. With CBT, I have yet to find a screener who has actually improved his/her skills with CBT as it is currently being used. In my opinion the most valuable training is hands-on with real objects, to get a real understanding of what an IED is composed of and how the different components look separated, both in x-rays and live.
Since I started using realistic IEDs, first in penetration tests and then, after the test, showing the threat to the screener, I have seen more improvement than in years of CBT and TIP.

So why are TIP and CBT not considered "realistic" by the subject matter experts in the field? To answer that question, you must first understand how the TIP and CBT programs work and how the threat images are created for each library. I make TIP libraries and have seen all the x-ray vendors' TIP libraries. This includes the bag sets (clean bags) that are used for CBT and the operator training systems (simulators) built into the x-rays. As a bomb technician, I know improvised explosive devices (IEDs), and being from America I know guns and knives. So, when I look closely at these TIP and CBT programs, I must totally agree with the experts that they are NOT realistic.
  • 2. So, let's break down what is wrong with TIP and CBT and detail why they are not considered "realistic":

Computer-Based Training Problems:
1. Fictional threat images (FTIs) are low quality and are not realistic
2. Limited number of FTI categories relative to all the different types and configurations of threats
3. Limited number of FTI angles (typically 4), which does not represent real-world complexity
4. Bag sets have no false alarms on almost all CBT systems on the market
5. Bag sets are not categorized by the amount of clutter and how it can affect the FTI, nor can they alarm or not alarm based on FTI placement in the bag
6. No automatic explosive detection or "missed detections" for explosives on almost all CBT systems on the market
7. Explosive detection "windows" are not accurately represented in CBT systems based on end-user settings (threat mass or size cutoffs)
8. No real quality standards or oversight for libraries
9. FTI difficulty levels are poorly standardized in analytics and reporting
10. Bag sets are not representative of the country or checkpoint type
11. Virtual keyboards are not the same as actual button pushing
12. Image and programmable key settings are generic and not matched to each end user's SOP

Threat Image Projection Problems:
1. FTI quality/realism (same as CBT)
2. Limited FTI categories (same as CBT)
3. Limited FTI angles (same as CBT)
4. Unrealistic automatic detections or missed detections for explosives (same as CBT)
5. No real quality standards or oversight (same as CBT)
6. FTI difficulty poorly standardized (same as CBT)
  • 3. FTI Quality/Realism is poor (Guns):

When you look at the gun images in a threat library, you will see that the quality depends on where the library was made: outside the US it drops off sharply. The reason is obvious; getting access to real guns overseas for use in a threat library is much more challenging than in the US. What happens is that the x-ray vendors and CBT companies will use anything they can get, and you end up seeing many toy guns, BB guns, pellet/CO2 guns, and airsoft pistols in the threat libraries. In some of the worst cases these toy guns are presented to students as real guns rather than identified for what they actually are. Toy guns and BB/pellet guns are an issue and do need to be located in a bag, but they DO NOT respond in x-ray like real guns. The below image is from a technical paper on TIP/CBT; the writers asked the airport SMEs to provide them "REALISTIC" threats to use, and what they were provided was a toy gun.
  • 4. When you look at toy/BB/pellet guns next to real guns, you can see that they look nothing alike in x-ray: the density and Zeff of the metal in a real gun is much higher than in the toys. That fact alone makes them very different in how they respond in an x-ray system and how they will be seen by the operator. Toy guns need to be part of an FTI library, but they need to be in a separate category because of how different they are from a real gun. This ties back to the point that the current categories are poorly defined relative to how each type of threat responds in the x-ray. The below image shows the difference between toy guns and a real gun.

The use of high-density automatic detection at checkpoints is becoming more common, and when you look at how a real gun responds in x-ray versus toy guns there is literally no comparison. Because real guns have a much higher density and Zeff, high-density automatic detection will respond to them very differently than to a toy gun. The high-density feature works by selecting a maximum amount of absorption as a percentage over a squared surface area of pixels. This means that toy guns (BB guns, etc.) will not respond the same way in an x-ray system using this feature. I have yet to find any CBT that incorporates this into the software and FTIs. As you can see below, a high-density alert, when set correctly, is very effective at detecting a real gun threat, but it would not detect toy guns. This is another perfect example of why gun FTIs are not realistic.
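The percentage-absorption-over-area rule described above might be sketched roughly as follows. The cutoff and minimum-area numbers here are invented for illustration only; they are not any vendor's actual settings, and a real implementation would work on connected regions rather than a raw pixel count.

```python
# Hypothetical sketch of a high-density alert rule: flag a bag image when
# enough pixels exceed an absorption cutoff. All thresholds are assumptions.

def high_density_alert(image, absorption_cutoff=0.92, min_area_px=400):
    """image: 2-D list of per-pixel absorption fractions (0.0-1.0).
    Returns True when the count of pixels above the cutoff meets the
    minimum surface area (approximated here as a raw pixel count)."""
    dark = sum(1 for row in image for px in row if px >= absorption_cutoff)
    return dark >= min_area_px

# A real steel handgun frame absorbs far more than a zinc-alloy or plastic
# toy, so the same rule fires on the real gun but not on the toy.
real_gun_region = [[0.97] * 30 for _ in range(20)]   # 600 px of dense metal
toy_gun_region  = [[0.55] * 30 for _ in range(20)]   # lighter alloy/plastic

print(high_density_alert(real_gun_region))  # True
print(high_density_alert(toy_gun_region))   # False
```

This is why CBT software that simply pastes a toy-gun FTI into a bag cannot reproduce the operator's real experience of the feature: the alert's behavior depends on material density, which the toy does not share.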
  • 5. Guns can be presented to an x-ray operator in several different configurations, and each configuration looks different. Your TIP and CBT gun threat images must cover all of these configurations, or they are not realistic. Below are the different configurations in which a single gun can be presented to an x-ray operator.
  • 6. FTI Quality/Realism is poor (IEDs):

As a bomb technician, when I look at the IED images being used in TIP and CBT libraries, it is obvious that whoever is making them is not a bomb technician. You will typically see massive numbers of IED FTIs that are the worst configurations of IEDs I have ever seen. They are not even remotely what the bad guys are using, nor are they close to technically correct in terms of circuit design: the detonators are not x-ray correct and are never inserted into the explosives, and the simulated explosives are the worst simulations I have ever seen when compared to real explosives. What is even worse is that the explosive simulants are not density and Zeff correct, and there are no standards in place for explosive simulants. Below are some IED FTIs that are perfect examples of the typical quality of images you find in a CBT or TIP library.
  • 7. In the below image the FTI is supposed to be a 1 lb TNT demolition block, but when you compare it to a real one, they look nothing alike. This type of issue is very common in IED FTIs. The real TNT demo block also generated a "red box" automatic detection, and you can see from the color that the simulated TNT is of much lower Zeff and density. It appears that whoever is making these FTIs does not understand the density and Zeff ranges for explosives.

Another very common issue with IED FTIs is the blasting caps/detonators and their complete lack of realism. Many libraries I have seen just use empty tubes with a wire tied in a knot inside them, which looks nothing like a real detonator. You will also see entire libraries that use maybe 2-3 different types of detonators across the whole library. If you are familiar with detonators and initiators, you will know that there are many different types on the commercial market, and each type looks different in an x-ray. When you add the improvised detonators that terrorists like to use, the number of different configurations can exceed 30. I have never seen a TIP or CBT IED threat library where the caps are comprehensive and realistic.

The below images are very common; you see this all the time. The detonators are typically never inserted into the explosives and are just stuck to the outside. So why do you see this in just about every CBT and TIP threat library? My answer is that the people making these do not understand how explosives work; otherwise they would want to make them as realistic as possible. Either way this is very unrealistic, especially when you consider that many SOPs tell you that if you see a RED BOX (explosive auto detection) you look inside the red box for a detonator.
  • 8. One of the biggest realism issues with IEDs is that TIP and CBT do not accurately capture how explosives respond when automatic detection is being used by the site. Most CBT systems on the market today do not even have explosive auto detection built in. Even when they do show auto detection of the FTIs, the software cannot take into account the amount of clutter in the bag. Explosives surrounded by higher-Zeff materials typically will not generate an automatic detection alarm, and TIP and CBT cannot currently simulate this level of realism.

The next HUGE problem with IED realism and automatic explosive detection is how "threat mass" detection algorithms are used. CBT vendors have no idea what these ranges are, so anything they develop will not be able to simulate how threat mass affects detection. I am not a fan of threat mass and think it is a bad concept, but if it is used and NOT incorporated into TIP and CBT, they will never be realistic. The proof on this issue alone is when you see high TIP and CBT scores but low penetration testing scores; when you dig into it, you will find that the highest percentage of missed detections leads directly back to threat mass.
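As a rough illustration of how a threat-mass cutoff changes what auto-detection will flag, here is a minimal sketch. The Zeff, density, and mass numbers are invented placeholders, not any vendor's actual detection window; the point is only the logic of a mass gate on top of a material-signature match.

```python
# Illustrative sketch (assumed values, not a real algorithm) of a
# "threat mass" window layered on top of material-signature detection.

def auto_detect(zeff, density, est_mass_g,
                zeff_range=(6.0, 8.5), density_range=(1.1, 1.9),
                mass_cutoff_g=400):
    """Flag only when the material signature matches AND the estimated
    mass meets the configured cutoff. All numbers are placeholders."""
    material_match = (zeff_range[0] <= zeff <= zeff_range[1]
                      and density_range[0] <= density <= density_range[1])
    return material_match and est_mass_g >= mass_cutoff_g

# Same explosive signature, but split below the mass cutoff: no alarm.
print(auto_detect(7.3, 1.6, 500))  # True  - a single 500 g charge alarms
print(auto_detect(7.3, 1.6, 250))  # False - a 250 g piece slips under
```

If CBT software projects a red box on every explosive FTI regardless of mass, it trains screeners to expect an alarm that the live system, configured with a window like this, would never produce.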
  • 9. One of the other major problems with CBT and TIP IED threat images is that you never see the low-density range of explosives. The entire world seems to think everything explosive has a density of 1.4 g/cc and above (TNT, C-4, Semtex, etc.). I challenge any CBT and TIP image developer to show me where they used any of the below in their library, and to verify that it was density and Zeff correct:

Homemade explosives (HMEs), TATP, HMTD, PETN, AN, ANFO, AN/AL, urea, chlorates, double-base smokeless powder, single-base smokeless powder, black powder, black powder replacement, nitromethane, AN + NM, etc.

When you research all the different explosives on the market today along with the HMEs, you will find that the clear majority fall into the range of 1.2 g/cc and below. Yet when you look at what is being used in TIP and CBT libraries, they are almost always high-density explosive simulants.

The next issue with CBT and TIP IED threat images is the circuits being used. I took a batch of them and showed them to an electronics technician who specializes in terrorist IED circuits. He worked at the U.S. EOD school and taught the electronics courses on terrorist IED firing circuits. The below is his verdict on many of the circuits you find in TIP and CBT threat libraries:

"Yeah, they are all pretty much crap" — Jeff Jennings, CEO, Improvised Electronics
  • 10. As an Explosive Ordnance Disposal technician, I can tell you that, from what I have seen in TIP and CBT libraries, whoever is making these has no clue about electronics and not even a remote idea of what terrorists are using. This has a massive effect on the realism of TIP and CBT programs, and until SMEs start building the IED threats based on actual terrorist tactics, techniques, and procedures (TTPs), you are going to have unrealistic IEDs in TIP and CBT threat libraries.

The next issue with IEDs in TIP and CBT libraries is that they can come in three very different and distinct configurations, yet you never see IEDs broken down into these categories in any TIP or CBT library. These configurations have a direct effect on the learning process and on teaching a person how to identify an IED threat in an x-ray image. If you do not have these IED subcategories in your FTIs, you are not providing the end user a realistic view of the IED threat. Breaking complexity and difficulty down into submenus will also let you track performance by how difficult each IED is to identify. One-size-fits-all categories are neither effective nor realistic.
  • 11. FTI Quality/Realism is poor (Not Enough Angles):

A major issue with the realism of TIP and CBT threat libraries is that they do not have multiple angles of each threat object. Most "might" have 2-4 angles for each threat (some only one), and this is just not realistic. When you run a threat object through an x-ray machine at every possible angle, it becomes obvious that 1-4 angles are not enough. If an operator is only trained on threats presented at angles that make them easy to identify, that operator is not going to be adequately prepared when presented with a threat at a hard-to-identify angle. The bare minimum should be at least 8 angles per threat object, and there are recommendations for even more. If a CBT or TIP library has only a few angles for each threat, that threat library is not realistic.
  • 12. Even with IEDs, you can see that the angle at which a threat is scanned changes the overall complexity and difficulty of detecting it. Unless the screener is exposed to all the different angles, they will not be properly prepared to identify the threat in a bag. TIP and CBT software must incorporate all of these angles into the library for each threat, or the realism will suffer. This issue can also be tied directly back to sites that score high on TIP and CBT but low on penetration testing. The below image is an example of the same IED x-rayed at 16 different angles. Each one of the images is different, and some of them are drastically different. For a screener to learn how to identify a threat in an x-ray, they must be exposed to and trained on all the different angles at which the threat can be presented to them.

Bag Sets for CBT Quality/Realism is Poor:

When I was working at the U.S. Marshals Service and we started having the Court Security Officers use the Operator Training System (OTS), all the bags in the session bag files were airport bags, and, making it even worse, you could see that the bags were done in Europe. The odd power cords and numerous other items in the bags almost made using the system ineffective. The Marshals deal with people coming to a courthouse, not somebody getting on a plane, and most CBT systems do not have bag sets for the different types of entry points. Even when they do, they never have any of the very common false alarms that a screener would see at a checkpoint. The Marshals use automated explosive detection and the high-density alert on their systems. These auto-detection features will generate false alarms, and any CBT or x-ray simulator program MUST have those false alarms built into the session bags on the system. If these false alarms are not part of the system, the realism of the CBT software is very low.
We were able to fix this for the U.S. Marshals Service by taking bag file images from the history files (online recording) where an x-ray unit had been running at one of their courthouses. Because the auto detection features were turned on, all the false alarms were recorded in the
  • 13. bags. We used these bags to replace the European airline-type bags, and the realism when using the operator training system improved dramatically.

The number of TIP images in the current standard library size is too small:

Most current standards and models for the size of a TIP or CBT library are very small once you consider how many different types of threats are out there and the number of orientations required. A common number is 1000 to 1500 threats in a library; if each of those threats is run at 4 different orientations, you are looking at a total of 6000 images. That might sound like a large number, but when you look more closely you will find that a 6000-image TIP library only covers a small segment of all the different threats a screener could encounter.

As a bomb technician, I put together a set of circuits that covered all the different ways an IED could be set off. I was able to construct 125 different IED circuits that were either electrically or non-electrically initiated. I then collected commercial, military, and homemade explosive (HME) simulants and packaged them in 1/2, 1, 1.5, and 2-pound configurations. I ended up with over 200 different explosive combinations that could be married up to the 125 different IED circuits. If I were to attach each IED circuit to each different explosive once in a holistic configuration and once in a component-based configuration, I would end up with 50,000 different explosive and circuit combinations. Now add the total number of orientations for each IED; the minimum ideal number would be 8 orientations per threat object. That works out to 400,000 different images for just 125 IED circuits and 200 explosives.
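The arithmetic above can be checked directly:

```python
# The library-size combinatorics worked out explicitly.
circuits = 125          # distinct IED firing circuits
explosives = 200        # explosive type/weight simulant combinations
configurations = 2      # holistic and component-based layouts
orientations = 8        # minimum recommended angles per threat object

combinations = circuits * explosives * configurations
images = combinations * orientations
print(combinations)  # 50000
print(images)        # 400000

# Coverage of a typical 6000-image TIP library against that space:
print(f"{6000 / images:.1%}")  # 1.5%
```

So a standard-size library covers only about one and a half percent of this one slice of the IED threat space, before guns, knives, and other threat categories are even counted.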
  • 14. When you compare 400,000 IED images to a standard 6000-image library, you can see that realistically we are only exposing the x-ray operator to a very small sample of all the potential threats. When you use a 6000-image library to evaluate a screener, you are only evaluating them on that specific library and the threats in it. The proof of this is when you see penetration testing scores that are much lower than the TIP and CBT scores: the penetration test is exposing the screener to a threat they have never seen before.

How TIP and CBT collect performance data is inefficient and not realistic:

When you look at how TIP and CBT systems measure a screener's performance, the current model needs to be improved. Given the issues I have discussed, one of the biggest problems is how the systems categorize the difficulty of detecting threats in their many different configurations. This complexity is not accurately broken down in the basic model used for TIP and CBT scoring and the downloadable reports. It is possible to develop a more comprehensive breakdown of the threats and bags based on varying factors of complexity. To accomplish this, you have to create a standard model for how each threat category and bag complexity will be measured. The result is a report that provides a more detailed view of each screener's performance based on the difficulty of the threat and how it was presented. The one-size-fits-all approach currently in use is not an accurate assessment of screener performance. This detailed breakdown will also help identify areas where follow-up training needs to focus.

Categorizing session bags based on amount of clutter:

The amount of clutter in a bag plays a huge role in a screener's ability to detect a threat. Bag clutter can also play havoc with any auto-detection features that are being used,
  • 15. especially explosive detection. Bags must be measured for clutter and given a difficulty level (e.g., Level 1-3). One method to accomplish this is simply to measure the number of pixels that fall in the higher Zeff ranges (11 and up). The above image is an example of how this could be accomplished, with each bag given a difficulty level from 1 through 3.

Categorizing IEDs based on the way they are constructed
The next issue is to better categorize threats based on the level of difficulty they can present to a screener. In the example above, we have taken the category "IEDs" and added subcategories based solely on the level of difficulty they give the operator. Level 1 is an IED threat displayed in a component-based configuration, which would normally be easy for an x-ray operator to find. Level 2 becomes more challenging: a holistic layout in which the IED components are more difficult to identify. The last level is the most challenging for an x-ray operator: an explosive device that has been hidden inside another object. These are called concealed IEDs, and they are the most challenging type of IED to identify in an x-ray.
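The pixel-count method described above for rating bag clutter could be implemented along these lines. The Zeff cutoff of 11 comes from the text; the pixel-count thresholds separating the three levels are placeholder assumptions that would have to be calibrated against a real, representative bag set:

```python
# Placeholder calibration: pixel-count thresholds separating the three
# clutter levels. These values are assumptions for illustration only.
LEVEL_2_PIXELS = 5_000
LEVEL_3_PIXELS = 20_000

def clutter_level(zeff_map, zeff_cutoff=11.0):
    """Rate a bag's clutter difficulty (1 easy .. 3 hard) by counting
    pixels at or above the high-Zeff cutoff (11 and up, per the text)."""
    high = sum(1 for row in zeff_map for z in row if z >= zeff_cutoff)
    if high >= LEVEL_3_PIXELS:
        return 3
    if high >= LEVEL_2_PIXELS:
        return 2
    return 1

# Synthetic 300x400 Zeff map: mostly organic material (Zeff 7) with a
# 60x100 high-Zeff region, i.e. 6,000 pixels above the cutoff.
bag = [[7.0] * 400 for _ in range(300)]
for r in range(60):
    for c in range(100):
        bag[r][c] = 13.0
print(clutter_level(bag))   # -> 2
```

In practice the thresholds would likely be normalized by image size and tuned so that a trainer's subjective "cluttered" rating and the automated level agree on a sample of real bags.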
  • 16. Categorizing IEDs/guns/knives/other threats based on the angle at which they are presented
When a CBT or TIP library is created, there is currently no requirement or standard to categorize a threat based on how difficult its presentation angle is. When you scan threats at more than one angle, as we have already shown, many of the angles are much more difficult to identify than others. These differences need to be identified and categorized in the threat library and software. The entire concept behind using TIP and CBT is to measure performance and identify the effectiveness of training; you cannot do this accurately if you do not have a detailed breakdown of what the screener is and is not detecting. In the example above, we have set three levels of difficulty for each threat based on the complexity of the angle: angles that allow easy identification of the threat are categorized as "easy," and as the angle becomes more challenging, the hardest angle is categorized as "hard."
  • 17. Categorizing IEDs (and guns) based on the presence or absence of auto detection
Many studies have proven the effectiveness of automatic detection, but most CBT systems do not incorporate the capability. When they do incorporate it, they typically have everything alarm, which is not realistic. When a TIP or CBT threat image is captured, it is normally done by placing the threat onto the belt of the x-ray and running it through by itself. In this configuration the x-ray has a higher probability of auto-detecting the threat because the explosive is not being affected by bag clutter. The reality is that even though auto detection is effective, it does not always work, and explosive threats can be missed by the system. When you add threat mass to this equation, the number of potential missed detections increases. To make TIP and CBT more realistic, each threat object should be scanned with auto detection both on and off. This will allow you to capture the threat object in both configurations in which the operator could see the item in a bag. For scoring purposes, the presence of an auto-detection alarm would make the threat an "easy" category, and the absence of any alarm would be categorized as "hard." Exposing your screeners to both scenarios is more realistic and needs to be incorporated into any TIP and CBT program.

Examples of an improved scoring matrix for TIP and CBT
Once you have created a better process for categorizing the complexity of the threats and bags, the combination of all of these factors can be used to give each scenario an overall complexity score. Using the difficulty levels we established for threat angle, bag clutter, auto detection, and construction technique (holistic, component, concealed), we are going to show how different bags with threats can be scored. This improved scoring
  • 18. system will allow improved reporting on screener performance and provide a much better assessment of the quality of your training. In this first example, we have a bag that is low in clutter (Level 1, easy), but the IED in the bag is concealed inside a curling iron (Level 3, hard). The IED inside the curling iron is visible (Level 1, easy angle), but the screener would really have to know what they are looking for to identify it as a threat. The IED's explosive material did not auto-detect, and there are false alarms in the bag (Level 3, hard) that could distract the screener from the real threat. The combination of all these factors gives the overall scenario a difficulty level of 7-8, intermediate.
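One way to implement this combined score is to simply sum the four per-factor levels, giving a 4-12 scale, and map ranges of that scale to named bands. Note the assumptions here: the text only names the 7-8 "intermediate" and 11-12 "master" bands; the 4-6 "easy" and 9-10 "advanced" bands are my guesses at filling in the rest of the scale:

```python
def scenario_difficulty(clutter, construction, angle, autodetect):
    """Combine the four per-factor levels (each 1-3) into an overall
    scenario score of 4-12 plus a named band. Band boundaries for
    'easy' and 'advanced' are assumptions; the text names only the
    7-8 'intermediate' and 11-12 'master' bands."""
    total = clutter + construction + angle + autodetect
    if total <= 6:
        band = "easy"
    elif total <= 8:
        band = "intermediate"
    elif total <= 10:
        band = "advanced"
    else:
        band = "master"
    return total, band

# The curling-iron example above: low clutter (1), concealed IED (3),
# easy angle (1), no auto detection plus false alarms (3).
print(scenario_difficulty(1, 3, 1, 3))   # -> (8, 'intermediate')
```

The same function reproduces the other two worked examples: holistic/hard-angle/alarmed in a Level 2 bag scores 8 (intermediate), and holistic/hard-angle/no-alarm in a Level 3 bag scores 11 (master).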
  • 19. In the next example, the bag has a higher level of clutter and is rated Level 2 difficulty. The IED is in a holistic configuration (Level 2) and at a hard angle (Level 3). Because none of the high-Zeff material is blocking the explosive in the IED, the x-ray's automatic detection was able to "red box" the explosive, and there are also no false alarms in the bag (Level 1, easy). The overall difficulty level for this scenario is 7-8, intermediate.
  • 20. In the last example, the bag has a very high level of high-Zeff materials that make identifying shapes challenging (Level 3, hard). The IED in this bag is in a holistic configuration (Level 2) and at a hard angle (Level 3). The explosive did not "red box," and there is also a false alarm in the bag (Level 3). The overall difficulty level for this bag is 11-12, master.

By using a scoring system like the one described above, you would create a higher level of realism in TIP and CBT training programs. You would also be able to generate much more detailed reports on each screener's threat-detection capabilities, which in turn would allow trainers to focus training efforts more precisely on what the testing has exposed. The image below is the standard report download for TIP screener performance. This style of report has been around for a very long time, and it provides many different data points that can be examined. Depending on the agency, the most commonly used data points are either the overall detection percentage, based on the number of threats presented and detected, or the d-prime number. D-prime goes beyond a simple detection percentage: it also accounts for the number of false alarms and the speed at which the operator made the decision. Both measurements are used to verify whether a screener is detecting threats at a level acceptable to that agency or group. The problem with these measurements is that the scoring is based entirely on the quality of the TIP library, and as we have already discussed, many of the libraries on the market today are not realistic. If you took the scoring matrix we suggested earlier, which classifies each threat by
  • 21. level of difficulty, you could also score the library itself and give it an overall level of difficulty. I would surmise that most of the libraries would rate Level 1 (easy), and I would be surprised to see any threat library score higher. Once you have updated the bag sets and libraries and assigned a level of difficulty to every aspect of a threat library, the screener performance report could also be improved to break out each screener's detection ability by the difficulty and complexity of the threat. This would be a much more efficient system and would provide the manager a more detailed assessment of their screeners' ability to detect threats. It would also give your trainers a better view of the areas they need to focus on to improve training.
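For reference, the d-prime measure mentioned above has a standard signal-detection form: d' = z(hit rate) − z(false-alarm rate), where z is the inverse normal CDF. A minimal computation is sketched below; note that the response-speed component some TIP reports fold in is vendor-specific and not modeled here:

```python
from statistics import NormalDist

def d_prime(hit_rate, false_alarm_rate, eps=1e-4):
    """Signal-detection sensitivity: z(hits) - z(false alarms).
    Rates of exactly 0 or 1 are clamped slightly, a common correction,
    since the inverse normal CDF is undefined at the extremes."""
    z = NormalDist().inv_cdf
    clamp = lambda p: min(max(p, eps), 1 - eps)
    return z(clamp(hit_rate)) - z(clamp(false_alarm_rate))

# A screener detecting 90% of TIPs with a 10% false-alarm rate:
print(round(d_prime(0.90, 0.10), 2))   # -> 2.56
```

The key property for this discussion is that d' is only as meaningful as the image set behind it: a high d' against an easy library says little about performance against hard, realistic threats.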
  • 22. There are many ways this could be accomplished, but the above is just a suggested model that would provide much more relevant screener detection data than what we use today. We know that a threat in a cluttered bag, at a hard angle, with no auto detection is going to have a lower detection rate than a threat in an uncluttered bag. The problem is that we do not track this data with the current systems. TIP and CBT need a complete rework, and new standards must be established to make the systems more realistic. Until the agencies address this issue, TIP and CBT will be limited in their ability to train screeners to detect threats.
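Tracking the data the conclusion calls for need not be complicated. A sketch of a per-difficulty detection report, using hypothetical TIP event records (the band names follow the scoring examples earlier in the paper; the numbers are made up for illustration):

```python
from collections import defaultdict

# Hypothetical TIP event log: (scenario difficulty band, detected?).
events = [
    ("easy", True), ("easy", True), ("easy", False),
    ("intermediate", True), ("intermediate", False),
    ("master", True), ("master", False), ("master", False),
]

tallies = defaultdict(lambda: [0, 0])   # band -> [detections, presentations]
for band, detected in events:
    tallies[band][1] += 1
    tallies[band][0] += detected

for band in ("easy", "intermediate", "master"):
    hits, shown = tallies[band]
    print(f"{band:12s} {hits}/{shown} detected ({hits / shown:.0%})")
```

Grouped this way, a flat 50% overall detection rate decomposes into the per-band pattern a trainer can actually act on.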