An Application of
Combinatorial
Methods for
Explainability in
Artificial Intelligence
and Machine Learning
D. Richard Kuhn
Computer Security Division, Information Technology Laboratory
Raghu N. Kacker
Computational and Applied Mathematics Division, Information Technology Laboratory
Tradeoff between AI accuracy and explainability
• Convolutional neural nets (CNNs) provide no explanations.
• Understandable methods, such as rule-based approaches, tend to be less accurate.
ComXAI
A tool to explain AI using fault location
Fault Location
• If a combination of factor values occurs in a passing test, then clearly it did not trigger the failure.
• Combinations that occur only in failing tests are the ones considered when narrowing down the set of suspect combinations.
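The fault-location logic above can be sketched in a few lines of Python. This is a minimal illustration, assuming tests are stored as (factor-value map, pass/fail) pairs; the data layout and function name are hypothetical, not ComXAI's actual interface.

```python
from itertools import combinations

def suspect_combinations(tests, t):
    """Return the t-way factor-value combinations that occur ONLY in
    failing tests. Any combination seen in a passing test is ruled out,
    since it clearly did not trigger the failure.
    (Illustrative sketch; not ComXAI's actual implementation.)"""
    def combos(values):
        # All t-way combinations of this test's factor-value pairs,
        # sorted by factor name so identical combos compare equal.
        return set(combinations(sorted(values.items()), t))

    passing, failing = set(), set()
    for values, passed in tests:
        (passing if passed else failing).update(combos(values))
    # Suspects: combinations that appear in failing tests but never in passing ones.
    return failing - passing

# Toy example with three binary factors and one failing test.
tests = [
    ({"a": 0, "b": 1, "c": 0}, False),  # failing
    ({"a": 0, "b": 0, "c": 0}, True),   # passing
    ({"a": 1, "b": 1, "c": 1}, True),   # passing
]
suspects = suspect_combinations(tests, 2)
# Only the pairs (a=0, b=1) and (b=1, c=0) survive: every other pair
# from the failing test also appears in a passing test.
```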
ComXAI: a combinatorial testing tool
Explain why the traits below indicate the REPTILE class
Single-value fail observations
No single trait is sufficient, as each is more than 44% shared with non-reptile classes.
Two-value fail observations
No pair of traits is sufficient, as each is more than 2% shared with non-reptile classes.
Three-value fail observations
3-way combinations distinguish this animal from the other animal types in this database.
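The escalation above (1-way insufficient, 2-way insufficient, 3-way distinguishing) mirrors fault location: rows of other classes play the role of passing tests. A minimal sketch, using a made-up handful of zoo-style rows rather than the actual database, and a hypothetical `explain` function:

```python
from itertools import combinations

def explain(instance, dataset, target_class, max_t=3):
    """Find the smallest t-way combination of the instance's trait values
    that appears in NO row outside `target_class`. Other-class rows act
    like passing tests, ruling combinations out.
    (Illustrative sketch; not ComXAI's actual implementation.)"""
    others = [row for row, cls in dataset if cls != target_class]
    items = sorted(instance.items())
    for t in range(1, max_t + 1):
        for combo in combinations(items, t):
            shared = any(all(row.get(k) == v for k, v in combo) for row in others)
            if not shared:
                return combo  # unique to the target class at strength t
    return None

# Toy rows (invented, not the real animal database): every single trait
# and every pair is shared with some other class, so 3-way is needed.
dataset = [
    ({"eggs": 1, "legs": 4, "scales": 1}, "reptile"),    # lizard
    ({"eggs": 1, "legs": 4, "scales": 0}, "amphibian"),  # shares eggs+legs
    ({"eggs": 1, "legs": 0, "scales": 1}, "fish"),       # shares eggs+scales
    ({"eggs": 0, "legs": 4, "scales": 1}, "mammal"),     # shares legs+scales
    ({"eggs": 1, "legs": 2, "scales": 0}, "bird"),
]
lizard = {"eggs": 1, "legs": 4, "scales": 1}
rule = explain(lizard, dataset, "reptile")
# rule is the 3-way combination (eggs=1, legs=4, scales=1).
```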
Convert to rule
Final output: an understandable rule
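Once a distinguishing combination is found, turning it into a human-readable rule is a simple rendering step. A sketch with a hypothetical `to_rule` helper (the output format is illustrative, not ComXAI's exact wording):

```python
def to_rule(combo, target_class):
    """Render a distinguishing trait combination as an IF-THEN rule."""
    condition = " AND ".join(f"{name} = {value}" for name, value in combo)
    return f"IF {condition} THEN class = {target_class}"

combo = (("eggs", 1), ("legs", 4), ("scales", 1))
rule_text = to_rule(combo, "reptile")
# rule_text: "IF eggs = 1 AND legs = 4 AND scales = 1 THEN class = reptile"
```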
Limitations
1. Thousands of combinations can make the rule size very large.
2. Overfitting, in which a learning model incorporates noise variations from the training data, will create problems for this tool.
Future Work
• Detect overfitting
• Improved UI for better validation of AI models
• Security against malware embedded in neural nets
