Integrating Knowledge Bases with Neural Networks - by Nick Powell:
Knowledge bases are used as the underpinning for reasoning systems. This talk describes our experience using deep learning for knowledge base completion. Using an existing knowledge base as a training set, we trained a neural network as a binary classifier to find likely relationships and insert them back into the graph. We'll describe lessons learned and next steps.
2. What are we working with?
Knowledge Base layer:
1. The facts that we know
2. An inference engine
Predictive layer:
1. Neural network for binary classification
4. What makes graph databases good at modeling these knowledge bases?
[Diagram: a graph of connected entity nodes A through H]
5. What makes graph databases good at modeling these knowledge bases?
[Diagram: the same entity graph A through H, now annotated with inference rules]
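To make the inference-rule idea concrete, here is a minimal illustrative sketch (not Grakn's actual rule engine): a forward-chaining loop over (subject, relation, object) triplets that applies a single transitive rule until no new facts can be derived.

```python
# Illustrative only: a tiny forward-chaining inference engine over triplets,
# showing how a rule like "located-in is transitive" derives new edges.

def apply_transitive_rule(triplets, relation):
    """Repeatedly derive (a, r, c) from (a, r, b) and (b, r, c) until fixpoint."""
    facts = set(triplets)
    changed = True
    while changed:
        changed = False
        for (a, r1, b) in list(facts):
            if r1 != relation:
                continue
            for (b2, r2, c) in list(facts):
                if r2 == relation and b2 == b and (a, relation, c) not in facts:
                    facts.add((a, relation, c))
                    changed = True
    return facts

kb = {("A", "located-in", "B"), ("B", "located-in", "C")}
inferred = apply_transitive_rule(kb, "located-in")
# ("A", "located-in", "C") is now derivable without being stored explicitly
```

A graph database makes this natural because rules are just patterns over edges; the derived edges need not be materialized.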
6. What are we working with?
A Neural Tensor Network (Socher, Chen, Manning & Ng, NIPS 2013):
https://nlp.stanford.edu/~socherr/SocherChenManningNg_NIPS2013.pdf
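For reference, the scoring function from the linked paper can be sketched in a few lines: g(e1, R, e2) = u^T tanh(e1^T W[1:k] e2 + V [e1; e2] + b), with one set of parameters per relation. The dimensions and random initialization below are illustrative, not the paper's training setup.

```python
import numpy as np

def ntn_score(e1, e2, W, V, b, u):
    """Neural Tensor Network score: how plausible is relation R between e1 and e2?"""
    k = W.shape[0]                                   # number of tensor slices
    bilinear = np.array([e1 @ W[i] @ e2 for i in range(k)])
    linear = V @ np.concatenate([e1, e2])
    return float(u @ np.tanh(bilinear + linear + b))

rng = np.random.default_rng(0)
d, k = 4, 3                                          # embedding dim, tensor slices
e1, e2 = rng.normal(size=d), rng.normal(size=d)
W = rng.normal(size=(k, d, d))                       # one d x d slice per tensor layer
V = rng.normal(size=(k, 2 * d))
b = rng.normal(size=k)
u = rng.normal(size=k)
score = ntn_score(e1, e2, W, V, b, u)                # a single real-valued plausibility score
```

Thresholding this score turns the network into the binary classifier over triplets described above.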
7. If we can predict the dotted-line relationships, we add to our knowledge!
8. Goals:
Maintain Grakn as a versatile and robust knowledge base even as additional (possibly false) relationships are added to it.
See whether the accuracy of the neural net classifier improves when Grakn inferences are applied.
9. Algorithmic Flow
1. Build the project's ontology and rule set in GRAKN (define the inference rules, and give the knowledge base its structure).
2. Train the neural tensor network, and calculate an initial accuracy on the test set.
3. Use the trained network to scan for likely triplets (these are not taken from the training/test data, but are constructed anew).
4. Insert the n most likely triplets into the GRAKN knowledge base, then, using the inference rules, run through the test set again and calculate an updated accuracy.
5. Repeat steps 3 and 4 several times.
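The loop above can be sketched as a toy program. The scorer and evaluation function here are hypothetical stand-ins (a perfect oracle scorer, coverage of a known truth set) for the real neural tensor network and the GRAKN-backed test-set accuracy.

```python
# Toy sketch of the completion loop: score candidates, insert the n most
# likely, re-evaluate. score_triplet and evaluate are placeholder callables.

def completion_loop(kb, candidates, score_triplet, evaluate, n=2, rounds=3):
    """Each round: rank candidate triplets, insert the top n, re-evaluate."""
    history = [evaluate(kb)]                         # initial accuracy
    for _ in range(rounds):
        ranked = sorted(candidates - kb, key=score_triplet, reverse=True)
        kb |= set(ranked[:n])                        # insert n most likely triplets
        history.append(evaluate(kb))                 # updated accuracy
    return kb, history

# Toy run: score 1.0 for "true" triplets, evaluate = fraction of truths covered.
truth = {("a", "r", "b"), ("b", "r", "c"), ("c", "r", "d")}
kb = {("a", "r", "b")}
candidates = truth | {("x", "r", "y")}
kb, history = completion_loop(
    kb, candidates,
    score_triplet=lambda t: 1.0 if t in truth else 0.0,
    evaluate=lambda facts: len(facts & truth) / len(truth),
)
# history climbs from 1/3 toward 1.0 as correct triplets are absorbed;
# note the false triplet eventually gets inserted too - hence the robustness goal.
```

In the real pipeline the re-evaluation step also fires the inference rules, so each inserted triplet can pull additional derived facts into scope.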
15. Findings
The default inference rules were not extensive enough to cover the whole dataset.
However, the knowledge base consistently absorbed more correct information than incorrect information, so we can be confident that this setup improves on the accuracy of the neural net alone.
17. Further applications?
Use GRAKN inferences to give clues about ground truths. This could be done before the neural network is trained, perhaps to intelligently initialize the network weights.
Create inference rules by training neural networks - similar to this project, but much harder (and maybe more rewarding!).
...and more!