2. TYPES OF MACHINE LEARNING (ML)
‘NOW WE DON’T HAVE TO PROGRAM COMPUTERS; THEY PROGRAM THEMSELVES.’
5 MAIN SCHOOLS OF ML THOUGHT ACCORDING TO DOMINGOS:
1. ‘SYMBOLISTS’: LEARNING AS THE INVERSE OF DEDUCTION (E.G. DECISION TREES)
2. ‘CONNECTIONISTS’: REVERSE-ENGINEER THE BRAIN (E.G. BACKPROPAGATION ALGORITHMS AND NEURAL NETWORKS, AS IN TODAY’S PAPER)
3. ‘EVOLUTIONISTS’: LEARNING VIA EVOLUTIONARY ALGORITHMS
4. ‘BAYESIANS’: LEARNING AS A FORM OF PROBABILISTIC INFERENCE
5. ‘ANALOGIZERS’: LEARNING BY EXTRAPOLATION FROM SIMILARITY JUDGEMENTS
DEEP LEARNING IS A SUBSET OF MACHINE LEARNING
3. DEEP LEARNING & GOOGLE DEEPMIND
HTTPS://WWW.YOUTUBE.COM/WATCH?V=V1EYNIJ0RNK
5. WIKIPEDIA TO THE RESCUE…
• DNN = DEEP NEURAL NETWORK, PREDOMINANT DEEP LEARNING MODEL
• MULTIPLE LAYERS OF NON-LINEAR TRANSFORMATIONS
• ‘ACTIVATION FUNCTION’ MUST BE NON-LINEAR TO ALLOW NETWORK TO LEARN NON-LINEAR MODELS
• ALLOWS DNN MODELS TO APPROXIMATE FUNCTIONS OF ARBITRARY COMPLEXITY
• EACH LEVEL LEARNS TO TRANSFORM ITS INPUT DATA INTO SLIGHTLY MORE ABSTRACT REPRESENTATION
• DEEP LEARNING PROCESS CAN LEARN WHICH FEATURES TO OPTIMALLY PLACE IN WHICH LEVEL ON ITS OWN
• STILL NEEDS HAND-TUNING – E.G. VARYING NUMBERS OF LAYERS AND LAYER SIZES CAN PROVIDE DIFFERENT DEGREES OF
ABSTRACTION
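To make the non-linearity point concrete, a minimal numpy sketch (my own illustration, not from the paper): without an activation function, two stacked layers collapse into a single linear map, whereas inserting a ReLU between them gives a piecewise-linear output that deeper stacks can build on.

```python
import numpy as np

rng = np.random.default_rng(0)

def layer(x, W, b, nonlinear=True):
    """One layer: affine transform, optionally followed by a ReLU activation."""
    z = x @ W + b
    return np.maximum(z, 0.0) if nonlinear else z

# Two stacked layers acting on a 1-D input
W1, b1 = rng.normal(size=(1, 8)), rng.normal(size=8)
W2, b2 = rng.normal(size=(8, 1)), rng.normal(size=1)

x = np.linspace(-2, 2, 5).reshape(-1, 1)

# Without non-linear activations the two layers collapse into one linear map,
# so the network can only ever fit straight lines.
linear_out = layer(layer(x, W1, b1, nonlinear=False), W2, b2, nonlinear=False)

# With a ReLU between the layers the output is piecewise-linear; stacking more
# such layers is what lets DNNs approximate functions of arbitrary complexity.
nonlinear_out = layer(layer(x, W1, b1), W2, b2, nonlinear=False)

print(linear_out.ravel(), nonlinear_out.ravel())
```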
6. SOME MORE DEFINITIONS…
• ‘DEEP’ REFERS TO THE NUMBER OF LAYERS DATA IS TRANSFORMED THROUGH
• CREDIT ASSIGNMENT PATH (CAP) = CHAIN OF TRANSFORMATIONS FROM INPUT TO OUTPUT; CAP DEPTH = LENGTH OF THAT CHAIN
• FEEDFORWARD NEURAL NETWORK
• DEPTH OF THE CAPS IS NUMBER OF HIDDEN LAYERS +1 (OUTPUT LAYER ALSO PARAMETERIZED)
• RECURRENT NEURAL NETWORKS
• SIGNAL MAY PROPAGATE THROUGH A LAYER MORE THAN ONCE
• CAP DEPTH IS POTENTIALLY UNLIMITED
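A toy illustration of the difference (my own sketch, not from the paper): in a feedforward net the signal passes through each hidden layer exactly once, whereas a recurrent net reuses the same weights at every time step, so the chain of transformations grows with the length of the input sequence.

```python
import numpy as np

rng = np.random.default_rng(1)
W_in, W_h, W_out = rng.normal(size=(4, 4)), rng.normal(size=(4, 4)), rng.normal(size=(4, 1))

def feedforward(x, n_hidden_layers=2):
    """Signal passes through each hidden layer exactly once,
    so the CAP depth is n_hidden_layers + 1 (the output layer counts too)."""
    h = x
    for _ in range(n_hidden_layers):
        h = np.tanh(h @ W_h)
    return h @ W_out

def recurrent(x_sequence):
    """The same hidden weights W_h are applied at every time step, so the chain
    of transformations from the first input to the output grows with the
    sequence length: the CAP depth is potentially unlimited."""
    h = np.zeros(4)
    for x_t in x_sequence:
        h = np.tanh(x_t @ W_in + h @ W_h)
    return h @ W_out

print(feedforward(np.ones(4)))        # fixed-depth computation
print(recurrent(np.ones((10, 4))))    # depth grows with the 10 time steps
```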
7. D. Davies et al, 2016, Chem 1 (4), 617-627, doi: 10.1016/j.chempr.2016.09.010
THE SEARCH SPACE FOR MATERIALS DISCOVERY…
8. FROM CHEMICAL INTUITION…TO MACHINE LEARNING?
PREDICTING STRUCTURES WITHOUT ANY PRIOR
KNOWLEDGE OF CHEMISTRY…?
LIKE IN THE EPIC RAMPAGE IN THE ATARI
GAME…
COULD IT BE POSSIBLE TO DETERMINE NEW AND
BETTER SOLUTIONS THAN HUMANS HAVE COME
UP WITH BEFORE...?
9. DISCLAIMER
• PAPER PRESENTED TODAY USES DNN TO PICK UP TRENDS FOR ALL CRYSTAL
STRUCTURES BASED PURELY ON GEOMETRY OF CRYSTAL STRUCTURES IN DATA SET
• I *THINK* IN THIS PAPER IT IS TECHNICALLY ‘SUPERVISED LEARNING’, WHEREAS I
*THINK* THE DEEPMIND ATARI AGENT IS REINFORCEMENT LEARNING (LEARNING FROM REWARDS RATHER THAN LABELLED EXAMPLES)
• SUPERVISED LEARNING = MAPS AN INPUT TO AN OUTPUT BASED ON EXAMPLE INPUT-
OUTPUT PAIRS. IT INFERS A FUNCTION FROM LABELED TRAINING DATA CONSISTING OF
A SET OF TRAINING EXAMPLES. E.G. CLASSIFICATION
• UNSUPERVISED LEARNING DOES NOT REQUIRE LABELLED DATA. E.G. PATTERN
RECOGNITION
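A toy contrast between the two settings, using scikit-learn for brevity (illustrative only; the paper itself uses TensorFlow):

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 1, size=(50, 2)), rng.normal(4, 1, size=(50, 2))])
y = np.array([0] * 50 + [1] * 50)                 # labels available

# Supervised: infer a function from labelled input-output pairs (classification)
clf = LogisticRegression().fit(X, y)
print(clf.predict([[4.0, 4.0]]))                  # -> [1]

# Unsupervised: no labels, just look for structure in the data (pattern recognition)
km = KMeans(n_clusters=2, n_init=10).fit(X)
print(km.labels_[:5])                             # cluster assignments found from X alone
```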
11. OVERVIEW
• DEVELOP AN INPUT REPRESENTATION TO DESCRIBE
COORDINATION TOPOLOGY OF UNIQUE CRYSTALLOGRAPHIC
SITES
• TRAIN NEURAL NETWORK TO DISTINGUISH ELEMENTS BASED
ON TOPOLOGY IN A CRYSTAL
• MODEL IDENTIFIES STRUCTURALLY SIMILAR ATOMS
• TRENDS REFLECT THE PERIODIC TABLE
• USED TRAINED MODEL TO ANALYSE TEMPLATES FROM KNOWN
CRYSTAL STRUCTURES TO PREDICT LIKELIHOOD OF FORMING
NEW COMPOUNDS BY SUBSTITUTING ELEMENTS IN
COMBINATORIAL FASHION
• IN ~30% OF CASES KNOWN COMPOSITIONS WERE FOUND IN
TOP 10 MOST LIKELY CANDIDATES PROPOSED BY THE MODEL
Essentially… chemical experience/ intuition on
steroids?
(If humans were able to digest the entire ICSD and
COD and then notice trends before their brain
exploded)
‘discover hidden relationships in such large
datasets’
Since the input data contain purely geometrical
and topological information, any chemical
knowledge residing within the DNN output
must have been learned during training, and
thus was “discovered”
12. METHODS
1. CLEANING THE DATASET
• ICSD (INORGANIC CRYSTAL STRUCTURE DATABASE) AND COD (CRYSTALLOGRAPHIC OPEN DATABASE)
• + CERTAIN SELECTION CRITERIA TO LIMIT DATASET TO JUST HIGH-QUALITY EXPTL. DATA AND REMOVE
COMPLICATIONS FROM DISORDER
• + APPLY BIAS CORRECTIONS IN THE ML MODEL TO AVOID IMBALANCE IN TRAINING WHEN CERTAIN ELEMENTS
ARE MORE COMMON IN THE DATASET (SEE THE CLASS-WEIGHT SKETCH AFTER THIS LIST)
2. **INPUT REPRESENTATION**
‘ENGINEERING A METHOD FOR TRANSFORMING DATA INTO AN APPROPRIATE REPRESENTATION FOR A
MODEL IS ONE OF THE MOST TIME-CONSUMING PHASES OF MODEL DEVELOPMENT. DEEP LEARNING
ALLOWS THE TEDIOUS REPRESENTATION DESIGN PROCESS TO BE INCORPORATED INTO MODEL FITTING’
3. TENSORFLOW (GOOGLE PYTHON LIBRARY) USED TO GENERATE INPUT REPRESENTATION +
CONSTRUCT, TRAIN AND VISUALIZE DNNS (+ OTHER LIBRARIES FOR OTHER BITS AND BOBS)
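As flagged above, a hedged sketch of one common bias correction: inverse-frequency class weights, so that over-represented elements do not dominate training. The paper does not spell out its exact weighting scheme, so the element counts and weights here are purely illustrative.

```python
from collections import Counter

# Hypothetical per-site element labels from the cleaned dataset
site_labels = ["O", "O", "O", "O", "Si", "Si", "Fe", "Na"]
counts = Counter(site_labels)
n_total, n_classes = len(site_labels), len(counts)

# Inverse-frequency weights: rarer elements get larger weights in the loss
class_weight = {el: n_total / (n_classes * c) for el, c in counts.items()}
print(class_weight)   # {'O': 0.5, 'Si': 1.0, 'Fe': 2.0, 'Na': 2.0}
# In tf.keras these could be passed via model.fit(..., class_weight=...),
# using integer class indices instead of element symbols.
```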
13. INPUT REPRESENTATION
THEREFORE NEED TO REPRESENT THE 3D GEOMETRICAL DATA OF THE CRYSTAL STRUCTURES,
SOMEHOW...
‘Deep learning methods use
representation learning. They convert the
input data to automatically discover new
representations for examining hidden
correlations in the data set. Therefore,
training of the model is critically
dependent on the representation used to
inform the model about the input data’
14. INPUT REPRESENTATION - AFPS
• ‘THE MODEL WAS TRAINED USING NORMALIZED ATOMIC
FINGERPRINTS, WHICH REPRESENT THE LOCAL
TOPOLOGY AROUND EACH CRYSTALLOGRAPHICALLY
UNIQUE ATOM’
• … ESSENTIALLY 12 OFF-CENTRED RDFS?
• REDUCE 3D CRYSTAL STRUCTURES TO 1D
REPRESENTATIONS FOR MODEL
• MULTIPLE PERSPECTIVES USED TO MITIGATE LOSS OF
GEOMETRICAL INFORMATION (WITH REDUCED
DIMENSIONALITY)
• EACH PERSPECTIVE AFP_i^k IS AN INDIVIDUAL AFP FUNCTION (EQ. 1) CALCULATED WITH THE ORIGIN OFFSET FROM ATOM i (FIG. 1A)
• THE TOTAL AFP_i IS THEN THE SET OF AFP_i^k RDFS
• k IS THE INDEX USED TO ENUMERATE THE PERSPECTIVES (FIG. 1B)
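A rough sketch of the idea behind one ‘perspective’: a Gaussian-smeared radial profile of neighbour distances, measured from an offset origin. This is a generic off-centred RDF for illustration, not the paper’s exact Eq. 1; the function name and parameters are my own.

```python
import numpy as np

def perspective_rdf(neighbour_xyz, origin, n_bins=256, r_max=8.0, sigma=0.2):
    """1-D profile of the local topology as seen from `origin` (distances in Angstrom)."""
    r_grid = np.linspace(0.0, r_max, n_bins)
    d = np.linalg.norm(neighbour_xyz - origin, axis=1)   # distances from origin to neighbours
    # Gaussian-smeared histogram of those distances on the radial grid
    profile = np.exp(-((r_grid[None, :] - d[:, None]) ** 2) / (2 * sigma ** 2)).sum(axis=0)
    return profile / (profile.max() + 1e-12)             # normalise to [0, 1]

# A full AFP_i would then be a stack of k such profiles, one per offset origin
# (the 12 "perspectives"), e.g. an array of shape (12, 256) for one site i.
neighbours = np.array([[1.8, 0.0, 0.0], [0.0, 2.1, 0.0], [0.0, 0.0, 2.0]])
print(perspective_rdf(neighbours, origin=np.array([0.3, 0.0, 0.0])).shape)   # (256,)
```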
15. INPUT REPRESENTATION (CONT.)
‘A VARIATIONAL AUTOENCODER (VAE) WAS USED TO ALLOW THE
DNN TO LEARN ITS OWN REPRESENTATION OF THE AFPS’
USE MODEL TO LEARN A FUNCTION THAT BEST MAPS AFP
REPRESENTATION TO DATA LABELS
BIT OF A MYSTERY HOW THE 12 ‘PERSPECTIVES’ (OR RDFS), AND THE POINTS ON WHICH TO
CENTRE THEM, WERE DETERMINED SO AS TO BEST REPRESENT ALL THE DIFFERENT TYPES OF
CRYSTAL STRUCTURES!
Not entirely clear how being
‘challenging for the model to learn’
or not ‘providing sufficient depth
perception’ is judged at this stage!
Possibly VAE was unable to
converge to a function in some
cases
+ Presumably the data had to be divided up, and the model pre-trained manually with different numbers of perspectives and different centering points, and then the performance of the models w.r.t. the data labels evaluated
16. • TRAINED A 42-LAYER CONVOLUTIONAL VAE ON 12-PERSPECTIVE, 256-DIMENSIONAL AFPS
• 64-DIMENSIONAL LATENT REPRESENTATIONS GENERATED BY THE VAE, ALONG WITH THE
NORMALIZED GEOMETRIC DESCRIPTOR, WERE FED AS INPUT INTO A FIVE-LAYER SIGMOID
CLASSIFIER WITH 118 OUTPUT NEURONS (ONE FOR EACH CHEMICAL ELEMENT IN THE PERIODIC
TABLE)
• OUTPUT FROM THE SIGMOID CLASSIFIER, ALONG WITH THE NON-NORMALIZED GEOMETRIC
DESCRIPTOR, WAS THEN FED AS INPUT INTO A FIVE-LAYER SOFTMAX CLASSIFIER WITH 118
OUTPUT NEURONS, CORRESPONDING TO 118 KNOWN CHEMICAL ELEMENTS
Side note on additional geometric descriptors (due to information lost during normalisation of the AFPs):
1) Non-normalized distance (Ri0) from
atom i to its nearest neighbor
2) Ratio of the smallest interatomic
distance in the crystal structure to Ri0
Descriptors reflect, respectively, the actual
(in Å) and relative size of site i in the
crystal structure. Use of Ri0 can be thought
of as the way to inform the model about
the scale of topology.
Train the VAE on the AFP data to reduce the dimensionality of the input
Feed the result into the DNN classifiers (sigmoid, then softmax) to predict which elements are likely to form specific structural topologies
THE FULL TRAINING PROCESS
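A hedged tf.keras sketch of the classifier stack described above (the convolutional VAE producing the 64-dimensional latent vector is omitted, and the hidden-layer widths and descriptor shapes are guesses, not the paper’s values):

```python
import tensorflow as tf

latent = tf.keras.Input(shape=(64,), name="vae_latent")            # from the conv. VAE
geom_norm = tf.keras.Input(shape=(2,), name="normalised_geom")     # normalised descriptor
geom_raw = tf.keras.Input(shape=(2,), name="raw_geom")             # non-normalised descriptor

# Five-layer sigmoid classifier: per-element plausibility for this site
x = tf.keras.layers.Concatenate()([latent, geom_norm])
for _ in range(4):
    x = tf.keras.layers.Dense(256, activation="relu")(x)
sigmoid_out = tf.keras.layers.Dense(118, activation="sigmoid", name="plausibility")(x)

# Five-layer softmax classifier fed with the sigmoid output + raw descriptor:
# normalised likelihood over the 118 known elements
y = tf.keras.layers.Concatenate()([sigmoid_out, geom_raw])
for _ in range(4):
    y = tf.keras.layers.Dense(256, activation="relu")(y)
softmax_out = tf.keras.layers.Dense(118, activation="softmax", name="likelihood")(y)

model = tf.keras.Model([latent, geom_norm, geom_raw], [sigmoid_out, softmax_out])
model.summary()
```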
17. EXAMPLE OF TRAINING A NEURAL NETWORK
HTTPS://GOOGLE-DEVELOPERS.APPSPOT.COM/MACHINE-LEARNING/CRASH-COURSE/BACKPROP-SCROLL/
A PEEK INSIDE THE BLACK BOX…
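For anyone who prefers code to the scroll-through demo, a minimal one-weight example of the same idea: forward pass, squared-error loss, gradient from the chain rule, gradient-descent update.

```python
# Toy training loop: fit a single weight w so that w * x matches the target.
x, target = 2.0, 10.0          # one training example
w = 0.5                        # initial weight
lr = 0.05                      # learning rate

for step in range(50):
    pred = w * x                       # forward pass
    loss = (pred - target) ** 2        # squared-error loss
    grad = 2 * (pred - target) * x     # backward pass: dloss/dw via the chain rule
    w -= lr * grad                     # gradient-descent update

print(round(w, 3))   # ~5.0, since 5.0 * 2.0 = 10.0
```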
18. RESULTS OVERVIEW
1. MEASURING PREDICTIVE PERFORMANCE OF CLASSIFIER (IDENTIFYING CORRECT
ELEMENTS IN CRYSTAL STRUCTURES BASED ON TOPOLOGY)
2. PREDICTING TRENDS FROM PERIODIC TABLE (WITH NO PRIOR CHEMICAL
KNOWLEDGE)
3. PREDICTING CANDIDATE COMPOUNDS BASED ON COMBINATORIAL ELEMENTAL
SUBSTITUTION OF KNOWN CRYSTAL STRUCTURES (AS IDENTIFIED BY MODEL)
19. CONFUSION MATRICES
ESSENTIALLY, HOW GOOD IS YOUR MODEL AT
MAKING PREDICTIONS
(AND HOW OFTEN, AND IN WHICH WAYS, DOES IT GET CONFUSED?)
IN THIS STUDY, SIGMOID CLASSIFIER WEIGHTED
TOWARD MINIMISING FALSE NEGATIVES
predictive performance of classifier
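A small illustration of reading a confusion matrix (scikit-learn, made-up labels; rows are the true element, columns the predicted element):

```python
from sklearn.metrics import confusion_matrix

true_elements = ["Fe", "Fe", "Co", "Co", "Ni", "Ni"]
pred_elements = ["Fe", "Co", "Co", "Co", "Ni", "Fe"]

labels = ["Fe", "Co", "Ni"]
cm = confusion_matrix(true_elements, pred_elements, labels=labels)
print(cm)
# [[1 1 0]     off-diagonal counts are the "confusions";
#  [0 2 0]     e.g. one true Fe predicted as Co (a missed Fe, i.e. a false negative for Fe)
#  [1 0 1]]
```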
20. PREDICTING TRENDS FROM PERIODIC TABLE?
1. MISTAKES MADE BY CLASSIFIER
(INDICATED BY CONFUSION MATRIX)
MOSTLY ‘CHEMICALLY REASONABLE’
3D AND 4F ELEMENTS TYPICALLY HAVE SIMILAR COORD. ENVIRONMENTS
CHEMICAL/STRUCTURAL SIMILARITY BETWEEN LI OR MG AND THE 3D METALS, AND
BETWEEN CA OR Y AND THE 4F METALS
2. GROUPINGS IN FIG. 5 FROM SIGMOID
CLASSIFIER SIMILAR TO PERIODIC TABLE
CLUSTERINGS CORRESPOND TO GROUP/ROW/DIAGONAL SIMILARITIES BETWEEN
ELEMENTS IN THE PERIODIC TABLE (THE LATTER BEING EMPIRICALLY KNOWN TO
CHEMISTS EVEN IF NOT EXPLICIT IN THE PERIODIC TABLE)
21. PREDICTING COMPOUNDS BASED ON COMBINATORIAL
ELEMENTAL SUBSTITUTION OF KNOWN CRYSTAL STRUCTURES (AS
IDENTIFIED BY MODEL)
(SIDE NOTE ON LIMITATIONS FIRST)
22. LIMITATIONS OF MODEL (AS IDENTIFIED BY AUTHORS)
1. DNN MODEL DETERMINES THE MOST LIKELY CHEMICAL ELEMENT FOR A GIVEN ATOMIC SITE, NOT CAPABLE
OF EVALUATING THE LIKELIHOOD OF THE ENTIRE CRYSTAL STRUCTURE
LIMITATION STEMS FROM THE LACK OF EXAMPLES OF CRYSTAL STRUCTURES WHICH CANNOT EXIST
‘FOR MACHINE LEARNING BASED METHODS IT IS NECESSARY TO HAVE INPUT EXAMPLES OF BOTH POSITIVE AND
NEGATIVE OUTCOMES SO THAT THE MODEL CAN LEARN WHAT IS POSSIBLE AND IMPOSSIBLE. UNFORTUNATELY,
THERE ARE NO DATABASES OF CRYSTAL STRUCTURES WHICH ARE KNOWN NOT TO EXIST… WITHOUT THIS CRUCIAL
BIT OF INFORMATION, THERE IS NOT AN OBVIOUS WAY FOR A MODEL TO BE TRAINED TO RECOGNIZE REASONABLE
FROM UNREASONABLE’
2. KNOWN STRUCTURE TYPES USED AS THE STARTING POINT FOR GENERATING NEW CRYSTAL STRUCTURES
THEREFORE MODEL UNABLE TO DISCOVER NOVEL STRUCTURAL ARRANGEMENTS
… LIMITATION OF SUPERVISED LEARNING FOR CLASSIFICATION VS. UNSUPERVISED PATTERN RECOGNITION?
24. PREDICTING COMPOUNDS BASED ON COMBINATORIAL
ELEMENTAL SUBSTITUTION OF KNOWN CRYSTAL STRUCTURES (AS
IDENTIFIED BY MODEL)
GIVEN LIMITATIONS… CRYSTAL STRUCTURE PREDICTION PROBLEM REFORMULATED AS PREDICTING THE LIKELIHOODS OF
INDIVIDUAL ATOMIC SITES IN THE STRUCTURE
PREPARATION OF MODEL/ EVALUATOR
AFTER DNN TRAINED, PREDICTED LIKELIHOODS OF ALL CHEMICAL ELEMENTS ON ALL CRYSTALLOGRAPHIC SITES
CALCULATED AND STORED
FINDING THE PROBABILITY OF AN ELEMENT TO ADOPT THE TOPOLOGY OF A SPECIFIC SITE IN A SPECIFIC STRUCTURE
WAS THEN A LOOKUP PROCEDURE (REDUCE COMPUTATIONAL EXPENSE BY NOT RE-CALCULATING)
GENERATION OF STRUCTURES TO EVALUATE
• UNIQUE CRYSTALLOGRAPHIC SITES IN THE 51723 KNOWN CRYSTAL STRUCTURES USED AS SEEDS (W/O ANY CHEMICAL
LABELS)
• GENERATOR PRODUCED NEW CRYSTAL STRUCTURES BY COMBINATORIAL SUBSTITUTION ACROSS THE SERIES OF ALL
CHEMICAL ELEMENTS INTO THESE TEMPLATES
• FOR A PARTICULAR CRYSTAL STRUCTURE TEMPLATE, SUBSTITUTION WAS PERFORMED UNTIL ALL POSSIBLE
COMPOSITIONS AND UNIQUE SITE CONFIGURATIONS HAD BEEN EXHAUSTED
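A hedged sketch of the generate-and-evaluate loop just described: substitute elements combinatorially into a template’s unique sites, then score each candidate by looking up the stored per-site likelihoods. The element set, likelihood values, and the product scoring rule are all illustrative, not the paper’s.

```python
from itertools import product

elements = ["Li", "Na", "Mg", "Fe", "O"]        # in the paper: all chemical elements
template_sites = ["site_A", "site_B"]            # unique crystallographic sites of one template

# Pre-computed by the trained DNN and stored: likelihood of each element on each site
site_likelihood = {
    ("site_A", "Li"): 0.60, ("site_A", "Na"): 0.30, ("site_A", "Mg"): 0.50,
    ("site_A", "Fe"): 0.20, ("site_A", "O"): 0.01,
    ("site_B", "Li"): 0.02, ("site_B", "Na"): 0.01, ("site_B", "Mg"): 0.05,
    ("site_B", "Fe"): 0.10, ("site_B", "O"): 0.90,
}

candidates = []
for assignment in product(elements, repeat=len(template_sites)):   # all site configurations
    score = 1.0
    for site, el in zip(template_sites, assignment):
        score *= site_likelihood[(site, el)]                       # cheap lookup, no re-calculation
    candidates.append((assignment, score))

candidates.sort(key=lambda c: c[1], reverse=True)
print(candidates[:3])   # top-ranked substitutions for this template, e.g. ('Li', 'O') first
```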
25. PREDICTING COMPOUNDS BASED ON COMBINATORIAL
ELEMENTAL SUBSTITUTION OF KNOWN CRYSTAL STRUCTURES (AS
IDENTIFIED BY MODEL)
TEST 1: PREDICTIVE ABILITY OF STRUCTURE EVALUATION COMPONENT
• 27% PROBABILITY FOR KNOWN CRYSTAL STRUCTURE TO APPEAR AS TOP
RANKED CANDIDATE
• 59% PROBABILITY FOR IT TO APPEAR IN TOP-10 RANKED PREDICTED
STRUCTURES
TEST 2: PREDICTIVE ABILITY OF STRUCTURE GENERATION COMPONENT
• (GENERATED FROM AVAILABLE TEMPLATES FOR PARTICULAR ELEMENTS)
• ‘OPTIMALITY SCORE’ INDICATED THE PREDICTION MODEL WAS SIGNIFICANTLY BETTER AT
IDENTIFYING LIKELY COMPOSITIONS THAN RANDOM CHANCE
• STRUCTURE EVALUATION THEN APPLIED TO GENERATED CANDIDATE STRUCTURES
• MODEL TESTED AGAINST 5845 CRYSTAL STRUCTURES NOT USED DURING TRAINING PROCESS
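And a toy version of the top-10 figure of merit used in Test 1 (the ranked candidate list here is made up):

```python
def top_k_hit(ranked_candidates, known_composition, k=10):
    """True if the known composition appears among the k highest-ranked candidates."""
    return known_composition in ranked_candidates[:k]

# Hypothetical ranked output from the evaluator for one template, best first
ranked = ["LiFeO2", "NaFeO2", "LiCoO2", "MgFeO2", "LiNiO2"]
print(top_k_hit(ranked, "LiCoO2"))   # True -> this structure counts as a top-10 hit
```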
26. FINAL THOUGHTS…
CHEMICAL INTUITION VS. MACHINE LEARNING
‘WE DEMONSTRATE THAT THE DNN IS CAPABLE OF AUTOMATICALLY “DISCOVERING” RELEVANT DESCRIPTORS FROM HIGH-
DIMENSIONAL “RAW REPRESENTATIONS” OF THE CRYSTALLOGRAPHIC DATA. SINCE THE INPUT DATA CONTAIN PURELY
GEOMETRICAL AND TOPOLOGICAL INFORMATION, ANY CHEMICAL KNOWLEDGE RESIDING WITHIN THE DNN OUTPUT MUST
HAVE BEEN LEARNED DURING TRAINING, AND THUS WAS “DISCOVERED”. THE DNN’S LEARNED REPRESENTATION OF LOCAL
TOPOLOGY SHOWS EVIDENCE OF KNOWN GEOMETRIC AND CHEMICAL TRENDS NOT EXPLICITLY PROVIDED TO THE NETWORK
DURING TRAINING’
(I.E. NO CHEMISTRY RULES SUCH AS CHARGE NEUTRALITY AND ELECTRONEGATIVITY BALANCE AS IN SMACT)
USEFUL TO HAVE GUIDANCE… OR RESTRICTIVE? ...OR, IN THIS CASE, JUST SHOWING OFF THAT IT ISN’T NECESSARY ;)
MAIN FIGURE OF MERIT FOR THEIR FINAL MODEL AS PRESENTED IN THE ABSTRACT
‘IN ~30% OF CASES KNOWN COMPOUNDS WERE IN TOP 10 MOST LIKELY CANDIDATES PROPOSED BY MODEL’
• IS 30% OF CASES ENOUGH? (I WOULDN’T LIKE TO GET 30% ON AN EXAM…)
• ALTERNATIVELY... IF THE AIM IS TO DISCOVER NEW COMPOUNDS… IS IT FAIR TO USE NUMBER OF EXISTING COMPOUNDS
IDENTIFIED TO JUDGE QUALITY OF MODEL FOR MATERIAL DISCOVERY?
• AND COULD ‘OVERFITTING’ STIFLE ‘CREATIVITY’ OF AN ML MODEL FOR MATERIAL DISCOVERY...?
Editor's Notes
DL put into context of ML, ML as algorithms where each step isn’t written explicitly by a programmer (computer program, written by computer program…)
In the paper I’m presenting, the method used fits into the ‘connectionist’ group. I should check with Dan, but I *think* the methods he used for structure prediction may fit into the ‘analogizers’? (But I’ve not got onto that chapter in the book yet)
Cool demonstration of google DeepMind: it finds a unique way to win the game that an ‘expert’ human player may not
Go as a game that is more like an art and less pre-programmable than say, chess.
The machine beat the Go world champion (but don’t worry, they’re still friends)… and players say they themselves learnt new techniques from the machine that hadn’t been attempted by humans before
So it can do snazzy stuff, but what actually is deep learning?!
And I suppose what I’ve done here from reading wikipedia and picking out what I deem to be the key points to present to you… is actually something that could have been done by an ML algorithm... Maybe!
Huge search space for materials discovery beyond known compounds (here identified by SMACT using all possible combinations of elements and then screening by simple chemical rules)
But then… what could the structures be? If we’re to attempt to predict their properties!
Since some of the previous diagram seemed a bit black-box-y (it actually used black boxes…), here’s an example workflow for training a neural network from google’s website
In an ideal world you would want all predictions to be correct, but how often the model is wrong in a particular way may have larger bearings for particular applications. E.g. if the model was being used to predict from the results of various tests if a patient had a particular disease, saying the person had the disease when they actually didn’t wouldn’t be ideal, but missing a case when a person did have the disease would be a far worse way to be wrong!
I guess the need for negative outcomes for ML is just another vote for the need of a journal of chemical failures!
Would be interesting to compare the 30% achieved by this model to the same procedure in SMACT, where candidates are initially identified by chemical rules!
This was a really cool paper, so gets a big thumbs up from the scientists of tomorrow
And here’s another one pondering what in the world to optimise next with his big neural network brain and all of the data at his/her disposal…