ICReDD-JST CREST
September 26, 2019
( )
(AIP)

iPS
(WPI-ICReDD)
ichigaku.takigawa@riken.jp
!2
• Suzuki+ ChemCatChem. 2019.
• Kamachi+ The Journal of Physical Chemistry C. 2019.
• Hinuma+ The Journal of Physical Chemistry C. 2018.
• Toyao+ The Journal of Physical Chemistry C. 2018.
• Takigawa+ RSC Advances. 2016.
Ken-ichi
SHIMIZU

(ICAT)
Satoru
TAKAKUSAGI

(ICAT)
Takashi
TOYAO

(ICAT)
Keisuke
SUZUKI

(DENSO)
!2
JST CREST ( )

( )
( )
!3
10 years (1995–2004)
7 years (2005–2011)
7 years (2012–2018)
( )

" L1 "
( )

( )


( )

+ JST :
( )
(2019 )
AIP iPS 

( )


速
Case Study: ( )
!4
Wolfgang Pauli
“God made the bulk; the surface was invented by the devil.”
adsorption
diffusion
desorption
dissociation
recombination
kinks
terraces
adatom
vacancysteps
•
•
•
•
•
•
• 

•
(Reactants)
羅
(Catalysts)
...

( ...)
Empirical optimization or "Edisonian empiricism"
!5
( )
feedback
Thomas Edison
• Genius is 1% inspiration and 99% perspiration.
• There is no substitute for hard work.
• I have not failed. I've just found 10,000 ways that won't work.

:
速 



" (empirical/inductive)" " (rational/deductive)"
!6
( )
feedback
( ) 

etc
( ) ( ...)



... <
速 



!7
( ) Data-Driven
feedback
( + )
( ) 

( )


Multilevel
2000-2010 /
!8
!9
羅
羅




IoT
( )
!10
Key
!11
( ) ( + )
( ) 

In-House + Public + 

+ Quality Control / Annotations)


Multilevel
( )
!12


( )




速 ( ...)


In-House + Public + 

+ Quality Control / Annotations)
+
!13
REVIEW
Inverse molecular design using
machine learning: Generative models
for matter engineering
Benjamin Sanchez-Lengeling and Alán Aspuru-Guzik*
The discovery of new materials can bring enormous societal and technological progress. In this
context, exploring completely the large space of potential materials is computationally
intractable. Here, we review methods for achieving inverse design, which aims to discover
tailored materials from the starting point of a particular desired functionality. Recent advances
from the rapidly growing field of artificial intelligence, mostly from the subfield of machine
learning, have resulted in a fertile exchange of ideas, where approaches to inverse molecular
design are being proposed and employed at a rapid pace. Among these, deep generative models
have been applied to numerous classes of materials: rational design of prospective drugs,
synthetic routes to organic compounds, and optimization of photovoltaics and redox flow
batteries, as well as a variety of other solid-state materials.
Many of the challenges of the 21st century
(1), from personalized health care to
energy production and storage, share a
common theme: materials are part of
the solution (2). In some cases, the solutions
to these challenges are fundamentally
limited by the physics and chemistry of a material,
such as the relationship of a material's
bandgap to the thermodynamic limits for the
generation of solar energy (3).
Several important materials discoveries arose
by chance or through a process of trial and error.
For example, vulcanized rubber was prepared in
the 19th century from random mixtures of com-
pounds, based on the observation that heating
with additives such as sulfur improved the
rubber’s durability. At the molecular level, in-
dividual polymer chains cross-linked, forming
bridges that enhanced the macroscopic mechan-
ical properties (4). Other notable examples in
this vein include Teflon, anesthesia, Vaseline,
Perkin’s mauve, and penicillin. Furthermore,
these materials come from common chemical
compounds found in nature. Potential drugs
either were prepared by synthesis in a chem-
ical laboratory or were isolated from plants,
soil bacteria, or fungus. For example, up until
2014, 49% of small-molecule cancer drugs were
natural products or their derivatives (5).
In the future, disruptive advances in the dis-
covery of matter could instead come from unex-
plored regions of the set of all possible molecular
and solid-state compounds, known as chemical
space (6, 7). One of the largest collections of
molecules, the chemical space project (8), has
mapped 166.4 billion molecules that contain at
most 17 heavy atoms. For pharmacologically rele-
vant small molecules, the number of structures is
estimated to be on the order of 10^60 (9). Adding
consideration of the hierarchy of scale from sub-
nanometer to microscopic and mesoscopic fur-
ther complicates exploration of chemical space
in its entirety (10). Therefore, any global strategy
for covering this space might seem impossible.
Simulation offers one way of probing this
space without experimentation. The physics
and chemistry of these molecules are governed
by quantum mechanics, which can be solved via
the Schrödinger equation to arrive at their ex-
act properties. In practice, approximations are
used to lower computational time at the cost of
accuracy.
Although theory enjoys enormous progress,
now routinely modeling molecules, clusters, and
perfect as well as defect-laden periodic solids, the
size of chemical space is still overwhelming, and
smart navigation is required. For this purpose,
machine learning (ML), deep learning (DL), and
artificial intelligence (AI) have a potential role
to play because their computational strategies
automatically improve through experience (11).
In the context of materials, ML techniques are
often used for property prediction, seeking to
learn a function that maps a molecular material
to the property of choice. Deep generative models
are a special class of DL methods that seek to
model the underlying probability distribution of
both structure and property and relate them in a
nonlinear way. By exploiting patterns in massive
datasets, these models can distill average and
salient features that characterize molecules (12, 13).
Inverse design is a component of a more
complex materials discovery process. The time
scale for deployment of new technologies, from
discovery in a laboratory to a commercial pro-
duct, historically, is 15 to 20 years (14). The pro-
cess (Fig. 1) conventionally involves the following
steps: (i) generate a new or improved material
concept and simulate its potential suitability; (ii)
synthesize the material; (iii) incorporate the ma-
terial into a device or system; and (iv) characterize
and measure the desired properties. This cycle
generates feedback to repeat, improve, and re-
fine future cycles of discovery. Each step can take
up to several years.
In the era of matter engineering, scientists
seek to accelerate these cycles, reducing the
FRONTIERS IN COMPUTATION
1Department of Chemistry and Chemical Biology, Harvard University, 12 Oxford Street, Cambridge, MA 02138, USA. 2Department of Chemistry and Department of Computer Science, University of Toronto, Toronto, Ontario M5S 3H6, Canada. 3Vector Institute for Artificial Intelligence, Toronto, Ontario M5S 1M1, Canada. 4Canadian Institute for Advanced
Fig. 1. Schematic comparison of material discovery paradigms. The current paradigm is
REVIEW https://doi.org/10.1038/s41586-018-0337-2
Machine learning for molecular and
materials science
Keith T. Butler, Daniel W. Davies, Hugh Cartwright, Olexandr Isayev* & Aron Walsh*
Here we summarize recent progress in machine learning for the chemical sciences. We outline machine-learning
techniques that are suitable for addressing research questions in this domain, as well as future directions for the field.
We envisage a future in which the design, synthesis, characterization and application of molecules and materials is
accelerated by artificial intelligence.
The Schrödinger equation provides a powerful structure–
property relationship for molecules and materials. For a given
spatial arrangement of chemical elements, the distribution of
electrons and a wide range of physical responses can be described. The
development of quantum mechanics provided a rigorous theoretical
foundation for the chemical bond. In 1929, Paul Dirac famously proclaimed
that the underlying physical laws for the whole of chemistry are “completely
known”1. John Pople, realizing the importance of rapidly developing
computer technologies, created a program—Gaussian 70—that could
perform ab initio calculations: predicting the behaviour, for molecules
of modest size, purely from the fundamental laws of physics2. In the 1960s,
the Quantum Chemistry Program Exchange brought quantum chemistry
to the masses in the form of useful practical tools3. Suddenly,
experimentalists with little or no theoretical training could perform quantum
calculations too. Using modern algorithms and supercomputers,
systems containing thousands of interacting ions and electrons can now
be described using approximations to the physical laws that govern the
world on the atomic scale4–6.
The field of computational chemistry has become increasingly pre-
dictive in the twenty-first century, with activity in applications as wide
ranging as catalyst development for greenhouse gas conversion, materials
discovery for energy harvesting and storage, and computer-assisted drug
design7. The modern chemical-simulation toolkit allows the properties
of a compound to be anticipated (with reasonable accuracy) before it has
been made in the laboratory. High-throughput computational screening
has become routine, giving scientists the ability to calculate the properties
of thousands of compounds as part of a single study. In particular,
density functional theory (DFT)8,9, now a mature technique for calculating
the structure and behaviour of solids10, has enabled the development of
extensive databases that cover the calculated properties of known and
hypothetical systems, including organic and inorganic crystals, single
molecules and metal alloys11–13.
The emergence of contemporary artificial-intelligence methods has
the potential to substantially alter and enhance the role of computers in
science and engineering. The combination of big data and artificial intel-
ligence has been referred to as both the “fourth paradigm of science”14
and the “fourth industrial revolution”15, and the number of applications
in the chemical domain is growing at an astounding rate. A subfield of
artificial intelligence that has evolved rapidly in recent years is machine
learning. At the heart of machine-learning applications lie statistical algo-
rithms whose performance, much like that of a researcher, improves with
training. There is a growing infrastructure of machine-learning tools for
generating, testing and refining scientific models. Such techniques are
suitable for addressing complex problems that involve massive combi-
natorial spaces or nonlinear processes, which conventional procedures
either cannot solve or can tackle only at great computational cost.
As the machinery for artificial intelligence and machine learning
matures, important advances are being made not only by those in main-
stream artificial-intelligence research, but also by experts in other fields
(domain experts) who adopt these approaches for their own purposes. As
we detail in Box 1, the resources and tools that facilitate the application
of machine-learning techniques mean that the barrier to entry is lower
than ever.
In the rest of this Review, we discuss progress in the application of
machine learning to address challenges in molecular and materials
research. We review the basics of machine-learning approaches, iden-
tify areas in which existing methods have the potential to accelerate
research and consider the developments that are required to enable more
wide-ranging impacts.
Nuts and bolts of machine learning
With machine learning, given enough data and a rule-discovery algo-
rithm, a computer has the ability to determine all known physical laws
(and potentially those that are currently unknown) without human
input. In traditional computational approaches, the computer is little
more than a calculator, employing a hard-coded algorithm provided
by a human expert. By contrast, machine-learning approaches learn
the rules that underlie a dataset by assessing a portion of that data
and building a model to make predictions. We consider the basic steps
involved in the construction of a model, as illustrated in Fig. 1; this
constitutes a blueprint of the generic workflow that is required for the
successful application of machine learning in a materials-discovery
process.
Data collection
Machine learning comprises models that learn from existing (train-
ing) data. Data may require initial preprocessing, during which miss-
ing or spurious elements are identified and handled. For example, the
Inorganic Crystal Structure Database (ICSD) currently contains more
than 190,000 entries, which have been checked for technical mistakes
but are still subject to human and measurement errors. Identifying
and removing such errors is essential to avoid machine-learning
algorithms being misled. There is a growing public concern about
the lack of reproducibility and error propagation of experimental data
DNA to be sequenced into distinct pieces,
parcel out the detailed work of sequencing,
and then reassemble these independent ef-
forts at the end. It is not quite so simple in the
world of genome semantics.
Despite the differences between genome se-
quencing and genetic network discovery, there
are clear parallels that are illustrated in Table 1.
In genome sequencing, a physical map is useful
to provide scaffolding for assembling the fin-
ished sequence. In the case of a genetic regula-
tory network, a graphical model can play the
same role. A graphical model can represent a
high-level view of interconnectivity and help
isolate modules that can be studied indepen-
dently. Like contigs in a genomic sequencing
project, low-level functional models can ex-
plore the detailed behavior of a module of genes
in a manner that is consistent with the higher
level graphical model of the system. With stan-
dardized nomenclature and compatible model-
ing techniques, independent functional models
can be assembled into a complete model of the
cell under study.
To enable this process, there will need to
be standardized forms for model representa-
tion. At present, there are many different
modeling technologies in use, and although
models can be easily placed into a database,
they are not useful out of the context of their
specific modeling package. The need for a
standardized way of communicating compu-
tational descriptions of biological systems ex-
tends to the literature. Entire conferences
have been established to explore ways of
mining the biology literature to extract se-
mantic information in computational form.
Going forward, as a community we need
to come to consensus on how to represent
what we know about biology in computa-
tional form as well as in words. The key to
postgenomic biology will be the computa-
tional assembly of our collective knowl-
edge into a cohesive picture of cellular and
organism function. With such a comprehen-
sive model, we will be able to explore new
types of conservation between organisms
and make great strides toward new thera-
peutics that function on well-characterized
pathways.
References
1. S. K. Kim et al., Science 293, 2087 (2001).
2. A. Hartemink et al., paper presented at the Pacific Symposium on Biocomputing 2000, Oahu, Hawaii, 4 to 9 January 2000.
3. D. Pe’er et al., paper presented at the 9th Conference on Intelligent Systems in Molecular Biology (ISMB), Copenhagen, Denmark, 21 to 25 July 2001.
4. H. McAdams, A. Arkin, Proc. Natl. Acad. Sci. U.S.A. 94, 814 (1997).
5. A. J. Hartemink, thesis, Massachusetts Institute of Technology, Cambridge (2001).
VIEWPOINT
Machine Learning for Science: State of the
Art and Future Prospects
Eric Mjolsness* and Dennis DeCoste
Recent advances in machine learning methods, along with successful
applications across a wide variety of fields such as planetary science and
bioinformatics, promise powerful new tools for practicing scientists. This
viewpoint highlights some useful characteristics of modern machine learn-
ing methods and their relevance to scientific applications. We conclude
with some speculations on near-term progress and promising directions.
Machine learning (ML) (1) is the study of
computer algorithms capable of learning to im-
prove their performance of a task on the basis of
their own previous experience. The field is
closely related to pattern recognition and statis-
tical inference. As an engineering field, ML has
become steadily more mathematical and more
successful in applications over the past 20
years. Learning approaches such as data clus-
tering, neural network classifiers, and nonlinear
regression have found surprisingly wide appli-
cation in the practice of engineering, business,
and science. A generalized version of the stan-
dard Hidden Markov Models of ML practice
have been used for ab initio prediction of gene
structures in genomic DNA (2). The predictions
correlate surprisingly well with subsequent
gene expression analysis (3). Postgenomic bi-
ology prominently features large-scale gene ex-
pression data analyzed by clustering methods
(4), a standard topic in unsupervised learning.
Many other examples can be given of learning
and pattern recognition applications in science.
Where will this trend lead? We believe it will
lead to appropriate, partial automation of every
element of scientific method, from hypothesis
generation to model construction to decisive
experimentation. Thus, ML has the potential to
amplify every aspect of a working scientist’s
progress to understanding. It will also, for better
or worse, endow intelligent computer systems
with some of the general analytic power of
scientific thinking.
Machine Learning at Every Stage of
the Scientific Process
Each scientific field has its own version of the
scientific process. But the cycle of observing,
creating hypotheses, testing by decisive exper-
iment or observation, and iteratively building
up comprehensive testable models or theories is
shared across disciplines. For each stage of this
abstracted scientific process, there are relevant
developments in ML, statistical inference, and
pattern recognition that will lead to semiauto-
matic support tools of unknown but potentially
broad applicability.
Increasingly, the early elements of scientific
method—observation and hypothesis genera-
tion—face high data volumes, high data acqui-
sition rates, or requirements for objective anal-
ysis that cannot be handled by human percep-
tion alone. This has been the situation in exper-
imental particle physics for decades. There
automatic pattern recognition for significant
events is well developed, including Hough
transforms, which are foundational in pattern
recognition. A recent example is event analysis
for Cherenkov detectors (8) used in neutrino
oscillation experiments. Microscope imagery in
cell biology, pathology, petrology, and other
fields has led to image-processing specialties.
So has remote sensing from Earth-observing
satellites, such as the newly operational Terra
spacecraft with its ASTER (a multispectral
thermal radiometer), MISR (multiangle imag-
ing spectral radiometer), MODIS (imaging
Machine Learning Systems Group, Jet Propulsion Laboratory/California Institute of Technology, Pasadena, CA 91109, USA.
*To whom correspondence should be addressed. E-mail: mjolsness@jpl.nasa.gov
Table 1. Parallels between genome sequencing and genetic network discovery.

Genome sequencing        | Genome semantics
Physical maps            | Graphical model
Contigs                  | Low-level functional models
Contig reassembly        | Module assembly
Finished genome sequence | Comprehensive model
www.sciencemag.org SCIENCE VOL 293 14 SEPTEMBER 2001 2051
COMPUTERS AND SCIENCE
Nature, 559, pp. 547–555 (2018)
Science, 293, pp. 2051–2055 (2001)
Science, 361, pp. 360–365 (2018)
Science is changing, the tools of science are changing. And that
requires different approaches. (Erich Bloch, 1925–2016)
( )

"low input, high throughput, no output science." (Sydney Brenner)
Case Study: ( )
!14
Wolfgang Pauli
“God made the bulk; the surface was invented by the devil.”
adsorption
diffusion
desorption
dissociation
recombination
kinks
terraces
adatom
vacancysteps
•
•
•
•
•
•
• 

•
(Reactants)
羅
(Catalysts)
...

( ...)
(ML; Machine Learning)
!15
Generic Object Recognition
Speech Recognition
Machine Translation
QSAR/QSPR Prediction
AI Game Players
“ ”
J’aime la musique → I love music
=
CH3
N
N
H
N
H
H3C
N
Growth inhibition
0.739
( ) ( ) +
!16
K. Shimizu et al, ACS Catal. 2, 1904 (2012)
d-band center (εd − EF) / eV
Hammer–Nørskov d-band model
reaction rates
Volcano
trends!
adsorption energy / eV
Brønsted-Evans-Polanyi
relation
activation energy / eV
Linear trends!
( )
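The linear trends on this slide (the BEP relation between adsorption/reaction energy and activation energy) amount to a one-line least-squares fit. A minimal sketch follows; the five data points are synthetic stand-ins, not values from the slide:

```python
import numpy as np

# Hedged illustration: the BEP relation assumes the activation energy is
# linear in the reaction (adsorption) energy, Ea = alpha * dE + beta.
# The five data points below are synthetic, not taken from the slide.
dE = np.array([-1.2, -0.8, -0.3, 0.1, 0.6])    # reaction energies / eV
Ea = np.array([0.45, 0.62, 0.88, 1.05, 1.33])  # activation energies / eV

alpha, beta = np.polyfit(dE, Ea, deg=1)        # least-squares line
rmse = float(np.sqrt(np.mean((Ea - (alpha * dE + beta)) ** 2)))
print(f"alpha = {alpha:.2f}, beta = {beta:.2f} eV, fit RMSE = {rmse:.3f} eV")
```

Once alpha and beta are fitted for a reaction family, a single adsorption-energy calculation stands in for a full transition-state search.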
Outline: Our ML-based studies
!17
1. Can we predict the d-band center?
2. Can we predict the adsorption energy?
3. Can we predict the catalytic activity?
predicting DFT-calculated values by machine learning
  (Takigawa et al, RSC Advances, 2016)
predicting DFT-calculated values by machine learning
  (Toyao et al, JPCC, 2018)
predicting values from experiments reported in the literature
by machine learning
  (Suzuki et al, ChemCatChem, 2019)
Case 1. Predicting the d-band centers
!18
Guest
Host
Ruban, Hammer, Stoltze, Skriver, Nørskov, J Mol Catal A, 115:421-429 (1997)
J. K. Nørskov, et al., Advances in Catalysis, 2000
Host
Guest
Two types of models
• 1% doped
• overlayer
[1% doped]
The d-bands of
transition metals
play central roles.
The ML model
!19
Group (G)
Bulk Wigner-Seitz radius (R) in Å
Atomic number (AN)
Atomic mass (AM) in g mol−1
Period (P)
Electronegativity (EN)
Ionization energy (IE) in eV
Enthalpy of fusion (ΔfusH) in J g−1
Density at 25 °C (ρ) in g cm−3


9 elemental properties per element (18 descriptors for host + guest)
Top 6 descriptors from Gradient Boosted Tree Regression (GBR):
(1) Group in the periodic table (host)
(2) Density at 25 (host)
(3) Enthalpy of fusion (guest)
(4) Ionization energy (guest)
(5) Enthalpy of fusion (host)
(6) Ionization energy (host)
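A minimal sketch of this setup, assuming scikit-learn's GradientBoostingRegressor. The data here are random stand-ins (the real study uses DFT-computed d-band centers and tabulated elemental properties), and the informative feature indices are an assumption for illustration only:

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(0)

# The study uses 9 elemental properties (group, period, atomic number,
# atomic mass, electronegativity, ionization energy, enthalpy of fusion,
# density, Wigner-Seitz radius) for host and guest -> 18 descriptors.
# X is random stand-in data; the dependence of y on descriptors 0 and 7
# is a hypothetical choice, not taken from the paper.
n_samples, n_features = 90, 18
X = rng.normal(size=(n_samples, n_features))
y = 0.5 * X[:, 0] - 0.3 * X[:, 7] + rng.normal(scale=0.05, size=n_samples)

model = GradientBoostingRegressor(n_estimators=300, random_state=0).fit(X, y)

# feature_importances_ gives the descriptor ranking behind a top-6 list
top = np.argsort(model.feature_importances_)[::-1][:6]
print("top-6 descriptors (indices):", top)
```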
ML : The beauty of the periodic table...
!20
Fe Co Ni Cu Ru Rh Pd Ag Ir Pt Au
Fe -0.92 -0.96 -0.97 -1.65 -1.64 -2.24 -1.87 -2.4 -3.11
Co -1.37 -1.23 -2.12 -2.82 -2.53 -2.26 -3.56
Ni -0.33 -1.18 -1.92 -2.03 -2.43 -2.15 -2.82 -3.39
Cu -2.42 -2.49 -2.67 -2.89 -2.94 -3.82 -4.63
Ru -1.11 -1.04 -1.12 -1.41 -1.88 -1.81 -1.54 -2.27
Rh -1.42 -1.32 -1.51 -1.7 -1.73 -2.12 -1.81 -1.7 -2.18 -2.3
Pd -1.47 -1.29 -1.29 -1.03 -1.58 -1.83 -1.68 -1.52 -1.79
Ag -3.75 -3.56 -3.62 -3.8 -4.03 -3.5 -3.93 -4.51
Ir -1.78 -1.71 -1.78 -1.55 -2.14 -2.53 -2.2 -2.11 -2.6 -2.7
Pt -1.71 -1.47 -2.13 -2.01 -2.23 -2.06 -1.96 -2.33
Au -3.03 -2.82 -2.85 -2.89 -3.44 -3.56
Fe Co Ni Cu Ru Rh Pd Ag Ir Pt Au
Fe -0.78 -1.65 -1.64 -1.87
Co -1.18 -1.17 -1.37 -1.87 -2.12 -2.82 -2.26
Ni -0.33 -1.18 -1.17 -2.61 -2.43 -2.15 -2.82
Cu -2.42 -2.89 -2.94 -3.88 -4.63
Ru -1.11 -1.04 -1.12 -1.11 -1.41 -1.81 -2.27
Rh -1.42 -1.51 -2.12 -1.81 -1.7
Pd -1.29 -1.29 -1.03 -1.58 -1.83 -1.52 -1.79
Ag -3.68 -3.8 -3.63 -4.51
Ir -2.14 -2.11 -2.7
Pt -1.71 -1.47 -2.13 -2.01 -2.23 -2.06
Au -2.86 -3.09 -2.89 -3.44 -3.56
Fe Co Ni Cu Ru Rh Pd Ag Ir Pt Au
Fe -2.17 -3.11
Co -1.17 -1.37 -2.12
Ni -0.33 -1.18 -2.61 -2.43
Cu -2.42 -2.29 -2.49 -3.71 -4.63
Ru -2.02
Rh -1.32 -1.73 -2.12
Pd -1.94 -1.83 -1.97
Ag -3.75 -3.68 -4.51
Ir -1.78 -1.71 -2.7
Pt -2.13
Au -3.09 -2.89
training sets (75%)
test sets (25%)
training sets (50%)
test sets (50%)
training sets (25%)
test sets (75%)
100 times

mean RMSE:
0.153 / eV
100 times

mean RMSE:
0.235 / eV
100 times

mean RMSE:
0.402 / eV
ML
ML
ML
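The evaluation protocol above (random train/test splits repeated 100 times, mean test RMSE reported) can be sketched as below, assuming scikit-learn; synthetic data and 10 repeats keep the sketch fast:

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.metrics import mean_squared_error
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
X = rng.normal(size=(120, 5))                  # synthetic stand-in data
y = X @ np.array([1.0, -0.5, 0.3, 0.0, 0.0]) + rng.normal(scale=0.1, size=120)

# The slides repeat each random split 100 times and report the mean test
# RMSE; 10 repeats are used here only to keep the example quick.
rmses = []
for seed in range(10):
    X_tr, X_te, y_tr, y_te = train_test_split(
        X, y, test_size=0.25, random_state=seed)
    model = GradientBoostingRegressor(random_state=0).fit(X_tr, y_tr)
    rmses.append(mean_squared_error(y_te, model.predict(X_te)) ** 0.5)

print(f"mean RMSE over {len(rmses)} splits: {np.mean(rmses):.3f}")
```

Varying `test_size` (0.25, 0.50, 0.75) reproduces the three panels' protocol: less training data, larger mean RMSE.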
Descriptor analysis and evaluation
!21
100 times mean RMSE:
0.204±0.047 / eV
100 times mean RMSE:
0.212±0.047 / eV
100 times mean RMSE:
0.214±0.046 / eV
18
Descriptor
Selection

(top-k)
training sets (75%)
test sets (25%)
Method: GBR
6 4
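The top-k descriptor selection on this slide can be sketched by ranking GBR feature importances and refitting on the best k columns. The data below are synthetic and the informative feature indices are hypothetical; the point is only the workflow (the slide's finding is that ~6, or even 4, descriptors perform close to all 18):

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.metrics import mean_squared_error
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(2)

# 18 descriptors, as in the slides (9 properties x host/guest);
# X and y are synthetic stand-ins with hypothetical informative columns.
X = rng.normal(size=(150, 18))
y = X[:, 0] - 0.5 * X[:, 3] + 0.3 * X[:, 9] + rng.normal(scale=0.05, size=150)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)

# Rank descriptors once with a model trained on all 18 features
full = GradientBoostingRegressor(random_state=0).fit(X_tr, y_tr)
rank = np.argsort(full.feature_importances_)[::-1]

# Refit on the top-k descriptors only and compare test RMSE
results = {}
for k in (18, 6, 4):
    cols = rank[:k]
    m = GradientBoostingRegressor(random_state=0).fit(X_tr[:, cols], y_tr)
    results[k] = mean_squared_error(y_te, m.predict(X_te[:, cols])) ** 0.5
    print(f"top-{k:2d} descriptors: test RMSE = {results[k]:.3f}")
```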
Case 2. Predicting the adsorption energy
!22
DFT calculation of adsorption energy:
about 10 hours on our 32-core workstation
(CH3 on the Cu monometallic surface);
even longer (about 34 hours) for systems
containing another metal such as Pb
Predicting Adsorption energy
of CH3 (on 46 Cu-based alloys)
ML prediction
• < 1 sec on a single-core laptop
• runtime depends on the chosen ML method,
not on the target system
training sets (75%)
test sets (25%)
Adsorbates: 

CH3, CH2, CH, C, H
!23
training sets (75%)
test sets (25%)
training sets (50%)
test sets (50%)
training sets (25%)
test sets (75%)
100 times

mean RMSE:
0.153 / eV
100 times

mean RMSE:
0.235 / eV
100 times

mean RMSE:
0.402 / eV
ML
ML
ML
(Surrogate/Proxy)
!24
( ) Data-Driven


( )
( AI)
feedback


(Surrogate/Proxy)
!24
( ) Data-Driven


( )
( AI)
feedback


(
)
!25
=
=
=
( ) x
( )
( ) 



( )
...
ML
!26
Highly Inaccurate Model Predictions from
Extrapolation (Lohninger 1999)
( )
"exploitation" "exploration"
/ /
Surrogate optimization / model-based optimization
!27
x<latexit sha1_base64="BLB8K/n7QYAsE73zsDEUiBvCSV8=">AAAB/XicbVDLSgMxFL2pr1pfVZdugkVwVWZE0GXRjcsK9gHtUDJppo1NMkOSEctQ/AW3uncnbv0Wt36JaTsLbT1w4XDOvZzLCRPBjfW8L1RYWV1b3yhulra2d3b3yvsHTROnmrIGjUWs2yExTHDFGpZbwdqJZkSGgrXC0fXUbz0wbXis7uw4YYEkA8UjTol1UrMbyuxx0itXvKo3A14mfk4qkKPeK393+zFNJVOWCmJMx/cSG2REW04Fm5S6qWEJoSMyYB1HFZHMBNns2wk+cUofR7F2oyyeqb8vMiKNGcvQbUpih2bRm4r/eqFcSLbRZZBxlaSWKToPjlKBbYynVeA+14xaMXaEUM3d75gOiSbUusJKrhR/sYJl0jyr+l7Vvz2v1K7yeopwBMdwCj5cQA1uoA4NoHAPz/ACr+gJvaF39DFfLaD85hD+AH3+ADzJlfc=</latexit><latexit sha1_base64="BLB8K/n7QYAsE73zsDEUiBvCSV8=">AAAB/XicbVDLSgMxFL2pr1pfVZdugkVwVWZE0GXRjcsK9gHtUDJppo1NMkOSEctQ/AW3uncnbv0Wt36JaTsLbT1w4XDOvZzLCRPBjfW8L1RYWV1b3yhulra2d3b3yvsHTROnmrIGjUWs2yExTHDFGpZbwdqJZkSGgrXC0fXUbz0wbXis7uw4YYEkA8UjTol1UrMbyuxx0itXvKo3A14mfk4qkKPeK393+zFNJVOWCmJMx/cSG2REW04Fm5S6qWEJoSMyYB1HFZHMBNns2wk+cUofR7F2oyyeqb8vMiKNGcvQbUpih2bRm4r/eqFcSLbRZZBxlaSWKToPjlKBbYynVeA+14xaMXaEUM3d75gOiSbUusJKrhR/sYJl0jyr+l7Vvz2v1K7yeopwBMdwCj5cQA1uoA4NoHAPz/ACr+gJvaF39DFfLaD85hD+AH3+ADzJlfc=</latexit><latexit sha1_base64="BLB8K/n7QYAsE73zsDEUiBvCSV8=">AAAB/XicbVDLSgMxFL2pr1pfVZdugkVwVWZE0GXRjcsK9gHtUDJppo1NMkOSEctQ/AW3uncnbv0Wt36JaTsLbT1w4XDOvZzLCRPBjfW8L1RYWV1b3yhulra2d3b3yvsHTROnmrIGjUWs2yExTHDFGpZbwdqJZkSGgrXC0fXUbz0wbXis7uw4YYEkA8UjTol1UrMbyuxx0itXvKo3A14mfk4qkKPeK393+zFNJVOWCmJMx/cSG2REW04Fm5S6qWEJoSMyYB1HFZHMBNns2wk+cUofR7F2oyyeqb8vMiKNGcvQbUpih2bRm4r/eqFcSLbRZZBxlaSWKToPjlKBbYynVeA+14xaMXaEUM3d75gOiSbUusJKrhR/sYJl0jyr+l7Vvz2v1K7yeopwBMdwCj5cQA1uoA4NoHAPz/ACr+gJvaF39DFfLaD85hD+AH3+ADzJlfc=</latexit><latexit 
sha1_base64="BLB8K/n7QYAsE73zsDEUiBvCSV8=">AAAB/XicbVDLSgMxFL2pr1pfVZdugkVwVWZE0GXRjcsK9gHtUDJppo1NMkOSEctQ/AW3uncnbv0Wt36JaTsLbT1w4XDOvZzLCRPBjfW8L1RYWV1b3yhulra2d3b3yvsHTROnmrIGjUWs2yExTHDFGpZbwdqJZkSGgrXC0fXUbz0wbXis7uw4YYEkA8UjTol1UrMbyuxx0itXvKo3A14mfk4qkKPeK393+zFNJVOWCmJMx/cSG2REW04Fm5S6qWEJoSMyYB1HFZHMBNns2wk+cUofR7F2oyyeqb8vMiKNGcvQbUpih2bRm4r/eqFcSLbRZZBxlaSWKToPjlKBbYynVeA+14xaMXaEUM3d75gOiSbUusJKrhR/sYJl0jyr+l7Vvz2v1K7yeopwBMdwCj5cQA1uoA4NoHAPz/ACr+gJvaF39DFfLaD85hD+AH3+ADzJlfc=</latexit>
ML <latexit sha1_base64="0VEGB1BS2t8KmbZWf3FuR1QlwM8=">AAACrnichVFLLwNRFP6MV72LjcSm0RA2zZmiWithY+nVIjTNzLitiXllZtqg8QdsLSywILEQP8PGH7DwE8SSxMbCmemIWJRzc+899zvnO/e796iOoXs+0XOL1NrW3tEZ6+ru6e3rH4gPDhU8u+pqIq/Zhu1uqYonDN0SeV/3DbHluEIxVUNsqgdLQXyzJlxPt60N/8gRRVOpWHpZ1xSfoe3y5K5q1g9PpkrxJKVy2QzNpBOUIsqmKcPOLMk5OZeQGQksichW7PgjdrEHGxqqMCFgwWffgAKPxw5kEBzGiqgz5rKnh3GBE3Qzt8pZgjMURg94rfBpJ0ItPgc1vZCt8S0GT5eZCYzTE93RGz3SPb3QZ9Na9bBGoOWId7XBFU5p4HRk/eNflsm7j/0f1p+afZSRDbXqrN0JkeAVWoNfOz5/W59fG69P0A29sv5reqYHfoFVe9duV8XaxR96VNbS/MeCeJTBLfzuU6K5U0in5OkUrc4kFxajZsYwijFMcsfmsIBlrCDPN5g4wyWuJJIKUlEqNVKllogzjF8m7X8BK8iaxA==</latexit>
"exploitation" "exploration" ML
( )
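One common form of surrogate/model-based optimization, sketched here under the assumption of a Gaussian-process surrogate with an expected-improvement acquisition (the slide does not name a specific recipe; the objective is a synthetic stand-in for an expensive experiment):

```python
import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

def f(x):
    # Synthetic stand-in for an expensive experiment or DFT calculation
    return -(x - 0.7) ** 2 + 0.05 * np.sin(20 * x)

rng = np.random.default_rng(0)
X = rng.uniform(0, 1, size=(4, 1))            # initial "experiments"
y = f(X).ravel()
cand = np.linspace(0, 1, 201).reshape(-1, 1)  # candidate points

for _ in range(10):
    # Fit the surrogate to everything measured so far
    gp = GaussianProcessRegressor(kernel=RBF(0.1), alpha=1e-6).fit(X, y)
    mu, sigma = gp.predict(cand, return_std=True)
    # Expected improvement trades off exploitation (high mu)
    # against exploration (high sigma)
    best = y.max()
    z = (mu - best) / np.maximum(sigma, 1e-9)
    ei = (mu - best) * norm.cdf(z) + sigma * norm.pdf(z)
    x_next = cand[np.argmax(ei)]              # next point to "measure"
    X = np.vstack([X, x_next])
    y = np.append(y, f(x_next))

print(f"best x = {X[np.argmax(y)][0]:.3f}, best f(x) = {y.max():.4f}")
```

Each loop iteration replaces one expensive measurement with a cheap model query, which is the point of the surrogate/proxy picture on the slide.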
ML <latexit sha1_base64="0VEGB1BS2t8KmbZWf3FuR1QlwM8=">AAACrnichVFLLwNRFP6MV72LjcSm0RA2zZmiWithY+nVIjTNzLitiXllZtqg8QdsLSywILEQP8PGH7DwE8SSxMbCmemIWJRzc+899zvnO/e796iOoXs+0XOL1NrW3tEZ6+ru6e3rH4gPDhU8u+pqIq/Zhu1uqYonDN0SeV/3DbHluEIxVUNsqgdLQXyzJlxPt60N/8gRRVOpWHpZ1xSfoe3y5K5q1g9PpkrxJKVy2QzNpBOUIsqmKcPOLMk5OZeQGQksichW7PgjdrEHGxqqMCFgwWffgAKPxw5kEBzGiqgz5rKnh3GBE3Qzt8pZgjMURg94rfBpJ0ItPgc1vZCt8S0GT5eZCYzTE93RGz3SPb3QZ9Na9bBGoOWId7XBFU5p4HRk/eNflsm7j/0f1p+afZSRDbXqrN0JkeAVWoNfOz5/W59fG69P0A29sv5reqYHfoFVe9duV8XaxR96VNbS/MeCeJTBLfzuU6K5U0in5OkUrc4kFxajZsYwijFMcsfmsIBlrCDPN5g4wyWuJJIKUlEqNVKllogzjF8m7X8BK8iaxA==</latexit>
) ,
e.g.

"expected improvement"
"exploitation" "exploration" ML
( )
Surrogate optimization / model-based optimization
!27
(figure: an ML surrogate model fitted over the input x)
e.g. "expected improvement"
1. Initial Sampling ( )
2. Loop: ( )
1. Construct a Surrogate Model.
2. Search the Infill Criterion.
3. Add new samples. (intervention)
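A minimal sketch of the three-step loop above, assuming a toy 1-D objective, a Gaussian-process surrogate from scikit-learn, and the "expected improvement" infill criterion named on this slide; every concrete name below (`objective`, the candidate grid, the loop counts) is an illustrative assumption, not part of the talk.

```python
import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

rng = np.random.default_rng(0)

def objective(x):                       # stand-in for the expensive experiment
    return np.sin(3 * x) + 0.5 * x

# 1. Initial sampling
X = rng.uniform(0, 3, size=(5, 1))
y = objective(X).ravel()

candidates = np.linspace(0, 3, 200).reshape(-1, 1)
for _ in range(10):                     # 2. loop
    # 2-1. construct a surrogate model
    gp = GaussianProcessRegressor(kernel=RBF(), normalize_y=True,
                                  alpha=1e-6).fit(X, y)
    mu, sigma = gp.predict(candidates, return_std=True)
    # 2-2. search the infill criterion: expected improvement (maximization)
    best = y.max()
    z = (mu - best) / np.maximum(sigma, 1e-9)
    ei = (mu - best) * norm.cdf(z) + sigma * norm.pdf(z)
    x_next = candidates[np.argmax(ei)]
    # 2-3. add the new sample (the "intervention": run the experiment)
    X = np.vstack([X, [x_next]])
    y = np.append(y, objective(x_next))

print("best input found:", float(X[np.argmax(y), 0]), "value:", float(y.max()))
```

The EI acquisition trades off "exploitation" (high predicted mean mu) against "exploration" (high predicted uncertainty sigma), which is the balance the slide refers to.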
ML Open Research Topics
• +
•
•
•
•
•
• (CFR )
"exploitation" "exploration" ML
( )
Hot Research Topic
!28
AlphaGo (Nature, Jan 2016)
AlphaGo Zero (Nature, Oct 2017)
AlphaZero (Science, Dec 2018)
Amazon SageMaker
Structure-activity landscapes are nonsmooth...
!29
J. Med. Chem. 2012, 55, 2932–2942
nonsmooth

( )
Activity cliffs Selectivity cliffs
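These "cliffs" are often quantified with the structure-activity landscape index of Guha and Van Drie, SALI(i, j) = |A_i - A_j| / (1 - sim(i, j)): pairs of molecules that are very similar yet differ sharply in activity score highest. A toy sketch with made-up binary fingerprints and Tanimoto similarity (not data from the cited paper):

```python
import numpy as np

def tanimoto(a, b):
    """Tanimoto similarity of two binary fingerprints."""
    inter = np.sum(a & b)
    union = np.sum(a | b)
    return inter / union if union else 0.0

# toy binary fingerprints and activities (pIC50-like, invented)
fps = np.array([[1, 1, 1, 0, 1, 0],
                [1, 1, 1, 0, 0, 0],   # very similar to molecule 0 ...
                [0, 0, 1, 1, 0, 1]])
act = np.array([8.5, 4.1, 5.0])       # ... but very different activity: a cliff

n = len(fps)
sali = np.zeros((n, n))
for i in range(n):
    for j in range(i):
        sim = tanimoto(fps[i], fps[j])
        sali[i, j] = abs(act[i] - act[j]) / max(1.0 - sim, 1e-9)

i, j = np.unravel_index(np.argmax(sali), sali.shape)
print("steepest activity cliff between molecules", int(i), "and", int(j))
```

High-SALI pairs are exactly what makes these landscapes hard for smooth ML regressors: nearly identical inputs mapped to very different outputs.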
!30
http://www.evolvingai.org/fooling
https://arxiv.org/pdf/1610.06940.pdf
https://towardsdatascience.com/know-your-adversary-understanding-adversarial-examples-part-1-2-63af4c2f5830
e.g. Adversarial examples (GANs, etc.)
(Suzuki+ 2019)
!31
www.chemcatchem.org
A Journal of
18/2019
Front Cover:
Keisuke Suzuki et al.
Statistical Analysis and Discovery of Heterogeneous Catalysts Based
on Machine Learning from Diverse Published Data
!32
• 1833 catalysts: Oxidative coupling of methane (OCM) [Zavyalova+ 2011]
• 4185 catalysts: Water gas shift reaction (WGS) [Odabaşi+ 2014]
• 5567 catalysts: CO oxidation [Günay+ 2013]
Our model
GPR-based BO
Random


(OCM )
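The comparison in this slide (a model-guided search vs random selection of the next catalysts to test from a fixed candidate pool) can be mimicked on synthetic data. The pool, the hidden "yield" function, and the UCB-style infill below are stand-in assumptions, not the paper's actual setup:

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor

rng = np.random.default_rng(1)
pool = rng.uniform(-2, 2, size=(300, 3))             # candidate descriptors
yields = np.exp(-np.sum((pool - 0.5) ** 2, axis=1))  # hidden objective

def run(strategy, n_init=10, n_steps=40):
    """Return the best yield found after n_init + n_steps evaluations."""
    idx = list(rng.choice(len(pool), n_init, replace=False))
    for _ in range(n_steps):
        rest = [i for i in range(len(pool)) if i not in idx]
        if strategy == "random":
            idx.append(rng.choice(rest))
        else:  # GPR-based BO with an upper-confidence-bound infill
            gp = GaussianProcessRegressor(normalize_y=True, alpha=1e-6)
            gp.fit(pool[idx], yields[idx])
            mu, sd = gp.predict(pool[rest], return_std=True)
            idx.append(rest[int(np.argmax(mu + 2.0 * sd))])
    return float(yields[idx].max())

best_bo = run("bo")
best_rand = run("random")
print("GPR-based BO best yield:", best_bo)
print("random search best yield:", best_rand)
```

Plotting best-yield-so-far against the number of tested candidates for each strategy reproduces the kind of curve shown on the slide.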
!33
( )
...


...
/ 

or ( )
速 / + 

k-shot etc
+
Theory-driven vs Data-driven
!34
( )


( )




( )
( ) 

( )

( )
/
Data-driven
Theory-driven
!35
Acknowledgements
Ken-ichi
SHIMIZU

(ICAT)
Satoru
TAKAKUSAGI

(ICAT)
Takashi
TOYAO

(ICAT)
Keisuke

SUZUKI

(DENSO)
• Suzuki+ ChemCatChem. 2019.
• Kamachi+ The Journal of Physical Chemistry C. 2019.
• Hinuma+ The Journal of Physical Chemistry C. 2018.
• Toyao+ The Journal of Physical Chemistry C. 2018.
• Takigawa+ RSC Advances. 2016.

(2019.9) Machine Learning and Optimal Experimental Design for Heterogeneous Catalysis Research

  • 14.
    !13 (full-page images of three reviews on machine learning for chemistry and materials science)
 Sanchez-Lengeling & Aspuru-Guzik, "Inverse molecular design using machine learning: Generative models for matter engineering", Science 361, pp. 360-365 (2018)
 Butler, Davies, Cartwright, Isayev & Walsh, "Machine learning for molecular and materials science", Nature 559, pp. 547-555 (2018)
 Mjolsness & DeCoste, "Machine Learning for Science: State of the Art and Future Prospects", Science 293, pp. 2051-2055 (2001)
 "Science is changing, the tools of science are changing. And that requires different approaches." (Erich Bloch, 1925-2016)
 "low input, high throughput, no output science." (Sydney Brenner)
  • 15.
    Case Study: () !14 Wolfgang Pauli “God made the bulk; 
 the surface was invented by the devil.” adsorption diffusion desorption dissociation recombination kinks terraces adatom vacancysteps • • • • • • • 
 • (Reactants) 羅 (Catalysts) ...
 ( ...)
  • 16.
    (ML; Machine Learning) !15 Generic Object Recognition, Speech Recognition, Machine Translation ("J'aime la musique" → "I love music"), QSAR/QSPR Prediction (molecular structure → growth inhibition 0.739), AI Game Players
  • 17.
    !16 K. Shimizu et al., ACS Catal. 2, 1904 (2012). Hammer-Nørskov d-band model: reaction rates vs d-band center (εd − EF) / eV show volcano trends! Brønsted-Evans-Polanyi relation: activation energy / eV vs adsorption energy / eV shows linear trends!
  • 18.
    Outline: Our ML-based studies !17
    1. Can we predict the d-band center? (predicting DFT-calculated values by machine learning; Takigawa et al., RSC Advances, 2016)
    2. Can we predict the adsorption energy? (predicting DFT-calculated values by machine learning; Toyao et al., JPCC, 2018)
    3. Can we predict the catalytic activity? (predicting values from experiments reported in the literature by machine learning; Suzuki et al., ChemCatChem, 2019)
  • 19.
    Case 1. Predicting the d-band centers !18 The d-bands of transition metals play central roles. Two types of host-guest models: • 1% doped • overlayer [1% doped]. Ruban, Hammer, Stoltze, Skriver, Nørskov, J Mol Catal A, 115:421-429 (1997); J. K. Nørskov et al., Advances in Catalysis, 2000.
  • 20.
    The ML model !19 Nine elemental descriptors: Group (G), Bulk Wigner-Seitz radius (R) in Å, Atomic number (AN), Atomic mass (AM) in g mol−1, Period (P), Electronegativity (EN), Ionization energy (IE) in eV, Enthalpy of fusion (ΔfusH) in J g−1, Density at 25 °C (ρ) in g cm−3.
 9 descriptors per element (18 for host + guest), 6 selected. Method: Gradient Boosted Tree Regression (GBR). Ranked descriptors: (1) Group in the periodic table (host) (2) Density at 25 °C (host) (3) Enthalpy of fusion (guest) (4) Ionization energy (guest) (5) Enthalpy of fusion (host) (6) Ionization energy (host)
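A hedged sketch of this kind of model: each system is described by elemental properties of host and guest (18 features, i.e. 9 per element) feeding a gradient boosted tree regressor, with feature importances standing in for the ranked descriptor list above. The data are random placeholders, not the paper's DFT set.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(121, 18))      # 9 descriptors x (host, guest), placeholder
y = X[:, 0] - 0.5 * X[:, 9] + 0.1 * rng.normal(size=121)   # stand-in target

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
gbr = GradientBoostingRegressor().fit(X_tr, y_tr)
rmse = float(np.sqrt(np.mean((gbr.predict(X_te) - y_te) ** 2)))
print(f"test RMSE: {rmse:.3f}")

# descriptor ranking, analogous to the top-6 list in the slide
top = np.argsort(gbr.feature_importances_)[::-1][:6]
print("top-6 descriptor indices:", top.tolist())
```

With real data, the ranked importances are what let the descriptor count be cut from 18 to the handful that matter.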
  • 21.
    ML results: The beauty of the periodic table... !20 Predicted d-band centers (in eV) for host-guest pairs among Fe, Co, Ni, Cu, Ru, Rh, Pd, Ag, Ir, Pt, Au, evaluated over 100 random splits:
 training sets (75%) / test sets (25%): 100-run mean RMSE 0.153 / eV
 training sets (50%) / test sets (50%): 100-run mean RMSE 0.235 / eV
 training sets (25%) / test sets (75%): 100-run mean RMSE 0.402 / eV
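The "100 times mean RMSE" protocol behind these numbers can be written out explicitly: repeat a random train/test split many times and average the test RMSE for each split ratio. The split ratios mirror the slide; the data and model settings below are assumptions.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(120, 6))                  # toy descriptors
y = X[:, 0] + 0.1 * rng.normal(size=120)       # toy target

mean_rmse = {}
for test_frac in (0.25, 0.50, 0.75):           # 75/25, 50/50, 25/75 splits
    rmses = []
    for seed in range(100):                    # 100 random splits
        X_tr, X_te, y_tr, y_te = train_test_split(
            X, y, test_size=test_frac, random_state=seed)
        model = GradientBoostingRegressor(n_estimators=50).fit(X_tr, y_tr)
        rmses.append(np.sqrt(np.mean((model.predict(X_te) - y_te) ** 2)))
    mean_rmse[test_frac] = float(np.mean(rmses))
    print(f"test fraction {test_frac:.0%}: mean RMSE {mean_rmse[test_frac]:.3f}")
```

As in the slide, the mean RMSE grows as the training fraction shrinks.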
  • 22.
    Descriptor analysis and evaluation !21 100-run mean RMSE: 0.204±0.047 / eV, 0.212±0.047 / eV, and 0.214±0.046 / eV; descriptor selection (top-k) among the 18 descriptors (6 and 4 kept); training sets (75%), test sets (25%); method: GBR.
  • 23.
    Case 2. Predicting the adsorption energy !22 DFT calculation of the adsorption energy takes 10 hours on our 32-core workstation (CH3 on the Cu monometallic surface), and even longer (about 34 hours) for systems containing another metal such as Pb. ML prediction of the adsorption energy of CH3 (on 46 Cu-based alloys): < 1 sec on our 1-core laptop, not dependent on the target system but on the methods we choose. Training sets (75%), test sets (25%). Adsorbates: CH3, CH2, CH, C, H.
  • 24.
    !23 100-run mean RMSE: training sets (75%) / test sets (25%): 0.153 / eV; training sets (50%) / test sets (50%): 0.235 / eV; training sets (25%) / test sets (75%): 0.402 / eV.
  • 25.
  • 26.
  • 27.
!25 [Figure: problem setup over an input variable x; equations rendered as images, slide text not recoverable]
  • 28.
ML !26 Highly Inaccurate Model Predictions from Extrapolation (Lohninger 1999) ( ) "exploitation" / "exploration"
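The extrapolation failure this slide warns about is easy to reproduce: a flexible model that fits the training range well can be wildly wrong just outside it. A minimal illustration with a polynomial fit on synthetic data (not the slide's own example):

```python
import numpy as np

rng = np.random.default_rng(0)
x_train = rng.uniform(0, 1, 30)
y_train = np.sin(2 * np.pi * x_train) + 0.05 * rng.normal(size=30)

# Flexible degree-9 polynomial: accurate inside the training range [0, 1].
coef = np.polyfit(x_train, y_train, deg=9)

# Error at an interpolated point vs. an extrapolated one.
inside = abs(np.polyval(coef, 0.5) - np.sin(2 * np.pi * 0.5))
outside = abs(np.polyval(coef, 2.0) - np.sin(2 * np.pi * 2.0))
print(inside < outside)
```

The interpolation error is small, while at x = 2 the polynomial diverges; this is why purely data-driven screening must stay wary of candidates far from the training distribution.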
  • 29.
  • 30.
  • 31.
  • 32.
Surrogate optimization / model-based optimization !27
x → ML surrogate model; infill/acquisition criterion, e.g. "expected improvement", balancing "exploitation" and "exploration".
1. Initial sampling. ( )
2. Loop: ( )
   1. Construct a surrogate model.
   2. Search the infill criterion.
   3. Add new samples. (intervention)
Open Research Topics: ... (CFR )
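The loop on this slide (initial sampling; fit a surrogate; maximize an infill criterion such as expected improvement; evaluate and repeat) fits in a few dozen lines. A minimal sketch with a hand-rolled RBF Gaussian process and a synthetic 1-D objective, not the authors' implementation; the length-scale, objective, and function names are all illustrative assumptions.

```python
import numpy as np
from math import erf, sqrt, pi

def rbf(a, b, ls=0.3):
    """RBF (squared-exponential) kernel between 1-D point sets."""
    return np.exp(-0.5 * ((a[:, None] - b[None, :]) / ls) ** 2)

def gp_posterior(Xtr, ytr, Xte, noise=1e-6):
    """GP posterior mean and std on Xte given observations (Xtr, ytr)."""
    K = rbf(Xtr, Xtr) + noise * np.eye(len(Xtr))
    Ks = rbf(Xte, Xtr)
    mu = Ks @ np.linalg.solve(K, ytr)
    v = np.linalg.solve(K, Ks.T)
    var = np.clip(1.0 - np.sum(Ks * v.T, axis=1), 1e-12, None)
    return mu, np.sqrt(var)

def expected_improvement(mu, sigma, best):
    """EI for maximization: (mu-best)*Phi(z) + sigma*phi(z), z=(mu-best)/sigma."""
    z = (mu - best) / sigma
    Phi = 0.5 * (1.0 + np.vectorize(erf)(z / sqrt(2)))
    phi = np.exp(-0.5 * z ** 2) / sqrt(2 * pi)
    return (mu - best) * Phi + sigma * phi

f = lambda x: np.sin(3 * x) * x              # hypothetical black-box objective
grid = np.linspace(0, 2, 201)                # candidate pool
X = np.array([0.1, 1.0, 1.9]); y = f(X)      # 1. initial sampling
for _ in range(10):                          # 2. loop
    mu, sd = gp_posterior(X, y, grid)        # 2-1. construct a surrogate model
    x_new = grid[np.argmax(expected_improvement(mu, sd, y.max()))]  # 2-2. infill criterion
    X = np.append(X, x_new); y = np.append(y, f(x_new))             # 2-3. add a new sample
print(round(float(X[np.argmax(y)]), 2))
```

EI trades exploitation (high posterior mean) against exploration (high posterior uncertainty) automatically, which is why it is the default infill criterion in most Bayesian-optimization toolkits.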
  • 33.
Hot Research Topic !28
AlphaGo (Nature, Jan 2016); AlphaGo Zero (Nature, Oct 2017); AlphaZero (Science, Dec 2018); Amazon SageMaker
  • 34.
Structure-activity landscapes are nonsmooth... !29
J. Med. Chem. 2012, 55, 2932–2942
Nonsmooth ( ): activity cliffs, selectivity cliffs
  • 35.
  • 36.
(Suzuki+ 2019) !31 www.chemcatchem.org ("A Journal of"), 18/2019 Front Cover: Keisuke Suzuki et al., "Statistical Analysis and Discovery of Heterogeneous Catalysts Based on Machine Learning from Diverse Published Data"
  • 37.
!32
• 1833 catalysts: oxidative coupling of methane (OCM) [Zavyalova+ 2011]
• 4185 catalysts: water gas shift reaction (WGS) [Odabaşi+ 2014]
• 5567 catalysts: CO oxidation [Günay+ 2013]
Our model vs. GPR-based BO vs. random (OCM )
  • 38.
    !33 ( ) ... 
 ... / 
 or( ) 速 / + 
 k-shot etc +
  • 39.
  • 40.
    Theory-driven vs Data-driven !34 () 
 ( ) 
 
 ( ) ( ) 
 ( )
 ( ) / Data-driven Theory-driven
  • 41.
!35 Acknowledgements
Ken-ichi SHIMIZU (ICAT), Satoru TAKAKUSAGI (ICAT), Takashi TOYAO (ICAT), Keisuke SUZUKI (DENSO)
• Suzuki+, ChemCatChem, 2019.
• Kamachi+, The Journal of Physical Chemistry C, 2019.
• Hinuma+, The Journal of Physical Chemistry C, 2018.
• Toyao+, The Journal of Physical Chemistry C, 2018.
• Takigawa+, RSC Advances, 2016.