2. Abstract
Scientific computations tend to involve a number of experiments
under different conditions.
It is important to manage computational experiments so that their
results are reproducible.
In this talk, we introduce three rules for making computations reproducible.
5. Background
A lab notebook is indispensable for experimental research in the natural
sciences. One of its roles is to make experiments reproducible.
Why not for computational research?
Lack of reproducibility means lack of reliability.
6. Common problems
Common problems in computational experiments:
I got confused about which result was obtained under which condition.
I overwrote previous results unintentionally.
I used inconsistent data and got invalid results.
...
Many such problems are caused by inappropriate management
of experiments.
7. Goal
To archive all results of each experiment, together with
all the information required to reproduce them,
so that we can retrieve and restore them easily,
in a systematic and low-cost way.
8. Note
What is introduced in this talk is not an established methodology,
but a collection of field techniques. The same goes for the wording.
In this talk, we will not deal with:
distributed computation
documentation or testing
publishing a paper
releasing OSS
10. Three elements
We distinguish the following elements which affect reproducibility of
computations:
Algorithm: an algorithm coded into a program
(implemented by yourself, calling an external library, ...)
Data: input and output data, and intermediate data to reuse
Environment: the software and hardware environment
(external libraries, server configuration, platform, ...)
11. Three rules
Give an Identifier to each element and archive it.
Record a machine-readable Recipe
with human-readable comments.
Make every manipulation Mechanized.
12. Identifier
Give an Identifier to each element and archive it.
Algorithm
use a version control system
Data
give a name to distinguish the kind of data
give a version to distinguish the concrete content
Environment
record platform information
record the version (and optionally the build parameters) of each library
Keep in mind to track all elements during the whole process:
all code under version control
no data without an identifier
no temporary environments
13. Recipe
Record a machine-readable Recipe
with human-readable comments.
A recipe should include all the information
required to reproduce the results of an experiment
(other than the contents of Algorithm, Data, and Environment,
which are stored elsewhere).
A recipe should be machine-readable so that the experiment can be re-conducted.
A recipe should include human-readable comments
on the purpose and/or meaning of the experiment.
A recipe should be generated automatically by tracking
experiments.
14. Typically, a recipe includes the following information:
in which order
which data is processed
by which algorithm
under which environment
with which parameters
Typically, a recipe consists of the following:
a script file to run the whole process
a configuration file which specifies parameters and identifiers
a text file of comments
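As a sketch, such a recipe could be serialized as a small JSON file; the identifiers, file names, and parameter names below are hypothetical, not part of the talk:

```python
import json

# A hypothetical recipe: identifiers for each element plus parameters,
# listed in the order the steps are run.
recipe = {
    "steps": [
        {"algorithm": "git:3f2a1c0", "data_in": "corpus-v2",
         "data_out": "features-v2", "parameters": {"ngram": 3}},
        {"algorithm": "git:3f2a1c0", "data_in": "features-v2",
         "data_out": "model-v2", "parameters": {"seed": 42}},
    ],
    "environment": {"platform": "Linux-x86_64", "numpy": "1.8.1"},
    "comment": "baseline run on the v2 corpus",
}

# Machine-readable round trip: write the recipe out and read it back.
text = json.dumps(recipe, indent=2, sort_keys=True)
restored = json.loads(text)
```

The "comment" field carries the human-readable part; everything else is meant for a program that re-runs the experiment.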
15. Mechanize
Make every manipulation Mechanized.
Run the whole process of an experiment with a single operation.
No manual manipulation of data.
No manual compilation of source code.
Automated provisioning of the environment.
16. complement: Tentative experiments
An overly large archive detracts from the practical value of reproducibility.
Tentative experiments with ephemeral results
do not necessarily need to be recorded:
code tests
trials on tiny data
...
If there is any possibility that a result will be used, referred to,
or looked up afterward, then it should be recorded.
17. complement: Reuse of intermediate data
In order to reuse intermediate data, utilize identifiers:
explicitly specify the intermediate data to reuse by its identifier, or
automatically detect available intermediate data
based on dependencies.
...
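A minimal sketch of the dependency-based approach: recompute an intermediate result only when the identifier of its input (here, a content hash) has not been seen before. The function names are illustrative, not from the talk:

```python
import hashlib

_cache = {}  # input identifier -> intermediate result

def content_id(data: bytes) -> str:
    """Identify data by a hash of its content."""
    return hashlib.sha256(data).hexdigest()[:12]

def intermediate(data: bytes, compute, stats):
    """Reuse the cached intermediate result if the input is unchanged.

    stats counts how often compute() actually ran, for illustration.
    """
    key = content_id(data)
    if key not in _cache:
        stats["runs"] += 1
        _cache[key] = compute(data)
    return _cache[key]
```

Calling intermediate() twice with the same input runs the computation only once; a persistent variant would keep the cache on disk, keyed by the same identifiers.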
19. Identify Algorithm
Use a version control system, such as Git or Mercurial,
to manage source code.
This makes it easy to record the revision and any uncommitted changes
at each experiment.
(Learn the internals of your VCS if you need more flexible management.)
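One way to record the revision and uncommitted changes at the start of an experiment is to query the VCS directly; a sketch for Git, which returns None outside a Git checkout or when Git is not installed:

```python
import subprocess

def record_revision(repo_dir="."):
    """Return the current Git revision and any uncommitted diff, or None."""
    def git(*args):
        return subprocess.run(("git", "-C", repo_dir) + args,
                              capture_output=True, text=True)
    try:
        rev = git("rev-parse", "HEAD")
    except FileNotFoundError:   # git is not installed
        return None
    if rev.returncode != 0:     # not inside a repository
        return None
    diff = git("diff", "HEAD")  # uncommitted changes, if any
    return {"revision": rev.stdout.strip(),
            "uncommitted_diff": diff.stdout}
```

The returned dictionary can be dropped straight into the recipe as the Algorithm identifier.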
20. Identify Data
File
Give appropriate names to directories and files;
then the resolved absolute path can be used as an identifier.
If no meaningful name comes to mind, use a timestamp or a hash.
DB or other API
A pair of a URI and a query whose results are constant
can be used as an identifier.
If the API behaves nondeterministically, keep the results at hand (with a timestamp).
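Both fallback identifiers can be produced with the standard library alone; a sketch (the function names and prefix are illustrative):

```python
import hashlib
import time

def hash_id(content: bytes) -> str:
    """Identifier derived from the content itself (stable across runs)."""
    return hashlib.sha256(content).hexdigest()[:12]

def timestamp_id(prefix="data") -> str:
    """Identifier derived from the current time (unique per experiment)."""
    return "%s-%s" % (prefix, time.strftime("%Y%m%dT%H%M%S"))
```

A hash identifies the same content identically forever; a timestamp distinguishes repeated runs over changing content.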
21. Identify Environment
Python package
Use the PyPA tools (virtualenv, setuptools, and pip), or Conda/enstaller.
Library
Use HashDist.
Using CDE is an alternative.
Platform
Use platform, a module in the Python standard library.
Server configuration
Use Ansible or another configuration management tool,
together with Vagrant or another provisioning tool.
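For the platform entry, the standard platform module already provides most of what a recipe needs; for example:

```python
import platform

def describe_platform():
    """Collect platform information for the recipe's Environment section."""
    return {
        "platform": platform.platform(),  # e.g. 'Linux-...-x86_64-...'
        "system": platform.system(),      # e.g. 'Linux'
        "machine": platform.machine(),    # e.g. 'x86_64'
        "python": platform.python_version(),
    }
```

The resulting dictionary is plain strings, so it serializes directly into a JSON or YAML recipe.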
22. HashDist
A tool for developing, building, and managing software stacks.
A software stack is described in YAML.
We can create, copy, move, and remove software stacks.
$ git checkout stack.yaml
$ hit build stack.yaml
23. Recipe: configuration file
A configuration in a recipe should be in a machine-readable format.
Use the ConfigParser, PyYAML, or json module
to read/write parameters in INI, YAML, or JSON format.
A recipe should include the following:
command-line arguments
environment variables
random seed
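A sketch of reading and writing such parameters in INI format with ConfigParser; the section, option names, and identifier values are illustrative:

```python
import configparser
import io

config = configparser.ConfigParser()
config["experiment"] = {
    "data": "corpus-v2",         # data identifier (hypothetical)
    "algorithm": "git:3f2a1c0",  # algorithm identifier (hypothetical)
    "seed": "42",                # random seed
}

# Write the configuration out (a real file object works the same way).
buf = io.StringIO()
config.write(buf)

# Read it back and recover a typed parameter.
restored = configparser.ConfigParser()
restored.read_string(buf.getvalue())
seed = restored.getint("experiment", "seed")
```

The same round trip works with PyYAML or the json module if a nested structure is needed.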
24. Recipe: script file
A script in a recipe should run the whole process
with a single operation.
There are several alternatives for realizing such a script:
utilize a build tool (such as Autotools, SCons, or maf)
utilize a job-flow tool (such as Ruffus or Luigi)
write a small script by hand (e.g. run.py)
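A minimal hand-written run.py might just chain the stages behind a single entry point; the stage names below are hypothetical stubs standing in for real computation:

```python
def prepare(config):
    """Fetch and preprocess the input data (stub)."""
    return "prepared:%s" % config["data"]

def train(config, prepared):
    """Run the main computation (stub)."""
    return "model(%s, seed=%d)" % (prepared, config["seed"])

def report(config, model):
    """Aggregate and archive the results (stub)."""
    return {"model": model, "config": config}

def run_all(config):
    """The single operation that runs the whole process, in order."""
    prepared = prepare(config)
    model = train(config, prepared)
    return report(config, model)

result = run_all({"data": "corpus-v2", "seed": 42})
```

Because the configuration is passed through every stage and returned with the result, the script's output is reproducible from the recipe alone.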
25. maf
“maf is a waf extension for writing computational experiments.”
It conducts computational experiments as build processes.
It focuses on machine learning:
list configurations
run programs with each configuration
aggregate and visualize their results
26. Recipe: automatic generation
Do it yourself, or use Sumatra.
“Sumatra: automated tracking of scientific computations”
records information about experiments, linking to data files
command-line & web interface
integration with LaTeX/Sphinx
$ smt run --executable=python --main=main.py conf.param input.data
$ smt comment "..."
$ smt info
$ smt repeat
28. Summary
We have introduced three rules for managing computational experiments
so that their results are reproducible.
However, our method is just a makeshift patchwork of field
techniques.
We need a tool to manage experiments
in a more integrated, systematic, and sophisticated manner
for reproducible computations.
30. References
[1] G. K. Sandve, A. Nekrutenko, J. Taylor, E. Hovig, “Ten Simple Rules
for Reproducible Computational Research,” PLoS Comput. Biol.
9(10): e1003285 (2013). doi:10.1371/journal.pcbi.1003285
[2] V. Stodden, F. Leisch, R. Peng, “Implementing Reproducible
Research,” Open Science Framework (2014). osf.io/s9tya