Avoid retracting your analysis!
@yannick__ https://wurmlab.github.io
Reproducible research for
ecological and evolutionary
genomics.
Geoffrey Chang: Crystallographer
• Beckman Foundation Young Investigator Award
• Presidential Early Career Award
Journal of Molecular Biology (2003) Chang. Structure
of MsbA from Vibrio cholera: a multidrug resistance ABC
transporter homolog in a closed conformation.
PNAS (2004) Ma & Chang. Structure of the multidrug
resistance efflux transporter EmrE from Escherichia coli.
Science (2005) Reyes & Chang. Structure of the ABC
transporter MsbA in complex with ADP vanadate and
lipopolysaccharide.
Science (2005) Pornillos et al. X-ray structure of the
EmrE multidrug transporter in complex with a substrate.
Science (2001) Chang & Roth. Structure of MsbA from
E. coli: a homolog of the multidrug resistance ATP binding
cassette (ABC) transporters.
Science (2001) Chang & Roth.
Flipping fiasco. The structures of MsbA (purple) and Sav1866 (green) overlap
little (left) until MsbA is inverted (right).
Sav1866 Dawson & Locher (2006) Nature
Science (2001) Chang & Roth.
Comparison with 3D structure of ortholog
Science (2001) Chang & Roth.
Retraction
WE WISH TO RETRACT our Research Article "Structure of MsbA from
E. coli: A homolog of the multidrug resistance ATP binding cassette (ABC)
transporters" and both of our Reports "Structure of the ABC transporter
MsbA in complex with ADP•vanadate and lipopolysaccharide" and "X-ray
structure of the EmrE multidrug transporter in complex with a substrate" (1–3).
The recently reported structure of Sav1866 (4) indicated that our
MsbA structures (1, 2, 5) were incorrect in both the hand of the structure
and the topology. Thus, our biological interpretations based on these
inverted models for MsbA are invalid.
An in-house data reduction program introduced a change in sign for
anomalous differences. This program, which was not part of a conventional
data processing package, converted the anomalous pairs (I+ and I−) to
(F− and F+), thereby introducing a sign change. As the diffraction data
collected for each set of MsbA crystals and for the EmrE crystals were
processed with the same program, the structures reported in (1–3, 5, 6)
had the wrong hand.
The error in the topology of the original MsbA structure was a
consequence of the low resolution of the data as well as breaks in the
electron density for the connecting loop regions. Unfortunately, the use of
the multicopy refinement procedure still allowed us to obtain reasonable
refinement values for the wrong structures.
The Protein Data Bank (PDB) files 1JSQ, 1PF4, and 1Z2R for MsbA and
1S7B and 2F2M for EmrE have been moved to the archive of obsolete PDB
entries. The MsbA and EmrE structures will be recalculated from the
original data using the proper sign for the anomalous differences, and the
new Cα coordinates and structure factors will be deposited.
We very sincerely regret the confusion that these papers have caused
and, in particular, subsequent research efforts that were unproductive as a
result of our original findings.
GEOFFREY CHANG, CHRISTOPHER B. ROTH,
CHRISTOPHER L. REYES, OWEN PORNILLOS,
YEN-JU CHEN, ANDY P. CHEN
Department of Molecular Biology, The Scripps Research Institute, La Jolla, CA 92037, USA.
References
1. G. Chang, C. B. Roth, Science 293, 1793 (2001).
2. C. L. Reyes, G. Chang, Science 308, 1028 (2005).
3. O. Pornillos, Y.-J. Chen, A. P. Chen, G. Chang, Science 310, 1950 (2005).
4. R. J. Dawson, K. P. Locher, Nature 443, 180 (2006).
5. G. Chang, J. Mol. Biol. 330, 419 (2003).
6. C. Ma, G. Chang, Proc. Natl. Acad. Sci. U.S.A. 101, 2852 (2004).
Until recently, Geoffrey Chang's career was on a trajectory most young
scientists only dream about. In 1999, at the age of 28, the protein
crystallographer landed a faculty position at the prestigious Scripps
Research Institute in San Diego, California. The next year, in a ceremony
at the White House, Chang received a Presidential Early Career Award for
Scientists and Engineers, the country's highest honor for young
researchers. His lab generated a stream of high-profile papers detailing
the molecular structures of important proteins embedded in cell membranes.
Then the dream turned into a nightmare. In September, Swiss researchers
published a paper in Nature that cast serious doubt on a protein structure
Chang's group had described in a 2001 Science paper. When he investigated,
Chang was horrified to discover that a homemade data-analysis program had
flipped two columns of data, inverting the electron-density map from which
his team had derived the final protein structure. His group had used the
program to analyze data for the 2001 Science paper, which described the
structure of a protein called MsbA, isolated from the bacterium
Escherichia coli. MsbA belongs to a huge and ancient family of molecules
that use energy from adenosine triphosphate to transport molecules across
cell membranes. These so-called ABC transporters perform many…
A Scientist’s Nightmare: Software
Problem Leads to Five Retractions
SCIENTIFIC PUBLISHING
😥
Geoffrey Chang
• Beckman Foundation Young Investigator Award
• Presidential Early Career Award
Science (2001) Chang & Roth. Structure of MsbA from
E. coli: a homolog of the multidrug resistance ATP binding
cassette (ABC) transporters.
Journal of Molecular Biology (2003) Chang. Structure
of MsbA from Vibrio cholera: a multidrug resistance ABC
transporter homolog in a closed conformation.
PNAS (2004) Ma & Chang. Structure of the multidrug
resistance efflux transporter EmrE from Escherichia coli.
Science (2005) Reyes & Chang. Structure of the ABC
transporter MsbA in complex with ADP vanadate and
lipopolysaccharide.
Science (2005) Pornillos et al. X-ray structure of the
EmrE multidrug transporter in complex with a substrate.
This is costly for:
•the individual
•collaborators
•the institution
•1000s of researchers performing follow-up work
•science
•society
Genome bioinformatics is harder
than (many) other data sciences
• Understanding/visualising/analysing/massaging big data is hard.
• Biology/life is complex.
• Biologists lack computational training.
• Field is young.
• Analysis tools (generally) suck:
• badly written
• badly tested
• hard to install
• output quality… often questionable.
• Data sizes keep growing!
• Data formats keep changing :(
Some sources of inspiration
Community Page
Best Practices for Scientific Computing
Greg Wilson, D. A. Aruliah, C. Titus Brown, Neil P. Chue Hong, Matt Davis, Richard T. Guy,
Steven H. D. Haddock, Kathryn D. Huff, Ian M. Mitchell, Mark D. Plumbley, Ben Waugh,
Ethan P. White, Paul Wilson
1 Mozilla Foundation, Toronto, Ontario, Canada, 2 University of Ontario Institute of Technology, Oshawa, Ontario, Canada, 3 Michigan State University, East Lansing,
Michigan, United States of America, 4 Software Sustainability Institute, Edinburgh, United Kingdom, 5 Space Telescope Science Institute, Baltimore, Maryland, United
States of America, 6 University of Toronto, Toronto, Ontario, Canada, 7 Monterey Bay Aquarium Research Institute, Moss Landing, California, United States of America,
8 University of California Berkeley, Berkeley, California, United States of America, 9 University of British Columbia, Vancouver, British Columbia, Canada, 10 Queen Mary
University of London, London, United Kingdom, 11 University College London, London, United Kingdom, 12 Utah State University, Logan, Utah, United States of America,
13 University of Wisconsin, Madison, Wisconsin, United States of America
Introduction
Scientists spend an increasing amount of time building and
using software. However, most scientists are never taught how to
do this efficiently. As a result, many are unaware of tools and
practices that would allow them to write more reliable and
maintainable code with less effort. We describe a set of best
practices for scientific software development that have solid
foundations in research and experience, and that improve
scientists’ productivity and the reliability of their software.
Software is as important to modern scientific research as
telescopes and test tubes. From groups that work exclusively on
computational problems, to traditional laboratory and field
scientists, more and more of the daily operation of science revolves
around developing new algorithms, managing and analyzing the
large amounts of data that are generated in single research
projects, combining disparate datasets to assess synthetic problems,
and other computational tasks.
Scientists typically develop their own software for these purposes
because doing so requires substantial domain-specific knowledge.
As a result, recent studies have found that scientists typically spend
30% or more of their time developing software [1,2]. However,
90% or more of them are primarily self-taught [1,2], and therefore
lack exposure to basic software development practices such as
writing maintainable code, using version control and issue tracking […]
error from another group's code was not discovered until after
publication [6]. As with bench experiments, not everything must be
done to the most exacting standards; however, scientists need to be
aware of best practices both to improve their own approaches and
for reviewing computational work by others.
This paper describes a set of practices that are easy to adopt and
have proven effective in many research settings. Our recommenda-
tions are based on several decades of collective experience both
building scientific software and teaching computing to scientists
[17,18], reports from many other groups [19–25], guidelines for
commercial and open source software development [26,27], and on
empirical studies of scientific computing [28–31] and software
development in general (summarized in [32]). None of these practices
will guarantee efficient, error-free software development, but used in
concert they will reduce the number of errors in scientific software,
make it easier to reuse, and save the authors of the software time and
effort that can be used for focusing on the underlying scientific questions.
Our practices are summarized in Box 1; labels in the main text
such as ‘‘(1a)’’ refer to items in that summary. For reasons of space,
we do not discuss the equally important (but independent) issues of
reproducible research, publication and citation of code and data,
and open science. We do believe, however, that all of these will be
much easier to implement if scientists have the skills we describe.
Education
A Quick Guide to Organizing Computational Biology
Projects
William Stafford Noble
1 Department of Genome Sciences, School of Medicine, University of Washington, Seattle, Washington, United States of America, 2 Department of Computer Science and
Engineering, University of Washington, Seattle, Washington, United States of America
Introduction
Most bioinformatics coursework focus-
es on algorithms, with perhaps some
components devoted to learning pro-
gramming skills and learning how to
use existing bioinformatics software. Un-
fortunately, for students who are prepar-
ing for a research career, this type of
curriculum fails to address many of the
day-to-day organizational challenges as-
sociated with performing computational
experiments. In practice, the principles
behind organizing and documenting
computational experiments are often
learned on the fly, and this learning is
strongly influenced by personal predilec-
tions as well as by chance interactions
with collaborators or colleagues.
The purpose of this article is to describe
one good strategy for carrying out com-
putational experiments. I will not describe
profound issues such as how to formulate
hypotheses, design experiments, or draw
conclusions. Rather, I will focus on
relatively mundane issues such as organiz-
ing files and directories and documenting
understanding your work or who may be
evaluating your research skills. Most com-
monly, however, that ‘‘someone’’ is you. A
few months from now, you may not
remember what you were up to when you
created a particular set of files, or you may
not remember what conclusions you drew.
You will either have to then spend time
reconstructing your previous experiments
or lose whatever insights you gained from
those experiments.
This leads to the second principle,
which is actually more like a version of
Murphy’s Law: Everything you do, you
will probably have to do over again.
Inevitably, you will discover some flaw in
your initial preparation of the data being
analyzed, or you will get access to new
data, or you will decide that your param-
eterization of a particular model was not
broad enough. This means that the
experiment you did last week, or even
the set of experiments you’ve been work-
ing on over the past month, will probably
need to be redone. If you have organized
and documented your work clearly, then
repeating the experiment with the new
under a common root directory. The
exception to this rule is source code or
scripts that are used in multiple projects.
Each such program might have a project
directory of its own.
Within a given project, I use a top-level
organization that is logical, with chrono-
logical organization at the next level, and
logical organization below that. A sample
project, called msms, is shown in Figure 1.
At the root of most of my projects, I have a
data directory for storing fixed data sets, a
results directory for tracking computa-
tional experiments performed on that data,
a doc directory with one subdirectory per
manuscript, and directories such as src
for source code and bin for compiled
binaries or scripts.
Within the data and results directo-
ries, it is often tempting to apply a similar
logical organization. For example, you
may have two or three data sets against
which you plan to benchmark your
algorithms, so you could create one
directory for each of them under data.
In my experience, this approach is risky
because the logical structure of your final…
http://software.ac.uk
Specific Approaches/Tools
1. Write code for humans
Write code for humans (not computers!)
• For
• yourself
• colleagues / collaborators
• reviewers
• other random people who may reuse/improve your code
• Respect conventions (e.g., a style guide)
Use whitespace/indentation! (examples after Damian Conway)
Same information
Line length
Strive to limit your code to 80 characters per line. This fits comfortably on a printed page with a
reasonably sized font. If you find yourself running out of room, this is a good indication that you
should encapsulate some of the work in a separate function.
ant_measurements <- read.table(file = '~/Downloads/Web/ant_measurements.txt', header = TRUE, sep = '\t', col.names = c('colony', 'individual', 'headwidth', 'mass'))

ant_measurements <- read.table(file = '~/Downloads/Web/ant_measurements.txt',
                               header    = TRUE,
                               sep       = '\t',
                               col.names = c('colony', 'individual', 'headwidth', 'mass'))
R style guide extract
http://r-pkgs.had.co.nz/style.html
Write code for humans (not computers!)
• For
• yourself
• colleagues / collaborators
• reviewers
• other random people who may want to reuse your code
• Respect conventions (e.g., a style guide)
• Don't optimise (generally…)
Specific Approaches/Tools
1. Write code for humans
2. Organise mindfully
Eliminate redundancy
DRY: Don't Repeat Yourself
Organise mindfully
& don't reinvent the wheel.
Education
A Quick Guide to Organizing Computational Biology
Projects
William Stafford Noble
1 Department of Genome Sciences, School of Medicine, University of Washington, Seattle, Washington, United States of America, 2 Department of Computer Science and
Engineering, University of Washington, Seattle, Washington, United States of America
[…] with this approach, the distinction between data and results may not
be useful. Instead, one could imagine a top-level […]
The Lab Notebook
In parallel with this chronological directory structure, I find it useful
to […] These types of entries provide a complete picture of the
development of the project over time.
Figure 1. Directory structure for a sample project. Directory names are in large typeface, and filenames are in smaller typeface. Only a subset of
the files are shown here. Note that the dates are formatted <year>-<month>-<day> so that they can be sorted in chronological order. The
source code src/ms-analysis.c is compiled to create bin/ms-analysis and is documented in doc/ms-analysis.html. The README
files in the data directories specify who downloaded the data files from what URL on what date. The driver script results/2009-01-15/runall
automatically generates the three subdirectories split1, split2, and split3, corresponding to three cross-validation splits. The bin/parse-
sqt.py script is called by both of the runall driver scripts.
doi:10.1371/journal.pcbi.1000424.g001
In each results folder:
•script getResults.rb
•intermediates
•output
Organise mindfully
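Noble's layout can be sketched in a few shell commands (a minimal sketch: the project name, the date, and the getResults.rb driver name follow the slide and Figure 1, but are illustrative):

```shell
# Sketch of a Noble-style project tree (all names are illustrative)
mkdir -p myproject/data myproject/doc myproject/src myproject/bin
# One dated directory per computational experiment
mkdir -p myproject/results/2015-12-18
# Each results folder holds the driver script, intermediates, and output
touch myproject/results/2015-12-18/getResults.rb
mkdir -p myproject/results/2015-12-18/intermediates
mkdir -p myproject/results/2015-12-18/output
```

Because dates sort lexicographically in year-month-day form, a plain `ls results/` already lists experiments in chronological order.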
Specific Approaches/Tools
1. Write code for humans
2. Organise mindfully
3. Plan for mistakes
Automatically check consistency with
style guide
install.packages("lint") # once
library(lint) # every time
lint("file_to_check.R")
Create code tests that are easy to run
• Unit tests == checking edge cases to see if the function works
# do your stuff
# e.g. define speed() function
library(testthat)
expect_that(speed(km = 0, minutes = 60), equals(0))
expect_that(speed(km = 60, minutes = 60), equals(1))
expect_that(speed(km = -4, minutes = 60), throws_error())
expect_that(nrow(significant_SNPs), equals(42))
expect_that(my_model, is_a("lm"))
• Integration tests == "full analysis" but on small data with
known results
• e.g. on fakeVCF genotype file of 2 loci (one true positive,
one true negative)
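A minimal sketch of such a fixture in shell (the VCF rows and the PASS-count check are invented for illustration; a real integration test would run your actual pipeline on the fake file and compare against the known answer):

```shell
# Build a fake two-locus VCF: one true positive (PASS), one true negative
printf '##fileformat=VCFv4.2\n'                          >  fake.vcf
printf '#CHROM\tPOS\tID\tREF\tALT\tQUAL\tFILTER\tINFO\n' >> fake.vcf
printf 'chr1\t100\tsnp1\tA\tG\t99\tPASS\t.\n'            >> fake.vcf
printf 'chr1\t200\tsnp2\tC\t.\t10\tLowQual\t.\n'         >> fake.vcf

# Stand-in for the analysis step under test: count variants passing filters
pass_count=$(grep -v '^#' fake.vcf | awk -F'\t' '$7 == "PASS"' | wc -l)

# The expected result on the fake data is known in advance: exactly one
[ "$pass_count" -eq 1 ] && echo "integration test OK" || echo "integration test FAILED"
```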
"Continuous integration":
Tests should run automagically.
So you don't have to remember (or find time) to do it.
💾 http://github.com
Tests run
automatically
http://travis-ci.org
If unexpected
result:
📬
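Hosted services such as Travis CI do this for you; as a local stopgap, a git hook can refuse a push when the tests fail. A hedged sketch (the tests.R filename and the Rscript command assume an R test suite like the testthat examples above):

```shell
# Minimal pre-push hook: run the test suite, abort the push on failure.
# In a real repository this file would live at .git/hooks/pre-push.
mkdir -p hooks
cat > hooks/pre-push <<'EOF'
#!/bin/sh
Rscript tests.R || { echo 'Tests failed: push aborted.' >&2; exit 1; }
EOF
chmod +x hooks/pre-push
```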
Perform "Sanity checks"
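For example, a sketch in shell (the filename and columns mirror the earlier ant_measurements example; the data rows are made up):

```shell
# Made-up measurements file matching the earlier ant_measurements columns
printf 'colony\tindividual\theadwidth\tmass\n' >  ant_measurements.txt
printf 'c1\t1\t1.2\t3.4\n'                     >> ant_measurements.txt
printf 'c1\t2\t1.1\t2.9\n'                     >> ant_measurements.txt

# Sanity check 1: every row has exactly four tab-separated columns
bad_rows=$(awk -F'\t' 'NF != 4' ant_measurements.txt | wc -l)
# Sanity check 2: no negative masses (column 4), skipping the header row
neg_mass=$(awk -F'\t' 'NR > 1 && $4 < 0' ant_measurements.txt | wc -l)

[ "$bad_rows" -eq 0 ] && [ "$neg_mass" -eq 0 ] && echo "sanity checks passed"
```

Checks like these cost a few lines and catch malformed inputs before they silently corrupt downstream results.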
Code reviews: ask a peer to
(critically) read your analysis code.
Specific Approaches/Tools
1. Write code for humans
2. Organise mindfully
3. Plan for mistakes
4. Use tools that reduce risks
knitr/sweave Analysis & report in one step.
analysis.Rmd
A minimal R Markdown example
I know the value of pi is 3.1416, and 2 times pi is 6.2832. To c
library(knitr); knit("minimal.Rmd")
A paragraph here. A code chunk below:
1+1
## [1] 2
.4-.7+.3 # what? it is not zero!
## [1] 5.551e-17
Graphics work too
library(ggplot2)
qplot(speed, dist, data = cars) + geom_smooth()
Figure 1: A scatterplot of cars
& jupyter
If you need to make a "pipeline"
• Use "pipelining" software. E.g.:
• Snakemake
• Nextflow
• (etc)
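The core idea these tools automate can be sketched in plain shell: re-run a step only when its output is missing or older than its input. This is illustrative only (the step names and commands are invented); Snakemake and Nextflow add far more, such as parallelism, logging, and cluster support:

```shell
# Re-run a step only if its output is missing or stale (make-like behaviour)
step () {
  in=$1; out=$2; shift 2
  if [ ! -e "$out" ] || [ "$in" -nt "$out" ]; then
    echo "running step for $out"
    "$@"
  else
    echo "$out is up to date, skipping"
  fi
}

printf 'a b c\n' > reads.txt                                    # stand-in input
step reads.txt   trimmed.txt sh -c 'tr -d " " < reads.txt  > trimmed.txt'
step trimmed.txt counts.txt  sh -c 'wc -c     < trimmed.txt > counts.txt'
```

Re-running the script skips both steps, which is exactly what makes long multi-step analyses cheap to redo after a change.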
Specific Approaches/Tools
1. Write code for humans
2. Organise mindfully
3. Plan for mistakes
4. Use tools that reduce risks
Bruno Vieira @bmpvieira
Anurag Priyam @yeban
Ismail Moghul
Roddy Pracana @RoddyPracana
Joe Colgan
https://wurmlab.github.io
2015-12-18: Avoid having to retract your genomics analysis (Popgroup reproducible research presentation)

  • 1.
    Avoid retracting youranalysis! @yannick__ https://wurmlab.github.io Reproducible research for ecological and evolutionary genomics.
  • 2.
    Geoffrey Chang: Crystallographer •Beckman FoundationYoung Investigator Award • Presidential Early Career Award Journal of Molecular Biology (2003) Chang. Structure of MsbA from Vibrio cholera: a multidrug resistance ABC transporter homolog in a closed conformation. PNAS (2004) Ma & Chang. Structure of the multidrug resistance efflux transporter EmrE from Escherichia coli. Science (2005) Reyes & Chang. Structure of the ABC transporter MsbA in complex with ADP vanadate and lipopolysaccharide. Science (2005) Pornillos et al. X-ray structure of the EmrE multidrug transporter in complex with a substrate. Science (2001) Chang & Roth. Structure of MsbA from E. coli: a homolog of the multidrug resistance ATP binding cassette (ABC) transporters. Science (2001) Chang & Roth.
  • 3.
    earch Institute in nextyear, in a cer- Chang received a Award rs, the young ated a apers ctures ded in into a Swiss per in bt on a group cience gated, scover ispro- mns of density m had ucture. d used energy from adenosine triphosphate to trans- port molecules across cell membranes. These so-called ABC transporters perform many determination was at the root o cess: “He has an incredible d ethic. He really pushed the fie of getting things to no one else had be Chang’s data are go but the faulty so everything off. Ironically, anoth doc in Rees’s lab, K exposed the mistake tember issue of Na now at the Swiss F ofTechnology in Zu the structure of anA calledSav1866from aureus. The structur cally—and unexpe ent from that of pulling up Sav186 MsbA from S. typh computer screen, L realized in minutes structurewasinvert the “hand” of a mol Flipping fiasco. The structures of MsbA (purple) and Sav1866 (green) overlap little (left) until MsbA is inverted (right). California.The next year, in a cer- e White House, Chang received a l Early Career Award ts and Engineers, the ghest honor for young . His lab generated a high-profile papers e molecular structures proteins embedded in nes. e dream turned into a In September, Swiss published a paper in cast serious doubt on a cture Chang’s group ed in a 2001 Science en he investigated, horrified to discover madedata-analysispro- ipped two columns of ng the electron-density which his team had final protein structure. ly, his group had used m to analyze data for port molecules across cell membranes. These so-called ABC transporters perform many cess: “He has an ethic. He really p of get no on Chan but t every Iro doc in expos temb now a ofTec the str called aureu cally— ent f pullin MsbA comp realiz struct the “h a cha Flipping fiasco. The structures of MsbA (purple) and Sav1866 (green) overlap little (left) until MsbA is inverted (right). Sav1866 Dawson & Locher (2006) Nature Science(2001)Chang&Roth.Science (2001) Chang & Roth. 
Comparison with 3D structure of ortholog Science (2001) Chang & Roth.
  • 4.
    http://wurmlab.github.io LETTERS I BOOKSI POLICY FORUM I EDUCATION FORUM I PERSPECTIVES 1878 1880 1882 LETTERS edited by Etta Kavanagh Retraction WE WISH TO RETRACT OUR RESEARCH ARTICLE “STRUCTURE OF MsbA from E. coli:A homolog of the multidrug resistanceATP bind- ing cassette (ABC) transporters” and both of our Reports “Structure of the ABC transporter MsbA in complex with ADP•vanadate and lipopolysaccharide”and“X-raystructureoftheEmrEmultidrugtrans- porter in complex with a substrate” (1–3). The recently reported structure of Sav1866 (4) indicated that our MsbA structures (1, 2, 5) were incorrect in both the hand of the struc- ture and the topology. Thus, our biological interpretations based on these inverted models for MsbA are invalid. Anin-housedatareductionprogramintroducedachangeinsignfor anomalous differences.This program, which was not part of a conven- tional data processing package, converted the anomalous pairs (I+ and I-) to (F- and F+), thereby introducing a sign change. As the diffrac- tion data collected for each set of MsbA crystals and for the EmrE crystals were processed with the same program, the structures reported in (1–3, 5, 6) had the wrong hand. The error in the topology of the original MsbA structure was a con- sequence of the low resolution of the data as well as breaks in the elec- tron density for the connecting loop regions. Unfortunately, the use of the multicopy refinement procedure still allowed us to obtain reason- able refinement values for the wrong structures. The Protein Data Bank (PDB) files 1JSQ, 1PF4, and 1Z2R for MsbA and 1S7B and 2F2M for EmrE have been moved to the archive of obsolete PDB entries. The MsbA and EmrE structures will be recalculated from the original data using the proper sign for the anom- alous differences, and the new Ca coordinates and structure factors will be deposited. 
We very sincerely regret the confusion that these papers have caused and, in particular, subsequent research efforts that were unpro- ductive as a result of our original findings. GEOFFREY CHANG, CHRISTOPHER B. ROTH, CHRISTOPHER L. REYES, OWEN PORNILLOS, YEN-JU CHEN, ANDY P. CHEN Department of Molecular Biology, The Scripps Research Institute, La Jolla, CA 92037, USA. References 1. G. Chang, C. B. Roth, Science 293, 1793 (2001). 2. C. L. Reyes, G. Chang, Science 308, 1028 (2005). 3. O. Pornillos, Y.-J. Chen, A. P. Chen, G. Chang, Science 310, 1950 (2005). 4. R. J. Dawson, K. P. Locher, Nature 443, 180 (2006). 5. G. Chang, J. Mol. Biol. 330, 419 (2003). 6. C. Ma, G. Chang, Proc. Natl. Acad. Sci. U.S.A. 101, 2852 (2004). MsbA from E. coli:A homolog of the multidrug resistanceATP bind- ing cassette (ABC) transporters” and both of our Reports “Structure of the ABC transporter MsbA in complex with ADP•vanadate and lipopolysaccharide”and“X-raystructureoftheEmrEmultidrugtrans- porter in complex with a substrate” (1–3). The recently reported structure of Sav1866 (4) indicated that our MsbA structures (1, 2, 5) were incorrect in both the hand of the struc- ture and the topology. Thus, our biological interpretations based on these inverted models for MsbA are invalid. Anin-housedatareductionprogramintroducedachangeinsignfor anomalous differences.This program, which was not part of a conven- tional data processing package, converted the anomalous pairs (I+ and I-) to (F- and F+), thereby introducing a sign change. As the diffrac- tion data collected for each set of MsbA crystals and for the EmrE crystals were processed with the same program, the structures reported in (1–3, 5, 6) had the wrong hand. The error in the topology of the original MsbA structure was a con- sequence of the low resolution of the data as well as breaks in the elec- 1860 Untilrecently,GeoffreyChang’scareerwason a trajectory most young scientists only dream about. 
A Scientist's Nightmare: Software Problem Leads to Five Retractions (SCIENTIFIC PUBLISHING)

In 1999, at the age of 28, the protein crystallographer landed a faculty position at the prestigious Scripps Research Institute in San Diego, California. The next year, in a ceremony at the White House, Chang received a Presidential Early Career Award for Scientists and Engineers, the country's highest honor for young researchers. His lab generated a stream of high-profile papers detailing the molecular structures of important proteins embedded in cell membranes. Then the dream turned into a nightmare. In September, Swiss researchers published a paper in Nature that cast serious doubt on a protein structure Chang's group had described in a 2001 Science paper. When he investigated, Chang was horrified to discover that a homemade data-analysis program…

The 2001 Science paper described the structure of a protein called MsbA, isolated from the bacterium Escherichia coli. MsbA belongs to a huge and ancient family of molecules that use energy from adenosine triphosphate to transport molecules across cell membranes. These so-called ABC transporters perform many…
  • 5.
    😥 Geoffrey Chang
• Beckman Foundation Young Investigator Award
• Presidential Early Career Award

Science (2001) Chang & Roth. Structure of MsbA from E. coli: a homolog of the multidrug resistance ATP binding cassette (ABC) transporters.
Journal of Molecular Biology (2003) Chang. Structure of MsbA from Vibrio cholera: a multidrug resistance ABC transporter homolog in a closed conformation.
PNAS (2004) Ma & Chang. Structure of the multidrug resistance efflux transporter EmrE from Escherichia coli.
Science (2005) Reyes & Chang. Structure of the ABC transporter MsbA in complex with ADP vanadate and lipopolysaccharide.
Science (2005) Pornillos et al. X-ray structure of the EmrE multidrug transporter in complex with a substrate.
  • 6.
    This is costly for:
• the individual
• collaborators
• the institution
• 1000s of researchers performing follow-up work
• science
• society
  • 8.
    Genome bioinformatics is hard (harder than many other data sciences)
• Understanding/visualising/analysing/massaging big data is hard.
• Biology/life is complex.
• Biologists lack computational training.
• The field is young.
• Analysis tools (generally) suck:
  • badly written
  • badly tested
  • hard to install
  • output quality… often questionable.
• Data sizes keep growing!
• Data formats keep changing :(
  • 9.
  • 10.
    Some sources of inspiration
  • 12.
    Community Page: Best Practices for Scientific Computing
Greg Wilson, D. A. Aruliah, C. Titus Brown, Neil P. Chue Hong, Matt Davis, Richard T. Guy, Steven H. D. Haddock, Kathryn D. Huff, Ian M. Mitchell, Mark D. Plumbley, Ben Waugh, Ethan P. White, Paul Wilson

Introduction. Scientists spend an increasing amount of time building and using software. However, most scientists are never taught how to do this efficiently. As a result, many are unaware of tools and practices that would allow them to write more reliable and maintainable code with less effort. We describe a set of best practices for scientific software development that have solid foundations in research and experience, and that improve scientists' productivity and the reliability of their software.

Software is as important to modern scientific research as telescopes and test tubes. From groups that work exclusively on computational problems, to traditional laboratory and field scientists, more and more of the daily operation of science revolves around developing new algorithms, managing and analyzing the large amounts of data that are generated in single research projects, combining disparate datasets to assess synthetic problems, and other computational tasks. Scientists typically develop their own software for these purposes because doing so requires substantial domain-specific knowledge. As a result, recent studies have found that scientists typically spend 30% or more of their time developing software [1,2]. However, 90% or more of them are primarily self-taught [1,2], and therefore lack exposure to basic software development practices such as writing maintainable code, using version control and issue tracking… …error from another group's code was not discovered until after publication [6]. As with bench experiments, not everything must be done to the most exacting standards; however, scientists need to be aware of best practices both to improve their own approaches and for reviewing computational work by others.

This paper describes a set of practices that are easy to adopt and have proven effective in many research settings. Our recommendations are based on several decades of collective experience both building scientific software and teaching computing to scientists [17,18], reports from many other groups [19–25], guidelines for commercial and open source software development [26,27], and on empirical studies of scientific computing [28–31] and software development in general (summarized in [32]). None of these practices will guarantee efficient, error-free software development, but used in concert they will reduce the number of errors in scientific software, make it easier to reuse, and save the authors of the software time and effort that can be used for focusing on the underlying scientific questions.
Our practices are summarized in Box 1; labels in the main text such as "(1a)" refer to items in that summary. For reasons of space, we do not discuss the equally important (but independent) issues of reproducible research, publication and citation of code and data, and open science. We do believe, however, that all of these will be much easier to implement if scientists have the skills we describe.

Education: A Quick Guide to Organizing Computational Biology Projects
William Stafford Noble
Department of Genome Sciences, School of Medicine, and Department of Computer Science and Engineering, University of Washington, Seattle, Washington, United States of America

Introduction. Most bioinformatics coursework focuses on algorithms, with perhaps some components devoted to learning programming skills and learning how to use existing bioinformatics software. Unfortunately, for students who are preparing for a research career, this type of curriculum fails to address many of the day-to-day organizational challenges associated with performing computational experiments. In practice, the principles behind organizing and documenting computational experiments are often learned on the fly, and this learning is strongly influenced by personal predilections as well as by chance interactions with collaborators or colleagues. The purpose of this article is to describe one good strategy for carrying out computational experiments. I will not describe profound issues such as how to formulate hypotheses, design experiments, or draw conclusions. Rather, I will focus on relatively mundane issues such as organizing files and directories and documenting… …understanding your work or who may be evaluating your research skills. Most commonly, however, that "someone" is you.

A few months from now, you may not remember what you were up to when you created a particular set of files, or you may not remember what conclusions you drew. You will either have to then spend time reconstructing your previous experiments or lose whatever insights you gained from those experiments. This leads to the second principle, which is actually more like a version of Murphy's Law: Everything you do, you will probably have to do over again. Inevitably, you will discover some flaw in your initial preparation of the data being analyzed, or you will get access to new data, or you will decide that your parameterization of a particular model was not broad enough. This means that the experiment you did last week, or even the set of experiments you've been working on over the past month, will probably need to be redone. If you have organized and documented your work clearly, then repeating the experiment with the new… …under a common root directory. The exception to this rule is source code or scripts that are used in multiple projects. Each such program might have a project directory of its own.

Within a given project, I use a top-level organization that is logical, with chronological organization at the next level, and logical organization below that. A sample project, called msms, is shown in Figure 1. At the root of most of my projects, I have a data directory for storing fixed data sets, a results directory for tracking computational experiments performed on that data, a doc directory with one subdirectory per manuscript, and directories such as src for source code and bin for compiled binaries or scripts. Within the data and results directories, it is often tempting to apply a similar logical organization. For example, you may have two or three data sets against which you plan to benchmark your algorithms, so you could create one directory for each of them under data. In my experience, this approach is risky because the logical structure of your final…
  • 13.
  • 14.
  • 15.
    Write code for humans (not computers!)
• For:
  • yourself
  • colleagues / collaborators
  • reviewers
  • other random people who may reuse/improve your code
• Respect conventions (e.g., a style guide)
  • 16.
    Use whitespace/indentation! (Damian Conway)
Same information
  • 17.
    Line length: strive to limit your code to 80 characters per line. This fits comfortably on a printed page with a reasonably sized font. If you find yourself running out of room, this is a good indication that you should encapsulate some of the work in a separate function.

Too long (runs off the edge of the page):

ant_measurements <- read.table(file = '~/Downloads/Web/ant_measurements.txt', header = TRUE, se…

Better, with one argument per line:

ant_measurements <- read.table(
  file = '~/Downloads/Web/ant_measurements.txt',
  header = TRUE,
  sep = '\t',
  col.names = c('colony', 'individual', 'headwidth', 'mass')
)

or:

ant_measurements <- read.table(file = '~/Downloads/Web/ant_measurements.txt', header = TRUE,
                               sep = '\t', col.names = c('colony', 'individual', 'headwidth', 'mass'))

R style guide extract: http://r-pkgs.had.co.nz/style.html
  • 18.
    R style guide extract: http://r-pkgs.had.co.nz/style.html
  • 19.
  • 20.
    Write code for humans (not computers!)
• For:
  • yourself
  • colleagues / collaborators
  • reviewers
  • other random people who may want to reuse your code
• Respect conventions (e.g., a style guide)
• Don't optimise (generally…)
  • 21.
  • 22.
    Eliminate redundancy. DRY: Don't Repeat Yourself.
Organise mindfully & don't reinvent the wheel.
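As a sketch of the DRY idea in shell (the sample names and the processing step below are made up for illustration): instead of copy-pasting the same command once per sample, define the shared logic in one place and loop over it.

```shell
# One definition of the processing step; change it here and every
# sample picks up the change. (echo stands in for a real analysis.)
process_sample() {
  echo "processing ${1}.fastq"
}

# WET would be: process_sample sample_A; process_sample sample_B; ...
# copy-pasted and edited by hand. DRY is a single loop over the list:
for sample in sample_A sample_B sample_C; do
  process_sample "$sample"
done
```

The same principle applies inside R scripts: a block of analysis code that appears more than once belongs in a function.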
  • 23.
    Education: A Quick Guide to Organizing Computational Biology Projects (William Stafford Noble)

…with this approach, the distinction between data and results may not be useful. Instead, one could imagine a top-level… The Lab Notebook: in parallel with this chronological directory structure, I find it useful to… These types of entries provide a complete picture of the development of the project over time.

Figure 1. Directory structure for a sample project. Directory names are in large typeface, and filenames are in smaller typeface. Only a subset of the files are shown here. Note that the dates are formatted <year>-<month>-<day> so that they can be sorted in chronological order. The source code src/ms-analysis.c is compiled to create bin/ms-analysis and is documented in doc/ms-analysis.html. The README files in the data directories specify who downloaded the data files from what URL on what date. The driver script results/2009-01-15/runall automatically generates the three subdirectories split1, split2, and split3, corresponding to three cross-validation splits. The bin/parse-sqt.py script is called by both of the runall driver scripts. doi:10.1371/journal.pcbi.1000424.g001

In each results folder:
• script getResults.rb
• intermediates
• output

Organise mindfully
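A layout like this can be scaffolded in a few lines of shell (directory and project names here follow Noble's msms example; the dated results subdirectory stands for one computational experiment):

```shell
# Noble-style project skeleton:
#   data/     fixed data sets          results/  one dated dir per experiment
#   doc/      one subdir per paper     src/      source code
#   bin/      compiled binaries and scripts
mkdir -p msms/data msms/results/2009-01-15 msms/doc msms/src msms/bin
ls msms
```

Dating the results directories year-month-day means a plain `ls` lists experiments in chronological order.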
  • 24.
    Specific Approaches/Tools
1. Write code for humans
2. Organise mindfully
3. Plan for mistakes
  • 25.
    Automatically check consistency with a style guide:

install.packages("lint")  # once
library(lint)             # every time
lint("file_to_check.R")
  • 26.
    Create code tests that are easy to run
• Unit tests == checking edge cases to see if the function works

# do your stuff
# e.g. define a speed() function (one possible definition: km per minute,
# rejecting negative distances)
speed <- function(km, minutes) {
  if (km < 0) stop("km must be non-negative")
  km / minutes
}

library(testthat)
expect_that(speed(km = 0, minutes = 60), equals(0))
expect_that(speed(km = 60, minutes = 60), equals(1))
expect_that(speed(km = -4, minutes = 60), throws_error())
expect_that(nrow(significant_SNPs), equals(42))
expect_that(my_model, is_a("lm"))

• Integration tests == "full analysis" but on small data with known results
• e.g. on a fake VCF genotype file of 2 loci (one true positive, one true negative)
  • 27.
    "Continuous integration": tests should run automagically, so you don't have to remember (or find time) to do it.
💾 http://github.org → tests run automatically at http://travis-ci.org
If unexpected result: 📬
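For flavour, a minimal Travis CI configuration for an R project might look something like the sketch below. This is a hypothetical .travis.yml; the exact keys depend on your project, and Travis's R support has changed over time.

```yaml
# .travis.yml (sketch): on every push to the GitHub repository, Travis
# installs R and the declared packages, then runs the checks, which
# include any testthat tests in the package.
language: r
r_packages:
  - testthat
script:
  - R CMD check .
```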
  • 28.
  • 29.
    Code reviews: ask a peer to (critically) read your analysis code.
  • 30.
    Specific Approaches/Tools
1. Write code for humans
2. Organise mindfully
3. Plan for mistakes
4. Use tools that reduce risks
  • 31.
    knitr/sweave: analysis & report in one step (also: jupyter)

analysis.Rmd: a minimal R Markdown example

I know the value of pi is 3.1416, and 2 times pi is 6.2832.

library(knitr); knit('minimal.Rmd')

A paragraph here. A code chunk below:

1 + 1
## [1] 2
.4 - .7 + .3  # what? it is not zero!
## [1] 5.551e-17

Graphics work too:

library(ggplot2)
qplot(speed, dist, data = cars) + geom_smooth()

[Figure 1: A scatterplot of cars]
  • 32.
    If you need to make a "pipeline", use "pipelining" software, e.g.:
• Snakemake
• Nextflow
• (etc.)
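To give a flavour, a minimal hypothetical Snakefile might look like this (the rule and file names are invented). Each rule declares its inputs and outputs; Snakemake works out the execution order and re-runs only the steps whose inputs have changed.

```
# Snakefile (sketch). The default target is the first rule.
rule all:
    input: "results/line_counts.txt"

# A single analysis step: count lines in the input file.
rule count_lines:
    input: "data/samples.txt"
    output: "results/line_counts.txt"
    shell: "wc -l {input} > {output}"
```

Running `snakemake` then builds results/line_counts.txt from data/samples.txt, creating intermediate directories as needed.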
  • 33.
    Specific Approaches/Tools
1. Write code for humans
2. Organise mindfully
3. Plan for mistakes
4. Use tools that reduce risks

Bruno Vieira @bmpvieira, Anurag Priyam @yeban, Ismail Moghul, Roddy Pracana @RoddyPracana, Joe Colgan
https://wurmlab.github.io