Presentation by Prof Mark Reed at CIFOR, Indonesia, to open the UN Global Peatland Initiative workshop to identify key variables that should be measured in tropical peatland research and monitoring. The workshop was co-facilitated by Mark Reed and Dylan Young, with slides adapted from a presentation by Gav Stewart, Newcastle University.
2. Practicalities
• Fire exits and drills
• See agenda
• Evening meal here tonight at 18.00, bus to hotel around 20.00
• Tomorrow: bus will collect you from your hotel lobby at 07.45
(returns to hotel 16.30)
• Scan your expenses form and receipts and email to
claire.machin-davies@newcastle.ac.uk no later than 31st July
2019 (and then send hard copies as per email instructions)
3. Practicalities
• Creating a safe environment where we can do our best thinking
• Everyone’s view is valid – speak up but listen actively
• Don’t be upset if I ask you to hold onto a point so that others
who haven’t spoken as much can answer
• Focus on science not politics today (we will discuss the criteria
we should use to prioritise outcomes tomorrow, which can
include political priorities)
4. The challenge
• Different researchers measure different outcomes (variables) in
different ways (using different methods) and report their data
differently
• Results cannot be applied beyond the contexts in which the
data was collected
• Data cannot be synthesised for policy and practice
• We need to reduce research wastage and increase the
synthesisability of data to enable more evidence-based policy
and practice
5. Our approach
A unified approach to data collection across tropical peatlands:
• What is the scope? What are the domains (sets) within which
we might define outcomes in tropical peatlands?
• What outcomes (variables) can be measured in a tropical
peatland?
• Which of these outcome measures (variables) are most
important to try to measure (core outcome sets)?
• How should each outcome (variable) be measured?
• How should the data be reported?
6. What outcomes (variables) should be measured
in tropical peatland research and monitoring?
[Process diagram: pre-workshop survey → workshop → post-workshop
voting → subsequent steps, addressing the following questions]
• Scoping: are there missing sets (domains) within which we might
group outcomes, e.g. accumulation/loss?
• Are there missing outcomes that should be measured within each
set, e.g. accumulation rates, oxidative loss?
• What are the most important (core) outcomes that should be
measured, e.g. is it more important to measure above-ground
litter decomposition rates, or litter types, or both?
• What are the best ways to measure each outcome, e.g. flux
towers versus closed chambers or vegetation proxies?
• How should the data be reported, e.g. units, contextual data?
7. The goal
• A menu of the most important outcome measures (grouped into
sets), with a range of associated best-practice methods (from
state-of-the-art to proxies) and reporting protocols, from which
future research and monitoring projects can choose
• There is nothing to stop projects choosing other outcomes to
measure and report in different ways, but they do so in the
knowledge that their data is less likely to be synthesisable
• If adopted widely, it will be possible to synthesise an increasing
proportion of data to provide evidence at national and
international scales relevant to policy and practice
8. The goal
• Ultimately, the goal is to replicate the process to develop core
outcome sets for all peatland sites
• Test case: UK peatlands
• Replication case: tropical peatlands
• Future work: other peatland types and then other systems (e.g. the
food system)
• Feed into (current and) future Global Peatland Assessments by
increasing the availability of synthesisable data
9. Workshop process
• Scope the outcome sets (or domains) in which we might want to
identify and group outcome measures: should any be modified
or added?
• Discuss outcome measures identified in pre-workshop survey:
should any be modified or added?
• Carbon
• Hydrology
• Biodiversity
• Contribute based on your expertise – feel free to drop out of
discussion points you know little about and rejoin later
10. Discussion
• For clarity we will use the definitions on our handouts (see
Dylan’s talk)
• Discuss now to ensure we are all talking about the same things,
even if we would use different terms
• Other questions?
12. More science = better decisions?
• There is rarely sufficient evidence to make decisions
• What are the proposed interventions?
• What are alternative options?
• What is the decision-making context?
• Science is rarely aligned with policy needs:
• Misaligned time-horizons
• Not all policy needs
require “cutting edge”
science
• Poor communication leading to unintelligible, irrelevant findings
Bayliss, H. R., Lortie, C. J., & Stewart, G. B. (2015). How “good” is half a fish? Communicating outcomes of
quantitative syntheses to decision makers. Frontiers in Ecology and the Environment.
13. More science = better decisions?
• 85% of global health research is being avoidably “wasted”
(worth $170 billion per year):
• Failure to publish completed research
• Published research is
insufficiently clear,
complete, and accurate
for others to use,
replicate or synthesise to
apply beyond original
context
• Research has design
flaws or insufficient rigour
Ioannidis JPA (2005) Why Most Published Research Findings Are False. PLoS Med 2(8): e124.
14. Robust science = robust decisions
• The reproducibility crisis:
• The Reproducibility Project in psychology replicated
100 studies from 3 top journals and fewer than half
produced significant results in the expected directions
• P-hacking:
• Cherry-picking and only reporting statistically significant findings,
regardless of effect size or plausibility
• Searching for statistically significant relationships or differences in data
and then creating a hypothesis to explain what was found
• Statistical significance does not measure the size of an effect or
the importance of a result, and should not be the basis for decisions
Schooler, J. W. (2014). Metascience could rescue the “replication crisis”. Nature, 515(7525), 9.
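The multiple-testing pitfall behind p-hacking can be shown with a minimal simulation (a hypothetical sketch; it relies only on the fact that p-values are uniformly distributed when the null hypothesis is true):

```python
import random

random.seed(42)

def false_positive_rate(n_outcomes, alpha=0.05, trials=10_000):
    """Fraction of studies that find at least one 'significant' result
    when every outcome tested is truly null."""
    hits = 0
    for _ in range(trials):
        # Under the null hypothesis each p-value is uniform on [0, 1].
        p_values = [random.random() for _ in range(n_outcomes)]
        if min(p_values) < alpha:  # report only the significant finding
            hits += 1
    return hits / trials

for k in (1, 5, 20):
    print(f"{k:>2} outcomes tested -> family-wise error ~ {false_positive_rate(k):.2f}")
# Analytically the rate is 1 - (1 - 0.05)^k: about 0.05, 0.23 and 0.64
```

Testing 20 outcomes and reporting only the "significant" one yields a false positive in roughly two thirds of purely null studies, which is why selective reporting undermines synthesis.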
15. But even when the science is robust…
• Individual studies are frequently contradictory e.g. different
contexts, time-horizons and sources of uncertainty
Ioannidis JPA (2005) Why Most Published Research Findings Are False. PLoS Med
2(8): e124.
• There are typically multiple
interventions for any policy or
practice to choose from, each
with their own evidence base,
measured and reported in
different ways, so they can’t be
directly compared
16. Dealing with uncertainty within studies
Studies take uncertainty into account to varying degrees, leading to
increased precision as uncertainty is better understood
17. Dealing with uncertainty within studies
Approaches include:
• Preregistration with open data
• Cost-benefit analysis under
uncertainty and non-monetary
methods e.g. multi-criteria evaluation
• More complex methods e.g. Bayesian
Belief Networks with value of
information analysis, Agent-Based
Models, sensitivity analysis (e.g.
Random Forest)
Bolam, F. C., Grainger, M. J., Mengersen, K. L., Stewart, G. B., Sutherland, W., Runge, M. C., & McGowan, P. J. (2019). Using the Value of Information to
improve conservation decision making. Biological reviews of the Cambridge Philosophical Society, 94 (2), 629-647. https://doi.org/10.1111/brv.12471
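As a toy illustration of the value-of-information idea in Bolam et al. (2019), the expected value of perfect information (EVPI) is the gap between deciding after learning the true state of the world and deciding under current uncertainty. All state probabilities and payoffs below are invented for the sketch:

```python
# Hypothetical decision: does rewetting halt peat oxidation?
p_states = {"halts": 0.6, "continues": 0.4}
payoffs = {                        # action -> state -> net benefit (arbitrary units)
    "rewet":      {"halts": 100, "continues": 20},
    "do_nothing": {"halts": 10,  "continues": 40},
}

def expected(action):
    """Expected payoff of an action under current uncertainty."""
    return sum(p_states[s] * payoffs[action][s] for s in p_states)

# Best we can do now: pick the action with the highest expected payoff.
best_now = max(expected(a) for a in payoffs)  # rewet: 0.6*100 + 0.4*20 = 68
# With perfect information we could pick the best action in each state.
with_info = sum(p_states[s] * max(payoffs[a][s] for a in payoffs)
                for s in p_states)            # 0.6*100 + 0.4*40 = 76
evpi = with_info - best_now                   # 8 units
print(f"now: {best_now}, with perfect info: {with_info}, EVPI: {evpi}")
```

An EVPI of 8 units is the most this decision-maker should spend on research that resolves the uncertainty; monitoring whose cost exceeds the EVPI is not worth commissioning for this decision.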
18. Dealing with uncertainty between studies
• Avoid relying on individual studies for big decisions
• Rely more on high quality meta-analyses with uncertainty
estimates, for example:
• How important are the effect sizes of individual studies relative
to the overall pooled effect size?
• Might publication bias account for the pooled effect?
• How sensitive are the results to uncertainties arising from how the data
was collected versus other influences?
• However, most meta-analyses find that too little high-quality
data is available for any given variable
• Different studies measure different things in different ways and
report them differently
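The pooled effect size mentioned above can be sketched with standard fixed-effect inverse-variance pooling (study names, effect estimates and standard errors are hypothetical):

```python
import math

# Hypothetical study results for one comparably measured outcome:
# (name, effect estimate, standard error)
studies = [
    ("Study A", 0.30, 0.10),
    ("Study B", 0.10, 0.15),
    ("Study C", 0.45, 0.20),
]

# Fixed-effect inverse-variance pooling: weight each study by 1 / SE^2,
# so more precise studies contribute more to the pooled estimate.
weights = [1 / se**2 for _, _, se in studies]
pooled = sum(w * eff for (_, eff, _), w in zip(studies, weights)) / sum(weights)
pooled_se = math.sqrt(1 / sum(weights))

# 95% confidence interval for the pooled effect.
lo, hi = pooled - 1.96 * pooled_se, pooled + 1.96 * pooled_se
print(f"Pooled effect: {pooled:.3f} (95% CI {lo:.3f} to {hi:.3f})")
```

The pooled standard error is smaller than that of any single study, which is the precision gain synthesis offers; it is only possible when studies measure and report the same outcome in compatible ways.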