IAOS 2018 - Statistics as a trusted source of information, M. Durand
1. STATISTICS AS A TRUSTED SOURCE OF INFORMATION
Remarks by Martine Durand,
OECD Chief Statistician and Director, Statistics and Data Directorate
IAOS-OECD Conference “Better Statistics for Better Lives”,
OECD, Paris, 19 September 2018
2. “The IMF’s Executive Board …found that Argentina’s progress in implementing the remedial measures …
has not been sufficient.
“As a result, the Fund has issued a declaration of censure against Argentina in connection with its breach
of obligation to the Fund under the Articles of Agreement.
“The Board called on Argentina to adopt the remedial measures to address the inaccuracy of CPI-GBA
and GDP data without further delay…. The[se] measures aim at aligning…with…international statistical
and GDP data without further delay….The[se] measures aim at aligning…with…international statistical
understandings and guidelines that ensure accurate measurement.”
- IMF Press Release No. 13/33, February 1, 2013
(Dis)trust in statistics – example 1
3. During a public debate in the lead-up to the
Brexit vote in the UK, Professor Anand
Menon invited the audience to imagine the
likely plunge in Britain’s GDP if it left the EU.
A lady in the audience yelled back: “That’s
your bloody GDP. Not ours.”
- The Guardian, 10 Jan 2017
(Dis)trust in statistics – example 2
5. • Wrong data (whether deliberate or inadvertent)
• Perceived irrelevance of the measure (“That’s not my GDP”)
• Perceived inaccuracy of the data (“I wouldn’t trust your numbers”)
• Each is problematic for official statistics, and for healthy democracy
• Our responses need to address both actual quality and perceptions
Thus, three sources of distrust of official statistics…
6. 1. Apply quality criteria
2. Follow established methodologies
3. Make our data as accurate as possible
4. Explain how they were made
5. Distinguish between censuses, surveys,
estimates, projections, model outputs, etc.
6. Explain what data show and do not show
7. Subject data to quality review, including
external review
Response 1: Improve actual quality
7. 1. Solicit user feedback, and answer it
2. Create formal consultative bodies
3. Provide for experimental statistics
4. Improve granularity
5. Entertain suggestions for new data series
6. Exploit new data sources while preserving
quality
Response 2: Improve relevance
8. 1. Establish and protect statistical independence
2. Refrain from political commentary
3. Explain the limitations of data
4. Correct errors
5. Correct misinterpretations/fake news
6. Monitor trust itself, and respond to identified
“credibility gaps”
Response 3: Be honest, and look honest
9. 1. Be impartial and independent
2. Continuously examine and improve our statistical output
3. Explain our work clearly and frankly
4. Be humble, listen to our users and the public
5. Be open to new ideas, and new ways of communicating
6. Monitor trust and the way our data are perceived and used in the
public square
Summary: We need to…
10. • This 2015 OECD Recommendation elaborates
twelve detailed guidelines to improve statistical
systems
• These are supplemented by examples of specific
good practices
• It draws on the UN Fundamental Principles,
regional codes of practice, and quality frameworks
• Implementation will be reviewed in 2018, and the
Recommendation may be updated – all feedback
welcome to Julien.Dupont@oecd.org.
For further information…
Editor's Notes
I want to start by reminding everyone that not all statistics SHOULD be trusted. Some are so bad they have drawn censure from international organisations. Here is an example from five years ago, where the IMF publicly rebuked Argentina for publishing inaccurate figures on inflation and GDP.
So we shouldn’t start by thinking that all distrust of statistics is unwarranted. Sometimes complaints are justified, and they may even be necessary to stimulate improvements.
Here is a second type of distrust, this time on a more emotional level. The example is from the Brexit debate. At a public meeting a lady yells out: “That’s your GDP. Not ours.” I don’t think she is quibbling about the specific figures. She just doesn’t connect with the concept. Perhaps she thinks the way GDP is measured is wrong. Perhaps she would prefer a different measure of economic activity. Or perhaps she wants to focus on broad well-being, rather than on economic production. In any case, she does not feel she “owns” GDP as it stands.
I also have one more example of distrust in statistics. This time it’s from just before the last American election. The Washington Post surveyed likely voters about whether they trusted federal government data in general. Nearly half did not, but the results were heavily skewed along party lines. Over 85% of voters who preferred Mrs Clinton, the candidate of the incumbent party, trusted government data. But not even a third of voters who preferred Mr Trump, the candidate of the then opposition, trusted the same data. Would those figures be different now that Mr Trump is President and is constantly citing official figures to support his claims that the U.S. economy is on the up-and-up? Maybe we should ask the Washington Post to re-run this survey…
These three examples illustrate some of the main sources of distrust in official data:
Sometimes the figures are just wrong, either deliberately or accidentally, and distrust is fully justified
Sometimes people don’t connect with the statistical measure being presented
And sometimes, they may understand the measure, but they don’t trust the number being presented.
All three sources of distrust need to be addressed, not just for the sake of accuracy, but because public trust in statistics is important for debate in a democratic society. We can always disagree about policies, but debate becomes incoherent if we cannot agree on basic facts and data. So, what can we do about this? I would suggest that we need to respond on three broad fronts…
First, we have to make sure that our data are really worthy of trust.
This means assessing all our output against standard quality criteria of accuracy, reliability, relevance, timeliness, frequency and so on. It means following established methods, and explaining the compilation process. It means flagging the types of evidence behind the data, and explaining their meaning and application. And it means careful and periodic review of methods and outputs, including by unbiased, competent, outside observers.
Second, it means making our data more relevant to our publics. For this, we need to listen, and create multiple channels for users to provide feedback and make suggestions. Improving relevance may mean experimenting with new sources, methods and data, improving the specificity and granularity of data, and even considering whole new statistical concepts.
Relevance is often defined as the overlap between what we offer and what users demand. But we shouldn’t view this overlap as being an inevitably small and static area. We should always be trying to expand it by explaining the meaning and importance of our data in ways accessible to all those who might be able to make good use of them.
Lastly, we need to give the public every reason to trust us. We need laws and systems and customs that protect the professional independence of statisticians. We need to be strictly impartial, show we understand the limitations of our data, and be quick and forthright in correcting any errors.
We also need to check how we are being rated by the public, to pinpoint areas where trust may be lacking, and to work out ways to gain or restore that trust.
All the responses that I have suggested spring from a recognition of the importance of reliable statistics to the functioning of a democratic society.
In democracies, we can never rely on imposing our views on others. We will be judged by our performance, and by how useful we are, not only to governments but also to citizens.
This means that we need to make constant efforts to improve our products and the way we explain them, to listen to and act on users’ suggestions, and to be alive to the possibilities for innovation.
Especially in a democracy, trust has to be earned. But democracies also make it easy for us to gauge if we are being trusted, and they offer us constant feedback and ideas for improvement.
What I have said today is largely based on our Recommendation on Good Statistical Practice, which the OECD Council adopted in November 2015. The Recommendation draws on 25 years of experience and reflection on what makes for a trustworthy statistical system. We are finding that it offers a sound and comprehensive template for improving statistical operations and for building or rebuilding trust in official data.
At the end of this year we will be reporting back to Council on the first three years of implementation of the Recommendation. We have already begun collecting views from Adherent countries on the adequacy of the Recommendation and its list of indicative good practices, and we would also be interested to receive any feedback you might have either now or over the next few weeks.
Thank you for your attention.