This is a presentation by Daniel Greene of the Center for International Security and Cooperation on "Measuring Cultures of Responsibility in the Life Sciences."
1. Measuring Cultures of Responsibility
in the Life Sciences
Daniel Greene, Ph.D.
February 22, 2020
dkgreene@stanford.edu
2. Some potential RISKS of life science research:
• Accidental release of pathogens & GMOs
• Widely accessible knowledge and tools for
creating pathogens and novel bioweapons
3. Some potential BENEFITS of life science research:
• $1 trillion+ value
• Profound improvements in healthcare,
agriculture, energy, production
5. "The NSABB strongly believes that one of
the best ways to address concerns
regarding dual use research is to raise
awareness of dual use research issues and
strengthen the culture of responsibility
within the scientific community. The
stakes are high for public health, national
security and the vitality of the life sciences
research enterprise."
7. How do we know whether a program is effective?
How do we observe a culture of responsibility in practice,
or know when we have one?
What are metrics that we could use to indicate the
presence of a culture of responsibility, and ultimately to
guide the development of programs and interventions?
10. Megan J. Palmer, Senior Research Scholar
Connor Hoffmann, Program Manager
• Interviews and focus groups with scientists, regulators, and students
• Survey question development & validation
• Analyses of large-scale datasets of scientific practice
11. "Of the many interventions that might be used to improve the
culture of biosafety and biosecurity, educational and training
interventions are among the most frequently employed or cited.
Unfortunately, there has been little assessment of these
interventions specifically directed at improving biosafety and
biosecurity in laboratories.”
(Perkins et al., 2018)
12. (Minehata and Shinomiya, 2010)
“Was your understanding on the following aspects of the module
developed?”
13. Is the goal…
• To “raise awareness”? (Minehata and Shinomiya, 2010)
• To provide training in “knowledge or skills”? (Chamberlain et al.,
2009)
• To change “workplace culture”? (Flipse et al., 2013)
• To promote “social norms”? (American Society for Microbiology, 2005)
• To promote “engagement”? (Atlas and Dando, 2006)
Editor's Notes
Hi everyone, thanks for having me today. I'd like to start this talk by telling you a little bit about myself. I’m a postdoc at the Center for International Security and Cooperation at Stanford, and my graduate work was in education research, with a focus on survey development, psychometrics, and social-psychological interventions to motivate sustained behavior change. I’m using these skills to critically assess the ways that we attempt to measure so-called “cultures of responsibility” in the life sciences. This is also the focus of a design jam later today, which I encourage you to attend if you’re interested. And even if you can’t, feel free to find me today or email me, I’d love to chat.
Let me start by contrasting two recent news stories from the world of life science policy that highlight some of the potential risks and benefits of life-science research.
About a month ago, the National Academies of Sciences, Engineering, and Medicine released a new report on "Safeguarding the Bioeconomy", defining the bioeconomy as "economic activity that is driven by research and innovation in the life sciences and biotechnology, and that is enabled by technological advances in engineering and in computing and information sciences." The current US bioeconomy is valued at close to $1 trillion, and is expected to continue to grow rapidly and to make profound contributions in fields like healthcare, agriculture, energy, and industrial production. These are some of the benefits that we can hope for from continued life-science research.
But as many of you are well aware, life-science research is “dual-use”, meaning in this context that the same knowledge and tools that can be used to create massive benefits can also be used to create massive harms. Life scientists need to grapple with safety concerns around lab accidents, and they also need to secure their physical and virtual spaces against theft or misuse.
In addition, the knowledge that scientists are producing might itself constitute a risk. Publicly-available life-science research could enable people to create novel bioweapons using increasingly accessible tools and techniques. For example, the entire genome sequence of smallpox has been online since 1994. This knowledge could enable bad actors to reconstruct smallpox with the right tools, but it could also enable life scientists to more quickly develop better vaccines.
On this general topic, also about a month ago the National Science Advisory Board for Biosecurity, or NSABB, hosted its first publicized meeting since 2017, where they discussed some of the tensions between security and public transparency in life-science research.
So how can the benefits of dual-use research be preserved and the risks minimized? Here’s one common answer.
Back in 2007, the NSABB released a widely cited "Proposed Framework for the Oversight of Dual-Use Life Sciences Research". Quote:
"The NSABB strongly believes that one of the best ways to address concerns regarding dual use research is to raise awareness of dual use research issues and strengthen the culture of responsibility within the scientific community. The stakes are high for public health, national security and the vitality of the life sciences research enterprise.”
Many other groups in government and academia have come to similar conclusions, including multiple National Research Council reports. A culture of responsibility in the life sciences is widely seen as important for mitigating the risks and preserving the benefits of biotechnologies.
In fact, the US Department of Homeland Security has recently funded the Engineering Biology Research Consortium (EBRC) to run culture-of-responsibility trainings in life-science laboratories nationwide, a program the EBRC calls “Malice Analysis”, which they are presenting on later today.
But Malice Analysis currently has no assessment component. It could be more effective if the EBRC defined detailed learning outcomes for a program like Malice Analysis and gathered feedback on how well it is achieving those outcomes. So I want to raise the following questions:
How would we know whether a program like Malice Analysis is effective?
How do we observe a culture of responsibility in practice, or know when we have one?
What are some meaningful metrics that we could use to indicate the presence of a culture of responsibility, and ultimately to guide the development of programs and interventions?
These are the focal questions of my research at CISAC, and I get to work on them as part of a great team.
I’m working with Senior Research Scholar Megan Palmer and our incoming Program Manager Connor Hoffmann to develop metrics of cultures of responsibility using interviews, focus groups, and survey studies, and to validate aspects of those metrics using large-scale datasets of scientific practice.
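To give a concrete flavor of the survey-validation piece, here is a minimal sketch, in Python, of one standard psychometric check: Cronbach's alpha for the internal consistency of a multi-item scale. The function is generic, but the item responses below are purely hypothetical placeholders, not data from our actual instruments.

import numpy as np

def cronbach_alpha(items):
    # items: respondents-by-items matrix of Likert responses
    items = np.asarray(items, dtype=float)
    k = items.shape[1]                          # number of items in the scale
    item_vars = items.var(axis=0, ddof=1)       # variance of each item
    total_var = items.sum(axis=1).var(ddof=1)   # variance of respondents' summed scores
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Hypothetical responses: 5 respondents x 4 items on a 1-to-5 scale
responses = [[4, 5, 4, 4],
             [3, 3, 2, 3],
             [5, 5, 5, 4],
             [2, 3, 2, 2],
             [4, 4, 5, 4]]
print(round(cronbach_alpha(responses), 2))

An alpha well below roughly 0.7 would suggest the items are not tapping a single coherent construct and the scale needs revision; checks like this are part of what "validation" means in practice.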
We are fortunate to have access to life scientists through a partnership with the EBRC, and access to regulatory staff through a partnership with the Association for Biosafety and Biosecurity.
We also have access to the next generation of life-scientists through a partnership with the iGEM Foundation, which organizes an annual synthetic biology competition with over 5,000 students worldwide. These students do cutting-edge research every year and fill graduate programs and academic positions. iGEM has dedicated safety and social-responsibility elements woven into the way that projects are evaluated and awards are granted, and organizers are constantly confronted with novel risk concerns and developing novel response strategies. We see iGEM as a rich testbed for studying attitudes about risk, methods of assessing risk, and strategies for mitigating risk, and we are now in the process of analyzing about 15 years of iGEM participation data to understand more about how teams cultivate and enact social responsibility.
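As a purely illustrative sketch of the kind of question such longitudinal data can answer, here is a small Python example. The column names and values are invented for illustration; this is not the iGEM data schema or our actual analysis pipeline.

import pandas as pd

# Hypothetical rows: one per team per competition year (made-up placeholder data)
teams = pd.DataFrame({
    "year":                  [2008, 2008, 2015, 2015, 2022, 2022],
    "team":                  ["A", "B", "C", "D", "E", "F"],
    "safety_form_submitted": [False, True, True, True, True, True],
})

# Fraction of teams that completed a safety review, by year
safety_rate = (teams.groupby("year")["safety_form_submitted"]
                    .mean()
                    .rename("fraction_with_safety_review"))
print(safety_rate)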
As a starting point for this work, over the last several months I have been reviewing the literature on the design and assessment of programs to create a culture of responsibility. What I have found so far is that unfortunately, over a decade after the 2007 NSABB report, there’s not much assessment going on. Quoting from a 2018 review paper by Perkins et al. that looked at 326 articles on the subject:
"Of the many interventions that might be used to improve the culture of biosafety and biosecurity, educational and training interventions are among the most frequently employed or cited. Unfortunately, there has been little assessment of these interventions specifically directed at improving biosafety and biosecurity in laboratories.”
Most training programs that I have seen have no assessment component at all. And the assessments that do exist are often lacking in rigor. Let me show you a quick example.
A 2010 program by Minehata and colleagues sought to cultivate a culture of responsibility among medical students in Japan through a five-day course that has now been integrated into existing medical school syllabi.
The program was evaluated simply by asking participants whether they agreed that their "understanding was developed" on various topics, on a 1 to 5 scale. The average was unsurprisingly between a 4 and 5 on all topics, and almost exactly the same across topics. This isn't particularly useful information for a number of reasons:
The categories here aren’t specific enough for people to provide nuanced answers. For example, it might be hard to summarize your understanding of the “surrounding situation of scientists and scientific papers” with a single number.
The question is also effectively binary, asking only whether any understanding was developed at all, yet it provides a five-point scale.
Finally, and perhaps most importantly, it is subject to demand effects: the respondents may have been trying to please the person doing the assessment.
And remember, this is the exception. Most programs don’t even appear to do any assessment at all.
The lack of high-quality assessment also contributes to conceptual confusion around the goal or purpose of these programs: what exactly are educators trying to convey?
Is the goal to “raise awareness”, as Minehata and colleagues described their program? This goal implies a fairly low bar of merely alerting life scientists to the existence of various issues.
Is it to provide “training” or impart some definable knowledge or skills? If so, what knowledge or skills, and what evidence is there that these skills are both lacking and important?
Or is the goal to change "culture”, "norms”, or "engagement”? These terms are not interchangeable; they imply different unstated and overlapping theories about what will cause life scientists to actually change their behavior in ways that reduce risks.
Creating an assessment forces you to make your unstated theories more explicit, which perhaps suggests some reasons why assessments aren’t created more often. This is the conceptual landscape that we hope to organize as a step towards developing better metrics.
Summing up, the life-science research landscape has tremendous potential for both benefit and harm. Life scientists themselves are potentially well-positioned to help society navigate this landscape. We hope that our research can help to create the conditions for life scientists to be wise and responsible guides for us all. And again, if you’d like to be involved or learn more, feel free to reach out or come to our design jam. Thank you.