Science is being turned against itself. For decades, its twin ideals of transparency and rigor have been weaponized by those who disagree with results produced by the scientific method. Under the Trump administration, that fight has ramped up again.
In a move ostensibly meant to reduce conflicts of interest, Environmental Protection Agency Administrator Scott Pruitt has removed a number of scientists from advisory panels and replaced some of them with representatives from industries that the agency regulates. Like many in the Trump administration, Pruitt has also cast doubt on the reliability of climate science. For instance, in an interview with CNBC, Pruitt said that “measuring with precision human activity on the climate is something very challenging to do.” Similarly, Trump’s pick to head NASA, an agency that oversees a large portion of the nation’s climate research, has insisted that research into human influence on climate lacks certainty, and he falsely claimed that “global temperatures stopped rising 10 years ago.” Kathleen Hartnett White, Trump’s nominee to head the White House Council on Environmental Quality, said in a Senate hearing last month that she thinks we “need to have more precise explanations of the human role and the natural role” in climate change.
The same entreaties crop up again and again: We need to root out conflicts. We need more precise evidence. What makes these arguments so powerful is that they sound quite similar to the points raised by proponents of a very different call for change that’s coming from within science. This other movement strives to produce more robust, reproducible findings. Despite having dissimilar goals, the two forces espouse principles that look surprisingly alike:
- Science needs to be transparent.
- Results and methods should be openly shared so that outside researchers can independently reproduce and validate them.
- The methods used to collect and analyze data should be rigorous and clear, and conclusions must be supported by evidence.
These are the arguments underlying an “open science” reform movement that was created, in part, as a response to a “reproducibility crisis” that has struck some fields of science.1 But they’re also used as talking points by politicians who are working to make it more difficult for the EPA and other federal agencies to use science in their regulatory decision-making, under the guise of basing policy on “sound science.” Science’s virtues are being wielded against it.
What distinguishes the two calls for transparency is intent: Whereas the “open science” movement aims to make science more reliable, reproducible and robust, proponents of “sound science” have historically worked to amplify uncertainty, create doubt and undermine scientific discoveries that threaten their interests.
“Our criticisms are founded in a confidence in science,” said Steven Goodman, co-director of the Meta-Research Innovation Center at Stanford and a proponent of open science. “That’s a fundamental difference — we’re critiquing science to make it better. Others are critiquing it to devalue the approach itself.”
Calls to base public policy on “sound science” seem unassailable if you don’t know the term’s history. The phrase was adopted by the tobacco industry in the 1990s to counteract mounting evidence linking secondhand smoke to cancer. A 1992 Environmental Protection Agency report identified secondhand smoke as a human carcinogen, and Philip Morris responded by launching an initiative to promote what it called “sound science.” In an internal memo, Philip Morris vice president of corporate affairs Ellen Merlo wrote that the program was designed to “discredit the EPA report,” “prevent states and cities, as well as businesses from passing smoking bans” and “proactively” pass legislation to help their cause.
The sound science tactic exploits a fundamental feature of the scientific process: Science does not produce absolute certainty. Contrary to how it’s sometimes represented to the public, science is not a magic wand that turns everything it touches to truth. Instead, it’s a process of uncertainty reduction, much like a game of 20 Questions. Any given study can rarely answer more than one question at a time, and each study usually raises a bunch of new questions in the process of answering old ones. “Science is a process rather than an answer,” said psychologist Alison Ledgerwood of the University of California, Davis. Every answer is provisional and subject to change in the face of new evidence. It’s not entirely correct to say that “this study proves this fact,” Ledgerwood said. “We should be talking instead about how science increases or decreases our confidence in something.”
The tobacco industry’s brilliant tactic was to turn this baked-in uncertainty against the scientific enterprise itself. While insisting that they merely wanted to ensure that public policy was based on sound science, tobacco companies defined the term so that no science could ever be sound enough. The only sound science was certain science, an impossible standard to achieve.
“Doubt is our product,” wrote one employee of the Brown & Williamson tobacco company in a 1969 internal memo. The note went on to say that doubt “is the best means of competing with the ‘body of fact’” and “establishing a controversy.” These strategies for undermining inconvenient science were so effective that they’ve served as a sort of playbook for industry interests ever since, said Stanford University science historian Robert Proctor.
The sound science push is no longer just Philip Morris sowing doubt about the links between cigarettes and cancer. It’s also a 1998 action plan by the American Petroleum Institute, Chevron and Exxon Mobil to “install uncertainty” about the link between greenhouse gas emissions and climate change. It’s the late-1990s effort by industry-funded groups to question the science the EPA was using to set air-quality standards for fine-particle pollution that the industry opposed. And then there was the more recent effort by Dow Chemical to insist on more scientific certainty before banning a pesticide that the EPA’s scientists had deemed risky to children. Now comes a move by the Trump administration’s EPA to repeal a 2015 rule on wetlands protection by disregarding particular studies. (To name just a few examples.)
Doubt merchants aren’t pushing for knowledge; they’re practicing what Proctor has dubbed “agnogenesis” — the intentional manufacture of ignorance. This ignorance isn’t simply the absence of knowing something; it’s a lack of comprehension deliberately created by agents who don’t want you to know, Proctor said.2
In the hands of doubt-makers, transparency becomes a rhetorical move. “It’s really difficult as a scientist or policy maker to make a stand against transparency and openness, because, well, who would be against it?” said Karen Levy, an information science researcher at Cornell University. But at the same time, “you can couch everything in the language of transparency and it becomes a powerful weapon.” For instance, when the EPA was preparing to set new limits on particulate pollution in the 1990s, industry groups pushed back against the research and demanded access to primary data (including records that researchers had promised participants would remain confidential) and a reanalysis of the evidence. Their calls succeeded, and a new analysis was performed. The reanalysis essentially confirmed the original conclusions, but the process of conducting it delayed the implementation of regulations and cost researchers time and money.
Delay is a time-tested strategy. “Gridlock is the greatest friend a global warming skeptic has,” said Marc Morano, a prominent critic of global warming research and the executive director of ClimateDepot.com, in the documentary “Merchants of Doubt” (based on the book by the same name). Morano’s site is a project of the Committee for a Constructive Tomorrow, which has received funding from the oil and gas industry. “We’re the negative force. We’re just trying to stop stuff.”
Some of these ploys are getting a fresh boost from Congress. The Data Quality Act (also known as the Information Quality Act) was reportedly written by an industry lobbyist and quietly passed as part of an appropriations bill in 2000. The act requires federal agencies to ensure the “quality, objectivity, utility, and integrity of information” that they disseminate, though it does little to define those terms. The law also provides a mechanism for citizens and groups to challenge information that they deem inaccurate, including science that they disagree with. “It was passed in this very quiet way with no explicit debate about it — that should tell you a lot about the real goals,” Levy said.
But what’s most telling about the Data Quality Act is how it’s been used, Levy said. A 2004 Washington Post analysis found that in the 20 months following its implementation, the act was repeatedly used by industry groups to push back against proposed regulations and bog down the decision-making process. Instead of treating transparency as a fundamental principle that applies to all science, these interests have wielded it as a weapon to attack particular findings that they would like to eradicate.
Now Congress is considering another way to legislate how science is used. The HONEST Act, a bill sponsored by Rep. Lamar Smith of Texas,3 is another example of what Levy calls a “Trojan horse” law that uses the language of transparency as a cover to achieve other political goals. Smith’s legislation would severely limit the kind of evidence the EPA could use for decision-making: Only studies whose raw data and computer code were publicly available could be considered.
That might sound perfectly reasonable, and in many cases it is, Goodman said. But sometimes there are good reasons why researchers can’t conform to these rules, like when the data contains confidential or sensitive medical information.4 Critics, including more than a dozen scientific organizations, argue that, in practice, the rules would prevent many studies from being considered in EPA reviews.5
It might seem like an easy task to sort good science from bad, but in reality it’s not so simple. “There’s a misplaced idea that we can definitively distinguish the good from the not-good science, but it’s all a matter of degree,” said Brian Nosek, executive director of the Center for Open Science. “There is no perfect study.” Requiring regulators to wait until they have (nonexistent) perfect evidence is essentially “a way of saying, ‘We don’t want to use evidence for our decision-making,’” Nosek said.
Most scientific controversies aren’t about science at all, and once the sides are drawn, more data is unlikely to bring opponents into agreement. Michael Carolan, who studies the sociology of technology and scientific knowledge at Colorado State University, explained in a 2008 paper why objective knowledge is not enough to resolve environmental controversies: “While these controversies may appear on the surface to rest on disputed questions of fact, beneath often reside differing positions of value; values that can give shape to differing understandings of what ‘the facts’ are.” What’s needed in these cases isn’t more or better science, but mechanisms to bring those hidden values to the forefront of the discussion so that they can be debated transparently. “As long as we continue down this unabashedly naive road about what science is, and what it is capable of doing, we will continue to fail to reach any sort of meaningful consensus on these matters,” Carolan wrote.
The dispute over tobacco was never about the science of cigarettes’ link to cancer. It was about whether companies have the right to sell dangerous products and, if so, what obligations they have to the consumers who purchase them. Similarly, the debate over climate change isn’t about whether the planet is warming, but about how much responsibility each country and person bears for stopping it. While researching her book “Merchants of Doubt,” science historian Naomi Oreskes found that some of the same people who were defending the tobacco industry as scientific experts were also receiving industry money to deny the role of human activity in global warming. What these issues had in common, she realized, was that they all involved the need for government action. “None of this is about the science. All of this is a political debate about the role of government,” she said in the documentary.
These controversies are really about values, not scientific facts, and acknowledging that would allow us to have more truthful and productive debates. What would that look like in practice? Instead of cherry-picking evidence to support a particular view (and insisting that the science points to a desired action), the various sides could lay out the values they are using to assess the evidence.
For instance, in Europe, many decisions are guided by the precautionary principle, which values caution in the face of uncertainty: When risks are unclear, the burden falls on industries to show that their products and processes are not harmful, rather than on the government to prove harm before it can regulate. By contrast, U.S. agencies tend to wait for strong evidence of harm before issuing regulations. Both approaches have critics, but the difference between them comes down to priorities: Is it better to exercise caution at the risk of burdening companies and perhaps the economy, or is it more important to avoid potential economic downsides even if it means that a harmful product or industrial process sometimes goes unregulated? In other words, under what circumstances do we agree to act on a risk? How certain do we need to be that the risk is real, how many people must be at risk, and how costly would it be to reduce that risk? Those are moral questions, not scientific ones, and openly identifying and discussing these kinds of judgment calls would lead to a more honest debate.
Science matters, and we need to do it as rigorously as possible. But science can’t tell us how risky is too risky to allow products like cigarettes or potentially harmful pesticides to be sold — those are value judgments that only humans can make.