Interview reprinted from Science Progress.
It’s a sordid story that’s been repeated too many times over many decades. Independent scientists identify a chemical or environmental hazard that threatens public health. Industry-funded researchers question the results of these studies and call for more research, delaying regulatory action that will protect citizens. The classic case is the long war waged by the tobacco companies. An internal memo from 1969 explains the aims of an industry that mastered the art of manufacturing uncertainty: “Doubt is our product since it is the best means of competing with the ‘body of fact’ that exists in the minds of the general public. It is also the means of establishing a controversy.”
Debating the science, David Michaels points out in his new book, Doubt Is Their Product: How Industry’s Assault on Science Threatens Your Health, is a lot easier for these companies than debating policy. He traces the same “tricks of the trade” that industry-funded scientists used to delay action to curb the health risks posed by tobacco, asbestos, beryllium, dangerous pharmaceuticals, diacetyl (which causes “popcorn lung”), and man-made climate change. Scientists, policymakers, journalists, and citizens deserve independent science that protects public health, says Michaels, an epidemiologist and the director of the Project on Scientific Knowledge and Public Policy at the George Washington University School of Public Health. This interview transcript has been edited.
Andrew Plemmons Pratt, Science Progress: When did you know that you needed to write this book, and why?
David Michaels: During the Clinton administration, I served as assistant secretary of energy for Environment, Safety, and Health. I was responsible for the health of workers, the communities, and the environment around the nation’s nuclear weapons factories, some of the most polluted and dangerous places in the United States. One of the hazards I had to address was beryllium, an extremely toxic lightweight metal that’s used in the manufacture of our nuclear weapons. We had workers who were getting chronic beryllium disease, which is a terrible, often deadly disease, after extremely low exposures. We had an accountant who got sick after working for a very brief time in a building in which beryllium had been used several years earlier.
In the Clinton administration, under the leadership of then-Secretary of Energy, now-New Mexico Governor Bill Richardson, we issued a beryllium protection rule that’s ten times stronger than the one that’s used by the Occupational Safety and Health Administration for private sector workers. As we were developing this rule, the beryllium industry submitted studies that I saw as attempts to obscure the issue. Instead of acknowledging that the old standard wasn’t adequately protective, they focused on what we didn’t know. They clearly wanted to delay our more protective regulation. Of course, we didn’t let their reports stop us and we issued a strong regulation.
But I continued to be interested in beryllium after leaving the Clinton administration. So if there was an “aha” moment, it was probably when, searching through the tobacco archives—millions of pages of previously secret documents put up on the web as a result of the tobacco lawsuits—I discovered that the same scientists who had manufactured uncertainty for the beryllium industry were doing the same thing for the cigarette manufacturers.
SP: That’s one of the key themes that you write about, the “manufacture of uncertainty.” Can you talk a little bit about the different kinds of methods that industries use to create this uncertainty where it did not exist before?
Michaels: In the book I call them “tricks of the trade”—techniques a scientist can use if you know how. These are tricks that turn positive studies into negative ones, or that take one positive study and do a literature review that buries it in what is essentially a whole mass of garbage, so it looks like there is nothing there.
Comedian Lily Tomlin said, “No matter how cynical you become, it’s never enough to keep up.” I see these same techniques and these same approaches across the scientific literature, and not just in terms of chemical hazards. They’re used by the deniers of global warming, the people who are essentially fueled by the big oil and coal companies trying to say that humans aren’t causing global warming. After finding the initial information, I dove into the literature, court records, and dockets where corporations and trades associations file comments with agencies, and I found the same names, the same tactics—the same alchemy in campaigns run by all these different industries.
I found some very powerful smoking guns: the sales pitches made by these product defense firms. They boasted about how they were able to delay regulation, what they were able to do, how they needed to change scientific studies around.
I found one for example around Freon, which we now know causes a hole in the ozone layer. The Hill & Knowlton company was saying, “We were able to essentially delay regulation for a couple of years when we were working for DuPont on this.” So what I’ve done is put all these smoking guns up on our website, which is www.defendingscience.org, so anyone can download them and read exactly how these people work.
SP: How can scientists, policy makers, and journalists learn to spot these tricks?
Michaels: It’s very tough, and you have to be somewhat of an expert in the field. I see things in epidemiology immediately, but I have more trouble with the toxicology. Given that it’s difficult to do, and a layperson can’t just pick it up, it’s very important, first of all, that scientists who are in the field and who see these problems write letters about them and put critiques up on the web. But I think the other thing that is more fundamental is to have a screen for conflicts of interest.
We know the basic problem is that scientists who are paid to find a certain result will find that result. That’s certainly what we see in these studies over and over again: that scientists who work for these companies that actually manufacture uncertainty never find a result the sponsor doesn’t want.
So one thing that we should be demanding on the part of our scientific journals, and on the part of the regulatory agencies, is to ask, “Who paid for this study, and under what contract were these studies done?” If they were done with a secret contract, as is often the case, or a contract that says the sponsor has the right to see the results before they’re published, those studies should immediately be in question and they should be looked at much more carefully.
SP: And there is research on this “funding effect” demonstrating a link between who’s paying for the research and the results of it, correct?
Michaels: That’s true, and we see that across the board. We saw it in tobacco; we see it all over the pharmaceutical literature; most recently in bisphenol A, a chemical used in plastic baby bottles. There are well over a hundred studies that have been done—a small handful were paid for by the producers, the chemical companies making bisphenol A, and all of those studies show no effect. But 90 percent of the studies done by scientists independent of these corporations find an effect. It’s very powerful. We know that to really trust a study, it should be done independently.
SP: You write about how the sustained assault on scientific integrity under the Bush administration has actually demoralized a whole lot of scientists who are working for the federal government. Shortly after your book came out, the Union of Concerned Scientists released a survey saying that about 60 percent of the scientists responding at the EPA had personally experienced political interference with their work in the past five years. What is the toll of this interference and of this manufacturing of uncertainty on scientists, and how can we make sure that the government has good researchers in the future?
Michaels: It is a significant problem. As you said, the morale of government scientists has plummeted. Our best scientists chose to work in government because they see—or at least they saw—public service as a higher calling. They want to help in the nation’s efforts to improve human health, to protect the environment. Now many of them feel they can no longer make a contribution to the public good in their current job and they’re getting out or they’re just fading away; they’ve become marginalized. We have to reverse this trend by making public service an appealing career choice again.
I think we have to do this on two levels. The first and somewhat more obvious one is to set up structures and policies to ensure that government scientists will be respected and listened to—that they can publish their studies; that they can meet with other scientists; that they can get the training they need to keep up. We need, of course, the best scientists in the country. When one of our regulatory agencies has to go nose-to-nose with a multi-billion dollar chemical or pharmaceutical company, it needs to have the best scientists; it can’t have scientists who can no longer go to scientific meetings to keep up with their field. So we have to make the job of the government scientist one that good scientists want to go into.
But on a more fundamental level, we have to change the national view of government work. Idealistic young scientists should feel the call to public service. In the past, Presidents Franklin Roosevelt and John F. Kennedy called young professionals and activists to serve their country. Our next president needs to make public service a desirable career goal—not just Americorps or the Peace Corps, but signing up with the EPA or the Justice Department with the understanding that people will work there for multiple years. We should see this as an important part of any homeland security program, because it is our future and our children’s future.
SP: How can people who are working in scientific fields, or are writing about them, or who are dealing with policy present these ideas without getting into the complexities of the science or of the regulatory policy, which can confuse people?
Michaels: It’s very tough. But one thing that I’ve seen people start to do is come together as scientists to look at the scientific literature—to step back for a minute and say, “Here is a chemical that is in our environment; we’re very interested in this; there are studies being produced by government scientists, by university scientists, by corporate scientists.” There is a group of us in any university town—let’s come together and look at the literature almost as a journal club. Let’s get together, see what we think of the interpretations done by these different groups of scientists, and start writing ourselves: put up a blog post and weigh in on the discussion. Scientists can do that with a lot of credibility, and they bring a lot of expertise, because there are so many toxic chemicals out there, and the producers of these chemicals can hire scientists to create fictions about whether they’re dangerous or safe. But there are independent scientists who don’t take those issues on, because they’re often not funded to do so. Just as volunteers, I think that’s something we could do—little chemical investigation groups.
SP: Let me ask another question about one of the “tricks of the trade” you point out that seems pervasive, which is reanalysis of existing data in order to come to different conclusions from those at which a particular study might have arrived. Part of the problem here has to do with access to raw data from scientific studies, and the FDA is one particular agency that does generally have access to raw data from studies when they’re making decisions about pharmaceuticals. Can you talk about why access to raw data is so important in combating these nefarious reanalyses?
Michaels: It’s a fascinating issue. There’s an unequal playing field. There was a law passed by Congress in the late 1990s called the Shelby Amendment that gives public access to any study done by the government or paid for by the government and done by, say, university scientists. What has happened over and over again is corporations have gotten this raw data and paid mercenary scientists to reanalyze the data and to essentially make positive results go away. If you have raw data, you can do that.
Now the FDA also understands that the interpretations done by scientists paid by drug companies can’t absolutely be trusted; they want to do the analyses themselves. So the FDA says, “Give us the raw data when you’re looking at a drug’s efficacy or safety, and we’ll analyze it ourselves.” But for other agencies—the Occupational Safety and Health Administration or parts of the EPA, for example—companies submit studies and the agencies have to rely on those interpretations done by, essentially, the corporate scientists. Those corporate scientists, or mercenary scientists in some cases, have access to the raw data for the work paid for by the universities and by the government. So you have this unequal playing field where the raw data of some studies are available, yet important studies that are produced by corporations can’t be reanalyzed.
Given that you have this unequal playing field, we think everyone’s data should be made available in a way that any researcher should be able to look at those data, but under certain conditions. They have to be able to say not that, “We’re going to go fishing and figure out how we can manipulate these studies to make a positive result disappear,” but rather, “We’re going to set our hypotheses out in advance; we’re going to say this is how it will work—in consultation with the people who wrote the studies.” And then everybody can essentially play equally in the same field.
SP: Well let me ask another question about federal rules that have an impact on this playing field. You devote an entire chapter of your book to what you call “the most important Supreme Court case that you’ve never heard of,” which is Daubert v. Merrell Dow Pharmaceuticals, which gives trial judges the responsibility to determine the quality of scientific testimony from expert witnesses. Could you explain why this is a problem when we’re talking about public health?
Michaels: It’s an interesting problem because the Supreme Court ruled in 1993 that judges must decide that the expert testimony to be given in court cases is “relevant and reliable.” Now most judges have no scientific training and they’re very much influenced by the attorneys—in many cases the corporate attorneys—who weigh in saying, “Don’t believe this study. This is junk science. Don’t let it in.” And so in some cases what judges are doing is saying, “Well this evidence doesn’t look like it makes any sense to me. I’m not going to let it in.”
That ends the court case, and the people who are suing a manufacturer of a dangerous product or a polluter essentially lose their case at that point. Litigation and court cases are a very important part of our public health protection system, especially now in the Bush years when the regulatory agencies have been handcuffed. There are numerous examples I talk about in the book of hazards that are under control only because of lawsuits, since the regulatory agencies really aren’t doing much. So you have this system now where the judges, in many cases very sympathetic to corporations, are using Daubert decisions as an opportunity to stop evidence from reaching juries, who would otherwise decide whether or not an injury was caused by a toxic substance.
Now in some cases, they’re very well meaning. But I think that it would be much more reasonable to let a jury decide whether or not there is evidence that some toxic exposure caused an illness. The judges don’t have a particular expertise in this, and it’s too easy for them to throw evidence out. So I write about that quite a bit. It’s not a very well-known decision, but it’s a very important one. And I think it does have a big impact on public health.
SP: What do policy makers, journalists, or other readers most need to understand about the way that industry may manufacture uncertainty in dealing with science and public health?
Michaels: Well I think they have to see that this strategy—which tobacco came up with and is now so widely used—is something we have to expect. Whenever independent scientists produce new information that shows a link between a toxic chemical, or something else, and human health or the environment, you can expect the immediate response saying, “We’ve looked at this carefully; it’s not there; more research is needed.” You’ll hear the call for “sound science,” when in fact I think they’re looking for something that “sounds like science” but isn’t. I think what has to be done is that the public, the press, and legislators should demand independent evaluations of studies. You can’t trust interpretations or studies done by sponsors who have an interest in the outcome, and that’s the bottom line.