Deconstructing Regulatory Science


Originally published on The Regulatory Review. Reprinted with permission.

U.S. Environmental Protection Agency (EPA) Administrator Scott Pruitt recently opened another front in his battle to redirect the agency away from its mission to protect human health and the environment. This time, he cobbled together a proposed rule that would drastically change how science is considered during the regulatory process.

Opposition soon mobilized. In addition to the traditional forces of public interest groups and other private-sector watchdogs, the editors of the most prominent scientific journals in the country raised the alarm and nearly 1,000 scientists signed a letter opposing the proposal.

This essay offers some context for why scientists, who are typically loath to enter the regulatory fray, are so alarmed.

In normal times, when agencies must evaluate the scientific evidence that informs a significant policy decision about health or environmental hazards, they typically take four sequential steps:

  • First, they convene a group of respected scientists from within and sometimes from outside the agency that includes representatives of the disciplines needed to make the determination, such as neurologists, statisticians, pediatricians, and hydrogeologists.
  • Second, the scientists gather all the available scientific research and review it carefully, making informed judgments about which studies are more or less convincing.
  • Third, in a synthesis stage known as evaluating the "weight of the evidence," the scientists assess the evidence more holistically, writing up their findings and doing their best to be as clear and balanced as possible. The "weight of the evidence" approach embodies such basic, best scientific practices that when it is not followed, scientists question the integrity of the analysis.
  • Fourth, the agency typically subjects the scientific assessment and underlying work to some form of internal or external peer review, or both.

The gold standard for this process is set by the National Research Council (NRC) of the National Academy of Sciences (NAS), which provides scientific reviews for issues that can be highly controversial. But versions of the process can be found throughout the government, duplicated, expanded, and tailored to the missions of agencies making health and safety decisions. For example, an even more elaborate version is mandated by the Clean Air Act for EPA to use when setting national ambient air quality standards (NAAQS).

When a scientific review suggests that a more stringent health or safety standard is justified and the agency proceeds to tighten its requirements, affected industry and deregulatory stakeholders generally oppose the rule. One of their first lines of attack is to pick apart the foundational scientific analyses. Beyond scrutinizing the scientific work in legitimate ways, some stakeholders take the additional step of engaging in an approach we refer to here as "deconstruction" of the evidence. This deconstruction involves ends-oriented, illegitimate attacks that take apart the scientific analyses bit by bit in an effort to discredit and alter the results. Deconstruction is accomplished through myriad techniques, including plucking individual, particularly unwelcome studies out of the mix, undermining the validity of other studies for nonscientific reasons, encouraging the reconsideration of raw data using unreliable models, and adding research engineered to confound a finding of harm.

The recent Pruitt proposal endeavors to sweep these deconstruction strategies all together in a single rule and apply them on a grand scale to every important science-based decision EPA might make. Three features of the proposed rule are particularly noteworthy.

First, the rule proposes an exclusionary test that eliminates individual studies—no matter how well-regarded by scientists—based solely on whether the data are transparent. Specifically, the proposed rule states that EPA can only include research in its assessments if the underlying "dose response data and models" are "publicly available in a manner sufficient for independent verification." Both the meaning of the test itself and the decision to exempt a particular study from the requirement are explicitly left to the discretion of the Administrator to apply on a "case-by-case" basis. Might Administrator Pruitt thus exempt from the data transparency requirements all studies sponsored by a regulated industry if those studies are stamped trade secret—regardless of whether the trade secret claim is justified—during EPA's review of an industry's application for a license or permit? No one knows for sure, because every implementation step affords the Administrator broad discretion to decide which studies are in and which are out.

Second, the proposal would require EPA not only to prepare its own risk assessment models for predicting adverse health or environmental effects, but also to consider an unlimited series of models suggested by stakeholders. The proposal presents a mind-numbing list of these models that EPA must explicitly consider, such as a "broad class of parametric dose-response or concentration-response models; a robust set of potential confounding variables; nonparametric models that incorporate fewer assumptions; various threshold models across the dose or exposure range; and models that investigate factors that might account for spatial heterogeneity."

In a world where money is no object and time has no consequence, this approach might seem acceptable. But delaying pollution controls for years has concrete, measurable results, causing preventable illness, disease, and death. Because EPA does not have anything close to the resources necessary to review every use of a model to assess the toxicity of common chemicals, well-financed stakeholders are again in the driver's seat. They can request that an almost infinite number of models be applied, or they can run the models themselves, inundating EPA with a surfeit of conflicting information.

Finally, the proposal applies only to "pivotal regulatory science," which consists of "specific scientific studies or analyses that drive the requirements" or "quantitative analysis of EPA final significant regulatory decisions" or both. The practical challenges involved in actually deciding whether science is pivotal are daunting and inherently problematic. Either staff scientists must decide which of the studies, assessments, and models are "pivotal"—a determination that entails significant policy judgment—or policy officials will be forced to pick through technical analyses in search of the studies that appear to be the most important.

Both possibilities collapse any meaningful distinction between science and policy. This outcome is made all the worse by the parallel requirements that this "pivotal" science be peer-reviewed and that peer reviewers be directed to evaluate the strength of EPA's assumptions—many of which stem not from science but from statutory and policy criteria delegated directly to the Administrator.

Because the language of the proposed rule is vague and the discretion vast, the proposal could result in a number of destructive changes in the way regulatory science is developed and considered. We highlight three below:

Turning "Best Science" on Its Head. As explained above, over the last 50 years, EPA has developed increasingly elaborate processes for engaging scientists in assessing the scientific research bearing on policy questions. The leading example is the synthesis of hundreds of new studies that inform the air quality standards, accomplished through careful weight-of-the-evidence analysis and peer review. Best scientific practices inform the weighting and analysis.

A National Academies Press flow chart for toxicity assessments provides an illustration of how this deliberative process should work. The Pruitt proposed rule, however, would equip the Administrator and his trusted political appointees with what can best be described as a sniper rifle to shoot out studies for any of several reasons that are now factored into the "weight of the evidence" process. The threshold exclusion criterion for "best" science is a political conception of data transparency, a consideration not even referenced in the NAS flow chart and one that is inconsistent with scientists' own conventions.

Open Season for Data Dredging. Under the proposal, EPA would be required not only to prepare its own models with multiple assumptions, but also to "give explicit consideration" to a long list of models that could be prepared by outside stakeholders. If the agency ignores these models, it could be sued for making an arbitrary and capricious decision that disregarded crucial evidence. Whether or not such suits prevail, they could push the agency to wade through excessive data, further delaying final decisions.

This requirement not only gives a nod to data dredging, but also legally facilitates it. Data dredging is the technique of using multiple models and statistical tests to churn through data sets until they produce favorable outcomes. It is the antithesis of "good science," and it contributes to the worrisome "reproducibility crisis." By including this requirement, the proposed rule appears to take a page right out of the special interests' playbook. Well-financed stakeholders would have even greater incentives to engage in data dredging: They have ready access to data and would be buoyed by the fact that EPA must consider every one of their ends-oriented models.
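To see the mechanics, consider the short Python sketch below. It is our own hypothetical illustration, not drawn from the proposal or from any EPA analysis; the sample size, the number of candidate tests, and the 0.05 cutoff are arbitrary. One simulated health outcome and 100 candidate predictors are generated as pure noise, yet simply running enough tests reliably yields a handful of apparently "significant" associations, and the single "best" result looks far more convincing than it should.

    # Hypothetical illustration of data dredging: every variable below is pure
    # noise, so no real association exists, yet testing enough specifications
    # manufactures "significant" results by chance alone.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)

    n_subjects = 200
    n_candidate_tests = 100  # e.g., different covariates, subgroups, or model forms

    outcome = rng.normal(size=n_subjects)                          # simulated health outcome
    predictors = rng.normal(size=(n_candidate_tests, n_subjects))  # simulated exposures

    # Run one correlation test per candidate specification and keep the p-values.
    p_values = [stats.pearsonr(x, outcome)[1] for x in predictors]

    best_p = min(p_values)
    n_significant = sum(p < 0.05 for p in p_values)

    print(f"Smallest p-value across {n_candidate_tests} tests: {best_p:.4f}")
    print(f"Tests 'significant' at p < 0.05: {n_significant}")
    # With roughly 100 looks at noise, about 5 tests cross p < 0.05 by chance,
    # and reporting only the "best" one makes the noise look like evidence.

The point of the sketch is not that any particular model is illegitimate, but that a stakeholder free to submit an unlimited number of specifications can always find one that appears to undercut a finding of harm.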

Retroactive Culling of Unwelcome Studies. The proposal could even be read to require EPA to revisit research that was used to support important public health rules if it does not meet EPA's amorphous standards for data transparency. For example, highly acclaimed studies on lead toxicity—like Herbert Needleman's landmark paper and some of its progeny—could be excluded from ongoing risk assessments if those older studies do not meet the proposal's requirement for data transparency because, for example, 30-year-old records were destroyed. Foundational studies are relevant any time a standard for toxic exposure is updated, and the update of EPA's standard for lead in drinking water is long overdue.

A second potential target is the 1993 "Six Cities" prospective cohort epidemiological study that demonstrated a clear association between inhalation of fine particulate matter and fatal respiratory illness. To meet the proposal's requirements, researchers would need either to expend extensive time and effort redacting private information so that the data would be "de-identified," or to run the risk that such "de-identification" may not be effective because well-funded efforts to discredit the study could unveil subjects' identities. If anything, Six Cities was optimistic from a public health perspective, and subsequent studies have supported and expanded its findings about the damage caused by fine particulate matter, which is the subject of continuous review for more stringent controls under the Clean Air Act. Excluding Six Cities from further consideration could compel EPA to repeat critical research before it implements further controls.

As we watch the flurry of activity sparked by the Pruitt proposal, we must remember that this initiative is only one of several efforts now underway to undermine the rigorous use of science to implement the mandates of the nation's environmental protection laws. Other examples include the overhaul of the NAAQS process and the distortion of membership criteria for science advisory boards and federal advisory committees. The next critical step is to put these and other individual initiatives together and track the larger sea change they represent in the use of science for policy. Without assessing this bigger picture, there is a very real risk of focusing too much attention on peripheral brush fires while the entire forest burns down.

