
New EPA White Paper on Probabilistic Risk Assessment

Responsive Government

Earlier this month, EPA released for public comment a new white paper on probabilistic risk assessment, marking the Obama Administration’s first major foray into the contentious debate over EPA’s evolving risk assessment methods. Back in May, EPA Administrator Lisa Jackson announced changes to the way the Office of Research and Development (ORD) will update risk assessments for the Integrated Risk Information System (IRIS) database, but that announcement was made without any real public input, and it affected only the inner workings of one program office (albeit an important one). The public comment period on the new white paper is the first opportunity for the various stakeholders who usually weigh in on EPA’s risk assessment policies to have some say in the new administration’s approach.

The new white paper, Using Probabilistic Methods to Enhance the Role of Risk Analysis in Decision Making, focuses on one of the fundamental problems in regulatory risk assessment – how should risk assessors and risk managers address the uncertainty and variability intrinsic to the risk assessment process?

The most straightforward way to answer that question, and EPA’s approach in many situations, is to use default assumptions. When the pesticides program staff is working on setting a limit for pesticide residue on apples, they can use a standard assumption about the number of apples a person eats. Or when ORD staff updates IRIS profiles, they can assume a linear dose-response relationship for a suspected carcinogen’s toxicity. But as scientific knowledge about certain parameters and models used in risk assessments grows, default assumptions might legitimately be replaced by data collected in the real world. Recognizing that every parameter and model used in a risk assessment has some inherent level of uncertainty, and that variability in the population can have a significant impact on risk determinations, risk assessors can use probabilistic data to replace point estimates of specific parameters or generic model assumptions.
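To make the contrast concrete, here is a minimal sketch (in Python, using entirely hypothetical numbers and distributions, not EPA’s actual models or data) of the difference between a point-estimate calculation and a probabilistic one for a simple exposure equation:

```python
import numpy as np

rng = np.random.default_rng(42)

# --- Default-assumption (point-estimate) approach ---
# Every input is a single conservative number (all values here are made up).
consumption_kg = 0.3       # assume everyone eats a lot of apples per day
residue_mg_per_kg = 1.0    # assume residue at the maximum allowed level
body_weight_kg = 60.0

point_dose = consumption_kg * residue_mg_per_kg / body_weight_kg
print(f"Point-estimate dose:  {point_dose:.4f} mg/kg-day")

# --- Probabilistic approach ---
# The same inputs become distributions standing in for survey and monitoring
# data (the shapes and parameters are illustrative only), and the output is a
# distribution of doses rather than a single number.
n = 100_000
consumption = rng.lognormal(mean=np.log(0.1), sigma=0.8, size=n)  # varies by person
residue = rng.beta(a=2, b=8, size=n)                              # most samples far below the cap
body_weight = np.clip(rng.normal(loc=70, scale=15, size=n), 30, None)

doses = consumption * residue / body_weight

print(f"Median dose:          {np.median(doses):.4f} mg/kg-day")
print(f"95th percentile dose: {np.percentile(doses, 95):.4f} mg/kg-day")
```

The point is not the particular numbers, which are invented, but the shape of the output: instead of one conservative value, the assessor gets an entire distribution whose tails and spread can inform the regulatory decision.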

Going back to the pesticide residue example: prior to 1998, EPA assumed that 100% of a crop with registered uses of a pesticide was treated with that pesticide, that all of the crop reaching grocery store shelves carried residues at the maximum level allowed under the law, and that the relevant population ate the crop often (at the 95th percentile of consumption). Since then, EPA has begun using probabilistic methods to replace those assumptions with real-world data. Instead of assuming that everyone eats a lot of a specific crop, EPA draws on consumption data from USDA’s Continuing Survey of Food Intakes by Individuals (CSFII). Instead of assuming that every piece of fruit carries the maximum allowable pesticide residue, EPA collects data from “crop field trials, USDA’s Pesticide Data Program (PDP) data, Food and Drug Administration (FDA) monitoring data, or market basket surveys conducted by the registrants.” All of these data are then run through a risk model that calculates not only the overall population’s risk but also the risks to various subpopulations (e.g., infants, children between the ages of 6 and 12). (See Case Study 4, pp. 58-59, in the white paper.)
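As a rough illustration of that last step (with made-up inputs rather than actual CSFII, PDP, or FDA data, and a deliberately simplified exposure model), breaking risk out by subpopulation amounts to simulating exposures for each group and reading off the high-end percentiles:

```python
import numpy as np

rng = np.random.default_rng(7)

# Hypothetical consumption distributions (kg of food per kg body weight per day)
# for a few subpopulations; the parameters are invented for illustration.
groups = {
    "infants":   {"n": 20_000, "log_mean": np.log(0.004),  "log_sd": 0.9},
    "ages 6-12": {"n": 20_000, "log_mean": np.log(0.003),  "log_sd": 0.8},
    "adults":    {"n": 60_000, "log_mean": np.log(0.0015), "log_sd": 0.7},
}

for name, g in groups.items():
    consumption = rng.lognormal(mean=g["log_mean"], sigma=g["log_sd"], size=g["n"])
    residue = rng.beta(a=2, b=8, size=g["n"])   # mg/kg on the food actually eaten
    dose = consumption * residue                # mg per kg body weight per day
    p95, p999 = np.percentile(dose, [95, 99.9])
    print(f"{name:>10}: 95th pct = {p95:.5f}, 99.9th pct = {p999:.5f} mg/kg-day")
```

A real assessment layers far more onto this skeleton (residue decline, processing factors, correlated intakes), but the output format is the same: high-end exposure estimates for each subpopulation rather than a single worst-case number for everyone.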

Proponents of using probabilistic methods to address uncertainty and variability in risk assessment argue that these methods produce a “fuller characterization of risk,” that they can help identify vulnerable populations, and that they can highlight where additional data would most improve a risk assessment. But is it worth the time and effort? Collecting, validating, and analyzing all of the data needed to replace default assumptions with probabilistic models takes time and money, and it risks dragging the agency into a regulatory quagmire. Going back to the pesticide residue example, are market basket surveys conducted by pesticide manufacturers reliable sources for estimating residue levels? Or is it better to assume maximum allowable residues, avoid the disputes over data reliability, and move on to the next decision? The choice between probabilistic methods and default assumptions for filling data gaps is as much a policy decision as a scientific one.

Unfortunately, despite its useful background on what probabilistic methods are, what they can do, and how they work, EPA’s white paper does a poor job of describing the time, money, and other resources needed to produce useful information with those methods. The paper includes 16 case studies carefully chosen to show the broad range of probabilistic methods EPA has used in recent years to add detail to various risk assessments. The case studies neatly show that probabilistic methods can lead to more stringent regulations, to less stringent standards, or to no change at all. What they lack is any quantitative evidence about the resources each analysis consumed.

The white paper recommends that EPA improve its internal capacity for utilizing probabilistic risk assessment methods through training, knowledge sharing, and the development of general policies and guidance. This last point is one worth echoing. It is immensely important that EPA establish standard procedures that will help risk assessors and risk managers determine which tools to use and when to use them, so that the risk assessment process does not become so bogged down in data collection and analysis that the ultimate regulatory decisions needed to protect human health and the environment are unreasonably delayed.

(EPA has extended the comment period for the white paper until September 16, under docket number EPA-HQ-ORD-2009-0645.)

