by Sidney Shapiro, July 2009
How should the government ensure the quality of scientific and statistical information that agencies disseminate?
Federal agencies are increasingly seeking to fulfill their statutory missions by disseminating information, particularly through the Internet, about the entities, products, and topics within their purview. Programs like the Toxics Release Inventory (TRI), an annual, national compilation of chemical releases issued by the Environmental Protection Agency (EPA), create political and economic pressure on firms to improve their performance, such as by reducing toxic exposures beyond the amounts required by existing regulations. Other programs empower individuals to alter their market activity in a manner that reduces their risk. The crashworthiness ratings issued by the National Highway Traffic Safety Administration (NHTSA) illustrate this potential. More broadly, information disclosure satisfies the public’s right to know about potential hazards.
In 2001, Congress passed the Information Quality Act (IQA), a two-paragraph provision buried in an appropriations bill that requires agencies to ensure and maximize the quality of information that they disseminate and to establish an error correction process. Congress also gave the Office of Management and Budget (OMB) the power to issue guidelines to agencies about how to implement the requirement. Rep. Jo Ann Emerson (R-MO) sponsored the rider without legislative hearings, committee review, or debate. Representative Emerson reportedly acted at the behest of Jim Tozzi, a former OMB official who runs the corporate-sponsored Center for Regulatory Effectiveness. As far as can be determined, few, if any, other members of Congress knew of the appropriations rider at the time they voted for it. In February 2002, OMB issued instructions telling agencies how to implement the legislation. After seeking public input, agencies adopted permanent procedures to implement the rider in October 2002.
The IQA initially appeared to offer regulated entities a new method of monkey-wrenching the regulatory process. While no one would have the government rely on unreliable or poor-quality information, there is an important distinction between uncertain science and poor-quality data. For example, an excellent study of the adverse health effects of heightened blood lead levels may be incomplete in the sense that it does not definitively indicate the hazards posed by public exposure to lead. This scientific uncertainty results from the fact that the rates of transfer between airborne lead and blood lead are poorly understood by scientists. The way in which OMB implemented the IQA seemed to permit regulated entities to attack such scientific uncertainty as a data quality problem under the IQA.
Although some companies and industry-related groups tried this tactic, the IQA has not turned into the anti-regulatory tool that many originally anticipated. For one thing, the Bush Administration was so friendly to industry preferences that the IQA turned out to be largely unnecessary for companies or trade associations to gain their objectives. Further, the courts have held that an agency’s disposition of an IQA complaint is not judicially reviewable. This means a company or industry group cannot ask the courts to overturn an agency when it rejects a complaint that data is not reliable. Thus, the Obama administration can reject any such complaints, assuming it finds them unwarranted, without fear of being overruled by the courts.
What’s At Stake?
Should OMB be responsible for establishing information quality procedures for federal agencies?
Although the IQA did not unleash a torrent of data quality complaints, the impact of the legislation has not been entirely benign. OMB used the rider as authority to adopt peer review guidelines in 2004 and to propose risk assessment guidelines in 2007. The peer review guidelines were rewritten by OMB after it received extensive criticism from the scientific community. Although there was less opposition to the final version of the guidelines, the guidelines remain problematic. The proposed risk assessment guidelines were withdrawn after they were criticized by a committee of the National Research Council of the National Academy of Sciences as too flawed to be repairable.
The scientific opposition to OMB’s initial peer review guidelines and to its risk assessment guidelines may reflect its lack of expertise. Almost all OMB employees are economists, accountants, and lawyers, but this did not stop the agency from attempting to write guidelines for the conduct of peer review and risk assessment. More likely, the defects in these efforts sprang from the Bush administration’s efforts to use these procedures to pursue a political agenda—to make it more difficult to issue regulations and to justify this effort by conflating scientific uncertainty and inaccurate science.
This conflation has been the hallmark of industry’s “sound science” campaign. Regulatory opponents seek to exploit scientific uncertainty by arguing that regulatory action is not based on “sound science,” but their real objection is with the policy choice made by Congress not to wait for more definitive information about the extent of a risk before a regulatory agency acts to reduce that risk. As Stanton Glantz and Elisa Ong, two health researchers, explain: “the ‘sound science’ movement . . . is not simply an effort from within the profession to improve the quality of scientific discourse. This movement reflects sophisticated public relations campaigns controlled by industry executives and lawyers to manipulate the standards of proof for the corporate interests of their clients.” The tobacco industry invented the “sound science” strategy as part of its long effort to stave off government regulation, and it has become a staple of anti-regulatory reformers. (See CPR Perspective on Clean Science)
The National Research Council of the National Academy of Sciences has indicated that “there is room for improvement in risk assessment practices” in the federal government, but the committee also recommended that “OMB should limit its efforts to stating goals and general principles of risk assessment.” The committee based its recommendations on the lack of expertise at OMB to improve risk assessment in the government: “The details should be left to the agencies or expert committees appointed by the agencies, wherein lies the depth of expertise to address the issues relevant to the specific types of risk assessments.”
More broadly, unless the goal is politicization, it makes no sense for White House staff to spend hours trying to learn what agency experts already know about agency information practices. It may make sense, however, for the White House to create an agenda for the improvement of information practices and to coordinate the fulfillment of that agenda.
The White House’s role in superintending information quality should be limited in the following ways. First, the White House should not assume that there is some problem with an agency’s use of scientific or statistical information, or that, if such a problem exists, additional guidelines are the solution. OMB entirely ignored this step when it adopted its peer review guidelines and proposed its risk assessment guidelines. The same is true for Congress when it adopted the so-called Information Quality Act as an appropriations rider without any hearings or investigation concerning whether legislative action was necessary. This step is important because requiring agencies to adopt new or revised procedures regarding scientific and statistical information slows down regulatory decision-making. Unless there is an offsetting benefit in terms of higher-quality information, the adoption of procedures is an anti-regulatory reform.
Decisions on the Table
-- What role should OMB play in ensuring the quality of scientific and statistical information?
-- What role should agencies play in ensuring the quality of scientific and statistical information?
Second, if the White House seeks to harmonize how agencies compile and disseminate scientific and statistical information, it should require agencies that use common types of information to meet and jointly agree on common ways of improving information quality. The new methods should be written by scientific experts at their respective agencies, and they should be subject to peer review by scientific advisory committees from which agencies routinely seek expert advice. The White House’s role should be limited to convening inter-agency committees and to ensuring that agencies have non-arbitrary reasons for maintaining existing approaches or for making changes.
Third, the White House should require that interagency or individual agency efforts to improve the quality of information be transparent and public. Moreover, given the importance of ensuring appropriate information quality policies, the work product of agencies or any inter-agency task forces should be made available for public comment.