In Michigan v. EPA, handed down two weeks ago, the Supreme Court waded into the decades-long debate over the use of cost-benefit analysis (CBA) in agency rulemaking. The decision struck down EPA’s limits on mercury emissions from power plants for the agency’s failure to consider costs, and so appears, superficially at least, like a win for the pro-CBA camp. Indeed, Professor Cass Sunstein of Harvard—President Obama’s former “regulatory czar” and one of CBA’s most prominent cheerleaders—has been crowing about the opinion, hailing it as “a rifle shot,” ringing in the arrival of “the Cost-Benefit State.”
But Sunstein’s celebration is a bit premature; his so-called “cost-benefit state” remains mostly in his imagination. In fact, there is good reason to believe that the Court remains quite skeptical of the particular brand of CBA that Professor Sunstein advocates. And that’s very good news for the rest of us.
The EPA issued its long-awaited cooling water rule yesterday and the score appears to be: Industry – home run; Fish – zero. Which is to say, it’s bad news not just for the fish but also for all of us who depend on the health of our aquatic ecosystems – which is to say, everyone.
This is the rule that governs the design standards for the massive cooling water intakes at power plants and other large industrial facilities that withdraw billions of gallons of water a day from our rivers, lakes and estuaries. In the process, they kill billions of fish and other aquatic organisms. Congress was aware of this problem when it passed the Clean Water Act in 1972 and so included language directing the EPA to require those structures to “reflect the best technology available [BTA] for minimizing adverse environmental impact.”
Since the Reagan Administration, federal agencies have been required by Executive Order to send their major rules to the White House’s Office of Information and Regulatory Affairs (OIRA) for review before releasing them to the public. OIRA review consists of, among other things, ensuring that agencies subject their rules to cost-benefit analysis to make sure the dollar value of their costs to industry exceeds the dollar value of the benefits they confer on the public.
It was no surprise under the Reagan administration – or more recently under the George W. Bush administration – that OIRA review served largely to delay and weaken rules. But you might be surprised to hear that the Obama administration’s record on OIRA delays has been significantly worse than the George W. Bush administration’s. A new report prepared by the Administrative Conference of the United States (ACUS) found that “in 2012, the average time for OIRA to complete reviews increased [from 51 days] to 79 days, and in the first half of 2013, the average review time was 140 days – nearly three times the average for the period from 1994 through 2011.”
The report went on to note that the number of rules languishing at OIRA for six months or a year or more has risen dramatically in the Obama administration. This is particularly disturbing, since Executive Order 12866, which governs OIRA review, sets a clear, mandatory 90-day deadline for review, with a one-time 30-day extension permitted in certain limited circumstances.
According to the report:
From 1994 through 2011, an average of fewer than 10 completed reviews per year (less than 2%) took more than six months; however, in the first half of 2013, 63 reviews (nearly 30%) took more than six months, and 27 (nearly 13%) took more than one year. Further, these statistics may understate the extent of the delays. According to senior employees in 11 departments and agencies (who were interviewed for this report anonymously and without indication of agency affiliation), OIRA has increasingly used “informal reviews” of rules prior to their formal submission [.]
At CPR, we’re glad to see ACUS focus on the important problem of OIRA delay, which we’ve commented on in the past. But many of us at CPR were disappointed to see that the report misses important causes of delay at OIRA and that many of the recommendations might continue or expand OIRA’s interference in agency rulemakings in ways that were never authorized by Executive Order 12866.
Yesterday, CPR President Rena Steinzor, Member Scholars Tom McGarity, Wendy Wagner, Sid Shapiro and Senior Analyst James Goodwin and I submitted comments highlighting some of the problems at OIRA that ACUS should seek to address in its report and subsequent list of recommendations.
It was 20 years ago this week that President Bill Clinton signed Executive Order 12866. That was a watershed of sorts, because it marked the adoption by a Democratic administration of a key aspect of President Reagan’s anti-regulatory agenda – the requirement that all major federal regulations undergo cost-benefit analysis. This was not a move that pleased Clinton’s liberal base, since cost-benefit analysis was widely understood to be a tool favored by industry for weakening and delaying regulation. But, nonetheless, Clinton signed 12866 in 1993, and it’s been with us ever since.
Maybe the staying power of cost-benefit analysis owes something to the superficial appeal of the basic idea. “After all,” says the Chamber of Commerce, “it’s just basic rationality and common sense! Why would you want a rule that causes more harm than good?” And then come the inevitable appeals to Ben Franklin, who apparently said something once about writing down pros and cons on a sheet of paper. So if you’re against cost-benefit analysis you’re basically against Ben Franklin, which means you might as well say you hate your mother and never want another slice of apple pie. Perhaps it’s no surprise, then, that Clinton capitulated to such arguments. Or that Obama did the same nearly two decades later, issuing an order “reaffirming” 12866 after briefly flirting with the idea of scrapping it.
The problem is, there’s cost-benefit analysis and there’s cost-benefit analysis. Ben Franklin’s sheet of paper with the line down the middle is one end of the spectrum. But at the other end is a highly technical and formal method grounded in economic theory that attempts to fully quantify and monetize all of the social costs and benefits of a whole range of regulatory options and then, by calculating the point at which the marginal benefits curve intersects the marginal costs curve, identify the “economically efficient” level of regulation. And those two decision-making tools have very little in common.
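The formal method described above boils down to finding the stringency level at which the marginal benefit of further cleanup just equals its marginal cost. Here is a minimal numerical sketch of that calculation, using purely hypothetical curves (declining marginal benefits, rising marginal costs), not any actual EPA data:

```python
# Sketch of the "economically efficient" stopping point that formal CBA
# seeks: regulate up to the stringency q* where the marginal benefit of one
# more unit of pollution reduction equals its marginal cost.
# The curves below are illustrative assumptions, not agency figures.

def marginal_benefit(q):
    # Each extra unit of cleanup is worth less as the water gets cleaner.
    return 100.0 - 10.0 * q

def marginal_cost(q):
    # Each extra unit of cleanup costs more than the last.
    return 5.0 * q

def efficient_level(lo=0.0, hi=10.0, tol=1e-9):
    # Bisect on the gap MB(q) - MC(q), which falls from positive to negative.
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if marginal_benefit(mid) > marginal_cost(mid):
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

q_star = efficient_level()
print(round(q_star, 3))  # MB = MC where 100 - 10q = 5q, i.e. q = 100/15 ≈ 6.667
```

The point of the sketch is that the answer depends entirely on the shapes of the two curves, which is exactly where the quantification and monetization disputes live.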
Cross-posted from ThinkProgress.
“Election over, administration unleashes new rules,” trumpeted an Associated Press story this week.
What are these newly unleashed rules? Perhaps the big food safety rules that have been stalled for more than a year have gone through? Rules limiting greenhouse gas emissions from new and existing power plants? Long-awaited rules to protect coal miners’ safety?
Not quite. In fact, the AP strained to come up with examples, and managed only tiny ones: “[T]he Environmental Protection Agency has proposed rules to update water quality guidelines for beaches and other recreational waters and deal with runoff from logging roads.”
The recreational waters standard was a welcome development, but not particularly consequential or abrupt. EPA was required by law to issue the recreational water standards by 2005; it has issued them now only after being ordered by a court to do so. And as the agency explained in its press release, “The criteria released today do not impose any new requirements; instead, they are a tool that states can choose to use in setting their own standards.”
As for the rule earlier this month on runoff from logging roads, it’s not what you might imagine: it says that EPA will not be regulating pollution from logging roads. That regulation was issued in an incredibly short period of time – just three months from proposal to final “rule.” If only the Administration were so aggressive with protective rules.
The Clean Water Act turns 40 today. One of the remarkable things about those four decades is the extent to which the Act has largely withstood repeated attempts by industry to water down its technology-based standard-setting provisions with cost-benefit analysis. Just three years ago, when the U.S. Supreme Court decided Entergy Corp. v. Riverkeeper, environmentalists largely lost one skirmish in this ongoing war, but the legacy of that opinion may actually be less harmful to the statute’s ability to protect clean water than appears at first blush. Understanding all that requires going back to the origins of the Act.
It’s not that there wasn’t a federal statute aimed at preventing water pollution back before 1972. It’s just that the old statute wasn’t working. A key problem was that the old statute set standards based on the water quality of a river or lake as a whole. This was difficult and cumbersome and made enforcement virtually impossible, because one polluter could always point the finger at another discharging into the same river in order to evade responsibility. The big innovation of the Clean Water Act of 1972 was to vastly simplify the standard setting and enforcement process by saying to polluters, “regardless of what others are doing, you must reduce the pollution levels coming out of your discharge pipe as much as is technologically feasible.” These technology-based standards were far easier to implement and enforce, and the result was a dramatic improvement in water quality throughout the nation’s rivers, lakes and streams in subsequent decades.
Industry fought these standards almost from the beginning, and one of their stock arguments was always that pollution standards should be set by a cost-benefit analysis rather than on the basis of the best technology available. Industry figured, correctly, that requiring EPA to prove that the environmental benefits of a given pollution standard outweighed its costs would bog the agency down in endless calculation and analysis and give industry lots of opportunities to delay and challenge rules and permits. Technology-based standards already take costs into account, because EPA and the courts have always interpreted the determination of whether a technology is feasible or “available” as including an estimation of the technology’s economic feasibility. But requiring the agency to specifically prove that the costs did not outweigh the monetized benefits of a standard would mire it in exactly the kinds of cumbersome evaluations of overall water quality that Congress sought to avoid by enacting technology-based standards in the first place.
Remember that kid on the playground who always insisted on changing the rules of the game and then still threw a tantrum when he lost? That’s just the kind of spoiled-brat behavior we’re seeing from the coal industry and its elected agents on Capitol Hill this week. Coal and other polluting industries have spent decades complaining about the federal laws that protect public health and the environment, arguing that we should change the rules by which they operate, forcing agencies to perform complicated cost-benefit analyses before they can impose limits on polluters. They’ve always figured (and mostly they’re right) that cost-benefit analysis would result in less stringent regulation, because the benefits of protecting public health and the environment are so difficult to quantify and monetize that agencies will end up undercounting them in comparison to costs.
Imagine their disappointment, then, when Lisa Jackson starts playing by their rules . . . and winning! It turns out that, for at least one type of air pollution – particulate matter – we do have some half-decent public health data. It’s undoubtedly still incomplete, only accounting for a portion of the various ways that soot and other fine particles in the air mess with our bodies, but the data are enough to show that particulate matter pollution is causing an enormous amount of damage to our health – and that cleaning it up will produce huge benefits. These numbers are so big that they outweigh the cost estimates by billions of dollars. And they make things like EPA’s upcoming rule limiting mercury and other pollutants from coal-fired power plants look like a really good idea.
In their desperation to make the benefits of clean air look smaller, two anti-EPA Republicans have reached back to an idea that was so callous and cynical and produced such an immediate furor when it was suggested a decade ago that even the Bush administration ultimately dropped it like a hot potato. Frank O’Donnell of Clean Air Watch first caught this yesterday and it deserves attention. In a letter Tuesday to Regulatory Czar Cass Sunstein, two House subcommittee chairs friendly to the coal lobby, Representatives Andy Harris (R-MD) and Paul Broun (R-GA), suggest reviving the “senior death discount,” writing:
You have stated that “it makes a great deal of sense to focus on statistical life-years rather than statistical lives.” In spite of the fact that most mortality associated with PM2.5 happens in the population over 65 years of age, EPA puts the same value on mortality for all ages. In your view, is this practice appropriate?
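The arithmetic behind the “senior death discount” is worth seeing explicitly. The sketch below uses hypothetical round numbers (the $9 million value of a statistical life, the 40-year divisor, the death and remaining-years figures are all illustrative assumptions, not figures from EPA or the letter) to show why switching from statistical lives to statistical life-years shrinks the monetized benefits of reducing PM2.5, whose mortality burden falls mostly on people over 65:

```python
# Hypothetical numbers only: why valuing "life-years" rather than "lives"
# discounts benefits when the avoided deaths are concentrated among seniors.

VSL = 9_000_000                      # assumed value of a statistical life
VALUE_PER_LIFE_YEAR = VSL / 40       # assumed: VSL spread over 40 remaining years

avoided_deaths = 1000                # hypothetical avoided PM2.5 deaths
avg_remaining_years = 8              # assumed: victims are mostly over 65

# Uniform approach: every avoided death counts the same.
uniform_vsl_benefit = avoided_deaths * VSL

# Life-years approach: an avoided death counts only for its remaining years.
life_years_benefit = avoided_deaths * avg_remaining_years * VALUE_PER_LIFE_YEAR

print(uniform_vsl_benefit)   # $9 billion under a uniform VSL
print(life_years_benefit)    # $1.8 billion – an 80% "senior death discount"
```

Under these assumptions, the same rule’s mortality benefits fall by 80 percent, which is precisely why the idea provoked such a furor the first time around.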
This post was written by Member Scholar Amy Sinden and Policy Analyst Lena Pons.
Last week, the National Automobile Dealers Association (NADA) sponsored a fly-in lobby day to support an amendment that would strip EPA of the authority to set greenhouse gas emission standards for passenger cars and light trucks for 2017-2025. The amendment, offered earlier this year by Rep. Steve Austria (R-Ohio), would prevent EPA from spending any money to implement the 2017-2025 standards. NADA wants the National Highway Traffic Safety Administration (NHTSA) to have sole responsibility for regulating vehicle efficiency. Dealers want NHTSA to run the show because, they claim, EPA does not give adequate consideration to costs of the standards.
One problem: the auto dealers have completely misrepresented how EPA and NHTSA’s joint standards work. In fact, EPA, just like NHTSA, kept considerations of cost and technological feasibility front and center in developing the joint fuel economy and greenhouse gas standards for 2012-2016. There is no reason to think they will approach the 2017-2025 standards differently.
It all started Monday on the Daily Caller. The story claimed that the EPA, in planning regulations on greenhouse gases, is “asking for taxpayers to shoulder the burden of up to 230,000 new bureaucrats — at a cost of $21 billion — to attempt to implement the rules.” The story spread like wildfire among many of the usual suspects, like National Review, Red State and Fox News. And it was promoted by some of the top anti-regulation advocates in Congress: Senator Jim Inhofe, House Energy & Commerce’s Environment and the Economy subcommittee chair John Shimkus, and Rep. Geoff Davis, chief sponsor of the REINS Act. Inhofe and Davis both reprinted the original article directly on their sites.
One problem: the story isn’t true.
Daily Caller writer Matthew Boyle found the 230,000 stat in a brief the EPA filed on September 16. That brief defends the “tailoring rule,” which is the agency’s method of limiting which greenhouse gas emitters will be regulated under the Clean Air Act’s PSD Program. The EPA has said previously that it would be very impractical to require all small emissions sources (i.e., any facility emitting over 100 or 250 tons per year of CO2) to get a permit; instead, it will focus on large sources, such as big industrial facilities that emit at least 75,000 – 100,000 tons per year of CO2.
In the brief (see pages 48-49 of the PDF), EPA says that “immediately applying the literal PSD statutory threshold of 100/250 tpy to greenhouse gas emissions” – that is, no tailoring rule – would “overwhelm the resources of permitting authorities and severely impair the functioning of the programs…” It would necessitate, the EPA estimated, 230,000 new hires.
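The gap between the two thresholds is enormous, and a toy comparison makes the point. The facility list below is hypothetical (the thresholds are the ones from the brief), but it shows why applying the literal 100/250 tpy statutory cutoff to CO2 sweeps in vastly more sources than the tailored 75,000-100,000 tpy cutoff:

```python
# Sketch of the permitting arithmetic behind the tailoring rule, using a
# hypothetical list of facilities and their annual CO2 emissions (tpy).
# Thresholds come from the statute and the tailoring rule; facilities are
# made up for illustration.

facilities_tpy_co2 = [120, 300, 900, 5_000, 40_000, 80_000, 150_000, 2_000_000]

LITERAL_THRESHOLD = 100       # lower statutory PSD figure, applied literally
TAILORED_THRESHOLD = 75_000   # lower bound of the tailored range

literal_permits = [f for f in facilities_tpy_co2 if f >= LITERAL_THRESHOLD]
tailored_permits = [f for f in facilities_tpy_co2 if f >= TAILORED_THRESHOLD]

print(len(literal_permits))   # every facility on the list needs a permit
print(len(tailored_permits))  # only the largest industrial sources do
```

Scale that same ratio up to the real universe of boilers, buildings, and farms emitting over 100 tpy of CO2 and you get the permitting avalanche – and the 230,000 hypothetical hires – that the brief was warning about.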
The irony here is that in worrying specifically about a hypothetical 230,000 new EPA hires, the anti-regulatory crowd has a great ally on this matter – one who just laid out a 149-page case against the non-tailored GHG regulation that could theoretically require it. That was the EPA.
This post was written by Member Scholar Amy Sinden and Policy Analyst Lena Pons.
This morning President Obama will make an announcement about upcoming fuel economy and greenhouse gas emission standards for passenger cars and light trucks for model years 2017-2025. The announcement will reference the Administration’s plan to propose a standard to reach 54.5 miles per gallon by 2025. These standards will set the pace at which automakers improve the fuel economy of cars for many years to come, and help to determine how quickly advanced technologies – plug-in hybrids, electric vehicles, and fuel cell vehicles – will be available in showrooms.
But the planned announcement is troubling because the number the President will roll out was the result of raw political wrangling, not the rational policymaking process that the Administration purports to pride itself on. The White House has been haggling with the automakers for the last month, and 54.5 is the magic number that has emerged from that negotiation.
This is not how the process is supposed to work. Under the laws passed by Congress, the agencies are supposed to go through a rational scientific process in order to set the standard at the “maximum feasible” level. Once the agencies – in this case the National Highway Traffic Safety Administration (NHTSA) and EPA – have gone through that process and come up with a tentative number, then they are supposed to go through the notice and comment rulemaking process. That means they publish their proposed rule along with a detailed explanation of their proposal and supporting information. Then everyone, including the automakers, is invited to comment on the proposal and attempt to persuade the agencies of how the initial proposal should be modified. The comments of all interested parties are public, and everyone is theoretically included in the process. This process is not new or unfamiliar. It is one of the fundamental tenets of Administrative Law.