A bit more than two weeks ago, three academics released a paper, "Do Energy Efficiency Investments Deliver? Evidence from the Weatherization Assistance Program." The authors' answer to this question was, basically, "No." They found that actual energy savings were around 40% of the savings projected by the building simulation model used to prioritize investments inside the home, and that the costs of the repairs outweighed the energy benefits. From there, the authors (or their press person) broadly criticized residential energy efficiency programs: the headline on the authors' website claims - still! - that the "costs of residential energy efficiency investments are double the benefits."
Academic incentives reward provocation, and in that respect the authors were successful. My colleagues Merrian Borgeson, Rebecca Stanfield, and Deron Lovaas explained critical flaws in the study, especially the way the authors included the costs of repairs made for health and safety reasons (furnace replacements to prevent carbon monoxide poisoning are not intended to pay for themselves in energy savings) in their cost-benefit analysis. My old friend Dave Rinebolt of Ohio Partners for Affordable Energy, Steve Nadel and Marty Kushler of ACEEE, and media outlets like Vox and the New York Times all wrote about the study. On Tuesday the authors posted a blog post responding to the responses.
Two weeks on from the release of the study, what have we learned?
Residential energy efficiency: it's not what people think
The idea that one result from one study - even one with a careful research design - could damn "residential energy efficiency" is absurd. As I wrote here a couple of months ago, electric and natural gas utilities invested around $7.6 billion in energy efficiency programs in 2014, split half-and-half between programs targeting residential customers and programs targeting other customers. None of this utility investment goes to the Weatherization Assistance Program (WAP), and only a portion of it goes to "whole home" retrofit programs that even resemble WAP. A large share of utility investment in residential energy efficiency funds programs that help customers easily access efficiency opportunities like LED lightbulbs (at least $214 million in 2013) and efficient consumer electronics and appliances (more than $72 million in 2013), or that remove inefficient second refrigerators from the grid (more than $68 million in 2013). Electric utilities now spend more than $78 million a year on programs that provide customers with direct feedback about their energy use, and savings from those programs are most often estimated using randomized controlled methods similar to the ones the authors used.
Marty Kushler makes another good point: we already know that the type of program the authors examined is much more expensive than residential energy efficiency programs generally, for two reasons. First, weatherization programs are as much about safety and quality of life as about energy improvements. Second, WAP pays the full cost of the repairs, while most energy efficiency programs target customers already in the market for a product and so need only cover the incremental cost of the efficient version of that product.
There's a pattern, and it's strawman-shaped
The authors compared their experimental estimates of energy savings with estimates spit out by the NEAT software (an energy model used by local weatherization agencies) and declared that savings were 40% of those expected. But, as Dave Rinebolt of Ohio Partners for Affordable Energy (which represents the organizations that implement WAP in Ohio) wrote, the NEAT model is used to prioritize investments in a house; it is not used to measure savings. Oak Ridge National Laboratory will soon release a retrospective analysis of the energy savings impact of WAP, which will be an interesting comparator. We should compare ex-post estimate to ex-post estimate, not the results of an RCT to the output of a building model.
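To make that distinction concrete, here is a toy illustration - all numbers are hypothetical, not drawn from the study - of why a "realization rate" computed against an ex-ante audit model bundles model calibration error together with actual program performance:

```python
# Hypothetical numbers, for illustration only -- not from the study.

projected_savings = 100.0  # energy savings projected by an ex-ante audit model
measured_savings = 40.0    # savings measured ex-post (e.g., by an RCT)

# The "realization rate" compares measurement to projection.
realization_rate = measured_savings / projected_savings  # 0.40

# But suppose the audit model systematically overstates savings (a
# calibration problem), and a well-calibrated ex-post estimate for the
# same measures would have been 55. The same program now looks very
# different, even though nothing about its performance changed.
calibrated_projection = 55.0
vs_calibrated = measured_savings / calibrated_projection  # ~0.73

print(f"vs ex-ante model: {realization_rate:.0%}")
print(f"vs calibrated ex-post estimate: {vs_calibrated:.0%}")
```

The low ratio in the first calculation tells you something about the model and something about the program, but it cannot, by itself, tell you which.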
Why do I say there's a pattern? I recently attended the excellent 20th annual POWER conference on energy research and policy hosted by the Energy Institute at the Haas School of Business at the University of California, Berkeley (home of two of the WAP paper's authors). Energy economist Sebastian Houde presented his paper on the impact of ARRA's appliance rebate program, another stimulus program with an energy efficiency purpose. Houde's research estimating savings from the program was careful. But he then compared his savings estimate to an uncited estimate attributed to the Department of Energy, and compared his cost-effectiveness estimates (for a program designed primarily as stimulus, not energy efficiency) to the cost-effectiveness of utility energy efficiency programs.
We need insights and research from economists, such as those affiliated with E2e. But economists keep writing broad critiques of energy efficiency programs and policies after studying programs that are not representative of the $7.6 billion in annual expenditure mentioned above. This is odd, because it is savings from those representative programs that we expect to expand in the future to help states comply with the Clean Power Plan.
We need more, better estimates of savings
To the authors' credit, I have not seen any commentator find problems with the savings estimate, and I did not find any when I reviewed a version of the working paper before publication. The randomized-encouragement design used by the authors is very good at isolating the impact of the retrofits because it was the authors who assigned households to the encouragement group, not the customers themselves. The authors were thus able to overcome the self-selection issues that bias engineering-based and non-experimental savings estimates.
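For readers unfamiliar with the method, here is a minimal sketch of the logic of a randomized-encouragement analysis. This is not the authors' code or data; the households, take-up rates, and effect size are all simulated, and the estimator shown is the standard Wald (instrumental-variables) ratio that such designs rely on:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000  # hypothetical low-income households

# Randomized encouragement: half the households are randomly
# encouraged to apply for weatherization (e.g., via outreach).
encouraged = rng.integers(0, 2, size=n).astype(bool)

# Take-up: encouragement raises the probability of weatherizing, but
# an unobserved trait ("motivation") also nudges some households to
# weatherize on their own -- the self-selection that biases naive
# comparisons of weatherized vs. non-weatherized homes.
motivation = rng.normal(size=n)
p_weatherize = np.where(encouraged, 0.25, 0.05) + 0.02 * (motivation > 1)
weatherized = rng.random(n) < p_weatherize

# Energy use depends on the unobserved trait (selection) and drops by
# a true effect of 2.0 units for weatherized homes.
true_effect = -2.0
energy = 10 + 0.5 * motivation + true_effect * weatherized + rng.normal(size=n)

# Because encouragement was randomized, the difference in outcomes
# between encouraged and non-encouraged homes, scaled by the
# difference in take-up, recovers the effect of weatherization for
# households who weatherized because they were encouraged.
itt = energy[encouraged].mean() - energy[~encouraged].mean()
takeup = weatherized[encouraged].mean() - weatherized[~encouraged].mean()
print(f"ITT: {itt:.3f}  take-up diff: {takeup:.3f}  effect: {itt / takeup:.3f}")
```

The key is that the researcher, not the household, flips the coin: the encouragement is uncorrelated with motivation, so dividing the intent-to-treat effect by the take-up difference strips out the selection bias that would contaminate a simple before-and-after or participant-vs-nonparticipant comparison.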
Key questions in energy efficiency - What would have happened absent the program? How did the customer change their behavior after making the efficiency-increasing change? How much did these changes improve their quality of life? - are difficult to answer without employing advanced research designs and gathering granular data (changes in hours of use, changes in settings). As an aside, I don't think "EM&V 2.0" approaches or "efficiency meters" address these specific questions - the ones that matter to policymakers - because we cannot be sure what the customer would have done absent the intervention. It is unlikely that any one study will be able to gather enough data to answer all of these questions. So authors should be careful when mapping their research findings onto the outside world.
And that may be where we can work together
NRDC has asked E2e to partner with us so that future research can better represent the reality of most programs. We have also asked them to stop misapplying the results of their research until they better understand the field they are critiquing. So far, they've agreed to talk but have not backed down from their damaging and misleading campaign.
The efficiency community knows which programs get the most investment, how programs are designed, and where eligibility thresholds or differential availability could provide opportunities for quasi-experimental savings estimates. Energy economists know how to make use of this information. We all presumably want effective climate, energy, and poverty-reduction policies.
I think there is room for partnership here. I'd love to write a future blog post on that.