CPUC Should Scrutinize ULP Evaluation Results

The Energy Division of the California Public Utilities Commission (CPUC) recently released its evaluation of the utilities’ energy efficiency programs, including the lighting program implemented by the three largest utilities in the state from 2006 through 2008. This assessment is particularly important because that lighting effort, known as the Upstream Lighting Program (ULP), accounts for 56% of the net expected energy savings from the portfolio of programs run by these utilities over the three-year program period. More generally, this study is an attempt to measure the impacts of one of the largest single energy efficiency programs ever implemented.

In brief, the ULP provides incentives to manufacturers of efficient light bulbs (mostly compact fluorescent lamps, known as “CFLs”) in order to lower the price of these products and encourage consumers to buy and use efficient CFLs instead of inefficient incandescent bulbs. From 2006 through 2008, PG&E, SCE, and SDG&E provided incentives to manufacturers averaging $1.57 per bulb on nearly 100 million CFLs. By providing the incentive upstream to manufacturers (rather than as a rebate to the purchaser), the utilities leveraged their investment: consumers saw an average discount at the register of $2.70 per bulb.
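A quick back-of-the-envelope check on the figures quoted above (the totals here are approximations derived from the averages in this post, not numbers taken from the report itself):

```python
# Rough arithmetic on the program figures quoted above; the report's
# exact totals may differ from these approximations.
bulbs = 95_000_000          # lamps incentivized, 2006-2008
incentive_per_bulb = 1.57   # average incentive paid to manufacturers, in dollars
shelf_discount = 2.70       # average discount at the register, in dollars

total_incentives = bulbs * incentive_per_bulb
leverage = shelf_discount / incentive_per_bulb

print(f"Total incentives: roughly ${total_incentives / 1e6:.0f} million")
print(f"Each incentive dollar produced about ${leverage:.2f} of shelf discount")
```

On these figures, roughly $149 million in incentives, with each upstream dollar translating into about $1.72 of discount at the shelf.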

The goal of the ULP Evaluation Report was to estimate how much electricity was saved and how much peak demand was reduced by the CFLs that received rebates through the program. The study also tried to estimate how much of that savings would have happened in the absence of the program.

Surprisingly, the report concludes that the net savings from each bulb were only 25% of what was expected when the CPUC approved the programs. As a result, even though the utilities provided incentives on 95 million lamps, well in excess of their goals, the ULP evaluation proposes to credit the utilities with achieving only a small fraction of their savings targets for this program. Because of the substantial consequences for current and future programs, both in California and around the country, this conclusion deserves close scrutiny.

Perhaps the biggest issue arising from the ULP Evaluation Report is the estimate of the fraction of the total program savings that would have occurred even if the program had not been implemented. The complement of that fraction, known as the net-to-gross ratio (NTGR), is the share of gross savings actually attributable to the program; it is difficult to estimate with confidence in any case. In this instance, the estimation of the NTGR was particularly challenging.
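To make the arithmetic concrete, a minimal sketch of how an NTGR scales gross savings down to net savings. The 54% ratio is the report’s proposed value (discussed below); the gross savings figure is invented purely for illustration:

```python
def net_savings(gross_savings_kwh: float, ntgr: float) -> float:
    """Net savings attributable to the program: gross savings scaled by the NTGR."""
    return gross_savings_kwh * ntgr

# Hypothetical gross figure of 100 GWh, scaled by the report's proposed 54% NTGR:
# only a bit more than half of the measured savings would be credited to the program.
print(net_savings(100_000_000, 0.54))
```

Every percentage point shaved off the NTGR translates directly into savings the utilities are not credited with, which is why the method behind the 54% figure matters so much.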

Estimation of NTGR usually requires an assessment of the market conditions prior to program implementation. However, the ULP evaluation didn’t begin to collect market data from participants until 2008, following two years of a massive market intervention. By that time, it had become extremely difficult – if not impossible – to estimate the market activity that would have occurred in the absence of the program. 

The NTGR estimation was further complicated by the use of complex modeling approaches whose practical effectiveness had not been tested. As the report acknowledges, the proposed modeling approaches failed, and the authors had to rely on alternative models with significant but unknown biases and on measurement methods that weren’t fully representative of the program. Ultimately, the authors chose to reject the only NTGR estimates that were defined as representative of the full 2006-2008 program and instead to simply select a set of estimates based on “best judgment.” Given that the final proposed NTGR of 54% is based on the authors’ judgment, it is perhaps unsurprising that it differs substantially from the values adopted for similar CFL programs in other states.

The ULP evaluation also ran into problems estimating the installation rate of CFLs purchased through the program. The evaluation plan proposed to estimate a set of three inter-related models from a survey of users. Unfortunately, as the authors explain, the models did not produce meaningful results, and an ad hoc alternative had to be developed late in the study process. The installation rate that emerged from this ad hoc analysis cut the program savings estimate by an additional 15%, mainly because of CFLs that were in customers’ homes but had not yet been installed.
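Putting the two adjustments together: evaluated net savings are, in rough outline, gross savings discounted by both the installation rate and the NTGR. The sketch below uses that common evaluation structure with invented inputs (the 85% installation rate is a hypothetical, not the report’s figure); the report’s actual models are far more elaborate:

```python
def evaluated_savings(gross_kwh: float, installation_rate: float, ntgr: float) -> float:
    # Bulbs still sitting in storage contribute no savings yet (installation rate),
    # and savings that would have occurred anyway are netted out (NTGR).
    return gross_kwh * installation_rate * ntgr

# Hypothetical inputs: 100 GWh gross, 85% of bulbs installed, 54% NTGR
result = evaluated_savings(100_000_000, 0.85, 0.54)
print(f"Evaluated savings: about {result / 1e6:.1f} GWh")
```

Because the discounts multiply, even modest reductions in each factor compound into the steep overall haircut the report applies to the program.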

Taken as a whole, the substantially delayed data collection and the problems with estimating the NTGR and the installation rate, along with a number of other issues identified by commenters, raise concerns about the validity and robustness of the ULP Evaluation Report’s conclusions. The results of this study will have far-reaching implications both in California and around the nation: they will affect the CPUC’s assessment of the utilities’ performance and the calculation of any financial rewards or penalties, the cost-effectiveness of the utilities’ current efficiency programs, and the savings and cost-effectiveness of efficiency programs around the country (as my colleague Noah Long states in his blog post), since many states look to California’s evaluation studies to determine savings from their own programs.

Given the importance of the results of this study, it is imperative that the CPUC fully review the report’s methodology and conclusions and formally resolve the disputes about the ULP impacts, something the Commission has not yet indicated that it plans to do.