
Utilities should be paying much more for transformers, but won't

They are incentivized to waste electricity anywhere it results in less capex


....and we don't make enough of the materials that allow better transformers.

About 2 percent of all electricity generated in the United States vanishes inside the green boxes on concrete pads and gray cans on utility poles dotting the landscape. These are distribution transformers—the last-mile voltage converters that turn medium-voltage power into the 120/240 volts we use at home. Physically, they’re simple: coils of copper or aluminum wrapped around a steel core. Because the design is so basic, efficiency and cost depend almost entirely on the metals chosen, not on clever engineering tweaks. Each transformer therefore tells a story about the incentives its buyer faces: pay for more or better materials and waste less energy, or stick with the status quo and let the gigawatt-hours slip away.
 
Over a 30- to 40-year life, the energy a typical transformer dissipates as heat can cost close to five times its purchase price. Upgrading to a higher-grade core alloy—or simply right-sizing the unit—would save customers billions of dollars per year. Yet many U.S. utilities still order the cheapest devices that meet federal minimum standards, and virtually none adopt the higher-performing devices common in other countries. Why? Because ratemaking rules let them pass those losses straight through to customers and penalize them for spending that doesn’t add capacity directly.
 
The picture has grown even murkier since supply chain shocks stretched already-long delivery times beyond two years. When capacity itself is scarce, any change that slows production looks risky, no matter how much lifetime value it delivers. To solve the shortage—and unlock the efficiency gains—we must untangle how utility incentives, regulations, and manufacturing constraints converge in the design of the materials inside each transformer. That’s the story this post explores.
 
So, how much more should we be willing to pay for a transformer that wastes less?

Power transformers are conductive coils of copper or aluminum wound around a magnetically permeable core with a simple loop geometry. The windings are responsible for resistive losses that scale with the instantaneous load, while core losses are incurred at every reversal of the field – double the 60 Hz mains frequency – and while they don't vary with load in watt terms, they get more expensive when they're incurred at peak hours. The design of the winding and the core are geometrically constrained: the cross-section of the windings must fit in the loop of the core, and the core cross-section must not carry so much magnetic field that it saturates. This mutual constraint means that a cost analysis based on like-for-like geometry from one material to another is inadequate. Only by asking

"What is the total cost of a transformer that's optimized for material Y, compared to one optimized for material X?"

can we see how the choice of core material truly affects costs for the system as a whole. Even though core and winding losses are incurred independently in any particular unit, designing better transformers requires us to attribute changes in winding costs, whether from losses or from materials, to changes in the loss or saturation characteristics of the core material.
So we built a transformer design optimization tool to compare total cost of ownership (net present value of upfront cost + cost of losses) between transformers using different core materials.[1] Given a nameplate power rating, an expected typical utilization (the time average of power output relative to the nameplate rating), some pricing data on peak vs non-peak power, and a high-efficiency core material[2] costing some multiple of the electrical steel most commonly in use, when would a rational buyer choose the design built around the more expensive core material?
 
The blue and yellow chart illustrates the decision surface: yellow if, for that cost multiple and utilization, the more expensive core material has a lower TCO and blue otherwise. The more expensive material should be favored almost all of the time even at over triple the cost of legacy material – a result that makes sense for systems that at moderate utilizations incur over two-thirds of their TCO during operation rather than upfront.
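To make the shape of that comparison concrete, here is a minimal sketch of how such a decision surface can be swept. Every number below is an illustrative placeholder, not our tool's model: there is no peak/off-peak pricing and the loss and cost figures are assumed, so the boundary it prints will not reproduce the chart.

```python
# Sketch of a TCO decision surface: for each (core-cost multiple, utilization)
# pair, does an amorphous-style core beat a GOES-style core on total cost of
# ownership?  All figures are illustrative placeholders, not tool outputs.

import numpy as np

YEARS, DISCOUNT = 32, 0.07
PV = (1 - (1 + DISCOUNT) ** -YEARS) / DISCOUNT   # present value of $1/yr of losses
ENERGY_PRICE = 0.11                              # $/kWh, assumed flat cost of losses

FIRST_COST = 2400.0      # $ for a small GOES-core unit (placeholder)
CORE_COST = 350.0        # $ of that first cost attributable to core steel
CORE_LOSS_W = 80.0       # no-load loss
LOAD_LOSS_W = 600.0      # winding loss at 100% of nameplate
AMORPHOUS_CORE_LOSS_FRACTION = 0.3   # assumed relative core loss of the better alloy

def tco(first_cost, core_loss_w, load_loss_w, utilization):
    """Purchase price plus the discounted lifetime cost of losses."""
    kwh_per_year = (core_loss_w + utilization**2 * load_loss_w) * 8.76
    return first_cost + PV * kwh_per_year * ENERGY_PRICE

def amorphous_wins(cost_multiple, utilization):
    goes = tco(FIRST_COST, CORE_LOSS_W, LOAD_LOSS_W, utilization)
    amorphous = tco(FIRST_COST + CORE_COST * (cost_multiple - 1),
                    CORE_LOSS_W * AMORPHOUS_CORE_LOSS_FRACTION,
                    LOAD_LOSS_W, utilization)
    return amorphous < goes

# Rows: the better alloy's cost as a multiple of the legacy core steel.
# Columns: average utilization from 5% to 50%.  "Y" marks where it wins.
for multiple in np.linspace(1.0, 4.0, 7):
    row = ["Y" if amorphous_wins(multiple, u) else "." for u in np.linspace(0.05, 0.5, 10)]
    print(f"{multiple:4.1f}x  " + " ".join(row))
```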

This replicates reality: in much of the world, newer materials called ‘amorphous’ or ‘nanocrystalline amorphous’ alloys dominate in the deployment of many classes of power transformer. Not in the US, where combined distribution and generation monopolies, “rate-of-return” regulation, and “cost-of-service ratemaking” combine to distort incentives away from efficient transformers.

Distribution Transformers

In 1992, Congress gave the Department of Energy the authority to regulate the efficiency of distribution transformers. The first such regulations were issued in 2007.

[Figure from Noe 2017]

Distribution transformers come in many sizes and configurations between 10 and 5000 kVA; though they are most visible as the barrel-sized pole-mounted type ubiquitous in North America, the pad-mounted units that provide the last AC power conversion step for new datacenters and housing developments also fall in this category. At the end of the line, they operate at low and varying utilization, averaging less than 25% of their nameplate capacity. Highly variable utilization means that it’s much more likely for a distribution transformer to operate away from its peak efficiency than the large power transformers — 100 MVA and above — used in long-range transmission.

[Chart as of 2022, via the Bonneville Power Administration, a federal agency managing hydropower in the Pacific Northwest.]

At low utilizations, transformer losses are dominated by the core rather than by resistance in the windings. Core losses come from hysteresis in the magnetizable material of the transformer’s ferromagnetic loop each time the field changes, reversing the magnetic orientation twice per cycle of the alternating current. At higher utilizations, conduction losses from resistance in the copper, aluminum, or silver windings dominate.
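A toy calculation makes the crossover visible. The loss figures below are illustrative placeholders for a small distribution unit, not data from any specific transformer: no-load (core) loss is fixed, while winding loss scales with the square of the load.

```python
# Core loss is constant; winding loss scales with the square of utilization.
# Illustrative placeholder figures for a small distribution transformer.
CORE_LOSS_W = 80.0     # no-load loss
LOAD_LOSS_W = 600.0    # winding loss at 100% of nameplate rating

for utilization in (0.1, 0.25, 0.5, 1.0):
    winding_w = LOAD_LOSS_W * utilization**2
    core_share = CORE_LOSS_W / (CORE_LOSS_W + winding_w)
    print(f"{utilization:4.0%} load: core {CORE_LOSS_W:5.0f} W, "
          f"windings {winding_w:5.0f} W ({core_share:.0%} of losses in the core)")
```

At the sub-25% utilizations typical of distribution service, these placeholder numbers put well over half of the loss in the core.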

Dissipating nearly 2% of all the electricity generated in the US with nothing more than a 120 Hz hum, distribution transformers are a disproportionately impactful nexus for efficiency improvements. In a 2022 Notice of Proposed Rulemaking, DoE proposed a set of rules that would roughly halve losses in new distribution transformers, assessing a net present value of $4.28-12.8 billion in savings over the lifetime of transformers installed under the rule, for just one category of affected transformer (liquid-immersed – about half the total impact) and excluding (favorable) emissions impacts.[3] The essential mechanism for this efficiency improvement would have been a reduction in core losses via a switch from grain-oriented electrical steel (GOES) to amorphous/NCA steels in many of the regulated categories of transformer. In Canada, China, and elsewhere, AM/NCA steels already dominate new distribution transformer production – but not in the US.

A lack of production capacity for GOES and its higher-efficiency alternatives was the principal objection to the proposed rule. There is only one US manufacturer each for GOES and for the utility-scale amorphous material.
 
Unfortunately this NPRM, though it would not take effect until 2027, coincided with a broad shortage of electrical components and a step change away from thirty years of zero secular growth in electrical demand: an inability to deliver the electrical capacity that requires transformers was more troubling than delivering that capacity at higher cost. Even as of early 2025, lead times for distribution transformers can exceed two years. In January 2024, Congress threatened to withdraw DoE’s authority to regulate distribution transformer efficiency unless an amended proposal was released. In April 2024 one was, with the net present value of the effort’s benefits cut by more than two-thirds, down to $0.56-3.41 B in the liquid-immersed category.

The commentary the DoE collected during the rulemaking process is informative about the reasons cited, which I’ve grouped into a few categories.


All are is/ought variations on the same factual premise expressed in the press around the Senate’s portion of the action last January: a bottleneck to activities that depend on deploying more grid is a bigger concern than efficiency improvements to the grid itself.

When that bottleneck comes down to manufacturing capacity for the underlying magnetic materials, and those materials, while slightly more expensive than legacy GOES, should pay off well above their current price, why didn’t the capacity exist already? Utilities are savvy customers – if the new materials save the public money, as DoE claims, why don’t they also save the utility money? How do their incentives differ?

Utilities manage capacity, not opportunity cost

Discussions of utility incentives center on the rate-of-return structure that is, in the US, commonly set by the states on distribution and generation monopolies. Utilities are permitted to return to investors and bondholders a regulated fraction of their capital base, providing a profit incentive to maximize that base beyond an optimal capex-opex balance: the Averch-Johnson effect. More expensive but lower-loss transformers would seem to satisfy that incentive – so why don’t utilities buy them?
 
Utilities purchase distribution transformers on a total-cost-of-ownership basis, meaning they buy the unit sized to minimize the sum of first cost and the cost attributed to its electrical losses. This makes sense: the cost of losses in a typically-sized distribution transformer will exceed 80% of the TCO. The convention used by utilities and transformer makers[5] is to average the no-load losses associated with the core and the loaded losses associated with the windings, then convert them to units of currency using “capitalization factors” tied to the cost of servicing those losses with base-load or peak capacity.

TCO = CF × FCR + A × PNL + B × PLL

CF: first cost
FCR: capital recovery factor (lifetimes are 25-40 years, so between 2 and 3 at foreseeable costs of capital to utilities)
A: capitalization factor for average no-load losses (PNL)
B: capitalization factor for average loaded losses (PLL)

The transformer-buying process, notionally, consists of the utility determining A and B for a population of transformers it wishes to replace, giving those values to the manufacturer, and receiving a transformer that minimizes the resulting calculation of TCO. In the US, there’s some variation in A and B where the data has been made available. Assuming no distortion by the utility, A and B measure the capital cost of installing more generation capacity to offset transformer losses, not the levelized cost of the power those losses consume. More simply, this “implicit TCO” basis minimizes the capital intensity of the utility’s capacity overall, not the explicit TCO of transformers adequate to a particular service or the expected lifetime total cost of delivering service to the utility’s customers.
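A minimal sketch of that convention, using assumed (not surveyed) A, B, and FCR values and two hypothetical bids, shows how low capitalization factors steer the purchase toward the cheaper, lossier unit.

```python
# Implicit-TCO evaluation as described above: TCO = CF x FCR + A x PNL + B x PLL.
# A and B are in $/W of loss; every number below is assumed for illustration.

def utility_tco(first_cost, fcr, a, b, no_load_loss_w, load_loss_w):
    return first_cost * fcr + a * no_load_loss_w + b * load_loss_w

bids = {
    "GOES core":      dict(first_cost=2400.0, no_load_loss_w=80.0, load_loss_w=600.0),
    "amorphous core": dict(first_cost=2700.0, no_load_loss_w=25.0, load_loss_w=650.0),
}

FCR = 2.5   # assumed lifetime capital-recovery multiplier
for a, b in ((5.0, 1.5), (20.0, 3.0)):   # survey-like low A/B vs much higher A/B
    ranked = sorted(bids, key=lambda name: utility_tco(fcr=FCR, a=a, b=b, **bids[name]))
    print(f"A=${a:.0f}/W, B=${b:.1f}/W -> cheapest bid: {ranked[0]}")
```

With these placeholder bids, the survey-like A and B values select the lossier unit; only when the factors approach the real value of the lost energy does the ranking flip.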

This answers the natural question: if utilities already consider at least an implicit TCO when buying transformers, and the cost of capacity is already minimized, why should the DoE bother to set an efficiency standard? The efficiency standard, in implicit TCO terms, sets a floor on combinations of A and B. For many US utilities, surveyed A and B values are anomalously low, by 50% or more, compared to both other countries and the opportunity cost of power not sold to end users: the $5/W average A value in the BPA’s survey amounts to an implied cost of electricity of only $38/MWh, about 60% of the volume-weighted average wholesale electricity cost in the same period. For a distribution transformer already at the last mile of the distribution network, the reason this assumed cost differs so markedly from the metered rate is the incentive created by the state-based regulatory structure of US public utilities.
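As a back-of-envelope check on that conversion: one watt of continuous no-load loss consumes 8.76 kWh per year, and discounting that stream over the unit’s life turns a $/W capitalization factor into an implied $/MWh. The lifetime and discount rate below are assumptions, not the survey’s actual parameters, so the result only lands in the same ballpark as the figure above.

```python
# Convert a no-load capitalization factor A ($/W) into an implied energy cost.
# Lifetime and discount rate are assumptions, not the BPA survey's parameters.
A_DOLLARS_PER_WATT = 5.0
YEARS, DISCOUNT = 35, 0.07

pv_factor = (1 - (1 + DISCOUNT) ** -YEARS) / DISCOUNT   # present value of $1/yr
mwh_per_watt_discounted = 8.76 / 1000 * pv_factor       # discounted lifetime MWh per W of loss
print(f"Implied value of losses: ${A_DOLLARS_PER_WATT / mwh_per_watt_discounted:.0f}/MWh")
```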
 
For example, ratemaking for each investor-owned utility in California occurs once every four years through proceedings called the General Rate Case. PG&E’s 2023 GRC extensively discusses the capital intensity of various options for improving distribution infrastructure: capex in these categories is evaluated only by comparison to the cost of delivering the same capacity in previous years, not relative to projected variable operational costs. Those variable operational costs of fuel or imported energy are passed through to the customer in inter-GRC-cycle years (“attrition years”) via the Energy Resource Recovery Account. While the utility makes no return on the revenue collected through the ERRA, it also suffers no loss. These dynamics are discussed more thoroughly in a 2024 report to the California legislature. This natural endpoint of cost-of-service ratemaking severs the link between the judgment that an investment in transformers would have a positive return and the decision to appropriate capex for it, a link that would require the utility to set A and B values consistent with the effective reduction in rates projected in the DoE’s proposal.
 
Glibly: utilities are constrained in their return on capital, so they manage their capital base to maximize capacity, discount the impact of underestimated variable costs because those are easily passed through to ratepayers, and allocate capex between generation capacity and transmission losses to suit.

Only the DoE rules offset this tendency. In the BPA’s survey, a third of respondents assumed A and B values so low that the cheapest transformer they could purchase was the DoE-minimum one. If a significant fraction of the market effectively doesn’t consider TCO at all — the prospect for significant changes to the state regulatory structure of utilities aside — we shouldn’t assume utilities have a broad willingness to spend more money on more efficient transformers. Conversely, customers without a utility incentive structure or the ability to substitute artificially-cheap generation for transformer loss can buy using an explicit (sum of actual projected costs) TCO formulation, which absolutely does clear the market for significantly more expensive transformer materials at efficiencies significantly higher than DoE minimums.
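A sketch of that difference, under assumed prices: an explicit buyer values every lost kWh at the rate they actually pay, which backs into A and B values well above the capacity-based survey figures. The delivered price, loss factor, lifetime, and discount rate below are all assumptions for illustration.

```python
# Explicit capitalization factors: value every lost kWh at the delivered price
# actually paid, rather than at the capital cost of replacement capacity.
# All inputs below are assumptions for illustration.
YEARS, DISCOUNT = 35, 0.07
PV = (1 - (1 + DISCOUNT) ** -YEARS) / DISCOUNT          # present value of $1/yr

def capitalized_dollars_per_watt(price_per_kwh, loss_factor=1.0):
    """Lifetime value of one watt of loss; loss_factor scales load losses
    by the time average of (load fraction) squared."""
    return price_per_kwh * 8.76 * loss_factor * PV

explicit_A = capitalized_dollars_per_watt(0.11)                       # ~$110/MWh delivered energy
explicit_B = capitalized_dollars_per_watt(0.11, loss_factor=0.25**2)  # 25% average utilization
print(f"explicit A = ${explicit_A:.0f}/W vs the ~$5/W survey average; "
      f"explicit B = ${explicit_B:.2f}/W")
```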
 
Bringing new production of distribution or large power transformers to market, with a credible case that it can deliver capacity, will require understanding how those incentives propagate all the way down to the properties of the materials they contain, from commodity copper and aluminum to electrical steel.

Footnotes

[1] We correlated the cost outputs of the tool with publicly-reported figures from multiple sources, including US Department of Energy publications.

[2] The loss performance here is modeled to match the amorphous alloy most commonly in use for distribution transformers.

[3] This analysis, in the Technical Support Document for the final rule and for the original proposal, is extremely detailed, identifying costs as discrete as individual tank bushings and labor costs in cents per winding turn.

[4] These are 2020 values, except 2019 for US Steel. Post-2020 values in all cases are even more dramatic.

[5] For example, Hitachi’s TCO tool.

About the Author

George Hansel
