
Optimized Power Management™

Thermodynamically Accurate “Work per Watt”™ for Data Centers

Technical Framework for ASHRAE and Engineering Audiences


From PUE™ to Performance: A Thermodynamic Shift

  

Power Usage Effectiveness (PUE™) has been the dominant metric for evaluating data center efficiency since its introduction in 2008 by The Green Grid. It is defined as:


PUE™ = Total Facility Power / IT Equipment Power


This metric characterizes infrastructure overhead, especially HVAC performance. Lower PUE values historically indicated lower non-IT overhead relative to IT load, and for the server hardware of that era this correlated well with total energy efficiency.
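
As a simple illustration of the ratio (a minimal sketch using hypothetical meter readings, not measured data):

    # Minimal sketch: computing PUE from hypothetical power readings.
    # All figures are illustrative only.

    def pue(total_facility_kw: float, it_equipment_kw: float) -> float:
        """PUE = Total Facility Power / IT Equipment Power."""
        return total_facility_kw / it_equipment_kw

    # Example: 1,300 kW total facility draw supporting 1,000 kW of IT load.
    print(pue(1300.0, 1000.0))  # -> 1.3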


However, the underlying assumptions that made PUE an effective proxy no longer hold for modern IT equipment. Air-cooled server thermal behavior, variable-speed fan control, chip leakage-power characteristics, and storage vibration sensitivity have fundamentally changed the energy dynamics of workload execution.


PUE™ Does Not Measure Useful Work Output


PUE quantifies how efficiently facility HVAC systems deliver conditioned air to the intake grilles of IT equipment and handle the hot exhaust air leaving the servers, but it does not measure the rate at which that IT equipment converts energy into computational work.

Modern servers exhibit:


  • Dynamic frequency and voltage scaling (DVFS) in response to inlet temperature
  • Nonlinear fan-power increases governed by the cubic fan law
  • Race-to-idle behavior, where performance bursts dominate energy use
  • Thermally induced I/O latency increases in rotating storage


The ambient inlet temperature directly affects frequency limits, leakage power, fan RPM, and vibration-induced performance degradation. As a result, elevated ambient temperature prolongs workload duration, increasing IT energy consumption irrespective of reported CPU utilization.


A low PUE reading in a high-temperature environment, therefore, provides a false indication of overall data center energy efficiency.
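
To see how strongly the cubic fan law cited above drives IT-side power, consider a minimal sketch with hypothetical baseline figures:

    # Sketch of the cubic fan law: P2 = P1 * (RPM2 / RPM1) ** 3.
    # The baseline power and RPM values are hypothetical.

    def fan_power_w(p_baseline_w: float, rpm_baseline: float, rpm_new: float) -> float:
        """Estimate fan power at a new speed from a baseline operating point."""
        return p_baseline_w * (rpm_new / rpm_baseline) ** 3

    # A 30% fan-speed increase (e.g., 8,000 -> 10,400 RPM) more than doubles fan power:
    print(fan_power_w(20.0, 8000, 10400))  # ~43.9 W versus the 20 W baseline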


Thermal Setpoint Increases Are Now Net-Energy-Negative


HVAC efficiency has improved substantially over the last two decades. Once a site deploys modern EC fans, high-efficiency chillers, CRAH/CRAC optimizations, indirect evaporative cooling, and ASHRAE-compliant economization strategies, the marginal gains from increasing supply-air temperature diminish sharply.


Raising the ambient temperature does reduce cooling-system power, lowering the numerator of the PUE ratio and thus the reported value. However, utilities bill for energy (kWh or MWh), and the total energy required to complete workloads increases because:


  • DVFS throttling reduces processor throughput
  • Fan power increases cubically as RPM increases, often exceeding HVAC savings
  • HDDs and NVMe-HDD hybrids experience I/O slowdowns due to mechanical vibration
  • Workload completion time lengthens, increasing total IT runtime energy consumption


Controlled measurements on production servers demonstrate that total IT energy consumption increases significantly at elevated ambient temperatures, even for identical workloads on unchanged hardware.


Because PUE is a power-based ratio, this increased IT-side energy consumption is not reflected in the PUE value.
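
A back-of-the-envelope sketch (all figures hypothetical) makes the disconnect concrete: the warmer setpoint reports a better PUE while consuming more billed energy for the same workload.

    # Hypothetical comparison of two cooling setpoints running the same workload.
    # Power levels and runtimes are illustrative, not measurements.

    def total_energy_kwh(it_kw: float, overhead_kw: float, hours: float) -> float:
        return (it_kw + overhead_kw) * hours

    # Cooler setpoint: more cooling power, but the workload finishes sooner.
    pue_cool = (1000 + 350) / 1000                      # 1.35
    kwh_cool = total_energy_kwh(1000, 350, hours=10.0)  # 13,500 kWh

    # Warmer setpoint: cooling power drops, but throttling stretches the runtime
    # and fans/leakage raise IT power slightly.
    pue_warm = (1050 + 250) / 1050                      # ~1.24 ("better" PUE)
    kwh_warm = total_energy_kwh(1050, 250, hours=12.0)  # 15,600 kWh (more energy)

    print(pue_cool, kwh_cool)
    print(pue_warm, kwh_warm)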


PUE’s Foundational Assumptions Are Obsolete


When PUE was formulated, servers typically had:


  • Fixed-frequency CPUs with minimal throttling
  • Fans operating at constant or near-constant RPM
  • Storage performance largely insensitive to chassis vibration
  • Relatively low leakage currents and predictable thermal behavior


Under these conditions, inlet temperature had limited influence on workload energy.


After ~2008, sub-30-nm semiconductor nodes and increasing HDD areal densities introduced pronounced thermal sensitivity:


  • Chip leakage power grows exponentially with temperature, increasing cooling load and triggering throttling
  • DVFS policies degrade throughput sharply once thermal limits are reached
  • Fan power rises with the cube of RPM, dominating system power under high thermal stress
  • Vibration coupling—driven by fans, PSU resonance modes, and chassis modes—materially slows HDD I/O operations


In this regime, switching from a 12 °C to a 30 °C inlet can double or triple the wall-clock time required to complete a compute or I/O-bound workload. Because every subsystem (PSUs, DIMMs, chipsets, NICs, PCIe devices, etc.) remains energized for the entire duration, total energy consumption scales with the slowdown.


PUE cannot capture this behavior because it is not defined on an energy basis.
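
The underlying arithmetic is simple: because energy is power integrated over time, a thermally induced slowdown scales workload energy roughly in proportion to the added runtime. A minimal sketch, assuming a hypothetical average platform power held constant for simplicity:

    # Sketch: total workload energy when throttling stretches wall-clock time.
    # Average platform power and slowdown factors are hypothetical.

    def workload_energy_kwh(avg_platform_kw: float, baseline_hours: float, slowdown: float) -> float:
        """Energy = average platform power x (baseline runtime x slowdown factor)."""
        return avg_platform_kw * baseline_hours * slowdown

    cool_inlet = workload_energy_kwh(0.5, baseline_hours=4.0, slowdown=1.0)  # 2.0 kWh
    hot_inlet  = workload_energy_kwh(0.5, baseline_hours=4.0, slowdown=2.5)  # 5.0 kWh
    print(cool_inlet, hot_inlet)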


The Case for “Work per Watt”™


To accurately quantify end-to-end efficiency, a modern metric must incorporate:


  • IT throughput
  • Workload completion time
  • IT power and cooling power
  • Total energy consumed over the workload duration


Work per Watt™ = Computational Work Output / Total Energy Consumed


This metric reflects actual energy productivity, not instantaneous facility power ratios. It integrates both sides of the equation:


  • Infrastructure efficiency (HVAC, power distribution)
  • IT performance (throughput, thermal behavior, I/O stability)


This aligns with emerging international trends toward energy productivity rather than power separation.
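
A minimal sketch of how the metric could be tracked in practice, using hypothetical throughput and energy totals for one reporting period ("work" being whatever unit the operator already counts, such as jobs, transactions, or inferences):

    # Sketch: Work per Watt = computational work output / total energy consumed.
    # Work units and energy totals below are illustrative placeholders.

    def work_per_watt(work_units_completed: float, total_energy_kwh: float) -> float:
        return work_units_completed / total_energy_kwh

    # Same facility at two operating points over the same period:
    print(work_per_watt(1_200_000, 13_500))  # ~88.9 work units per kWh
    print(work_per_watt(1_000_000, 15_600))  # ~64.1 work units per kWh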


TNP Optimized Power Management™ (OPM™) Methodology


True North Prognostics’ OPM™ implements a closed-loop approach using real-time telemetry from server internals:


  • On-board current and voltage sensors
  • Fan RPM and tachometer data
  • Chip thermal sensors and DVFS telemetry
  • Vibration signatures and I/O performance counters


Using this data, OPM™ dynamically adjusts thermal and workload parameters to minimize:


  • Thermal throttling
  • Fan-induced power spikes
  • Vibration-related I/O slowdowns


This yields:


  • ≥30% reduction in total energy consumption (facility + IT)
  • Lower throttling incidence and faster job completion
  • Higher asset utilization and deferred capital expenditures
  • Lower carbon emissions normalized to compute delivered


This approach aligns with ASHRAE’s broader goal of optimizing thermal envelopes, not simply maximizing allowable temperatures.
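
As a conceptual illustration only (this is not TNP's proprietary control logic), a closed loop of this kind might read platform telemetry and nudge the supply-air setpoint toward the most energy-productive operating point; all field names and thresholds below are hypothetical:

    # Conceptual sketch of a telemetry-driven setpoint adjustment step.
    # Telemetry fields, thresholds, and step sizes are hypothetical placeholders.

    from dataclasses import dataclass

    @dataclass
    class Telemetry:
        cpu_temp_c: float      # chip thermal sensor reading
        throttle_pct: float    # fraction of time spent DVFS-throttled
        fan_power_w: float     # derived from fan RPM / tachometer data
        hdd_latency_ms: float  # I/O counters capturing vibration-related slowdowns

    def adjust_setpoint(setpoint_c: float, t: Telemetry) -> float:
        """Lower the supply-air setpoint when throttling, fan power, or I/O latency
        suggest the IT side is losing more energy than the HVAC side is saving."""
        if t.throttle_pct > 0.05 or t.fan_power_w > 40.0 or t.hdd_latency_ms > 15.0:
            return setpoint_c - 0.5   # cool slightly: IT-side penalties dominate
        if t.cpu_temp_c < 60.0 and t.throttle_pct == 0.0:
            return setpoint_c + 0.5   # warm slightly: thermal headroom exists
        return setpoint_c             # hold the current operating point

    print(adjust_setpoint(24.0, Telemetry(78.0, 0.12, 55.0, 18.0)))  # -> 23.5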


Alignment with ASHRAE, Standards Bodies, and Global Trends


Organizations such as ASHRAE TC 9.9, The Green Grid, NVIDIA, the IEA, and the Green IT Promotion Council (Japan) are moving toward metrics that account for both infrastructure and IT behavior. Standards including DCRE and DPPE represent early steps toward harmonizing energy productivity with sustainability reporting.


The emerging consensus: total facility efficiency is measured not by minimizing cooling power, but by maximizing computational output per unit of total energy.


This reframes the thermal-management priority from “How warm can we operate?” to “At what thermal conditions is the entire facility most energy-productive?”


Key Engineering Takeaways


  • PUE is necessary but insufficient; it reflects HVAC cooling-air distribution and hot-air exhaust recycling, not IT energy productivity.
  • Modern IT hardware introduces nonlinear thermal, electrical, and mechanical coupling effects that invalidate PUE’s original assumptions.
  • Work per Watt™ provides a thermodynamically coherent metric for total data-center energy efficiency.
  • Optimized Power Management™ integrates IT performance characteristics with facility energy data to achieve substantial real-world energy reductions.
  • Cooling setpoints should be optimized for workload energy minimization, not PUE minimization.

