
Power management

Power management refers to the systematic control, distribution, and optimization of electrical power in devices, computer systems, and broader infrastructures to enhance energy efficiency, ensure reliability, and minimize waste. It encompasses hardware, software, and strategic techniques that balance performance needs with energy constraints, which is particularly critical in battery-powered and energy-sensitive applications. In electronics, power management relies on specialized components such as power management integrated circuits (PMICs), which integrate functions like voltage regulation, DC-DC conversion, battery charging, and low-power modes to deliver stable power while minimizing heat and extending device lifespan. Common techniques include dynamic power management (DPM), which adapts power allocation based on real-time usage, and modes such as sleep or standby that reduce activity in idle circuits. For instance, linear and switch-mode power supplies convert input sources—such as AC mains, DC adapters, or batteries—into regulated DC outputs, with switch-mode designs achieving higher efficiency (up to 95%) through high-frequency switching and filtering at frequencies from 100 kHz to 1 MHz.

In computing environments, power management architectures enable operating systems to coordinate device states, supporting standards like the Advanced Configuration and Power Interface (ACPI) for seamless transitions between active, idle, and low-power states. This facilitates features such as dynamic frequency scaling, which lowers CPU clock speeds during light workloads to conserve energy, and system-wide policies that maintain availability while cutting consumption from wall outlets or batteries. On a larger scale, in industrial and building systems, power management monitors load sharing, generator control, and alarms to prevent blackouts, optimize HVAC and lighting integration, and ensure compliance with efficiency regulations; it can also boost low input voltages (e.g., 0.4 V to 14 V) via boost converters with up to 77% efficiency.
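As a back-of-the-envelope illustration of why switch-mode designs dominate, the following sketch compares an ideal linear regulator against the switch-mode efficiency cited above. The formulas are idealized (they ignore quiescent current and switching losses), and all figures are illustrative:

```python
def linear_regulator_efficiency(v_in, v_out):
    """Ideal linear regulator: output current equals input current,
    so efficiency is bounded by the voltage ratio V_out / V_in."""
    return v_out / v_in

def dissipated_power(v_in, v_out, i_load):
    """Heat dropped across an ideal linear regulator's pass element."""
    return (v_in - v_out) * i_load

# Stepping 12 V down to 3.3 V at 1 A: a linear regulator is at best
# ~27.5% efficient and burns ~8.7 W as heat, while a well-designed
# switch-mode converter for the same conversion can approach 95%.
```

The heat figure is why high step-down ratios are almost always handled by switching converters, with linear regulators reserved for low-dropout, noise-sensitive rails.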

Motivations

Energy Conservation

Power management encompasses a suite of techniques designed to minimize energy consumption in devices and systems while maintaining acceptable performance levels. These methods dynamically adjust resource utilization to reduce overall power draw, thereby extending battery life in portable electronics and lowering operational costs in fixed installations. Central to this is the balance between computational demands and energy efficiency, ensuring that idle or low-activity components consume minimal power without compromising responsiveness.

The impetus for power management emerged prominently in the 1990s with the proliferation of portable computing devices, such as laptops, where battery life became a critical differentiator in the market. Prior to standardized approaches, vendors implemented proprietary solutions to extend runtime on battery-powered systems, driven by the limitations of early battery technology and the need for untethered mobility. This era saw the introduction of the Advanced Configuration and Power Interface (ACPI) in 1996, a collaborative standard developed by Intel, Microsoft, and Toshiba to enable operating system-directed power management, replacing earlier fragmented protocols like Advanced Power Management (APM). ACPI provided a unified framework for controlling device states, power planes, and sleep modes across hardware and software. Subsequently, the explosive growth of data centers in the early 2000s amplified efficiency needs, as server farms scaled to support internet services, e-commerce, and cloud computing. By the mid-2000s, data centers accounted for a rising share of global electricity use, prompting innovations in facility-wide power optimization to curb escalating demands that could reach several gigawatts per site. These developments built on the same ACPI-era principles but extended to server-specific controls, addressing both computing and cooling loads.
Key metrics for evaluating efficiency in power management include power density, which quantifies heat dissipation per unit area in integrated circuits—typically ranging from 1 to 2 watts per square millimeter in modern processors—and performance-per-watt ratios, which measure computational throughput relative to power input, often expressed in operations per joule. These indicators help track advancements, such as shifts from 0.5 W/mm² in early chips to higher densities today, while prioritizing designs that maximize output per unit of energy consumed. In practical applications, power management yields tangible savings; for instance, smartphones incorporating adaptive power controls, such as dynamic screen brightness and background process throttling, can extend battery life by 20-50% under typical usage patterns compared to unmanaged operation. These gains stem from holistic system-level optimizations that reduce idle power draw without user intervention. While primarily aimed at energy reduction, such techniques also indirectly mitigate thermal stress by lowering overall heat generation.
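The two metrics above reduce to simple ratios; a minimal sketch with illustrative numbers (not measurements of any particular chip):

```python
def performance_per_watt(ops_per_second, power_w):
    """Throughput per unit power; ops/s per watt is equivalent to ops per joule."""
    return ops_per_second / power_w

def power_density(power_w, area_mm2):
    """Heat flux across the die, in watts per square millimetre."""
    return power_w / area_mm2

# A hypothetical 100 W processor sustaining 1e12 ops/s delivers
# 1e10 ops per joule; a 150 W die of 150 mm^2 sits at 1 W/mm^2,
# at the low end of the 1-2 W/mm^2 range quoted above.
```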

Thermal and Reliability Benefits

Power management techniques play a crucial role in mitigating thermal challenges in computing systems by preventing overheating, which can lead to performance degradation or hardware failure. Thermal throttling, a key mechanism, automatically reduces clock speeds and voltage when junction temperatures approach critical thresholds, typically between 85°C and 105°C for modern CPUs, to maintain safe operating conditions. This process ensures that the generated heat, primarily from power dissipation governed by Joule's law where power P = I^2 R (with I as current and R as resistance), does not exceed the system's thermal limits. Junction temperature models, such as T_j = T_a + P \cdot \theta_{ja} (where T_a is ambient temperature and \theta_{ja} is junction-to-ambient thermal resistance), quantify this relationship, allowing designers to predict and control heat buildup through dynamic adjustments. Beyond immediate thermal protection, power management enhances long-term hardware reliability by alleviating stresses that accelerate degradation mechanisms. Reduced power consumption via techniques like dynamic voltage and frequency scaling (DVFS) lowers current densities, thereby mitigating electromigration—the atomic diffusion in interconnects caused by high currents—which shortens component lifespan. Similarly, lowering supply voltages decreases stress on gate dielectrics, reducing the risk of time-dependent dielectric breakdown (TDDB). Power gating, by disconnecting idle circuits, further minimizes leakage currents and voltage exposure, prolonging mean time to failure (MTTF); studies on multi-core systems demonstrate that DVFS-integrated management can balance MTTF across cores, achieving up to 18-fold improvements in reliability slack compared to temperature-only approaches. These strategies collectively extend device reliability, with electromigration-induced MTTF models showing exponential gains from even modest voltage reductions. Early implementations highlighted these benefits in mobile computing.
Intel's SpeedStep technology, introduced in 2000 with the Mobile Pentium III processors, dynamically scaled voltage and frequency to cut power draw by up to 50% in low-demand scenarios, significantly reducing CPU heat generation in battery-powered laptops and enabling quieter, more efficient operation without compromising peak performance. In contemporary large-scale environments, power management in server farms has proven instrumental; for instance, advanced cooling optimization in Google's data centers, incorporating processor-level controls, reduced cooling energy by 40%, directly lowering thermal loads and associated reliability risks across thousands of systems. These case studies underscore how targeted power adjustments not only avert thermal emergencies but also sustain hardware integrity over extended operational periods.
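The junction-temperature model above can be turned into a small calculator. A sketch assuming steady-state conditions, with illustrative parameter values:

```python
def junction_temp(t_ambient_c, power_w, theta_ja_c_per_w):
    """Steady-state junction temperature: T_j = T_a + P * theta_ja."""
    return t_ambient_c + power_w * theta_ja_c_per_w

def throttle_power_limit(t_ambient_c, t_j_max_c, theta_ja_c_per_w):
    """Maximum sustained power before T_j exceeds its limit; a thermal
    throttling loop effectively keeps dissipation below this value."""
    return (t_j_max_c - t_ambient_c) / theta_ja_c_per_w

# At 25 C ambient, 50 W through 1.2 C/W of thermal resistance puts the
# junction at 85 C, the low end of the throttling range quoted above.
```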

Economic and Environmental Impacts

Power management in computing ecosystems, particularly in data centers, plays a pivotal role in addressing economic pressures driven by escalating energy demands. In 2024, global data centers consumed approximately 415 terawatt-hours (TWh) of electricity, accounting for about 1.5% of worldwide electricity use. The rise of artificial intelligence applications has accelerated this growth, with the International Energy Agency (IEA) projecting consumption to double to around 950 TWh by 2030. This substantial footprint translates to significant operational costs, as energy expenses constitute roughly 20% of a typical data center's total cost base. Implementing power management strategies, such as optimized cooling and server utilization, can reduce energy consumption by 20-40%, directly lowering these costs by a comparable margin through decreased electricity bills and improved power usage effectiveness (PUE).

On the environmental front, effective power management mitigates the carbon footprint associated with data center operations, which contribute around 0.5% of global CO2 emissions. These efforts align with regulatory frameworks like the European Union's Energy Efficiency Directive (2012/27/EU), which mandated measures to achieve a 20% reduction in energy consumption by 2020 across sectors, including information technology, thereby promoting sustainable power practices. Looking ahead, the IEA projects that doubling the global rate of energy-efficiency improvements to 4% annually by 2030 could substantially offset rising IT energy demands, potentially curbing electricity growth and saving up to 10-15% of projected consumption through advanced power management. A notable industry example is Google's 2016 deployment of DeepMind AI for data center cooling, which achieved up to 40% energy savings on cooling systems—responsible for about 40% of total data center energy use—resulting in a 15% overall reduction in power consumption and demonstrating scalable economic and environmental benefits.
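The PUE metric referenced above is simply the ratio of total facility energy to the energy delivered to IT equipment; a sketch with illustrative numbers:

```python
def pue(total_facility_energy_kwh, it_energy_kwh):
    """Power usage effectiveness: total facility energy over IT energy.
    An ideal facility (no cooling/distribution overhead) scores 1.0."""
    return total_facility_energy_kwh / it_energy_kwh

def annual_energy_cost(avg_power_kw, price_per_kwh):
    """Cost of running a constant load for one year (8,760 hours)."""
    return avg_power_kw * 8760 * price_per_kwh

# A facility drawing 1,500 kWh for every 1,000 kWh of IT load has a
# PUE of 1.5; cutting overhead from 500 to 200 units lowers it to 1.2.
```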

Core Techniques

Dynamic Voltage and Frequency Scaling

Dynamic voltage and frequency scaling (DVFS) is a power management technique that dynamically adjusts a processor's operating voltage and clock frequency in response to workload demands, thereby reducing energy consumption while maintaining required performance levels. The core mechanism exploits the quadratic relationship between dynamic power dissipation and supply voltage in CMOS circuits, expressed as P = \alpha C V^2 f, where P is the dynamic power, C is the switched capacitance, V is the supply voltage, f is the clock frequency, and \alpha is the activity factor. By lowering both voltage and frequency, significant power savings are achieved, as power scales quadratically with voltage and linearly with frequency. For instance, a 10% reduction in voltage at constant frequency can yield approximately 19% power savings, derived from the V^2 term. The concept of DVFS originated in the 1990s within VLSI design research, with early work demonstrating its potential for energy-efficient microprocessors. A seminal contribution was the paper by Burd and Brodersen, which outlined dynamic voltage scaling principles for low-power systems, emphasizing runtime adjustments to balance performance and energy. DVFS gained commercial prominence in 2000 with the Transmeta Crusoe processor, which introduced LongRun technology—a hardware-supported DVFS scheme that automatically scaled voltage and frequency to optimize power for mobile applications. Implementation of DVFS requires integrated hardware and software components. Hardware support typically involves phase-locked loops (PLLs) to generate variable clock frequencies and voltage regulators to adjust supply levels precisely. On the software side, operating systems like Linux use the CPUFreq subsystem with governors to monitor workload and trigger scaling decisions; the ondemand governor, for example, aggressively increases frequency under high load while scaling down during low utilization to conserve power.
Despite its benefits, DVFS involves trade-offs, particularly in transition latency and performance overhead. Voltage and frequency adjustments can take tens of microseconds for frequency changes and up to milliseconds for voltage settling, especially with external regulators, potentially delaying responses to sudden workload bursts. This latency may cause temporary performance degradation during high-demand periods, as the system ramps up to higher operating points, necessitating careful tuning to minimize impacts on responsiveness. DVFS principles are also applied in graphics processing units for similar power optimizations, though specifics differ from CPU implementations.
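The quadratic dependence of dynamic power on voltage can be checked numerically; a sketch of the P = αCV²f relation described above, with all parameter values purely illustrative:

```python
def dynamic_power(alpha, c, v, f):
    """CMOS dynamic power: P = alpha * C * V^2 * f."""
    return alpha * c * v ** 2 * f

def savings_from_voltage_drop(fraction):
    """Relative dynamic power saved when V is reduced by `fraction`
    at constant frequency: 1 - (1 - fraction)^2."""
    return 1 - (1 - fraction) ** 2

# A 10% voltage reduction saves 1 - 0.9^2 = 19% of dynamic power,
# matching the figure derived from the V^2 term in the text.
# Lowering frequency in proportion (as DVFS does) saves even more,
# since energy per operation then falls roughly with V^2.
```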

Power Gating

Power gating is a low-power design technique that reduces static power consumption by completely isolating idle portions of a circuit from the power supply, thereby eliminating leakage currents in those sections. This is achieved using sleep transistors—high-threshold voltage MOSFETs connected between the logic blocks and the power rails (VDD or ground)—which act as switches to disconnect the supply when the block is inactive. The approach, originally developed as multi-threshold CMOS (MTCMOS), employs low-threshold transistors for active logic to maintain performance while high-threshold sleep transistors minimize leakage during standby. Introduced in 1995, MTCMOS laid the foundation for modern power gating by enabling efficient standby modes without significant performance degradation in active operation. In sub-90 nm processes, where subthreshold and gate leakage dominate static power (often comprising over 50% of total consumption), power gating can achieve 90-99% reduction in leakage for powered-down blocks, while dynamic power remains unaffected in active regions. To preserve computational state during power-down, designers incorporate always-on flip-flops or specialized state-retention cells that remain powered via a separate low-voltage rail, avoiding data loss upon reactivation. However, reactivation incurs a recovery latency of approximately 10-100 clock cycles to stabilize voltage and restore state, limiting its use to deeper idle periods rather than frequent short pauses. Key challenges include area overhead from the sleep transistors (headers for VDD isolation or footers for ground isolation) and associated control logic, typically adding 1-5% to the overall die area, as well as managing inrush currents during wake-up to prevent voltage droops. Early adoption in processors occurred in the early 2000s, with microarchitectural techniques for power gating of execution units proposed and evaluated in simulations of POWER4-like processors.
Today, power gating is a standard feature in mobile system-on-chips (SoCs), where fine- or coarse-grained isolation of functional blocks significantly extends battery life in standby scenarios.
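The trade-off between leakage savings and reactivation cost described above can be captured in a simple break-even test. A sketch with hypothetical parameters; real power-gating controllers use calibrated per-block thresholds:

```python
def worth_gating(idle_cycles, leakage_power_w, wakeup_cycles,
                 wakeup_energy_j, cycle_time_s):
    """Gate a block only if (a) the idle period comfortably exceeds the
    wake-up latency and (b) the leakage energy saved while gated exceeds
    the energy cost of re-powering the block (inrush, state restore)."""
    saved_j = leakage_power_w * idle_cycles * cycle_time_s
    return idle_cycles > wakeup_cycles and saved_j > wakeup_energy_j

# A block leaking 0.5 W, idle for 100,000 cycles at 1 ns/cycle, saves
# 50 uJ -- well above a 1 uJ wake-up cost -- so gating pays off; a
# 10-cycle pause does not even cover the recovery latency.
```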

Clock Gating

Clock gating is a power optimization technique employed in synchronous circuits to minimize dynamic power dissipation by selectively disabling the clock signal to inactive modules or components. The mechanism involves inserting logic gates, such as AND or OR gates, between the clock source and the clock inputs of registers or functional blocks; an enable signal controls these gates to block clock transitions when the downstream logic is idle, thereby preventing unnecessary toggling of flip-flops and reducing switching activity. This directly addresses the dynamic power component given by P_{dynamic} = \alpha C V^2 f, where the effective frequency f or activity factor \alpha is lowered for idle portions without altering voltage V or capacitance C. Two primary types of clock gating exist based on granularity: fine-grained gating, which applies at the individual register or small register group for precise control, and coarse-grained gating, which targets larger functional blocks or modules for simpler implementation. Implementation variants include latch-based gating, which uses integrated clock gating cells to combine clock buffering with enable logic in a single stage, and flop-based gating, which employs separate flip-flops for enable signals to avoid glitches but with added area and delay. These approaches ensure glitch-free clock signals, with latch-based methods offering lower overhead in high-speed designs while flop-based designs provide robustness against timing variations. The benefits of clock gating include substantial reductions in dynamic power, typically achieving 20% overall savings in superscalar processors by gating execution units, pipeline latches, and cache components with minimal performance impact or area overhead. For instance, deterministic clock gating, which predicts usage one to two cycles ahead using pipeline control signals, yields an average 19.9% power reduction across integer and floating-point workloads in an 8-issue out-of-order processor, outperforming predictive methods by avoiding unnecessary gating latency.
Overhead is low, often limited to 1-2% area increase, making it suitable for integration without significant design complexity. Clock gating saw early adoption in commercial processors, for example, in IBM's POWER4 processor introduced in 2001, where it was applied to reduce dynamic power in high-performance designs. In modern designs, it is a standard feature in ARM IP cores, such as the Cortex-R52, where architectural instructions like WFI (Wait For Interrupt) enable core-level clock disabling to eliminate most dynamic power during standby while maintaining a powered-up state for quick resumption. This technique remains integral to licensed IP blocks from vendors like ARM, ensuring broad applicability in mobile and embedded designs.
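The glitch-suppressing behavior of a latch-based integrated clock gating (ICG) cell can be sketched as a cycle-level simulation. This is a simplified behavioral model, not any vendor's cell: the enable is captured while the clock is low, so an enable change during the high phase cannot produce a truncated clock pulse:

```python
def icg_cell(clock_levels, enables):
    """Latch-based integrated clock gating: a level-sensitive latch holds
    the enable while the clock is low (transparent phase), and the gated
    clock is the AND of the clock with that held enable.
    Takes per-phase clock levels (0/1) and enable samples; returns the
    gated clock waveform."""
    gated, latched_en = [], 0
    for clk, en in zip(clock_levels, enables):
        if clk == 0:          # transparent phase: capture the enable
            latched_en = en
        gated.append(clk & latched_en)
    return gated

# Enable goes low before the second high phase: that clock pulse is
# cleanly swallowed, with no partial-width glitch on the gated clock.
```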

Processor-Level Management

Homogeneous Processor Strategies

Homogeneous processor strategies for power management focus on uniform multi-core CPUs, where all cores share identical architectures and capabilities, enabling symmetric workload handling without specialized units. These approaches leverage techniques such as per-core dynamic voltage and frequency scaling (DVFS) to independently adjust the operating points of individual cores based on their specific workloads, thereby optimizing energy use while maintaining performance. For instance, in a chip multiprocessor (CMP), per-core DVFS can reduce power dissipation by reconfiguring voltage regulators to match varying demands, achieving up to 9% total energy savings without significant performance loss. Additionally, fine-grained DVFS during cache misses in a 16-core homogeneous tiled CMP can lower peak temperatures by 8.4°C and core-dynamic energy by 39%, demonstrating its role in mitigating thermal hotspots and leakage power. Thread migration complements per-core DVFS by relocating threads between cores to consolidate workloads onto fewer active units, allowing idle cores to enter low-power states and reducing overall system energy. In homogeneous multi-core systems, this technique, often implemented via mechanisms like Thread Motion, swaps threads to align them with optimal voltage-frequency domains, improving performance by up to 20% over coarser DVFS methods while minimizing migration latency. Idle core parking further enhances efficiency by dynamically disabling unused cores, flushing their caches, and redirecting threads to active ones, which keeps parked cores in a near-zero power state to minimize draw. This is particularly effective in operating systems like Windows, where core parking balances energy conservation with responsiveness, though it may increase load on unparked cores.
Practical implementations in processors like the Intel Core i7 exemplify these strategies through Intel Turbo Boost Technology, which opportunistically boosts core frequencies beyond base levels while enforcing power caps to stay within the processor's limits, automatically adjusting based on workload and thermal conditions. Power budgets are monitored and controlled via the Running Average Power Limit (RAPL) interface, an Intel feature that provides real-time energy reporting for domains like the CPU package using model-specific registers, enabling software to cap consumption and integrate with tuning technologies for sustained efficiency. In homogeneous setups, RAPL facilitates accurate measurements with minimal overhead, correlating closely with external power meters for applications like microbenchmarks on multi-core systems. Despite these advantages, homogeneous strategies offer less flexibility than heterogeneous approaches, as they assume symmetric workloads and cannot tailor core types to diverse task requirements, potentially leading to suboptimal efficiency—such as linear power scaling with sublinear performance gains in high-demand scenarios. For example, homogeneous CMPs may consume up to 92.88 W peak power for demanding applications, whereas heterogeneous designs can reduce energy-delay products by 84% through better resource matching.
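RAPL exposes monotonically increasing energy counters (on Linux, for example, via the powercap sysfs interface); average power is obtained by differencing two samples over a known interval, taking care of counter wraparound. A sketch — the exact sysfs path and counter range vary by platform:

```python
def rapl_power_w(sample1_uj, sample2_uj, interval_s, max_energy_uj):
    """Average package power from two reads of a RAPL energy counter
    (e.g. energy_uj under /sys/class/powercap on Linux), handling the
    counter wrapping past its maximum range value."""
    delta_uj = sample2_uj - sample1_uj
    if delta_uj < 0:                     # counter wrapped around
        delta_uj += max_energy_uj
    return delta_uj / 1e6 / interval_s   # microjoules -> joules -> watts

# Two samples 1 s apart differing by 30,000,000 uJ imply an average
# package power of 30 W over that second.
```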

Heterogeneous Computing Approaches

Heterogeneous computing approaches in power management leverage systems comprising diverse processor types, such as combinations of central processing units (CPUs) and graphics processing units (GPUs) or architectures like ARM's big.LITTLE, to optimize energy efficiency by assigning tasks to the most suitable cores based on workload demands. In these systems, light computational loads are offloaded to low-power cores, while demanding tasks utilize high-performance ones, enabling significant efficiency gains over homogeneous setups. ARM introduced the big.LITTLE architecture in 2011, pairing high-performance "big" cores (e.g., Cortex-A15) with energy-efficient "LITTLE" cores (e.g., Cortex-A7) that share the same instruction set architecture (ISA), allowing seamless task migration. This design can achieve CPU power savings of up to 50% or more compared to homogeneous high-performance cores, with system-level reductions reaching 76% in scenarios like idle homescreen operation. Effective management in heterogeneous systems relies on runtime schedulers that profile workloads in real time to determine core assignments, often integrating heterogeneous dynamic voltage and frequency scaling (DVFS) policies tailored to the varying characteristics of different core types. These schedulers employ techniques like thread load tracking and utilization thresholds to trigger migrations between big and LITTLE clusters, using models such as global task scheduling (GTS) in big.LITTLE implementations, which supports fork, wake, and idle-pull migrations for optimal utilization. Workload characterization involves metrics like instruction mix and execution time to predict migration costs, enabling decisions that minimize overall power draw without excessive overhead. For instance, predictive DVFS adjusts frequencies per cluster, achieving energy savings in mobile applications through coordinated scaling.
Prominent examples include Qualcomm's Snapdragon processors, which employ heterogeneous multi-processing units integrating ARM-based CPUs, digital signal processors (DSPs), and GPUs to offload tasks like image processing and neural network inference to specialized, low-power accelerators, improving rendering performance by up to 40% while enhancing power efficiency. Similarly, NVIDIA's integrated CPU-GPU architectures, such as the Grace Hopper Superchip, facilitate power sharing through unified memory and high-bandwidth interconnects like NVLink-C2C, allowing dynamic allocation of computational resources across dissimilar cores to reduce data movement overhead and optimize energy for AI and high-performance computing workloads. As of 2025, the heterogeneous CPU era continues to evolve, with solutions like the L2600 family integrating multiple CPU architectures for improved efficiency in edge applications. Despite these benefits, heterogeneous approaches face challenges including task migration overhead, which can introduce latency from context switching and state transfer across dissimilar cores, potentially offsetting power gains in short-lived tasks. Thermal balancing across heterogeneous components is another key issue, as high-performance cores generate uneven heat, necessitating advanced policies to prevent hotspots and ensure reliability without uniform cooling assumptions. These challenges are addressed through prediction-based heuristics that factor in thermal and deadline constraints during scheduling.
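The core idea of energy-aware task placement — picking the core that minimizes energy rather than runtime — can be sketched as follows. The core parameters are hypothetical, and a real scheduler would also weigh deadlines, migration cost, and thermal state:

```python
def pick_core(task_cycles, cores):
    """Choose the core minimizing estimated energy for a task.
    `cores` maps a core name to (frequency_hz, active_power_w);
    energy is modeled as E = P * t with t = cycles / frequency."""
    def energy_j(name):
        freq_hz, power_w = cores[name]
        return power_w * (task_cycles / freq_hz)
    return min(cores, key=energy_j)

soc = {"big": (2.4e9, 2.0), "LITTLE": (1.2e9, 0.4)}
# The LITTLE core runs a task twice as long but at one fifth the power,
# so for latency-tolerant work it wins decisively on energy.
```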

Operating System Strategies

Sleep and Hibernation Modes

Sleep modes in operating systems, particularly those adhering to the Advanced Configuration and Power Interface (ACPI) standard, enable low-power states during periods of inactivity to conserve energy while allowing quick resumption of operations. The S3 state, known as suspend-to-RAM, maintains the system's memory contents powered while shutting down non-essential components such as the CPU, peripherals, and display. In this state, the system appears off to the user, with power consumption typically below 5 watts, depending on hardware configuration. Resuming from S3 generally takes a few seconds, as the processor and other components reinitialize rapidly without needing to reload data from storage. Hibernation, corresponding to the ACPI S4 state, provides a deeper power-saving mechanism by saving the entire system state—including memory contents, open applications, and running processes—to non-volatile storage such as a hard drive or SSD, after which the system powers off completely. This results in zero power draw during the inactive period, making it ideal for extended inactivity where battery life or energy efficiency is critical. Resume times from S4 typically range from 10 to 60 seconds, as the saved state must be read back from disk and restored to RAM before reactivation. Unlike S3, S4 eliminates ongoing power use but introduces latency due to the disk operations involved. ACPI specifications define these states and provide a standardized interface for operating systems like Windows and Linux to manage transitions, ensuring compatibility across hardware platforms. For laptops, hybrid sleep modes combine elements of S3 and S4 by writing the hibernation file to disk while initially entering a suspend-to-RAM state; this allows fast resume if power remains available, but falls back to full hibernation if the battery depletes, balancing speed and reliability. These implementations often integrate with processor-level techniques like power gating to further minimize leakage current during suspension.
Key trade-offs in these modes include the significant disk I/O overhead during hibernation entry and resume, which can strain storage resources and increase wear on SSDs over time. Security considerations are also prominent, as the hibernation file stored on disk may contain sensitive data in plain text if not encrypted; modern systems mitigate this through full-disk encryption, which protects the file without compromising the power-saving benefits.
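The choice between suspend-to-RAM and hibernation can be framed as an energy break-even: S3 draws a small continuous power, while S4 pays a fixed entry/resume cost and then draws essentially nothing. A sketch with hypothetical power and overhead figures:

```python
def hibernate_saves_energy(idle_s, s3_power_w, hibernate_overhead_j):
    """True if hibernation (~0 W while off) beats suspend-to-RAM for
    this idle period: the energy S3 would draw must exceed the fixed
    cost of writing and later restoring the hibernation image."""
    return idle_s * s3_power_w > hibernate_overhead_j

# With S3 drawing 2 W and a 600 J entry+resume cost, hibernation wins
# for any idle period longer than 300 seconds (5 minutes); policies
# like hybrid sleep effectively hedge this decision.
```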

Scheduling and Idle Management

In operating systems, scheduling and idle management are critical software mechanisms for optimizing power consumption at runtime by intelligently distributing computational tasks and handling periods of low activity. These strategies aim to balance performance with energy efficiency, particularly in battery-powered devices and data centers, by predicting workload patterns and adjusting resource allocation accordingly. Power-aware scheduling prioritizes energy minimization over maximum throughput, while idle management leverages processor sleep states to reduce leakage power when tasks are paused. Power-aware scheduling algorithms dynamically assign tasks to CPU cores based on energy profiles rather than solely on speed. A prominent example is the Energy Aware Scheduling (EAS) framework introduced in the Linux kernel version 4.4 in 2016, which extends the Completely Fair Scheduler (CFS) to incorporate per-task energy models and select the most energy-efficient core for execution. EAS uses a wake-up path that evaluates the energy impact of migrating tasks to heterogeneous cores, favoring lower-frequency options for light workloads to reduce dynamic power dissipation. This approach has been shown to improve battery life in mobile devices by up to 20% in typical usage scenarios without significant performance degradation. Idle management complements scheduling by transitioning the processor to low-power states during inactivity, minimizing static power losses from leakage currents. Modern processors define a hierarchy of idle states, known as C-states, ranging from C0 (active state with full clock speed) to deeper levels like C6 (deep power-down with core voltage reduced to near zero and context saved to cache). Operating systems use timers and prediction heuristics to enter these states; for instance, the Linux kernel's tickless idle mechanism (CONFIG_NO_HZ) delays timer interrupts based on anticipated inactivity, allowing entry into deeper C-states for durations predicted via historical workload data.
This prevents unnecessary wake-ups, reducing average power by 10-50% during idle periods depending on the depth of the state. In mobile operating systems, specialized implementations enhance these techniques. Android's Doze mode, introduced in Android 6.0 (Marshmallow) in 2015, clusters application activity into maintenance windows during idle periods, deferring background tasks and aggressively entering deep idle states to extend battery life by restricting network access and CPU usage. Similarly, Windows 10 and later versions implement Power Throttling, which caps CPU resources for background processes deemed low-priority, such as antivirus scans, allowing them to run in low-power modes while prioritizing foreground applications. These features optimize the energy-delay product (EDP), a metric that quantifies the trade-off between energy consumption and execution delay (EDP = Energy × Delay), enabling schedulers to select configurations that minimize this product for power-sensitive workloads. Such optimizations are particularly impactful in heterogeneous systems, where idle management ensures underutilized cores enter deep C-states promptly.
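The EDP metric described above is straightforward to compute; a sketch (illustrative numbers) showing how a slower but lower-energy operating point can still win on EDP:

```python
def energy_delay_product(energy_j, delay_s):
    """EDP = Energy x Delay; lower is better for power-sensitive work."""
    return energy_j * delay_s

# Halving frequency doubles delay, but if energy per run falls by more
# than half (plausible under DVFS thanks to the V^2 term), EDP improves:
#   fast point: 10 J in 1 s -> EDP 10
#   slow point:  4 J in 2 s -> EDP  8   (better despite being slower)
```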

GPU and Graphics Management

DVFS in Graphics Processing

Dynamic voltage and frequency scaling (DVFS) in graphics processing units (GPUs) adapts core principles of voltage and frequency adjustment to the unique demands of graphics workloads, enabling dynamic balancing of rendering performance and power consumption. Unlike CPU-centric DVFS, GPU implementations emphasize high-throughput parallel execution, where voltage-frequency (V-F) curves are optimized for massive thread parallelism across thousands of cores. These curves define stable operating points, with granularity as fine as 12.5 mV and 13 MHz, ensuring reliability under varying thermal and power constraints. A seminal example is NVIDIA's GPU Boost 2.0, introduced in 2013 with the Kepler architecture, which dynamically scales boost clocks based on thermal headroom and workload intensity, allowing GPUs to exceed base frequencies when conditions permit. Key techniques in GPU DVFS include frame-rate dependent scaling, which ties frequency adjustments to target frames-per-second (FPS) deadlines, such as 60 FPS requiring sub-17 ms frame times, to minimize latency violations while conserving energy. This approach uses performance counters like ALU cycles and memory reads to predict rendering times and select optimal operating performance points (OPPs) from a multi-dimensional space, including GPU and DDR memory frequencies. Additionally, power limits are enforced at the streaming multiprocessor (SM) level in architectures like NVIDIA's, where individual SMs monitor utilization and thermal sensors to apply localized frequency caps, preventing overload in parallel workloads. These methods integrate hardware feedback loops for rapid adaptation, often reducing energy to within 3% of ideal oracle predictions. In mobile GPUs, such as Qualcomm's Adreno series, DVFS yields significant power savings, with integrated CPU-GPU schemes achieving up to 20% energy reduction for gaming workloads while maintaining performance within 3% loss.
For instance, on platforms like the Snapdragon 8 Gen 3 with the Adreno 750 GPU (as of 2024), cooperative DVFS caps frequencies based on workload dominance, optimizing for GPU-intensive rendering and yielding 15-30% total system power savings across diverse games. This integration with CPU DVFS in system-on-chips (SoCs) unifies control via lookup tables or hierarchical finite state machines, enhancing overall efficiency by coordinating heterogeneous resources. Despite these benefits, GPU DVFS faces challenges from variable workloads, such as fluctuating graphics demands in games, which can induce frequency oscillations in reactive governors—rapid up-down scaling that wastes energy and increases latency. Prediction inaccuracies in frame-based control schemes exacerbate this, leading to up to 10% frame drops and suboptimal OPP selection in dynamic environments. Fine-grained predictive models, rather than reactive ones, mitigate oscillations by anticipating workload shifts, but require accurate runtime visibility into parallel execution patterns.
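A reactive frame-deadline governor of the kind described — raising the clock on missed deadlines, lowering it when there is comfortable slack — can be sketched as follows. Step size, limits, and the slack threshold are hypothetical tuning values, and the dead band between them is what damps the oscillation problem noted above:

```python
def next_frequency(freq_hz, frame_time_ms, target_ms=16.7,
                   step_hz=50e6, fmin=300e6, fmax=900e6):
    """One step of a reactive GPU DVFS governor targeting a frame
    deadline (16.7 ms ~= 60 FPS). Frames slower than the deadline
    raise the clock; frames with >20% slack lower it; anything in
    between leaves the clock alone to avoid up-down oscillation."""
    if frame_time_ms > target_ms:            # deadline missed: speed up
        freq_hz += step_hz
    elif frame_time_ms < 0.8 * target_ms:    # ample slack: save power
        freq_hz -= step_hz
    return min(max(freq_hz, fmin), fmax)
```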

Power Gating for GPU Components

Power gating for GPU components involves selectively cutting off the power supply to idle subunits, such as shader cores, compute units, and render output units (ROPs), to minimize static leakage power while the GPU processes graphics or compute workloads. This technique employs sleep transistors to isolate power domains, allowing fine-grained control at the level of core clusters or individual functional blocks like ROPs, which handle final raster operations. In architectures like AMD's RDNA (introduced in 2019), power gating is implemented per compute unit or shader array, enabling dynamic shutdown of unused portions during variable workloads to enhance overall efficiency. The primary benefit of power gating in GPU shaders and execution units is substantial leakage power reduction during idle periods, achieving up to 60% savings in shader clusters by predicting and shutting down inactive blocks across workloads. For non-shader units, such as fixed-function geometry pipelines, leakage can be cut by up to 57% through deferred gating that exploits computational imbalances. To preserve architectural state during power-down, shadow or retention flip-flops store critical data like register values, allowing quick restoration upon reactivation without full recomputation. These mechanisms integrate with broader idle detection, such as when the display is inactive, to gate larger GPU domains seamlessly. AMD's RDNA implementations further demonstrate this by gating ROPs and compute clusters, reducing idle power in integrated and discrete GPUs alike. However, power gating introduces overheads, including wake-up latency from re-powering circuits and restoring state, typically ranging from 3 cycles for fine-grained execution units to 1-10 ms for larger domains, which can lead to frame rate drops in latency-sensitive rendering if not managed carefully. Techniques like idle-time-aware prediction mitigate this by gating only when idle durations exceed break-even thresholds, ensuring performance impacts remain below 2%.

Display and Screen Optimization

Display and screen optimization focuses on techniques that reduce power consumption in display hardware connected to GPU outputs, such as LCD and OLED panels, by dynamically managing backlight, refresh rates, and data transmission. These methods address the significant energy demands of displays, which can account for 20-30% of a device's total power usage in mobile systems. By integrating sensors and adaptive algorithms, displays can respond to environmental conditions and content demands, extending battery life without compromising visual quality.

One primary technique is adaptive brightness control, which employs ambient light sensors to automatically adjust the display's backlight or brightness level based on surrounding illumination. This approach prevents unnecessarily high brightness in dim environments, where reducing backlight intensity can lower power draw substantially; studies show that adaptive systems can cut display power by up to 20% in typical mobile usage scenarios. In LCD displays, where backlights dominate energy use, such adjustments can achieve reductions of around 50% in backlight power under low-light conditions, as the sensor detects incident light and scales output accordingly.

Variable refresh rates represent another key optimization, particularly in OLED displays, where pixel self-emission allows for per-frame power scaling. These displays support dynamic rates from as low as 1 Hz for static content to 120 Hz for dynamic video, minimizing unnecessary refreshes and reducing overall power consumption by up to 22% across applications. This variability is enabled by low-temperature polycrystalline oxide (LTPO) thin-film transistor backplanes, which facilitate efficient clocking adjustments without performance penalties. The GPU plays a crucial role in display optimization by processing frame data more efficiently before transmission to the panel.
Frame buffer compression techniques, such as those using lossless or near-lossless algorithms, reduce the volume of frame data transferred over the display interface, cutting bandwidth and its associated power cost; implementations have demonstrated GPU power savings of up to 12.7% in prototype systems. Similarly, dithering methods convert higher-bit-depth images to lower-bit-depth formats while preserving perceived quality, further lowering bandwidth requirements and the power spent handling frame data in the GPU-to-display pipeline.

Established standards like Display Power Management Signaling (DPMS), introduced by the Video Electronics Standards Association (VESA) in 1993, provide a foundational protocol for coordinating power states between the GPU, video controller, and display. DPMS defines signaling levels (active, standby, suspend, and off) that allow the display to enter low-power modes during inactivity, reducing energy use across desktop and early mobile systems.

In contemporary devices, advanced implementations like LTPO displays in smartphones integrate these techniques to yield measurable battery life improvements of 15-20% compared to traditional panels, primarily through combined variable refresh rates and efficient driving circuits. As of 2025, micro-LED displays in premium devices further enhance efficiency with up to 30% lower power for equivalent brightness via self-emissive pixels that need no backlight. A representative example is always-on display (AOD) modes, which use low-power partial updates to refresh only changed portions of the screen, such as notifications or clocks, at minimal rates (e.g., 1 Hz), consuming less than 1% of battery capacity per hour while keeping essential information visible. These features, common in flagship smartphones and wearable devices, exemplify how targeted optimizations balance functionality and efficiency in GPU-driven displays.
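A toy run-length encoder shows why frame-buffer compression saves interface power on mostly-static content: long uniform runs collapse to a handful of symbols. Real display-link codecs are far more sophisticated than this sketch, and the scanline data is invented.

```python
# Toy lossless run-length encoder over scanline pixels, illustrating how
# frame-buffer compression shrinks the data sent over the display link.

def rle_encode(pixels):
    """Encode a list of pixel values as (value, run_length) pairs."""
    runs = []
    for p in pixels:
        if runs and runs[-1][0] == p and runs[-1][1] < 255:
            runs[-1] = (p, runs[-1][1] + 1)   # extend the current run
        else:
            runs.append((p, 1))               # start a new run
    return runs

def rle_decode(runs):
    return [p for p, n in runs for _ in range(n)]

# A scanline from a static UI: long background runs around a short glyph.
line = [0] * 120 + [255] * 8 + [0] * 128
encoded = rle_encode(line)
assert rle_decode(encoded) == line            # lossless round trip
ratio = len(line) / (2 * len(encoded))        # two numbers per run
# 256 pixels collapse to 3 runs, so far fewer symbols cross the interface,
# which is where the bandwidth (and power) saving comes from.
```

Natural video compresses far worse than this UI scanline, which is consistent with the observation above that the largest savings come from static or slowly changing content.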

Emerging Developments

Machine Learning Integration

Machine learning integration in power management leverages predictive models to dynamically optimize energy consumption, anticipating workloads and adjusting parameters in real time, surpassing traditional rule-based heuristics in handling variability. Reinforcement learning (RL) techniques in particular have been applied to formulate power management as a sequential decision-making problem, in which agents learn optimal policies for actions like dynamic voltage and frequency scaling (DVFS) to balance performance and energy use. For instance, DeepMind's RL-based system for data center cooling control uses deep neural networks to process sensor data and recommend adjustments, achieving a 40% reduction in cooling energy. Similarly, workload prediction models employing neural networks analyze historical patterns from hardware performance counters to forecast computational demands, enabling proactive scaling that minimizes idle power waste.

In mobile system-on-chips (SoCs), on-device implementations facilitate localized power optimization without cloud dependency, reducing latency and enhancing privacy. Qualcomm's AI Engine, integrated into Snapdragon SoCs since the early 2020s, incorporates dedicated neural processing units (NPUs) to run lightweight models that profile user-specific workloads and adjust DVFS governors accordingly, contributing to overall power efficiency in AI-driven tasks. Complementary approaches, such as convolutional neural networks (CNNs) deployed via TensorFlow Lite, have demonstrated practical efficacy; one study on industrial mobile terminals used an on-device CNN to predict environmental conditions for targeted heating control, yielding an 86% extension in battery life under cold-storage scenarios.

These ML-driven strategies offer 10-30% improvements in energy efficiency over conventional methods, particularly in variable-load environments like data centers, by adapting to non-stationary patterns that static rules cannot capture. For example, RL-based DVFS policies on multi-core processors have reported up to 20% energy reductions compared to standard governors while meeting performance deadlines.
Deep reinforcement learning for multi-task systems further achieves 3-10% savings in dynamic power through fine-grained control. Such gains stem from the models' ability to learn complex interactions, such as inter-component dependencies and bursty workloads, fostering adaptive policies that scale efficiently across heterogeneous hardware. Despite these advantages, challenges persist in deploying ML for power management. Training overhead demands significant initial computational resources, often requiring offline simulation to mitigate on-device inference costs, which can temporarily increase power draw during policy updates. Additionally, predicting user behaviors for personalized profiles raises privacy concerns, as models may inadvertently process sensitive data; federated learning variants address this by aggregating insights without centralizing raw inputs, though adoption remains limited in resource-constrained SoCs.
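To make the RL-based DVFS idea concrete, here is a minimal sketch reduced to a one-step (bandit-style) Q-learner that maps coarse utilization bins to frequency levels. The bins, reward weights, and training loop are invented toy choices, not a production governor.

```python
# Toy epsilon-greedy Q-learner for DVFS: states are utilization bins,
# actions are frequency levels, and the reward penalizes both energy
# (higher at higher frequency) and unmet demand. All numbers invented.
import random

LEVELS = 3   # frequency levels 0 (lowest) .. 2 (highest)
BINS = 3     # utilization bins: 0 low, 1 medium, 2 high

def reward(level, util_bin):
    energy_cost = 0.2 * level                        # power grows with f
    perf_penalty = 1.0 if util_bin > level else 0.0  # demand left unmet
    return -(energy_cost + perf_penalty)

def train(steps=3000, alpha=0.5, eps=0.2, seed=1):
    rng = random.Random(seed)
    q = [[0.0] * LEVELS for _ in range(BINS)]
    for _ in range(steps):
        util = rng.randrange(BINS)                   # observed utilization
        if rng.random() < eps:                       # explore
            level = rng.randrange(LEVELS)
        else:                                        # exploit current policy
            level = max(range(LEVELS), key=lambda a: q[util][a])
        q[util][level] += alpha * (reward(level, util) - q[util][level])
    return q

q = train()
policy = [max(range(LEVELS), key=lambda a: q[s][a]) for s in range(BINS)]
# The learned policy maps low utilization to the lowest frequency level and
# high utilization to the highest, recovering the energy/performance
# trade-off that hand-tuned governors encode as fixed thresholds.
```

A real RL-based governor would use a richer state (counters, temperature, deadlines) and a learned value function rather than a table, but the structure of learning a policy from an energy-plus-performance reward is the same.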

Approximate and Adaptive Computing

Approximate computing techniques in power management intentionally relax computational accuracy to achieve substantial reductions in power consumption, particularly in applications where minor errors do not significantly impact overall functionality. These methods exploit the inherent error resilience of certain workloads by trading accuracy for efficiency, enabling lower-power operation without requiring exact results. Key concepts include voltage over-scaling, in which supply voltage is reduced below nominal levels, often to near-threshold operation, to minimize dynamic power, which is proportional to the square of the voltage; this can induce soft errors due to timing failures or increased susceptibility to noise, but such errors are tolerated in non-critical applications like image processing. Adaptive precision further enhances savings by dynamically adjusting data representation, such as using 8-bit integers or low-precision floating-point formats instead of 32-bit floats, which reduces both computation and memory access costs while maintaining acceptable output quality in error-tolerant domains.

Prominent examples illustrate these concepts' impact. The Eyeriss deep neural network (DNN) accelerator, developed at MIT and fabricated in a 65 nm process, employs adaptive precision and dataflow optimizations to support low-bit-width computations for convolutional neural networks, achieving energy efficiency of up to 600 GOPS/W, a 2-10x improvement over prior accelerators for inference tasks, by minimizing data movement and precision overhead. Loop perforation, another technique, selectively skips loop iterations to accelerate execution; for instance, in iterative algorithms, perforating 10-20% of iterations can yield 1.5-3x speedups with controlled accuracy loss, directly translating to power savings in battery-constrained environments.
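Loop perforation can be sketched in a few lines. The workload below (a mean over synthetic samples) and the skip factor are invented stand-ins; the point is the mechanism of executing a subset of iterations.

```python
# Loop perforation sketch: execute only a subset of an error-tolerant
# reduction's iterations, trading a bounded accuracy loss for
# proportionally less work (and hence energy). Data is synthetic.

def mean_full(samples):
    return sum(samples) / len(samples)

def mean_perforated(samples, skip_factor=5):
    # Keep only every skip_factor-th iteration (a 1/skip_factor subset),
    # cutting the work by roughly (1 - 1/skip_factor).
    kept = samples[::skip_factor]
    return sum(kept) / len(kept)

samples = [float(i % 100) for i in range(10_000)]  # periodic toy signal
exact = mean_full(samples)
approx = mean_perforated(samples, skip_factor=5)
rel_error = abs(approx - exact) / exact
# The perforated mean touches only 20% of the data yet stays within a few
# percent of the exact value, the kind of trade-off perforation exploits.
```

Production perforation frameworks select which loops to perforate by profiling which subsets keep the output error within a user-specified bound, rather than applying a fixed skip factor as done here.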
These approaches find strong applications in image processing and machine learning, where outputs are robust to 5-10% errors, such as in classification or recognition models, allowing up to 50% power reductions in systems like wearables or IoT devices without perceptible quality degradation. In ML inference, quantizing weights to 8 bits can reduce computational requirements by 4x compared to 32-bit operations while preserving over 95% accuracy in tasks like image classification. However, trade-offs must be managed carefully: error-bounding techniques, such as statistical verification or formal methods, are essential to quantify and limit deviations and ensure reliability, and approximate techniques remain unsuitable for safety-critical tasks like medical diagnostics or automotive control, where even small inaccuracies could lead to failures.
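The 8-bit quantization step can be sketched as a symmetric int8 mapping with a single scale; the weight values below are arbitrary and the scheme is a simplification of what inference runtimes actually do (which adds per-channel scales and zero points).

```python
# Sketch of symmetric 8-bit weight quantization: map float weights onto
# int8 with one shared scale, then dequantize. Shows why 4x smaller values
# can preserve accuracy: the rounding error is bounded by scale / 2.

def quantize_int8(weights):
    scale = max(abs(w) for w in weights) / 127.0
    q = [round(w / scale) for w in weights]   # values in [-127, 127]
    return q, scale

def dequantize(q, scale):
    return [x * scale for x in q]

weights = [0.12, -0.5, 0.31, 0.07, -0.22]     # arbitrary example weights
q, scale = quantize_int8(weights)
restored = dequantize(q, scale)
max_err = max(abs(a - b) for a, b in zip(weights, restored))
assert all(-127 <= x <= 127 for x in q)       # fits in a signed byte
assert max_err <= scale / 2 + 1e-12           # worst case: half a step
```

Because the per-weight error is capped at half a quantization step, networks whose outputs tolerate small perturbations lose little accuracy, which is the basis for the 4x compute reduction cited above.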

    However, as shown in [21, 132], many safety-critical applications favour provable error bounds on the resulting approximate circuits, and thus the ...Missing: bounding | Show results with:bounding