
Frequency scaling

Frequency scaling, also known as frequency ramping, is a fundamental technique in computing that involves dynamically adjusting the clock frequency of a processor to balance computational performance and power consumption. This method allows systems to raise the clock frequency during high-demand workloads for faster execution and lower it during idle or low-load periods to reduce energy use and heat generation. Often implemented through operating system drivers and hardware support, frequency scaling enables real-time adaptation without manual intervention. A key advancement in frequency scaling is its integration with dynamic voltage scaling, forming dynamic voltage and frequency scaling (DVFS), where both the supply voltage and clock frequency are adjusted together. Since dynamic power consumption scales linearly with frequency and quadratically with voltage (P ∝ f · V²), DVFS can achieve significant savings, up to 70% in some scenarios, while maintaining acceptable performance levels. The technique supports both global scaling, affecting all cores simultaneously, and per-core or fine-grained scaling for more precise control in multicore environments. Frequency scaling originated in the late 1990s as mobile computing demanded better battery life, with Intel introducing SpeedStep technology in 2000 alongside the Mobile Pentium III processor, allowing frequency reductions from 650 MHz to 500 MHz. AMD followed with PowerNow! in 2001 for its mobile processors. Today, frequency scaling is a standard feature in modern CPUs, GPUs, and systems-on-chip, governed by policies such as the performance, powersave, or ondemand modes in Linux's CPUFreq subsystem, and extended to heterogeneous architectures for mobile and AI workloads. Despite its benefits, challenges include transition overheads and ensuring timing correctness in real-time systems.

Fundamentals

Definition and Principles

Frequency scaling refers to the technique of dynamically adjusting the operating frequency of a digital circuit, such as a processor, to balance performance and power consumption. In processors, this involves varying the clock speed to execute instructions more quickly or slowly as needed, enabling optimization for different workloads without altering the underlying hardware design. At its core, clock frequency denotes the rate at which a processor's clock signal oscillates, measured in hertz (Hz), or cycles per second. Each clock cycle provides the timing for basic operations, such as fetching, decoding, and executing instructions; higher frequencies therefore allow more operations per second, enhancing throughput, while lower frequencies reduce it. Instruction throughput is fundamentally tied to this frequency, as the number of cycles required per instruction (cycles per instruction, or CPI) interacts with the clock rate to determine overall performance. Power dissipation in CMOS-based processors comprises two main components: dynamic and static. Dynamic power arises from the charging and discharging of capacitances during transistor switching and is given by the equation P_{\text{dynamic}} = \alpha C V^2 f, where \alpha is the switching activity factor, C is the effective switched capacitance, V is the supply voltage, and f is the clock frequency. In contrast, static power stems from leakage currents when transistors are off; it is independent of frequency but influenced by voltage and temperature. Frequency scaling primarily affects dynamic power, which historically dominated in high-performance computing but now shares significance with static power in modern scaled technologies.
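The dynamic-power relation above can be made concrete with a short worked example. The activity factor, capacitance, voltage, and frequency values below are illustrative, not taken from any real chip:

```python
# Worked example of the dynamic-power relation P = alpha * C * V^2 * f.
# All component values are illustrative, not from a specific processor.

def dynamic_power(alpha, capacitance_f, voltage_v, frequency_hz):
    """Dynamic CMOS switching power in watts: P = alpha * C * V^2 * f."""
    return alpha * capacitance_f * voltage_v ** 2 * frequency_hz

# Illustrative operating point: activity factor 0.2, 1 nF effective
# switched capacitance, 1.2 V supply, 2 GHz clock.
p_full = dynamic_power(0.2, 1e-9, 1.2, 2e9)       # 0.576 W
p_half_freq = dynamic_power(0.2, 1e-9, 1.2, 1e9)  # halving f halves power
p_half_both = dynamic_power(0.2, 1e-9, 0.6, 1e9)  # halving f and V: 1/8 power
```

The last two lines show why voltage matters so much: frequency alone enters linearly, but lowering voltage alongside frequency compounds the savings cubically.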

Performance-Power Trade-offs

In processors, increasing the operating frequency generally yields linear performance improvements under ideal conditions, as the execution time of instructions scales inversely with frequency, allowing more instructions to complete per unit time. For parallel workloads, this linear scaling applies to both serial and parallel components, though the overall speedup is bounded by Amdahl's law, which shows how the non-parallelizable fraction limits total gains even as frequency rises. However, this performance boost comes at the cost of higher power consumption; in CMOS-based processors, dynamic power dissipation follows the relation P_{\text{dynamic}} = \alpha C V^2 f, where f is frequency, making power linearly proportional to frequency when the supply voltage V remains fixed. When voltage must scale with frequency to maintain circuit reliability, often approximately linearly, power increases super-linearly, roughly as f^2 or f^3, creating a non-linear cost that outpaces the gains. Key metrics for evaluating these trade-offs include instructions per watt (IPW), which measures computational efficiency as the ratio of executed instructions to energy consumed, and the energy-delay product (EDP), defined as EDP = E \times D, where E is total energy and D is execution delay. At fixed voltage, IPW remains constant with frequency, as both throughput and power scale linearly with f; energy per instruction is therefore independent of frequency. For power-sensitive tasks, lower frequencies reduce instantaneous power draw, aiding thermal management and peak-power constraints, even if efficiency (IPW) is unchanged. EDP decreases proportionally to 1/f at fixed voltage, improving with higher frequency, since energy for a fixed workload remains constant while delay falls. For instance, doubling frequency at fixed voltage doubles throughput and dynamic power, keeping IPW constant if the activity factor is unchanged, while EDP halves as delay halves at constant energy.
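The fixed-voltage behavior of IPW and EDP described above can be checked numerically. The IPC, capacitance, and voltage values in this sketch are illustrative:

```python
# Sketch of the fixed-voltage trade-off: IPW is unchanged by frequency
# while EDP falls as 1/f. All parameter values are illustrative.

def metrics(freq_hz, ipc=2.0, alpha=0.2, cap_f=1e-9, volt=1.2, n_instr=1e9):
    power = alpha * cap_f * volt ** 2 * freq_hz   # dynamic power, W
    throughput = ipc * freq_hz                    # instructions per second
    delay = n_instr / throughput                  # execution time, s
    energy = power * delay                        # energy for the workload, J
    ipw = throughput / power                      # efficiency metric
    edp = energy * delay                          # energy-delay product
    return ipw, edp

ipw_1ghz, edp_1ghz = metrics(1e9)
ipw_2ghz, edp_2ghz = metrics(2e9)
# Doubling f leaves IPW unchanged and halves EDP, as the text derives.
```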
These trade-offs play a critical role in extending battery life, particularly in mobile devices, where reducing frequency during low-utilization periods cuts average power draw and prolongs runtime without significantly impacting user-perceived performance. Similarly, frequency scaling mitigates heat generation by lowering power dissipation, as higher frequencies elevate thermal output through increased dynamic power, potentially necessitating advanced cooling to avoid throttling or reliability issues.

Historical Development

Early Concepts

The conceptual roots of frequency scaling emerged in the 1970s amid the transition from mainframe computers to early microprocessor-based designs, where processors operated at fixed clock frequencies that constrained adaptability to varying workloads or thermal conditions. Early microprocessors, such as the Intel 4004 introduced in 1971, ran at clock speeds under 1 MHz, typically around 740 kHz, limiting performance and efficient resource utilization because speed could not be adjusted dynamically in response to power or heat demands. These fixed-frequency architectures, common in systems like the Intel 8080 at 2 MHz in 1974, highlighted the need for scalable clock management as transistor counts grew and power dissipation became a bottleneck. A pivotal influence on early frequency scaling ideas was Dennard scaling, proposed in 1974, which established theoretical relationships between transistor dimensions, voltage, and frequency in MOSFET devices. According to the scaling rules, reducing linear dimensions by a factor K (e.g., K = 5) increases density by K^2, reduces supply voltage and current by 1/K, and scales circuit delay time (and thus inversely frequency) by 1/K, while maintaining constant power density across the chip. This framework suggested that frequency could rise proportionally with shrinking feature sizes without escalating power per unit area, enabling higher performance in denser circuits until practical limits like leakage currents emerged in later decades. Key early ideas for frequency scaling focused on clock throttling to mitigate heat in high-performance systems, particularly supercomputers where heat dissipation posed immediate challenges. In designs like the Cray-1 (1976), operating at 80 MHz, fixed high frequencies exacerbated overheating in vector processing units. Proposals in the late 1970s and 1980s advocated variable clock mechanisms in mainframes to improve efficiency without hardware redesigns.
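The Dennard scaling rules above compose neatly: capacitance and voltage each fall by 1/K while frequency rises by K, so power per circuit falls by 1/K^2, exactly canceling the K^2 density increase. A minimal sketch of that arithmetic:

```python
# Sketch of Dennard's constant-field scaling rules: shrinking dimensions by
# a factor k raises density by k^2 and frequency by k, while capacitance and
# voltage fall by 1/k, leaving power density constant.

def dennard_scale(k):
    """Relative factors after scaling linear dimensions by 1/k."""
    capacitance = 1 / k
    voltage = 1 / k
    frequency = k                  # delay shrinks by 1/k
    density = k ** 2
    power_per_circuit = capacitance * voltage ** 2 * frequency  # = 1/k^2
    power_density = power_per_circuit * density                 # = 1 (constant)
    return power_per_circuit, power_density

p_circuit, p_density = dennard_scale(5)
```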
Academic research in the early 1990s advanced these concepts, with early proposals for dynamic frequency scaling (DFS) algorithms, such as predictive scheduling techniques that adjust clock speeds based on workload patterns, laying the groundwork for practical implementations. Before the advent of dynamic voltage scaling in the 1990s, frequency management remained limited to manual interventions, such as turbo switches on personal computers that toggled between standard and accelerated modes to preserve software compatibility while addressing power variability. These approaches underscored the era's reliance on operator-controlled throttling rather than automated adaptation, paving the way for more sophisticated techniques.

Key Technological Milestones

The Advanced Configuration and Power Interface (ACPI) specification, released in December 1996 by Intel, Microsoft, Toshiba, and other collaborators, established a foundational standard for operating system-directed power management, including mechanisms for processor performance states that enabled software-controlled frequency adjustments across compatible hardware. In the late 1990s, Intel advanced frequency scaling through its SpeedStep technology, initially patented as concepts for dynamic clock-speed reduction to extend battery life in laptops, with the first commercial implementation appearing in the Mobile Pentium III processors launched on January 18, 2000, at speeds up to 650 MHz. This on-demand frequency reduction allowed processors to operate at lower speeds during idle periods, marking a key shift toward adaptive power efficiency in x86 mobile platforms. AMD responded competitively in 2001 by introducing PowerNow! with the Mobile Athlon 4 processors on May 14, offering more granular voltage and frequency adjustments than SpeedStep's binary modes and thereby improving battery life in competing notebook designs without sacrificing peak performance. The 2000s saw refinements of these technologies, with Intel's Enhanced Intel SpeedStep Technology (EIST), which expanded beyond binary switching to support multiple performance states (P-states) for finer-grained OS-controlled frequency and voltage scaling, first integrated into Pentium M processors to balance thermal constraints and power draw in mobile and desktop environments. Concurrently, ARM introduced its Intelligent Energy Manager (IEM) in 2006, a software-hardware framework for embedded systems that dynamically optimized frequency and voltage in ARM processor cores based on workload demands, significantly reducing energy consumption in low-power devices such as mobile processors when paired with compatible software libraries.
By the 2010s, frequency scaling became integral to mobile ecosystems, exemplified by Qualcomm's Snapdragon processors, which integrated dynamic voltage and frequency scaling (DVFS) prominently from the second-generation lineup in 2010, enabling smartphones to adapt core frequencies up to 1.5 GHz for efficient multitasking and multimedia while minimizing battery drain across billions of devices. Apple's A-series chips, debuting with the A4 in the first-generation iPad and the iPhone 4 in 2010, incorporated custom frequency scaling tailored to ARM-based architectures, allowing seamless adjustments between performance peaks and efficiency modes to support the closed ecosystem's demands for prolonged battery life and responsive user interfaces. These developments, building on ACPI's OS-level control, propelled frequency scaling from a niche mobile feature to a ubiquitous standard in modern processors.

Implementation Techniques

Dynamic Frequency Scaling

Dynamic frequency scaling (DFS) is a technique that modulates a processor's clock frequency in real time to match varying computational workloads, primarily through hardware components such as phase-locked loops (PLLs) and clock dividers. PLLs serve as frequency synthesizers that lock onto a reference signal and generate adjustable output frequencies, enabling seamless scaling from a base clock rate to higher turbo boosts by altering the multiplier or divider ratios within the loop. Clock dividers, by contrast, achieve lower frequencies by dividing down a higher base clock, providing a cost-effective method for downward scaling without requiring full PLL reconfiguration. This core mechanism allows processors to operate efficiently across a range of speeds, from base frequencies up to turbo modes that can exceed 2 GHz in modern implementations. The adjustment process in DFS is triggered by a combination of sensors and software controls that monitor and respond to system demands. Hardware load monitors, embedded within the processor, detect utilization levels by tracking metrics such as instruction throughput or idle cycles, signaling the need for changes. Software governors, such as the ondemand governor in the Linux cpufreq subsystem, interpret these signals and adjust the frequency dynamically based on load patterns; for instance, when CPU load surpasses a configurable threshold (typically 95%), the governor ramps the frequency up to the maximum available rate, while dropping loads prompt a proportional downward ramp after a sampling delay to avoid oscillations. This responsive control ensures that transitions occur swiftly, maintaining performance during bursts while conserving power in idle states. Frequency scaling in DFS operates at a defined granularity, typically in discrete steps of 100–200 MHz, to balance precision and hardware feasibility. Early implementations, such as Intel's Enhanced SpeedStep technology introduced in mobile processors around 2003, supported ranges like 600 MHz to 1.2 GHz with 200 MHz increments, allowing fine-tuned adjustments via processor model-specific registers (MSRs).
These steps enable processors to select from multiple performance states (P-states) without excessive overhead, facilitating rapid adaptation to load variations. Modern extensions, such as Intel's Speed Shift technology (introduced in 2015), enable hardware-driven frequency adjustments with latencies under 1 ms, minimizing OS overhead. One key advantage of DFS lies in its relative simplicity compared with more comprehensive approaches, as it avoids voltage alterations and thus the complexities of regulator synchronization. Additionally, DFS achieves faster response times, typically in the microsecond range (around 10-20 μs), thanks to the quick reconfiguration of PLLs or dividers, in contrast to the longer delays (around 2 ms or more) associated with voltage transitions in broader scaling techniques. This enables quicker workload responsiveness, such as ramping to turbo frequencies during short intensive tasks, while still supporting power trade-offs by lowering frequency to curb dynamic dissipation during low utilization.
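The governor policy described above can be sketched in a few lines. This is a minimal illustration of the ondemand-style decision rule, not the actual Linux cpufreq implementation; the frequency table and threshold are illustrative:

```python
# Minimal sketch of an ondemand-style governor policy: jump to the top
# frequency past an up-threshold, otherwise pick the lowest P-state that
# still covers the load. Not the real Linux cpufreq code.

FREQS_MHZ = [600, 800, 1000, 1200]  # available P-state frequencies
UP_THRESHOLD = 0.95                 # ramp straight to max above 95% load

def next_frequency(load, freqs=FREQS_MHZ, up_threshold=UP_THRESHOLD):
    """Return a target frequency (MHz) for a measured load in [0, 1]."""
    if load >= up_threshold:
        return freqs[-1]
    target = load * freqs[-1]       # proportional scaling on the way down
    for f in freqs:                 # lowest available frequency >= target
        if f >= target:
            return f
    return freqs[-1]
```

For example, a 97% load selects 1200 MHz immediately, while a 10% load settles at the 600 MHz floor.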

Voltage-Frequency Scaling

Dynamic voltage and frequency scaling (DVFS) extends frequency scaling by jointly adjusting the processor's supply voltage V and clock frequency f to achieve greater efficiency, as performance scales linearly with f while power consumption benefits from the quadratic voltage dependence. The total power dissipation in CMOS-based processors is modeled as P = \alpha C V^2 f + I_{\text{leak}} V, where \alpha is the switching activity factor and C is the effective switched capacitance; the first term represents dynamic power, and the second approximates static (leakage) power with I_{\text{leak}} as the leakage current. By reducing both V and f in proportion to the required performance level, DVFS enables more-than-linear power reduction; for instance, halving both f and V cuts dynamic power to one-eighth under typical workloads where dynamic power dominates, though static contributions temper the savings. In hardware implementations, DVFS operates through discrete operating performance points (OPPs), which are predefined pairs of voltage and frequency values stored in device tables and selected by the operating system kernel to match workload needs. These OPPs ensure safe and stable operation by accounting for process variations and thermal constraints, with the OS querying hardware sensors to choose the lowest-power OPP that meets deadlines. For example, on mobile platforms such as Android, governors like interactive select OPPs based on load feedback, while research-enhanced variants incorporate predictive models to forecast future workload patterns and preemptively adjust settings for smoother performance. DVFS algorithms are categorized as reactive or predictive to balance responsiveness and overhead. Reactive approaches monitor current CPU utilization or event rates and scale V and f accordingly, often using simple thresholds for quick adaptation to bursts. Predictive methods, in contrast, employ historical data or analytical models to forecast demand, enabling proactive scaling that minimizes latency from transitions.
To avoid inefficient oscillations, where frequent up and down scaling negates the savings, algorithms incorporate hysteresis, setting distinct thresholds (e.g., 10-20% apart) for raising versus lowering the operating point, thus stabilizing operation under variable loads. Hardware facilitation of DVFS relies on power management integrated circuits (PMICs) integrated alongside system-on-chips (SoCs), which regulate voltage rails and coordinate with clock generators for synchronized changes. PMICs support fast slew rates, achieving voltage transitions in approximately 10 microseconds, far quicker than early discrete solutions, to minimize performance stalls during scaling.
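The hysteresis idea above can be sketched as a single stepping rule over an OPP table. The thresholds (90% up, 70% down, i.e. 20 points apart) and the table itself are illustrative:

```python
# Sketch of OPP stepping with hysteresis: separate up and down thresholds
# keep the operating point from oscillating under a noisy load.
# The OPP table (MHz, volts) and thresholds are illustrative.

OPPS = [(600, 0.80), (1000, 0.90), (1400, 1.00), (1800, 1.10)]
UP_THRESHOLD, DOWN_THRESHOLD = 0.90, 0.70

def step_opp(index, load):
    """Move one OPP up or down, or hold inside the hysteresis band."""
    if load > UP_THRESHOLD and index < len(OPPS) - 1:
        return index + 1
    if load < DOWN_THRESHOLD and index > 0:
        return index - 1
    return index
```

A load hovering at 80% keeps the current OPP, so small fluctuations around a steady workload never trigger a voltage transition.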

Applications

In Computing Hardware

In computing hardware, frequency scaling plays a pivotal role in optimizing performance for desktops, laptops, and servers by dynamically adjusting processor clock speeds to match workload demands while respecting power and thermal envelopes. In desktop and laptop environments, Intel Turbo Boost Technology, introduced in 2008 with the Nehalem microarchitecture, enables burst scaling by opportunistically raising core frequencies above the base clock when thermal and power conditions allow, in fixed increments such as 133 MHz (one step) or 266 MHz (two steps), depending on the number of active cores. This feature enhances responsiveness in performance-oriented applications, such as gaming or content creation, by allowing the CPU to temporarily exceed its rated speed without manual overclocking. Similarly, AMD's Precision Boost, integrated into Ryzen processors since 2017, coordinates multi-core frequency adjustments in real time, evaluating factors like active core count, power delivery, and thermal headroom to maximize throughput across threads; for instance, it can sustain higher all-core clocks during parallel workloads than static frequency operation would allow. Server applications leverage frequency scaling for workload balancing in data centers, where dynamic voltage and frequency scaling (DVFS) techniques adjust CPU speeds to align with varying computational demands, optimizing energy use without sacrificing service levels. Google's deployment of DVFS in its data centers, as part of broader power-capping strategies, achieved up to a 19% reduction in peak power consumption by lowering frequencies on underloaded servers while preserving headroom for peak bursts.
This approach is particularly valuable in large-scale environments, where coordinating frequency across thousands of nodes prevents hotspots and supports scalable operations, often integrating with orchestration tools to predict and preempt load variations. Operating system support further enables these hardware capabilities through policy-driven frequency management. In Windows, Power Throttling, introduced in Windows 10, applies frequency capping to background processes, reducing power consumption by up to 11% by lowering CPU frequencies for background or low-priority tasks, which leaves foreground performance-oriented applications such as games with full scaling headroom. Linux provides similar functionality via the ondemand CPU frequency governor, which dynamically ramps frequencies based on load; during idle periods it caps clocks at minimum levels (e.g., 800 MHz on modern x86 systems) to minimize power while swiftly boosting to maximum on demand, ensuring efficient operation in server farms or desktop multitasking. In gaming rigs, frequency scaling sustains high clocks under thermal limits by throttling only when necessary, preventing performance degradation from overheating. For example, during extended sessions of resource-intensive titles, technologies like Turbo Boost or Precision Boost maintain near-peak frequencies (e.g., 4.5-5.0 GHz on recent CPUs) as long as temperatures stay below roughly 90-100°C, delivering consistent frame rates; inadequate cooling, however, can trigger throttling, reducing clock speeds and, with them, frame rates. This balance underscores frequency scaling's role in enabling reliable sustained performance without exceeding hardware safeguards.

In Power Management Systems

In battery-constrained environments such as smartphones and tablets, frequency scaling enables significant energy savings by dynamically adjusting clock speeds to match workload demands, extending battery life during light tasks like browsing or idle periods. For instance, modern smartphone systems-on-chip (SoCs) like Apple's A17 Pro, introduced in 2023, employ heterogeneous core architectures with high-performance cores operating up to 3.78 GHz and efficiency cores up to 2.11 GHz, allowing the system to shift work to lower frequencies for non-intensive operations and thereby reduce average power draw compared with fixed high-frequency operation. This aggressive downclocking prioritizes longevity over peak performance, improving battery life in real-world usage dominated by low-demand activities. In embedded and IoT devices, frequency scaling is crucial for achieving years-long operation in wearables and sensors, where processors use ultra-low-power modes with clock speeds scalable down to the kHz range during dormant states to minimize leakage and active power draw. These modes enable duty-cycled operation, in which the core wakes briefly at low frequencies (e.g., 8-32 kHz for timing-critical tasks) before returning to sleep, supporting applications like health-monitoring sensors that can run on coin-cell batteries for extended periods without recharging. Such scaling integrates with peripheral power gating to achieve sub-microwatt idle consumption, translating directly into multi-year battery life in resource-limited deployments. System-level integration of frequency scaling with operating systems further optimizes power in portable devices; for example, Apple's Low Power Mode coordinates with the SoC to disable high-performance cores and reduce efficiency-core frequencies (by approximately 23% on certain models), conserving energy during sustained tasks such as video playback by limiting unnecessary boosts and background processing.
Qualcomm's implementations exemplify adaptive frequency scaling in modems, where techniques like 5G PowerSave dynamically adjust clock rates and power states based on network load, powering down unused RF components during idle data intervals to reduce overall energy in the bursty traffic scenarios common to mobile and IoT applications. This integration ensures efficient handling of fluctuating demands while preserving battery margins in power-sensitive devices. As of 2025, frequency scaling has expanded to AI workloads in edge devices, such as Qualcomm's Snapdragon X Elite laptops, which use adaptive DVFS to balance power and performance for on-device inference, and to automotive systems such as NVIDIA's DRIVE platforms, which employ fine-grained scaling for energy efficiency in advanced driver-assistance systems (ADAS) and autonomous driving.
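The duty-cycled operation described above lends itself to a back-of-envelope model: average power is the duty-weighted mix of active and sleep power, and battery life follows directly. All figures in this sketch (a hypothetical sensor node on a 220 mAh / 3 V coin cell) are illustrative, not from a specific part:

```python
# Back-of-envelope duty-cycle model for an IoT sensor node. Average power
# is the duty-weighted mix of active and sleep power; all figures are
# illustrative, not from a specific device.

def average_power_uw(active_uw, sleep_uw, duty):
    """Average power (microwatts) for an active duty cycle in [0, 1]."""
    return duty * active_uw + (1 - duty) * sleep_uw

def battery_life_years(capacity_mah, voltage_v, avg_uw):
    """Runtime of an ideal battery delivering constant average power."""
    energy_j = capacity_mah / 1000 * 3600 * voltage_v  # mAh -> joules
    return energy_j / (avg_uw * 1e-6) / (3600 * 24 * 365)

# Waking 0.1% of the time at 5 mW and sleeping at 2 uW, a 220 mAh / 3 V
# coin cell lasts on the order of a decade.
avg = average_power_uw(5000, 2, 0.001)          # about 7 uW average
years = battery_life_years(220, 3.0, avg)
```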

Challenges and Limitations

Thermal and Efficiency Issues

Thermal constraints impose significant limits on frequency scaling in processors, as excessive heat generation necessitates throttling to protect silicon integrity. When core temperatures approach or exceed the maximum junction temperature (Tjmax), typically around 100°C for many processors, the system automatically reduces clock frequencies so that heat can dissipate more effectively. This protective mechanism can cause substantial performance losses in sustained workloads where heat buildup is prolonged. The breakdown of Dennard scaling around 2006 has exacerbated efficiency challenges in frequency scaling: voltage reductions no longer track feature-size shrinkage linearly, so power consumption escalates super-linearly with increasing frequency. In multi-core architectures, this power wall manifests as "dark silicon," where limited thermal and power budgets prevent simultaneous activation of all transistors or cores at full speed. Seminal studies project that at the 22 nm technology node, approximately 21% of a chip's area must remain powered off, rising to over 50% at 8 nm under conventional scaling assumptions. At elevated frequencies, leakage currents further undermine efficiency by dominating static power dissipation in advanced nodes. In processes such as 7 nm, static power, primarily from leakage, can constitute a significant portion of total consumption without targeted reduction techniques, shifting the power profile away from dynamic switching costs. Basic mitigation relies on enhanced cooling to alleviate these barriers, enabling processors to sustain higher frequencies by improving heat extraction from the die. Air-based heatsinks and liquid cooling systems, for example, extend the operational envelope before throttling activates, though they cannot fully overcome the underlying process limits.
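The throttle-at-Tjmax behavior described above can be sketched as a toy control step: drop one frequency level when the die reaches Tjmax, and climb back only after cooling past a lower re-arm point so the clock does not oscillate around the limit. Frequencies and temperatures here are illustrative:

```python
# Toy control step for thermal throttling: throttle down at Tjmax, recover
# only once the die has cooled past a lower re-arm point. The frequency
# levels and temperatures are illustrative.

LEVELS_MHZ = [2000, 3000, 4000, 5000]
TJMAX_C = 100.0    # throttle at or above this junction temperature
REARM_C = 85.0     # recover only once comfortably below Tjmax

def throttle_step(level, temp_c):
    """Return the next frequency-level index given the die temperature."""
    if temp_c >= TJMAX_C and level > 0:
        return level - 1            # shed heat
    if temp_c <= REARM_C and level < len(LEVELS_MHZ) - 1:
        return level + 1            # headroom restored
    return level                    # hold between the two thresholds
```

The gap between the two thresholds plays the same stabilizing role as the hysteresis band in DVFS governors.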

Future Directions

Emerging research in frequency scaling increasingly incorporates machine learning to enable predictive dynamic voltage and frequency scaling (DVFS) in data centers, allowing systems to anticipate workload fluctuations and adjust frequencies proactively for greater efficiency. These approaches learn patterns in CPU demand to forecast needs, achieving energy savings compared with purely reactive methods. Near-threshold computing (NTC) represents another promising direction, operating processors at voltages near or below the transistor threshold (sub-0.5 V, typically 400-500 mV) to achieve substantial efficiency gains while maintaining fine-grained frequency control. This paradigm can deliver 10x or greater energy-efficiency improvements at constant performance, as demonstrated in architectural designs that mitigate performance degradation through multi-core clustering and shared resources. In the context of AI chips, NTC supports sustainable AI inference with reduced power draw. Early explorations into quantum and optical computing aim to transcend traditional silicon frequency limits through frequency-adaptive mechanisms in photonic systems. Research from 2023-2025 highlights the use of quantum dots in scalable photonic quantum computers, where entangled photon pairs enable adaptive frequency encoding to support fault-tolerant operation beyond classical scaling barriers. Complementary advances in silicon-chip-based photonic clocks facilitate precise quantum synchronization, with theoretical frameworks establishing frequency-stability limits that could enable hybrid classical-quantum frequency control in future processors. With sustainability at the forefront, AI-optimized frequency scaling is projected to contribute to significant power reductions in global data centers by 2030, countering rising demand from AI workloads.
For instance, integrations such as Google DeepMind's data-center cooling optimization have already achieved 40% reductions in cooling energy, a major component of data-center power, through predictive control, suggesting that broader application to DVFS could yield comparable efficiencies across operations. Overall projections indicate that such optimizations could help limit net power growth, potentially saving hundreds of terawatt-hours annually as data-center electricity demand approaches 945 TWh by 2030.

References

  1. [1]
    Dynamic Frequency Scaling - an overview | ScienceDirect Topics
    Dynamic voltage and frequency scaling (DVFS) is defined as a power management technique that allows real-time adjustment of a processor's operating frequency ...
  2. [2]
    Chapter 17. Tuning CPU frequency to optimize energy consumption
    CPUfreq, also referred to as CPU speed scaling, is the infrastructure in the Linux kernel that enables it to scale the CPU frequency in order to save power. CPU ...
  3. [3]
    Dynamic Voltage and Frequency Scaling - Arm Developer
    ... Frequency Scaling (DVFS) is an energy saving technique that exploits: The linear relationship between power consumption and operational frequency. The ...Missing: definition | Show results with:definition
  4. [4]
  5. [5]
    Intel Technology In New Mobile Pentium® III Processors Turbo ...
    SANTA CLARA, Calif., Jan. 18, 2000 -- Intel Corporation today introduced mobile Pentium® III processors featuring Intel® SpeedStep™ technology at 650 and ...
  6. [6]
  7. [7]
    CPU Frequency Scaling - Zephyr Project Documentation
    The CPU Frequency Scaling subsystem in Zephyr provides a framework for SoC's to dynamically adjust their processor frequency based on a monitored metric and ...<|control11|><|separator|>
  8. [8]
  9. [9]
    Dynamic Voltage And Frequency Scaling - ScienceDirect.com
    Dynamic voltage and frequency scaling (DVFS) is defined as a power management technique that allows real-time adjustment of a processor's operating ...
  10. [10]
    Dynamic Voltage and Frequency Scaling (DVFS)
    In DVFS, the voltage levels of the targeted power domains are scaled in fixed discrete voltage steps. Frequency-based voltage tables typically determine the ...<|control11|><|separator|>
  11. [11]
    [PDF] Quantifying Performance
    Cycle time = time between ticks = seconds per cycle. Clock rate (frequency) = cycles per second (1 Hz = 1 cycle/sec). • A 200 MHz clock has a cycle time of …
  12. [12]
    [PDF] EEC 216 Lecture #1: CMOS Power Dissipation and Trends
    Total Power: To reduce power, minimize each term – starting with the biggest! Historically, biggest has been dynamic power…
  13. [13]
    [PDF] A Static Power Model for Architects - cs.wisc.edu
    Power consumption in CMOS circuitry is classified as either dynamic or static (Figure 3). Dynamic power dissipation occurs during state changes (i.e., when ...
  14. [14]
    Workload frequency scaling law: derivation and verification
    This article presents equations that relate to workload utilization scaling at a per-DVFS subsystem level.
  15. [15]
    FLEXDP | Proceedings of the 19th ACM International Conference on ...
    May 17, 2022 · We focus on different scalarizations of the problem by optimizing for performance, energy consumption, as well as energy-delay product (EDP) and ...
  16. [16]
    404 Not Found
    No readable text found in the HTML.<|control11|><|separator|>
  17. [17]
    [PDF] When Extending Battery Life, Two Processors Are Often Better Than ...
    Thus, to extend the battery life, the average current consumption must be reduced. Typical high-end processors have clock scaling and other power-saving ...
  18. [18]
    Microprocessors: the engines of the digital age - PubMed Central - NIH
    the increase in clock speed from just under 1 MHz in the 1970s to around 3–4 GHz today, a figure that is limited by power rather by the ...
  19. [19]
    [PDF] Clock Frequency Scaling Trends - Robert Dick
    Clock Frequency Scaling Trends. Page 2. History. • ENIAC, 100 kHz, 1946. • Intel 8080, 2MHz, 1974. • IBM ... • Dynamic Frequency Scaling. • Fabrication Processes.<|separator|>
  20. [20]
    Four New Ways to Chill Computer Chips - IEEE Spectrum
    Over the years, chipmakers have used tricks like throttling back clock speeds and putting multiple microprocessor cores on a chip to spread out the heat.
  21. [21]
    Preexisting ACPI Specifications - UEFI Forum
    The ACPI open standard for device configuration and power management by the OS was first released in December 1996. It was originally developed by Intel, ...
  22. [22]
    [PDF] Enhanced Intel SpeedStep Technology for the Intel Pentium M ...
    This paper will introduce the processor power state levels (P-states), and map them to how Enhanced Intel. SpeedStep Technology transitions are made. Other ...
  23. [23]
    TSMC and ARM Collaboration Achieves Significant Power ...
    Jul 18, 2006 · ARM Intelligent Energy Manager (IEM) technology supports dynamic voltage and frequency scaling, and is now being extended to include leakage ...
  24. [24]
    Qualcomm Ships First Dual-CPU Snapdragon Chipset
    May 31, 2010 · Third-generation Snapdragon solutions feature two application processor cores running up to 1.2 GHz to enable advanced smartphones.
  25. [25]
    2010: The year Apple also became a chip company - CNET
    Mar 24, 2011 · 2010: The year Apple also became a chip company. The market share numbers for chipmakers Intel and Advanced Micro Devices changed little in 2010 ...
  26. [26]
    [PDF] Configuration of Phase Fractional Dividers - NXP Semiconductors
    The frequency switch is much faster than a PLL, and this feature is useful for supporting dynamic voltage and frequency scaling (DVFS).
  27. [27]
    CPU Performance Scaling — The Linux Kernel documentation
    Summary of the on-demand governor in the cpufreq subsystem.
  28. [28]
    [PDF] Analysis of The Enhanced Intel® Speedstep® Technology of the ...
    600 MHz, 800 MHz, 1000 MHz, 1200 MHz. The optimal point (performance) is at the highest frequency that does not exceed Tj_max, with no transitions up and down. ...
  29. [29]
    A layer‐wise frequency scaling for a neural processing unit
    Sep 12, 2022 · DFS improves FPS of NN applications by 33% on average, compared with DVFS, because voltage scaling incurs about 2.15 ms delay, which is even ...
  30. [30]
    Overview of Enhanced Intel SpeedStep® Technology for Intel ...
    Enhanced Intel SpeedStep allows the OS to control P-states, using multiple frequency/voltage points for optimal performance. Frequency is software-controlled, and ...
  31. [31]
    The simulation and evaluation of dynamic voltage scaling algorithms
    Burd and R. W. Brodersen, "Energy efficient CMOS microprocessor design," Proc. ... The limit of dynamic voltage scaling and insomniac dynamic voltage scaling.
  32. [32]
    An Interpretable Machine Learning Model Enhanced Integrated ...
    As shown in Figure 8, the prediction phase comprises two steps: (1) merging the models into an integrated CPU-GPU DVFS governor framework and (2) setting CPU ...
  33. [33]
    [PDF] Predict; Don't React for Enabling Efficient Fine-Grain DVFS in GPUs
    The varied nature of GPU frequency sensitivity contradicts the conventional wisdom that reactive DVFS mechanisms can scale down to fine-grain time epochs.
  34. [34]
    Reduce power consumption in embedded designs with dynamic ...
    Jan 16, 2007 · They support small step voltage adjusting and they can complete this adjusting very quickly (~10 microseconds). Additionally, the frequency ...
  35. [35]
    [PDF] First the Tick, Now the Tock: Intel® Microarchitecture (Nehalem)
    Intel® Turbo Boost Technology. Intel® Turbo Boost Technology is an innova- tive feature that automatically allows active processor cores to run faster than the ...
  36. [36]
    AMD Ryzen™ Technology: Precision Boost 2 Performance ...
    Precision Boost 2 is a performance-maximizing technology that automatically raises clock speeds, adjusting up to 1000 times per second, based on factors like ...
  37. [37]
    [PDF] Managing Distributed UPS Energy for Effective Power Capping in ...
    To avoid this, data centers can employ power capping approaches such as CPU capping, virtual CPU management, and dy- namic voltage and frequency scaling (DVFS) ...
  38. [38]
    Customize the Windows performance power slider | Microsoft Learn
    May 18, 2022 · Power throttling saves up to 11% in CPU power by throttling CPU frequency of applications running in the background. With power throttling ...
  39. [39]
    CPU Performance Scaling - The Linux Kernel documentation
    The Linux kernel supports CPU performance scaling by means of the CPUFreq (CPU Frequency scaling) subsystem that consists of three layers of code.
  40. [40]
    Defective Heat Sinks Causing Garbage Gaming - Random ASCII
    Aug 6, 2013 · These traces showed performance problems caused by thermal throttling – CPUs overheating enough that they throttle back their performance in order to stop ...
  41. [41]
    Apple A17 Pro Processor - Benchmarks and Specs - Notebookcheck
    It offers 6 cores divided in two performance cores (up to 3.78 GHz) and four power-efficiency cores (up to 2.11 GHz). Thanks to the higher clock speed and ...
  42. [42]
    (PDF) Dynamic frequency scaling on android platforms for energy ...
    Results of the proposed approach were compared to the performance mode of this smartphone, and an average energy reduction of about 23.4% was obtained.
  43. [43]
    ARM Cortex-M0 Clock Speed - SoC
    Sep 8, 2023 · For low-power IoT edge devices that may rely on batteries or energy harvesting, lower frequencies like 8-16 MHz are common. Products needing ...
  44. [44]
    The definitive guide to ARM Cortex-M0/M0+: Low-power requirements
    Oct 10, 2016 · Today we see many low-power Cortex-M microcontrollers with very sophisticated system features which enable longer battery life. For example ...
  45. [45]
    Cortex-M0+ | Processor for Sensors, Wearables, and Low-Power Use
    Sensors in wearables that provide constant monitoring require a long battery life combined with a small silicon footprint. Ultra-low power consumption and ...
  46. [46]
    A Full Breakdown of what Low Power mode Actually does with Data ...
    Jun 27, 2022 · A Full Breakdown of what Low Power mode Actually does with Data (iPhone 13 Pro Max, Geek edition) · Disables the two performance cores entirely.
  47. [47]
    Yes, using Low Power Mode slows down your iPhone - 9to5Mac
    Feb 17, 2025 · When Low Power Mode is on, iPhone heavily reduces the use of the performance cores, relying mainly on the efficiency cores – which are slower, ...
  48. [48]
    This Is What Happens to Your iPhone Every Time You Turn On Low ...
    Aug 29, 2024 · Aside from the refresh rate drop, iOS reduces your iPhone's overall CPU and GPU performance with Low Power Mode activated. So, you may notice ...
  49. [49]
    Dynamic voltage and frequency scaling
    Sep 25, 2025 · This technique is used to switch the CPU core frequency based on load requirement. DVFS helps in balancing the power, performance, and thermal ...
  50. [50]
    Qualcomm 5G PowerSave intelligently reduces power consumption
    Apr 2, 2020 · The Qualcomm Snapdragon 5G modem-RF system uses Qualcomm 5G PowerSave to take advantage of inactive periods between data packets and power ...
  51. [51]
    Information about Temperature for Intel® Processors
    The maximum junction temperature limit varies per product and usually is between 100°C-110°C. Consult the product specification page (ark.intel.com) to ...
  52. [52]
    How Temperature Affects Your Computer's Performance - EezIT
    Performance throttling: 20-50% speed reduction under high temperatures; Computational errors: Heat causes processing mistakes that crash applications; Permanent ...
  53. [53]
    A 30 year retrospective on Dennard's MOSFET scaling paper
    Aug 6, 2025 · Ever since the breakdown of Dennard scaling around 2006 [3], the path of processor innovation is dictated by power constraints and thermal ...
  54. [54]
    Dark silicon and the end of multicore scaling - ACM Digital Library
    This paper models multicore scaling limits by combining device scaling, single-core scaling, and multicore scaling to measure the speedup potential for a set ...
  55. [55]
    [PDF] Energy Evaluation of Nanometer CMOS Technologies - SBMicro
    In modern designs, if no leakage reduction techniques are applied, the static power can account for more than 50% of total power consumption [6]. In [7] ...
  56. [56]
    PC Cooling: The Importance of Keeping Your PC Cool - Intel
    Find out how heat affects your PC and which PC cooling systems can help. Learn how heat can impact your gaming performance.
  57. [57]
    [PDF] Generalizable Machine Learning Models for Predicting Data Center ...
    This resulted in a framework that enables power conservation in data centers through dynamic voltage and frequency scaling. Similarly, Ge et al. [31] ...
  58. [58]
    Summary on predictive DVFS using ML, accuracy, and efficiency gains.
  59. [59]
    [PDF] Near Threshold Computing: Overcoming Performance Degradation ...
    The strategy is to provide 10X or higher energy efficiency improvements at constant performance through widespread application of near-threshold computing (NTC ...
  60. [60]
    Intel Builds World's Largest Neuromorphic System to Enable More ...
    Apr 17, 2024 · Hala Point, the industry's first 1.15 billion neuron neuromorphic system, builds a path toward more efficient and scalable AI.
  61. [61]
    Study Proposes Scalable Path for Photonic Quantum Computing ...
    Jul 29, 2025 · A new study proposes a detailed and experimentally grounded blueprint for building a scalable, fault-tolerant photonic quantum computer.
  62. [62]
    Quantum clock synchronization with the silicon-chip based ...
    Aug 7, 2025 · Our theoretical framework establishes quantum clock synchronization's fundamental precision limits through rigorous CRLB derivation, explicitly ...
  63. [63]
    DeepMind AI Reduces Google Data Centre Cooling Bill by 40%
    Jul 20, 2016 · Our machine learning system was able to consistently achieve a 40 percent reduction in the amount of energy used for cooling, which equates to a ...
  64. [64]
    AI is set to drive surging electricity demand from data centres ... - IEA
    Apr 10, 2025 · It projects that electricity demand from data centres worldwide is set to more than double by 2030 to around 945 terawatt-hours (TWh).