Single-core
A single-core processor is a central processing unit (CPU) that features a single processing core, enabling it to execute program instructions sequentially through one thread at a time.[1] This design relies on increasing clock speeds and architectural improvements, such as wider execution pipelines, to enhance performance by processing more instructions per clock cycle.[2] Unlike multi-core processors, single-core systems handle tasks by rapidly switching between them via the operating system's scheduler, rather than executing multiple instructions simultaneously across separate cores.[1]
Single-core processors dominated computing from the invention of the microprocessor in the early 1970s, with the Intel 4004 released in 1971 as the first commercially available single-chip CPU, initially operating at 740 kHz for basic calculator functions. Throughout the 1980s and 1990s, advancements like pipelining and superscalar execution significantly boosted their efficiency, allowing clock speeds to reach gigahertz levels by the early 2000s and enabling them to power personal computers, servers, and embedded systems effectively for single-threaded workloads.[2] They remain relevant today in low-power devices, legacy software environments where licensing is per-core, and applications optimized for sequential processing, such as certain database engines.[2]
The shift to multi-core architectures in the mid-2000s, driven by power efficiency limits on single-core clock speeds (known as the "power wall"), marked the decline of single-core dominance, as manufacturers like Intel and AMD introduced dual-core chips around 2005 to sustain performance gains through parallelism.[1] Despite this evolution, single-core performance metrics continue to influence CPU benchmarks, emphasizing the importance of per-core efficiency in modern hybrid designs that blend single- and multi-threaded capabilities.[2]
Historical Development
Origins in Early Computing
The concept of single-core processing originated with Charles Babbage's Analytical Engine, a mechanical general-purpose computer designed in 1837 that served as a conceptual precursor to modern computing systems. This device featured a central "Mill" functioning as the arithmetic processing unit, capable of addition, subtraction, multiplication, and division, paired with a separate "Store" for holding numbers and instructions, which operated in a serial fetch-execute cycle to process computations sequentially using punched cards for programming.[3]
A significant advancement occurred with the ENIAC (Electronic Numerical Integrator and Computer), completed in 1945 and publicly announced in 1946, marking the first programmable general-purpose electronic digital computer and relying on approximately 18,000 vacuum tubes to implement a single processing unit with 20 accumulators for arithmetic tasks. Capable of executing 5,000 additions per second, ENIAC dramatically accelerated computations compared to prior electromechanical machines, yet it suffered from limitations including manual programming via physical cable connections and switch settings, as well as high failure rates due to the inherent fragility and short lifespan of vacuum tubes, which often required operation at reduced power to enhance reliability.[4][5][6]
The Von Neumann architecture, proposed in 1945, provided the enduring blueprint for single-core systems by specifying sequential execution of instructions in a centralized processing unit that integrated a control unit to fetch and interpret commands from memory, an arithmetic logic unit (ALU) to perform calculations and logical operations, and a unified memory space accessible for both program instructions and data, thereby streamlining the flow of operations in early electronic computers.[7]
The invention of the point-contact transistor in December 1947 by John Bardeen and Walter Brattain at Bell Laboratories revolutionized single-core design by replacing unreliable vacuum tubes with compact semiconductor devices that amplified signals more efficiently and with lower failure rates, paving the way for smaller, faster systems in the late 1950s. This shift was demonstrated in the IBM 7090, announced in 1958 and delivered starting in 1959, which became one of the earliest commercially successful transistorized computers, achieving markedly higher speeds and reliability for scientific and data processing tasks compared to vacuum tube-based predecessors like the IBM 709.[8][9]
Advancements in Microprocessor Era
The invention of the microprocessor marked a pivotal shift in single-core processor development, enabling the integration of central processing unit functions onto a single silicon chip. In 1971, Intel introduced the 4004, recognized as the first commercially available single-chip CPU, featuring a 4-bit architecture, 2,300 transistors, and a clock speed of 740 kHz.[10][11] This design, initially developed for a calculator, demonstrated the feasibility of programmable logic on a compact scale, laying the groundwork for broader computational applications.
Subsequent advancements in the 1970s and 1980s drove the adoption of single-core processors in personal computing and specialized systems. The Intel 8080, released in 1974, emerged as the first fully general-purpose 8-bit microprocessor, with enhanced instruction sets and memory addressing that facilitated the development of early personal computers like the Altair 8800.[12][13] By 1979, the Motorola 68000 introduced a more advanced architecture, boasting 32-bit internal registers and a 16-bit external data bus, which powered innovative graphical user interface systems such as the original Apple Macintosh in 1984.[14][15] These processors exemplified the trend toward higher integration and versatility, transitioning single-core designs from niche embedded uses to mainstream consumer devices. The period also saw the onset of the RISC versus CISC architectural debate, ignited by the Stanford MIPS project in 1981, which pioneered reduced instruction set computing principles to simplify hardware and boost efficiency through streamlined pipelines.[16][17]
Guided by scaling principles like Moore's Law—first formulated by Gordon E. Moore in 1965 and later revised to predict that transistor density on integrated circuits would double approximately every two years—single-core processors achieved dramatic miniaturization and performance improvements through the 1980s and into the 1990s.[18] This exponential growth enabled designs like the Intel 80486 in 1989, which integrated over 1.2 million transistors, an on-chip cache, and memory management units on a single die.[19] Key architectural innovations further enhanced throughput: the 80486 introduced a five-stage pipelining mechanism, allowing overlapping of instruction fetch, decode, execution, and write-back phases to sustain higher clock rates without proportional power increases.[20] Building on this, the Intel Pentium, launched in 1993, adopted superscalar execution, capable of processing two integer instructions per clock cycle via dual pipelines, marking a significant leap in instruction-level parallelism for x86 architectures.[21][22] These developments underscored the era's focus on optimizing single-core efficiency through denser integration and sophisticated execution strategies.
Architectural Principles
Core Components and Functionality
A single-core processor's primary components include the Arithmetic Logic Unit (ALU), which performs essential arithmetic operations such as addition and subtraction, as well as logical operations like comparisons, on data operands supplied from registers.[23] The Control Unit (CU) serves as the orchestrator, decoding instructions fetched from memory, directing the flow of data between components, and managing the overall execution sequence through special-purpose registers like the Instruction Register (IR) and Program Counter (PC).[23] Registers form a small set of high-speed storage locations integral to the processing unit, holding temporary data, addresses, and intermediate results; for instance, the program counter tracks the address of the next instruction, while the accumulator stores operands for ALU computations in many architectures.[23]
The memory hierarchy in a single-core processor enables efficient data access, with the CPU interfacing directly with RAM for bulk storage of programs and data, and employing on-chip L1 cache—typically split into instruction and data sections—as the fastest level for frequently accessed items, providing low-latency retrieval without the coherence overhead of multi-core sharing.[24] This L1 cache, often ranging from 8 KB to 128 KB in size and embedded closest to the core, bridges the speed gap between the processor's rapid operations and slower external RAM, ensuring that common data patterns in sequential workloads are handled swiftly.[25] The interaction occurs via the CPU's load and store instructions, which move data between registers, cache, and RAM as needed during processing.[23]
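The benefit of this arrangement can be quantified with the standard average memory access time (AMAT) relation, sketched below; the hit time, miss rate, and RAM penalty are assumed example figures rather than specifications of any particular processor.

```python
# Illustrative average memory access time (AMAT) for a single-core L1 cache.
# All numbers are assumed example values, not measurements of a specific CPU.

l1_hit_time_ns = 1.0      # time to service a hit in the on-chip L1 cache
l1_miss_rate = 0.05       # fraction of accesses that miss in L1
ram_penalty_ns = 60.0     # additional latency to fetch the data from RAM

amat_ns = l1_hit_time_ns + l1_miss_rate * ram_penalty_ns
print(f"AMAT = {amat_ns:.1f} ns")            # 1.0 + 0.05 * 60 = 4.0 ns

# Without any cache, every access would pay the full RAM latency.
print(f"Speedup vs. uncached access: {ram_penalty_ns / amat_ns:.1f}x")
```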
Central to the data path are the bus systems that facilitate communication within the single-core environment: the address bus transmits memory location identifiers from the CPU to RAM or peripherals, determining where data should be read from or written to; the data bus carries the actual information being transferred bidirectionally between the CPU and memory; and the control bus conveys signals such as read/write commands to coordinate these single-threaded transfers.[26] In single-core operations, these buses support sequential instruction handling without parallel contention, allowing the processor to manage one data transaction at a time efficiently.[26]
The foundational operational mechanism is the fetch-decode-execute cycle, a repeating process where the CPU first fetches the next instruction from memory using the program counter's address, then decodes it in the control unit to identify the required operation and operands, and finally executes it by routing data to the ALU for computation or other actions.[27] This cycle is synchronized by the clock signal, which generates precise pulses—typically on rising and falling edges—to trigger actions across components, ensuring that each stage (fetch, decode, execute, and optional write-back) completes within defined time intervals without overlap in the single-core's linear workflow.[27] The clock's frequency, measured in cycles per second, thus dictates the processor's ability to advance through instructions methodically.[27]
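As a concrete illustration of this cycle, the following sketch models a hypothetical single-core accumulator machine stepping through fetch, decode, and execute one instruction at a time; the opcodes, encoding, and program are invented for illustration rather than taken from any real instruction set.

```python
# Minimal sketch of a fetch-decode-execute loop for a hypothetical
# single-core accumulator machine. Opcodes and program are illustrative.

memory = {0: ("LOAD", 10), 1: ("ADD", 11), 2: ("STORE", 12), 3: ("HALT", 0),
          10: 7, 11: 5, 12: 0}           # data lives alongside the program
acc = 0                                   # accumulator register
pc = 0                                    # program counter

while True:
    opcode, operand = memory[pc]          # fetch: read the instruction at PC
    pc += 1                               # advance PC to the next instruction
    if opcode == "LOAD":                  # decode + execute, one at a time
        acc = memory[operand]
    elif opcode == "ADD":
        acc = acc + memory[operand]
    elif opcode == "STORE":
        memory[operand] = acc
    elif opcode == "HALT":
        break

print(memory[12])                         # 12: the stored sum 7 + 5
```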
Instruction Execution Process
In a single-core processor, the instruction execution process follows a sequential cycle of fetch, decode, execute, memory access, and writeback (the five classic stages of instruction processing), where instructions are processed one at a time without concurrent execution from multiple cores. This cycle begins with the fetch stage, in which the program counter (PC) provides the memory address of the next instruction, and the instruction is retrieved from main memory or cache into the instruction register. Next, the decode stage interprets the opcode and operands, determining the required operation and accessing necessary registers via the control unit (CU). The execute stage then performs the computation using the arithmetic logic unit (ALU) or other functional units, followed by the memory stage for load/store operations that interact with data memory. Finally, the writeback stage stores results into the register file, and the PC is updated for the next instruction. This sequential handling ensures orderly execution but caps throughput at one instruction per cycle even in the ideal case.[28][29][30]
To support efficient sequential access in single-core designs, processors employ various addressing modes that specify how operands are located without relying on parallel threads. In immediate mode, the operand value is embedded directly in the instruction, allowing quick access for constants without memory fetches. Direct (or absolute) mode uses an address field in the instruction to point to the operand's exact memory location. Indirect mode loads the address from a memory location specified in the instruction, enabling deferred addressing for dynamic data. Indexed mode adds an offset from a base register or index register to the instruction's address, facilitating array traversal or relative positioning in sequential programs. These modes optimize single-core performance by minimizing memory accesses in linear instruction flows.[31][32]
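A brief sketch can make the differences between these modes concrete; the memory contents, index-register value, and mode names below are illustrative assumptions rather than any specific instruction set's encoding.

```python
# Illustrative operand resolution for common addressing modes.
# Memory contents and the index register value are arbitrary example data.

memory = {100: 42, 200: 100, 300: 7, 304: 9}
index_register = 4

def operand(mode, field):
    if mode == "immediate":          # operand is the instruction field itself
        return field
    if mode == "direct":             # field is the operand's address
        return memory[field]
    if mode == "indirect":           # field points to a cell holding the address
        return memory[memory[field]]
    if mode == "indexed":            # effective address = field + index register
        return memory[field + index_register]
    raise ValueError(mode)

print(operand("immediate", 42))      # 42
print(operand("direct", 100))        # 42  (memory[100])
print(operand("indirect", 200))      # 42  (memory[memory[200]] = memory[100])
print(operand("indexed", 300))       # 9   (memory[300 + 4])
```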
While the basic cycle can be non-pipelined (single-cycle execution), modern single-core processors typically use a five-stage pipeline to overlap stages of different instructions, though still processing sequentially without inter-core parallelism. Hazards arise in this pipeline due to dependencies, particularly data hazards where an instruction requires a result from a prior one not yet available. For instance, a read-after-write (RAW) dependency occurs when Instruction 2 needs a result that Instruction 1 has not yet written back because it is still moving through the execute stage. Single-core pipelines resolve such hazards through techniques like forwarding (bypassing data between stages) and, when necessary, stalling: the pipeline inserts no-op bubbles or holds earlier stages (e.g., freezing fetch and decode) until the dependency resolves, potentially reducing effective throughput to below one instruction per cycle. Control hazards from branches and structural hazards from resource contention (e.g., shared memory access) are similarly managed via stalling or prediction, but data dependencies highlight the sequential bottlenecks inherent to single-core execution.[33][34]
A representative example is the execution of a simple ADD instruction (e.g., ADD R1, R2, R3, adding values in registers R2 and R3 and storing in R1) in a basic five-stage pipeline:
Cycle 1: Fetch - Load ADD opcode and register indices from memory at PC; PC += 4.
Cycle 2: Decode - Identify ADD operation; read R2 and R3 values from register file.
Cycle 3: Execute - ALU computes R2 + R3, producing result in temporary register.
Cycle 4: Memory - No data memory access needed for register-register ADD (NOP equivalent).
Cycle 5: Writeback - Store ALU result in R1; update PC for next instruction.
If a subsequent instruction depends on R1 (e.g., MUL R4, R1, R5), a data hazard occurs, but it can often be resolved by forwarding the result from the execute stage without stalling; otherwise, stalling may insert bubbles until writeback completes to maintain sequential integrity.[35][36]
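The cost of this hazard can be illustrated with a simple cycle-count model of the five-stage pipeline; the model assumes one instruction issued per cycle and, in the no-forwarding case, that the register file can be written and read within the same cycle, so it is a textbook-style approximation rather than a model of any real design.

```python
# Simplified cycle-count model of a classic five-stage pipeline (IF ID EX MEM WB)
# for the RAW example above: ADD R1,R2,R3 followed by MUL R4,R1,R5.

def total_cycles(num_instructions, stall_bubbles):
    # pipeline fill (4 cycles) + one completion per cycle + any inserted bubbles
    return 4 + num_instructions + stall_bubbles

# With forwarding, the ADD result is bypassed from the end of its execute stage
# straight into the MUL's execute stage, so no bubbles are needed here.
print("with forwarding:   ", total_cycles(2, stall_bubbles=0), "cycles")   # 6

# Without forwarding, the MUL stalls in decode until the ADD's writeback,
# costing two bubbles under the same-cycle write/read assumption.
print("without forwarding:", total_cycles(2, stall_bubbles=2), "cycles")   # 8
```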
Clock Speed and Throughput
Clock speed, or clock frequency, refers to the rate at which a processor's internal clock generates pulses to synchronize operations, measured in hertz (Hz) as the number of cycles per second.[37] Each cycle represents a basic unit of time in which the processor can execute a portion of an instruction, enabling sequential processing in single-core architectures.[38] Higher clock speeds allow more cycles per unit time, theoretically increasing the potential for greater computational output, though actual performance depends on architectural efficiency.[39]
In the evolution of single-core processors, clock speeds advanced significantly from the late 1970s onward. The Intel 8086, introduced in 1978, operated at initial clock speeds of 5 MHz, marking an early milestone in microprocessor design with its 16-bit architecture. By the early 2000s, this had scaled dramatically; for instance, the Intel Pentium 4 family, introduced in 2000 at 1.4 and 1.5 GHz, later reached clock speeds above 3 GHz, reflecting improvements in fabrication processes and circuit design that pushed single-core frequencies into the gigahertz range. This progression from megahertz to gigahertz eras enabled single-core systems to handle increasingly complex tasks, though it also highlighted physical limits like power and heat constraints.
Throughput in single-core processors quantifies the effective processing output, commonly measured using Instructions Per Cycle (IPC) and Millions of Instructions Per Second (MIPS). IPC represents the average number of instructions completed per clock cycle and is calculated as:
\text{IPC} = \frac{\text{total instructions executed}}{\text{total clock cycles}}
This metric captures architectural efficiency beyond mere clock speed.[40] MIPS then derives overall throughput as:
\text{MIPS} = \text{clock rate (in MHz)} \times \text{IPC}
or more precisely, MIPS = (clock rate in Hz × IPC) / 10^6, providing a standardized measure of instruction execution rate.[41] These metrics emphasize that single-core performance arises from the interplay of frequency and per-cycle efficiency, rather than speed alone.
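A short worked example shows how the two formulas combine; the instruction count, cycle count, and clock rate below are illustrative values, not measurements of a specific processor.

```python
# Worked example of the IPC and MIPS formulas with illustrative numbers.

instructions_executed = 8_000_000
clock_cycles = 10_000_000
clock_rate_hz = 2_000_000_000                        # 2 GHz single-core clock

ipc = instructions_executed / clock_cycles           # 0.8 instructions per cycle
mips = clock_rate_hz * ipc / 1e6                     # 1600 MIPS

print(f"IPC  = {ipc:.2f}")
print(f"MIPS = {mips:.0f}")
```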
Several factors influence single-core throughput, particularly pipeline depth and branch prediction accuracy. Pipelining divides instruction execution into stages so that several instructions overlap in flight, raising throughput and enabling higher clock rates, but deeper pipelines heighten the risk of stalls from dependencies or hazards, potentially reducing sustained performance.[42] Branch prediction mitigates pipeline disruptions from conditional jumps by speculatively fetching instructions; in modern single-core designs, predictors achieve accuracies often exceeding 90%, minimizing misprediction penalties that could otherwise flush the pipeline.[43]
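The sensitivity of sustained throughput to prediction accuracy can be illustrated with a standard effective-CPI calculation; the base CPI, branch frequency, flush penalty, and accuracy values below are assumed for illustration only.

```python
# Effective CPI (cycles per instruction) in the presence of branch mispredictions.
# Base CPI, branch frequency, penalty, and accuracies are assumed example values.

base_cpi = 1.0           # ideal pipelined single-core CPI
branch_fraction = 0.20   # fraction of instructions that are branches
flush_penalty = 10       # pipeline flush cost per misprediction, in cycles

for accuracy in (0.90, 0.95, 0.99):
    mispredict_rate = 1.0 - accuracy
    effective_cpi = base_cpi + branch_fraction * mispredict_rate * flush_penalty
    print(f"accuracy {accuracy:.0%}: CPI = {effective_cpi:.2f}, "
          f"IPC = {1 / effective_cpi:.2f}")
```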
Distinctions between peak and sustained throughput are critical in evaluating single-core capabilities. Peak throughput denotes the theoretical maximum IPC or MIPS under ideal conditions, such as simple, non-branching workloads, while sustained throughput reflects real-world performance over extended periods, accounting for stalls, cache misses, and varying instruction mixes that lower effective rates.[44] For parallelizable workloads, Amdahl's Law illustrates single-core limitations: the speedup from optimizations is bounded by the fraction of serial code, as the inherently sequential nature of a single core cannot exploit parallelism, capping overall efficiency even in high-frequency designs.[45]
Power Consumption and Heat Management
In single-core processors, power consumption is dominated by two primary components: dynamic power, which arises from charging and discharging capacitances during switching activity, and static power, resulting from leakage currents in inactive transistors. The dynamic power dissipation is given by the equation P_{dynamic} = C V^2 f, where C represents the effective switched capacitance, V is the supply voltage, and f is the clock frequency; this quadratic dependence on voltage makes it particularly sensitive to scaling efforts in complementary metal-oxide-semiconductor (CMOS) designs.[46] Static power, conversely, stems from subthreshold leakage and gate oxide tunneling in CMOS transistors, becoming more prominent as transistor sizes shrink below 100 nm, where off-state currents flow even when devices are nominally off.[47]
Heat generation in single-core processors follows from Joule's law, where the power dissipated as heat in transistors is P = I^2 R, with I as the current through the resistive channel and R as the channel resistance; this self-heating effect intensifies at high frequencies, elevating local temperatures and potentially degrading performance or reliability in densely packed silicon.[48] To quantify overall thermal demands, manufacturers specify thermal design power (TDP), the maximum heat output a cooling system must handle under typical loads; for instance, high-end single-core processors like later Pentium 4 models reached TDPs around 100 W, reflecting the escalating energy needs of aggressive clock speeds.[49]
A pivotal example is the Intel Pentium 4 processor introduced in 2000, whose early models already carried TDPs of roughly 55 W at 1.5 GHz; the resulting heat management challenges contributed to the "power wall" concept by 2004—a recognition that further single-core speed increases were constrained by unsustainable power and thermal densities, prompting a paradigm shift in processor design.[50][51]
To mitigate these issues, single-core designs employ techniques like dynamic voltage and frequency scaling (DVFS), which adjusts supply voltage and clock speed in real-time based on workload to reduce dynamic power while maintaining performance; for example, lowering voltage from 1.2 V to 0.9 V can cut power by over 50% in CMOS circuits.[52] Thermal management further relies on passive heat sinks—aluminum or copper extrusions with fins to increase surface area for convection—and active cooling via fans that direct airflow over the chip package, essential for dissipating TDPs up to 100 W in desktop single-core systems without throttling.[49]
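The dynamic-power relation and the voltage-scaling claim above can be checked numerically with the sketch below; the capacitance value is assumed, and frequency is taken to scale roughly in proportion to voltage, as is typical under DVFS.

```python
# Dynamic power P = C * V^2 * f, evaluated at two operating points.
# The capacitance is an assumed example value; frequency is scaled with voltage.

C = 1e-9                            # effective switched capacitance in farads
v_high, f_high = 1.2, 2.0e9         # nominal operating point: 1.2 V at 2 GHz
v_low = 0.9                         # reduced supply voltage under DVFS
f_low = f_high * (v_low / v_high)   # frequency lowered with voltage: 1.5 GHz

def dynamic_power(c, v, f):
    return c * v**2 * f

p_high = dynamic_power(C, v_high, f_high)
p_low = dynamic_power(C, v_low, f_low)

print(f"P at 1.2 V / 2.0 GHz: {p_high:.2f} W")
print(f"P at 0.9 V / 1.5 GHz: {p_low:.2f} W")
print(f"reduction: {(1 - p_low / p_high):.0%}")   # roughly 58% lower
```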
Advantages and Limitations
Benefits in Specific Applications
Single-core processors excel in embedded systems where simplicity and predictability are paramount, such as in microcontrollers for real-time control tasks. For instance, 8-bit AVR microcontrollers, like the ATmega328P used in Arduino boards, enable precise timing in sequential operations for household appliances, including washing machines and microwave ovens, by executing most instructions in a single clock cycle without the overhead of multi-core coordination.[53][54][55] This design ensures low-latency responses critical for tasks like sensor polling and actuator control, where any scheduling variability could disrupt operation.[56]
In specialized networking and automotive applications, single-core architectures provide deterministic behavior that outperforms parallel alternatives in environments demanding unwavering reliability. Embedded routers often utilize single-core processors for packet processing, as the absence of inter-core contention guarantees consistent throughput and minimizes jitter in data flows.[57] Historically, automotive electronic control units (ECUs) have relied on single-core designs to prioritize timing predictability over computational parallelism in simpler vehicle subsystems like engine management and braking systems, though multi-core adoption is increasing for advanced features as of 2025.[58][59][60]
The efficiency of single-core processors manifests in reduced design complexity and cost savings, making them ideal for high-volume deployment. Mass-produced single-core microcontrollers can achieve unit costs under $0.10 in volumes exceeding millions, facilitating economical integration into consumer and industrial products. Moreover, their inherent predictability supports safety-critical systems, such as pacemakers, where single-threaded execution isolates tasks and guarantees bounded response times to maintain life-sustaining rhythms.[61][62]
A prominent example is the ARM Cortex-M series, which employs single-core configurations optimized for Internet of Things (IoT) devices. Operating at clock speeds typically ranging from a few MHz to over 200 MHz depending on the variant, these processors deliver extended battery life through ultra-low power modes, enabling years of operation in battery-constrained sensors and wearables while handling sequential data acquisition and transmission efficiently.[63][64] This combination of low energy draw and deterministic performance has made Cortex-M variants dominant in the IoT microcontroller market, with approximately 69% share as of 2024.[65]
Drawbacks Compared to Multi-core Systems
Single-core processors face significant scalability limitations when handling modern workloads that benefit from parallelism, as they cannot execute multiple threads simultaneously on separate execution units. This leads to bottlenecks in multitasking environments, where tasks must be serialized, resulting in underutilization and extended execution times. According to Amdahl's Law, formulated by Gene Amdahl in 1967, the theoretical speedup of a program is given by the formula:
\text{speedup} \approx \frac{1}{S + \frac{1-S}{P}}
where S is the fraction of the program that must be executed serially, and P is the number of processors. For a single-core system, P = 1 and the speedup from parallelization is exactly 1, as any parallelizable portion (1-S) provides no benefit without additional cores. This law underscores how single-core architectures inherently constrain performance in applications with even modest parallel components, preventing linear scaling with increased resources.
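Plugging illustrative numbers into the formula makes the constraint concrete: with a single core the speedup from parallelization is always 1, however large the parallel fraction; the serial fractions and core counts below are chosen purely for illustration.

```python
# Amdahl's Law: speedup = 1 / (S + (1 - S) / P), with illustrative serial fractions.

def amdahl_speedup(serial_fraction, processors):
    return 1.0 / (serial_fraction + (1.0 - serial_fraction) / processors)

for serial in (0.5, 0.1):
    for cores in (1, 2, 4):
        print(f"serial={serial:.0%}, cores={cores}: "
              f"speedup = {amdahl_speedup(serial, cores):.2f}x")
# A single core (cores=1) always yields 1.00x, regardless of the parallel fraction.
```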
In parallel-intensive applications such as video rendering, single-core processors exhibit pronounced utilization gaps compared to multi-core systems. For instance, in video encoding tasks using tools like HandBrake or Adobe Premiere Pro, single-core execution serializes frame processing and effects application, leading to prolonged render times; early benchmarks showed dual-core processors providing substantial speedup over single-core equivalents for threaded encoding workloads, as the parallelizable encoding pipeline (e.g., motion estimation) could be distributed across cores. Additionally, operating system multitasking on single-core systems incurs higher context switching overhead, where the CPU must frequently save and restore process states, reducing effective throughput in high-load scenarios with many concurrent threads.[66]
The end of classical frequency scaling around 2005, driven by the breakdown of Dennard scaling, further exacerbated single-core drawbacks by halting the exponential clock speed increases that previously masked parallelism limitations. As transistor dimensions shrank below 90 nm, power density rose uncontrollably, preventing higher frequencies without prohibitive heat and energy costs; this "power wall" shifted industry focus to multi-core designs for performance gains.[67] By the early 2010s, multi-core processors had become the standard in desktop PCs, rendering single-core systems largely obsolete for general computing due to their inability to meet demands for concurrent processing in software like browsers, media players, and productivity suites.[68]
Transition to Multi-core
Drivers for Parallelism
The transition from single-core processors to multi-core architectures in the mid-2000s was primarily driven by the "power wall," a fundamental limitation arising from the breakdown of Dennard scaling. Dennard scaling, which had historically allowed transistor dimensions to shrink while maintaining constant power density by proportionally reducing voltage, began to fail around 2004 as voltage reductions stalled due to leakage currents and manufacturing constraints, leading to exponential increases in power consumption and heat generation for higher clock speeds.[69] This shift forced processor designers to redirect transistor budgets away from increasing single-core frequency toward integrating multiple lower-frequency cores on the same die, enabling performance gains without prohibitive power penalties. A seminal example is IBM's POWER5 processor, announced in 2004, a dual-core server chip operating at up to 1.9 GHz per core, which exemplified the industry's pivot to multithreading and multi-core designs to circumvent power limitations.
Economic factors further accelerated this adoption, as continued transistor scaling under Moore's law provided more transistors per chip, but escalating power costs made single-core frequency boosts uneconomical. Instead of pursuing a hypothetical single-core processor at 4 GHz, which would consume excessive power, manufacturers like Intel opted for designs such as the Core Duo in 2006, featuring two cores each at approximately 2 GHz within a comparable transistor budget and power envelope. This redirection allowed for sustained performance improvements while aligning with thermal design power limits, making multi-core chips more cost-effective for production and deployment.
Parallelism was also propelled by evolving software ecosystems that made multi-core utilization feasible. The introduction of OpenMP in 1997 provided a standardized API for shared-memory parallel programming, enabling developers to exploit multiple threads on emerging multi-processor systems. Operating systems advanced accordingly, with Windows XP in 2001 incorporating enhanced threading support, including compatibility with hyper-threading, to schedule tasks across multiple logical processors. The rise of GPU acceleration in the mid-2000s, exemplified by NVIDIA's CUDA framework in 2006, further demonstrated the viability of parallel models for compute-intensive workloads, influencing CPU design trends.
Key industry events underscored the consensus on abandoning the single-core era. At the Hot Chips 16 conference in 2004, presentations from leading firms highlighted the inevitability of multi-core due to power and thermal barriers, marking a collective acknowledgment of the paradigm shift. AMD's release of the Athlon 64 X2 in May 2005, among the first mainstream dual-core desktop processors, further catalyzed adoption, offering superior multitasking performance at competitive power levels compared to single-core contemporaries.
Legacy and Continued Use of Single-core
Despite the widespread adoption of multi-core architectures in high-performance computing, single-core microcontrollers continue to dominate the embedded systems landscape, particularly in cost-sensitive and low-complexity applications. This dominance is driven by their minimal overhead, making them ideal for straightforward tasks without the need for parallel processing.
In consumer wearables, single-core microcontrollers excel in handling simple sensor processing, such as monitoring heart rate, steps, and motion in fitness trackers. Devices like the Fitbit Charge series and Garmin Vivoactive rely on low-power single-core units to manage these real-time data streams efficiently, prioritizing battery life over computational intensity. Their architecture allows for seamless integration with sensors while consuming minimal energy, enabling extended usage in compact form factors.
Hybrid systems further illustrate the continued relevance of single-core designs, where they serve as auxiliary units within multi-core system-on-chips (SoCs) for dedicated real-time tasks. In automotive advanced driver-assistance systems (ADAS), external single-core safety microcontrollers provide physical isolation and deterministic performance for critical functions like sensor fusion and fault monitoring, complementing the main multi-core processors. For instance, companion MCUs in centralized automotive architectures handle low-latency operations, ensuring compliance with ISO 26262 safety standards without interfering with higher-level computations.
Looking ahead, single-core processors are poised to play a key role in quantum-resistant cryptography hardware, where post-quantum cryptography (PQC) algorithms can be implemented efficiently on resource-constrained platforms. NIST's PQC standards, such as ML-KEM and ML-DSA, are evaluated on single-core reference machines to ensure compatibility with embedded devices, enabling secure key generation and encryption in hardware like secure elements. Additionally, in edge AI applications, single-core chips support tinyML models for on-device inference, with variants like the ESP32-C3—featuring a single RISC-V core—deploying lightweight neural networks for tasks such as voice recognition and anomaly detection at the edge.
The viability of single-core technology in ultra-low-power domains is exemplified by energy harvesting devices, where these MCUs operate in sleep modes as low as 1 µW to enable battery-free operation. This efficiency positions single-core MCUs as a cornerstone for sustainable, always-on applications through 2030.