Epyc
EPYC is a brand of multi-core x86-64 server microprocessors designed and marketed by Advanced Micro Devices (AMD) for enterprise data centers, cloud computing, and high-performance computing applications, utilizing the company's Zen microarchitecture in a chiplet-based design that enables high core counts and scalability.[1][2]
Introduced in 2017 with the first-generation 7001 series (codename Naples) based on Zen cores, EPYC processors have evolved through multiple generations, including the second-generation 7002 series (Rome) in 2019, third-generation 7003 series (Milan) in 2021, fourth-generation 9004 series (Genoa) in 2022, and fifth-generation 9005 series (Turin) launched in October 2024, each iteration incorporating advancements such as increased core densities up to 192 cores per socket, higher clock speeds reaching 5 GHz boosts, support for DDR5 memory with up to 12 channels, and expanded PCIe lanes for enhanced I/O connectivity.[3][4][5]
Key defining characteristics include AMD's Infinity Fabric interconnect for multi-chiplet coherence, which facilitates cost-effective scaling of compute resources while maintaining performance, and optimizations for workloads like AI inference, virtualization, and technical computing, where EPYC has demonstrated leadership in benchmarks with over 250 world records in areas such as throughput, energy efficiency, and total cost of ownership reductions compared to prior generations and competitors.[6][7][3]
History
Origins and development
The origins of AMD EPYC trace back to the Zen microarchitecture project, initiated in the early 2010s as a response to the commercial and performance shortcomings of AMD's prior Bulldozer and Piledriver architectures, which had eroded market share against Intel's offerings. AMD rehired CPU architect Jim Keller in August 2012 to lead the effort, resulting in a ground-up redesign emphasizing higher instructions per clock through features like wider execution units, improved branch prediction, and a 4-wide out-of-order pipeline.[8][9] Lisa Su's ascension to CEO in October 2014 marked a pivotal refocus, with AMD divesting non-core assets and channeling resources into Zen's completion amid financial pressures, including near-bankruptcy risks. This commitment yielded tape-out in late 2015 and initial Zen-based consumer Ryzen launches in March 2017, validating the architecture's ~52% IPC uplift over Excavator cores in independent tests.[10][8]

EPYC's server-specific development diverged by adopting a multi-chip module (MCM) design from inception, motivated by yield limitations on monolithic dies at GlobalFoundries' 14 nm node and the need for scalable core counts beyond 8-16. Each EPYC "Naples" processor combined up to four 8-core "Zeppelin" dies linked over Infinity Fabric on an organic package substrate; each die integrated its own memory controllers and I/O, as the separate centralized I/O die did not arrive until the later Rome generation. This enabled configurations from 8 to 32 cores at TDPs up to 180 W. The design, finalized for data center demands like NUMA-aware memory and PCIe 3.0 expansion, culminated in the EPYC 7001 series launch on June 20, 2017.[11][12]
Initial launch and market entry
The first-generation AMD EPYC processors, codenamed Naples and branded as the EPYC 7000 series, were previewed by AMD on March 7, 2017, as part of its re-entry into the high-performance server market following years of diminished presence against Intel's Xeon dominance.[13] These processors utilized the Zen microarchitecture, fabricated on GlobalFoundries' 14 nm process, and supported up to 32 cores per socket, with dual-socket configurations enabling up to 64 cores in a system.[14] AMD positioned EPYC as offering superior per-socket core density and the Infinity Fabric interconnect for scalable multi-chip performance compared to contemporaneous Intel offerings.[15]

Official availability began on June 20, 2017, with initial pricing ranging from under $500 for entry-level 8-core models like the EPYC 7251 (2.1 GHz base, 120 W TDP) to over $4,000 for high-end 32-core variants such as the EPYC 7601 (2.2 GHz base, 180 W TDP).[16][17] Launch-day support came from major original equipment manufacturers (OEMs) including Dell, Hewlett Packard Enterprise (HPE), Lenovo, and Supermicro, which introduced compatible server platforms emphasizing EPYC's advantages in virtualization, database, and HPC workloads.[11] Independent benchmarks at launch highlighted EPYC's competitiveness, with single-socket systems outperforming dual-socket Intel Xeon Scalable equivalents in SPEC CPU integer-rate tests by up to 50% in some configurations, though real-world adoption was tempered by ecosystem maturity and Intel's entrenched market position exceeding 95% share in x86 servers.[16][15]

Market entry faced challenges from limited initial validation cycles and supply constraints, but EPYC gained traction in cost-sensitive segments like cloud and edge computing, with early adopters citing lower total cost of ownership (TCO) due to higher core-per-dollar ratios.[17] By late 2017, volume shipments ramped through OEM channels and hyperscalers, marking AMD's first meaningful server revenue resurgence since the Opteron era, though it captured under 5% market share in the initial year amid Intel's response with new Xeon launches.[14]
Evolution through generations
The first-generation AMD EPYC processors, codenamed Naples and marketed as the EPYC 7001 series, launched on June 20, 2017. Built on the Zen 1 microarchitecture using a 14 nm process node, these processors supported up to 32 cores and 64 threads per socket on the SP3 socket, with eight DDR4 memory channels and 128 PCIe 3.0 lanes.[18] This debut marked AMD's re-entry into the server market after the Opteron line, emphasizing a multi-die design to scale core counts cost-effectively compared to monolithic dies.

The second generation, the EPYC 7002 series codenamed Rome, arrived on August 7, 2019. Shifting to the Zen 2 microarchitecture with 7 nm compute chiplets (CCDs) atop a 14 nm I/O die, it doubled maximum core counts to 64 cores and 128 threads per socket while retaining the SP3 socket.[19] Key advancements included higher clock speeds, an improved Infinity Fabric interconnect for better multi-chiplet coherence, and sustained support for eight DDR4 channels alongside 128 PCIe 4.0 lanes, yielding up to 1.8 times the performance of Naples in certain workloads.[20]

Third-generation EPYC 7003 processors, codenamed Milan, launched March 15, 2021, incorporating the Zen 3 microarchitecture on refined 7 nm CCDs. Retaining SP3 compatibility, they maintained up to 64 cores but delivered a 19% instructions per clock (IPC) uplift over Zen 2, alongside a unified L3 cache per chiplet for reduced latency.[21][22] A variant, Milan-X, introduced in 2022 with 3D V-Cache stacking up to 768 MB of L3 per socket, targeted cache-sensitive applications like databases and HPC simulations.[23]

| Generation | Codename | Launch Date | Microarchitecture | Max Cores/Threads | Socket | Key Process Nodes |
|---|---|---|---|---|---|---|
| 1st | Naples | June 20, 2017 | Zen 1 | 32/64 | SP3 | 14 nm |
| 2nd | Rome | August 7, 2019 | Zen 2 | 64/128 | SP3 | 7 nm CCD / 14 nm IOD |
| 3rd | Milan | March 15, 2021 | Zen 3 | 64/128 | SP3 | 7 nm |
| 4th | Genoa/Bergamo | November 10, 2022 (Genoa); June 13, 2023 (Bergamo) | Zen 4 / Zen 4c | 96/192 (Genoa); 128/256 (Bergamo) | SP5 | 5 nm CCD / 6 nm IOD |
| 5th | Turin | October 10, 2024 | Zen 5 | 192/384 | SP5 | 4 nm / 3 nm variants |
Architecture and design
Core microarchitecture
The core microarchitecture of AMD EPYC processors is derived from the company's Zen family of x86-64 designs, optimized for server workloads with emphases on per-core performance, multi-threading via simultaneous multithreading (SMT), and scalability in multi-socket configurations. Each generation introduces iterative improvements in instructions per clock (IPC), branch prediction, cache hierarchies, and execution pipelines, enabling higher throughput for compute-intensive tasks such as virtualization, databases, and high-performance computing (HPC).[1][6]

First-generation EPYC processors (7001 series, codenamed Naples), introduced in June 2017, employed the initial Zen microarchitecture on a 14 nm process node, supporting up to 32 cores and 64 threads per socket with 512 KB of L2 cache per core and 8 MB of shared L3 cache per four-core core complex (CCX). These cores featured a 4-wide decode/rename stage, ten execution pipes per core (four integer ALUs, two address-generation units, and four floating-point pipes), and a 192-entry retire queue, marking AMD's return to competitiveness in server CPUs through balanced integer and FP performance.[30]

Second-generation EPYC (7002 series, Rome), launched in August 2019, adopted Zen 2 cores fabricated on a 7 nm node, doubling L3 cache to 16 MB per CCX and introducing chiplet-based scaling for up to 64 cores per socket. Zen 2 enhanced IPC by approximately 15% over Zen 1 via wider instruction dispatch, improved branch prediction with a new TAGE-based predictor, and doubled floating-point throughput from dual 256-bit FMA units, while supporting PCIe 4.0 for better I/O integration.[31][32]

Third-generation EPYC (7003 series, Milan), released in March 2021, utilized Zen 3 cores on a refined 7 nm process, unifying the L3 cache within each chiplet into a single 32 MB pool shared by all eight cores for reduced latency in cross-core access. Key advancements included a restructured scheduler arrangement for integer and floating-point operations, a doubled L1 branch target buffer of 1,024 entries, and up to a 19% IPC uplift, enabling configurations up to 64 cores with sustained boosts exceeding 3.5 GHz in high-core-count variants.[33][34]

Fourth-generation EPYC (9004 series, Genoa), announced in November 2022, incorporated Zen 4 cores on a 5 nm node, supporting up to 96 cores per socket, with variants using dense Zen 4c cores that retain the 1 MB L2 cache per core but halve the L3 share to 2 MB per core for higher core density in cost-sensitive workloads. Zen 4 delivered around 13% IPC gains through an expanded front-end with a larger op cache, AVX-512 support executed as double-pumped operations over 256-bit datapaths, and enhanced vector instructions (including VNNI and BF16) for AI inferencing, alongside full PCIe 5.0 compatibility.[35][36]

Fifth-generation EPYC (9005 series, Turin), unveiled in October 2024, employs Zen 5 cores on TSMC's 4 nm-class process, achieving up to 17% IPC improvement for general workloads and up to 37% for specific AI/HPC tasks via optimizations in the out-of-order engine, wider 8-wide dispatch, and a full 512-bit AVX-512 datapath with VNNI/BF16 support for AI inference. Dense Zen 5c variants scale to 192 cores per socket, trading cache capacity for density, while maintaining compatibility with prior Infinity Fabric interconnects and prioritizing efficiency in large-scale cloud and edge deployments.[37][5][6]
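The quoted per-generation IPC figures compound multiplicatively at a fixed clock. A minimal sketch, using only the approximate uplifts cited above (vendor geometric means, so workload-dependent):

```python
# Compound the per-generation IPC uplifts quoted above (~15% Zen 2,
# ~19% Zen 3, ~13% Zen 4, ~17% Zen 5) into a cumulative same-clock
# estimate versus Zen 1. Illustrative only: real gains vary by workload.
uplifts = {"Zen 2": 1.15, "Zen 3": 1.19, "Zen 4": 1.13, "Zen 5": 1.17}

cumulative = 1.0
for gen, factor in uplifts.items():
    cumulative *= factor
    print(f"{gen}: x{factor:.2f} this generation, x{cumulative:.2f} vs. Zen 1")
# Ends around x1.81, i.e. ~81% more work per clock than the original Zen.
```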
Chiplet-based multi-chip module
AMD EPYC processors from the second generation (Rome) onward utilize a chiplet-based multi-chip module (MCM) architecture, integrating multiple specialized dies into a single package (Socket SP3 through Milan, Socket SP5 from Genoa onward). This design separates compute logic into core complex dies (CCDs) and system interfaces into a dedicated I/O die (IOD), connected via AMD Infinity Fabric links operating at up to 32 GT/s in recent implementations.[38] The approach enables scalable core counts by adding CCDs, reaching up to 16 classic Zen 5 CCDs (128 cores) or 12 dense Zen 5c CCDs (192 cores) in fifth-generation Turin processors.[6]

Each CCD typically houses eight full Zen cores with 32 MB of L3 cache in standard variants, or denser Zen c cores with a smaller L3 share per core for higher thread density, fabricated on leading-edge nodes such as TSMC's 5 nm for fourth-generation Genoa or 4 nm (3 nm for dense Zen 5c) for Turin.[32] The central IOD, produced on more mature processes such as 14 nm (Rome and Milan) or 6 nm (Genoa and Turin), integrates twelve DDR5 memory controllers, up to 128 PCIe 5.0 lanes, and connectivity for CXL interfaces, exposing these resources uniformly across the package to minimize NUMA penalties.[39] Inter-die communication relies on the Global Memory Interconnect (GMI), a subset of Infinity Fabric, with each CCD linking directly to the IOD via dedicated high-bandwidth ports, ensuring low-latency access to shared resources.[38]

This modular structure improves manufacturing yields by allowing defective CCDs to be discarded without scrapping the entire processor, since smaller dies have a higher probability of being defect-free at a given defect density.[12] It also facilitates heterogeneous integration, mixing process technologies to optimize cost and performance—advanced nodes for core density and I/O on cost-effective mature nodes—resulting in processors like the EPYC 9755 with 128 Zen 5 cores at a 500 W TDP. Empirical data from AMD deployments show this approach yields superior scalability over monolithic designs, enabling 96-core Genoa parts at yields unattainable in single-die equivalents.[40] The design's advantages stem from disaggregating functions to exploit specialized fabrication, though it introduces minor latency overheads in inter-chiplet traffic, mitigated by Infinity Fabric's topology and clock domain synchronization.[41]
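The yield argument can be made concrete with the standard Poisson die-yield approximation Y = exp(-A * D). The defect density and die areas below are hypothetical round numbers chosen for illustration, not AMD process data:

```python
import math

D = 0.1  # assumed defects per cm^2

def yield_rate(area_cm2: float, defect_density: float = D) -> float:
    """Probability that a die of the given area contains zero defects."""
    return math.exp(-area_cm2 * defect_density)

chiplet_area = 0.75   # cm^2, a small 8-core CCD (assumed)
monolith_area = 6.0   # cm^2, a hypothetical monolithic 64-core die

print(f"chiplet yield:    {yield_rate(chiplet_area):.1%}")   # ~92.8%
print(f"monolithic yield: {yield_rate(monolith_area):.1%}")  # ~54.9%
# Good chiplets are binned from independent small dies, so effective cost
# tracks the high per-chiplet yield, not the low whole-die probability.
```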
Interconnect and fabric technology
AMD EPYC processors employ Infinity Fabric, a proprietary scalable interconnect architecture that facilitates high-bandwidth, low-latency data transfer and cache coherency across chiplets within a single socket and between sockets in multi-processor configurations. This fabric replaces traditional on-die buses with a modular, die-to-die linking system derived from earlier technologies like HyperTransport, split into a control plane and a data plane that carry system-management and memory/I/O traffic respectively.[42][43]

In EPYC's chiplet-based design, Infinity Fabric connects the Core Complex Dies (CCDs)—each housing multiple Zen cores and shared L3 cache, with up to 16 CCDs in the largest models—to a central I/O Die (IOD) responsible for memory controllers, PCIe interfaces, and other peripherals, creating a unified NUMA domain per socket. Links operate with asymmetric bandwidth, providing 32 bytes per cycle for reads and 16 bytes for writes at the Infinity Fabric clock (FCLK) frequency, which typically ranges from 1.0 to 1.8 GHz depending on configuration and cooling. The intra-socket topology uses a star-like structure with the IOD as the hub, minimizing hops to at most two for data access between CCDs.[41][44][42]

Early implementations in 1st-generation EPYC (Naples, 2017) delivered point-to-point die bandwidth of 10.65 GB/s, scaling to aggregate throughputs of 41.4 GB/s per die in dual-socket setups. Subsequent generations enhanced performance: 2nd-generation EPYC (Rome, Zen 2) maintained similar per-link specs but optimized FCLK ratios to memory clocks for better efficiency; 3rd-generation (Milan, Zen 3) introduced Infinity Fabric 3.0 with support for coherent GPU integration; 4th-generation (Genoa, Zen 4) doubled CPU-to-CPU connectivity speeds to up to 36 Gb/s per link (using PCIe Gen 5 physical layers with custom protocols) and reduced NUMA latency variance compared to predecessors; and 5th-generation (Turin, Zen 5) further boosts inter-die throughput with dual links per CCD in optimized models, reaching 72 Gb/s aggregate per die.[43][44][42][6]

For multi-socket scalability, external Infinity Fabric links (xGMI) enable direct peer-to-peer communication without full-mesh overheads, with up to four 32 Gbps links per socket in 4th- and 5th-generation models, supporting configurations like 2P systems with 512 GB/s of theoretical inter-socket bandwidth across four links. These advancements prioritize workload balance in data centers, though effective bandwidth depends on NUMA-aware software tuning to mitigate remote-access penalties.[45][42]
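The 2P figure above follows from simple link arithmetic. A back-of-envelope sketch, assuming four xGMI links per socket and 16 lanes per link at 32 Gbps raw signaling (lane width is an assumption here, and encoding overhead is ignored):

```python
links, lanes_per_link, gbps_per_lane = 4, 16, 32

per_direction_gbs = links * lanes_per_link * gbps_per_lane / 8  # Gb/s -> GB/s
print(f"per direction: {per_direction_gbs:.0f} GB/s")                # 256 GB/s
print(f"bidirectional aggregate: {2 * per_direction_gbs:.0f} GB/s")  # 512 GB/s
```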
Memory subsystem and I/O capabilities
The memory subsystem of AMD EPYC processors uses memory controllers integrated into the central I/O die (IOD)—or distributed across the compute dies in first-generation Naples—to provide high-bandwidth access via multiple DDR channels per socket, optimized for memory-intensive server workloads such as virtualization and databases. First- through third-generation models (EPYC 7001, 7002, and 7003 series, based on Zen, Zen 2, and Zen 3 cores respectively) support eight channels of DDR4 memory, at speeds up to 2666 MT/s on Naples and 3200 MT/s on Rome and Milan, with maximum capacities of 2 TB per socket on Naples and 4 TB on later generations using registered DIMMs (RDIMMs) or load-reduced DIMMs (LRDIMMs).[46] An eight-channel DDR4-3200 configuration delivers aggregate theoretical bandwidth exceeding 200 GB/s in balanced populations, with each channel supporting up to two DIMMs for flexibility in 1DPC (one DIMM per channel) or 2DPC setups.[47]

Fourth-generation EPYC 9004 series processors (Genoa, Zen 4-based) advanced to twelve channels of DDR5-4800 memory, more than doubling per-socket bandwidth relative to prior DDR4 setups and supporting up to 6 TB of capacity, including 3DS (die-stacked) RDIMM options.[2] The fifth-generation EPYC 9005 series (Turin, Zen 5-based, launched October 2024) further enhances this with DDR5-6400 speeds on the same twelve channels, enabling up to 9 TB per socket and theoretical aggregate bandwidth exceeding 600 GB/s in optimal configurations, while maintaining compatibility with DDR5 RDIMMs and LRDIMMs.[38][48] Across generations, AMD recommends balanced population across channels to minimize latency and maximize throughput, with BIOS options for fine-tuning NUMA domains and prefetch behaviors.[49]

I/O capabilities center on the IOD's extensive PCIe integration, providing up to 128 lanes per socket for connectivity to GPUs, storage, and networking devices, with bifurcation support down to x1 for flexible endpoint allocation. First-generation processors offered 128 PCIe 3.0 lanes at 8 GT/s, while second- and third-generation models upgraded to PCIe 4.0 at 16 GT/s, doubling per-lane bandwidth.[46] Fourth- and fifth-generation processors deliver 128 PCIe 5.0 lanes at 32 GT/s, with dual-socket systems exposing up to 160 lanes via additional fabric links, enabling high-density accelerator deployments.[38][6] Starting with the fourth generation, support for Compute Express Link (CXL) 1.1+ allows a subset of lanes to be dedicated to coherent memory pooling and fabric-attached devices, facilitating disaggregated memory expansion beyond local DRAM limits.[39] Entry-level and edge-focused series such as the EPYC 4004 and 8004 scale down to 28-96 lanes while retaining PCIe Gen 5 compatibility for edge and dense server use cases.[50]
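Theoretical peak DRAM bandwidth per socket is channels x transfer rate x 8 bytes per 64-bit transfer; sustained bandwidth is always lower once refresh, bus turnarounds, and controller efficiency are accounted for. A quick sketch of the figures used in this section:

```python
def peak_gbs(channels: int, mts: int) -> float:
    """Theoretical peak in GB/s for 64-bit (8-byte) channels at mts MT/s."""
    return channels * mts * 8 / 1000

for label, ch, mts in [
    ("Rome/Milan, 8ch DDR4-3200", 8, 3200),
    ("Genoa, 12ch DDR5-4800", 12, 4800),
    ("Turin, 12ch DDR5-6400", 12, 6400),
]:
    print(f"{label}: {peak_gbs(ch, mts):.1f} GB/s")
# Prints 204.8, 460.8 and 614.4 GB/s respectively.
```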
Performance and efficiency
Benchmark comparisons
The AMD EPYC processor family has shown marked advantages over Intel Xeon processors in multi-threaded benchmarks, driven by higher core counts, improved IPC via the Zen architectures, and efficient chiplet designs. In SPEC CPU 2017 integer rate tests, the 5th-generation EPYC 9965 (192 cores) achieves a peak score of over 1,000 in multi-threaded configurations, surpassing comparable Intel Xeon 6 series results submitted to SPEC.[51] Similarly, the 128-core EPYC 9754 from the 4th-generation Bergamo lineup set records with integer rate peaks around 1,780, representing a 2.8-fold improvement over Intel's prior Ice Lake Xeon baselines in equivalent setups.[52][53]

Phoronix Test Suite evaluations, encompassing compilation, encoding, simulation, and database workloads, further highlight EPYC's edge. The EPYC 9965 Turin variant delivers geometric mean performance uplifts of 30-50% against the Intel Xeon 6980P in over 140 multi-threaded tests on Ubuntu platforms, with particular dominance in AVX-512 accelerated tasks like scientific simulations.[54] In AWS EC2 cloud instances, EPYC Turin-powered m8a configurations outperform Intel Granite Rapids m8i equivalents in CPU-bound Phoronix benchmarks, yielding higher throughput per vCPU because each vCPU maps to a physical core rather than an SMT thread.[55] (A short sketch after the table below shows how such geometric means are computed.)

| Benchmark Suite | EPYC Model (Generation) | Comparable Xeon | Performance Edge (EPYC) |
|---|---|---|---|
| SPEC CPU 2017 Integer Rate (Peak) | 9654 (Genoa, 4th) | Ice Lake Platinum | ~2.8x[56] |
| Phoronix Geometric Mean (Multi-Threaded) | 9965 (Turin, 5th) | 6980P (Granite Rapids) | 30-50%[54] |
| AWS EC2 CPU-Bound Tasks | Turin m8a | Granite Rapids m8i | Higher throughput per core[55] |
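Suite-level results like the Phoronix rows above are usually reported as a geometric mean of per-test ratios, which keeps one outlier test from dominating the summary. A minimal sketch with invented placeholder ratios, not measured results:

```python
import math

# Hypothetical per-test EPYC-vs-Xeon speedup ratios (placeholders only).
ratios = [1.42, 1.18, 1.55, 0.97, 1.33]

# Geometric mean: n-th root of the product of the per-test ratios.
geomean = math.prod(ratios) ** (1 / len(ratios))
print(f"geometric mean speedup: {geomean:.2f}x")  # ~1.27x here
```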
Power consumption and thermal characteristics
AMD EPYC processors exhibit Thermal Design Power (TDP) ratings that scale with core count and generational advancements, generally ranging from 65 W in low-power variants like the EPYC 4004 series to 500 W in high-end 5th-generation models such as those in the 9005 lineup.[58][59] Within the 4th-generation era, TDPs span from around 200 W for mid-range SKUs like the 64-core EPYC 8534P (8004 "Siena" series) up to 400 W for dense 9004-series configurations.[60] These ratings represent the maximum sustained power dissipation under typical workloads, influencing server design for power budgeting and cooling infrastructure.

Actual power draw often deviates from TDP, with benchmarks revealing average consumption under load ranging from 221 W to peaks exceeding 355 W for models like the EPYC 9554 in performance-oriented modes.[61] Idle power consumption is notably higher than in consumer AMD Ryzen counterparts, typically 50–110 W for EPYC systems depending on generation, BIOS settings, and peripheral load, due to the server-oriented architecture prioritizing scalability over minimal quiescent draw.[62] Successive generations demonstrate improved performance-per-watt metrics; for instance, 5th-generation EPYC processors achieve up to 37% higher instructions per clock in HPC workloads compared to 4th-generation equivalents, enabling lower overall energy use for equivalent throughput.[63][64]

Thermal management relies on AMD's Infinity Power Management system, integrated via the System Management Unit (SMU), which dynamically adjusts voltage, frequency, and power allocation across chiplets while monitoring die temperatures to prevent throttling.[65] Operating junction temperatures are designed to sustain up to 95°C under load for standard models, with extended-temperature variants like the EPYC 8004PN series supporting operating temperatures from -5°C to 85°C for edge deployments.[66] Cooling requirements accommodate air-cooled heatsinks for most configurations, as validated by third-party tests with Noctua SP3-compatible coolers maintaining sub-throttle temperatures on first- and later-generation EPYC dies.[67] However, high-TDP SKUs approaching or exceeding 400 W, particularly in dense multi-socket setups, may necessitate liquid cooling to manage heat densities over 700 W per node when paired with accelerators.[68] Future iterations, such as the anticipated EPYC Venice, could surpass 1,000 W, pushing reliance on advanced direct-to-chip liquid solutions beyond traditional air-cooling limits.[69]
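The measured draw figures above translate directly into energy budgets. A quick sketch, assuming a 24/7 duty cycle and an illustrative $0.10/kWh rate (both assumptions, not a TCO claim):

```python
def annual_energy(avg_watts: float, usd_per_kwh: float = 0.10):
    """Return (kWh/year, USD/year) for a constant average draw."""
    kwh = avg_watts * 24 * 365 / 1000
    return kwh, kwh * usd_per_kwh

for label, watts in [("lighter load", 221), ("near-peak load", 355)]:
    kwh, usd = annual_energy(watts)
    print(f"{label}: {kwh:,.0f} kWh/year, ~${usd:,.0f}/year per socket")
# 221 W -> ~1,936 kWh (~$194); 355 W -> ~3,110 kWh (~$311).
```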
Workload-specific optimizations
AMD EPYC processors feature architectural enhancements and BIOS-configurable parameters tailored to high-performance computing (HPC) workloads, including support for up to 192 Zen 5 cores per socket in the 9005 series, which deliver a geometric mean instructions per cycle (IPC) uplift of 1.369x over the prior generation across select HPC benchmarks.[6] These optimizations leverage Infinity Fabric interconnects for low-latency scaling across chiplets, enabling efficient parallel processing in simulations and scientific modeling, with BIOS settings like NUMA nodes per socket (NPS) configured to NPS1 for compute-bound tasks to minimize remote memory access latency.[70]

For artificial intelligence (AI) and machine learning inference, frequency-optimized variants such as the EPYC 9575F prioritize clock speeds over core density, achieving over 10x lower latency in model serving compared to equivalent Intel Xeon processors in latency-constrained environments.[71] Integration with accelerators via up to 160 PCIe Gen5 lanes (in dual-socket systems) supports GPU offloading, while ZenDNN libraries enhance vectorized operations for computer vision and natural language processing tasks; simultaneous multithreading (SMT) can boost throughput by 30-60% in thread-parallel inference workloads when enabled in BIOS.[72][73]

Database and analytics workloads benefit from high memory bandwidth via 12-channel DDR5 support and large last-level caches, with 3D V-Cache-equipped models (denoted by an 'X' suffix) providing over 1 GB of L3 cache per socket in Genoa-X models to reduce cache misses in query-intensive operations.[1] BIOS tuning recommends NPS4 for memory-bound databases to align NUMA domains with data locality, improving throughput in OLTP and OLAP scenarios by optimizing data placement across the chiplet-based dies.[74]

Virtualization environments leverage EPYC's high core counts and Secure Encrypted Virtualization (SEV) for isolated VM execution, with BIOS options enabling maximum performance mode and all cores active to support dense VDI deployments.[75] For network-intensive virtualization, NIC-throughput-intensive profiles adjust Infinity Fabric clocking to sustain high packet rates without dynamic downclocking.[76] Financial services workloads further optimize via compiler flags for AVX-512 utilization and library tuning, yielding measurable gains in risk modeling and transaction processing on 9005 series processors.[77]
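Much of the NUMA tuning described above reduces, on the software side, to keeping a process's threads and memory on one node. A minimal Linux sketch using the standard-library affinity call; the CPU range is a hypothetical NPS4-style node, so query the real topology (e.g. with lscpu) before reusing numbers like these:

```python
import os

# Pin the current process to an assumed NUMA node 0 spanning CPUs 0-23, so
# first-touch allocations stay local to that node's memory controllers.
node0_cpus = set(range(0, 24))       # hypothetical node; check your topology
os.sched_setaffinity(0, node0_cpus)  # pid 0 = the calling process

print(f"pinned to CPUs: {sorted(os.sched_getaffinity(0))}")
```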
Market reception and impact
Adoption by enterprise and hyperscalers
AMD EPYC processors have seen rapid adoption among hyperscalers, driven by their performance in AI inference, cloud workloads, and cost efficiency. Major providers including Meta, Google, Amazon, and Oracle expanded EPYC-based instances by approximately 27% in 2024, exceeding 1,000 instances across their platforms as they scaled data center operations. Meta, a leading adopter, has deployed over 1.5 million EPYC CPUs, utilizing them for training, fine-tuning, and running inference on large models such as its 405-billion-parameter Llama 3.1. In the second quarter of 2025, the largest hyperscalers introduced more than 100 new AMD-powered instances, reflecting sustained momentum in EPYC integration for high-density computing. OVHcloud, a hyperscale cloud provider, leverages EPYC for flexible, high-performance platforms supporting cutting-edge workloads.

Enterprise adoption has similarly accelerated, with EPYC enabling efficiency gains in private data centers, virtualization, and AI applications. Kakao Enterprise, a South Korean cloud provider, reduced its data center footprint by 50% while increasing performance by 30% after migrating to EPYC CPUs. Cybersecurity firm Rubrik integrated 5th Gen EPYC processors across its data security platform in June 2025 to enhance AI-ready cloud deployments. Partnerships with OEMs have broadened access: Dell Technologies offers PowerEdge R6715 and R7715 servers with 5th Gen EPYC, delivering up to 37% more drive capacity; HPE ProLiant Gen11 servers incorporate EPYC for AI, HPC, and virtualization; and Supermicro expanded its MicroBlade portfolio with the EPYC 4005 series in October 2025 for dense edge and enterprise configurations. Governments and organizations worldwide select EPYC-based servers for big data analytics and secure processing.

This uptake correlates with EPYC capturing a record 41% revenue share in the data center CPU market during Q2 2025, according to Mercury Research data, up from near-zero in 2017 and reflecting a shift toward chiplet designs offering superior core counts and memory bandwidth for enterprise-scale deployments. Enterprise adoption tripled year-over-year in 2024, fueled by EPYC's optimizations for virtual machines, databases, and hybrid cloud environments.
Competition dynamics with Intel Xeon
AMD EPYC processors entered the server market in June 2017 with the first-generation Naples series, challenging Intel's longstanding dominance in the x86 data center CPU segment, where Xeon held over 95% share prior to AMD's re-entry. AMD's chiplet-based architecture enabled higher core counts at competitive prices, disrupting Intel's premium pricing model sustained by limited competition. By offering up to 32 cores per socket initially—surpassing Intel's then-maximum of 28—EPYC targeted parallel workloads prevalent in cloud and HPC, where scaling cores directly correlated with throughput gains.

Subsequent generations amplified this advantage: second-generation Rome (2019) reached 64 cores, third-generation Milan (2021) added Zen 3 IPC improvements, and fourth-generation Genoa (2022) scaled to 96 cores with Zen 4 efficiency. Intel responded with the delayed Ice Lake (2021, up to 40 cores) and tile-based Sapphire Rapids (2023, up to 60 cores), both hampered by process node struggles that limited density and power efficiency. AMD's 5 nm-class processes in later EPYC iterations delivered superior performance per watt, often 1.5-2x in multi-threaded benchmarks against equivalent Xeon SKUs, driven by modular chiplets allowing cost-effective scaling without monolithic die yield issues. This forced Intel into pricing adjustments, with the Xeon 6 series (launched late 2024) seeing MSRP cuts of up to 30% by January 2025 to counter EPYC 9005 Turin's 192-core density and lower total cost of ownership.[78]

Market share dynamics reflect these technical edges: AMD's server CPU revenue share climbed from under 10% in 2018 to approximately 33% by June 2025, eroding Intel's from over 90% to 62%, per Mercury Research estimates, with hyperscalers favoring EPYC for cost-sensitive, core-heavy deployments.[79] In Q1 2025, AMD hit 39.4% unit share, up 6.5 points quarter-over-quarter, fueled by EPYC's adoption in AI training and virtualization where thread-parallelism trumps single-thread latency.[80] Intel retained leads in latency-critical enterprise apps via optimized libraries and broader ecosystem maturity, but AMD's value proposition—higher core counts at 20-50% lower effective pricing—shifted economics toward commoditization, prompting Intel's hybrid E-core approach in Sierra Forest for density matching.[81] By mid-2025, both vendors discounted flagship models by up to 50% amid softening demand, underscoring the intensified rivalry.[82]
Influence on data center economics and AI workloads
AMD EPYC processors have driven significant cost reductions in data center operations through higher core densities and improved energy efficiency compared to competing Intel Xeon offerings, enabling greater workload consolidation and reduced total cost of ownership (TCO). For instance, the chiplet-based design supports up to 192 cores per socket in the 5th-generation EPYC (Turin), allowing operators to achieve equivalent performance with fewer servers, which lowers capital expenditures on hardware and rack space while decreasing power and cooling demands.[27] Independent analyses indicate that EPYC deployments can consolidate infrastructure such that refreshed servers require 31% fewer cores for the same workloads, contributing to rack density improvements and operational savings.[83] Hyperscale providers have accelerated adoption, with over 100 new AMD-powered cloud instances launched in Q2 2025 alone, reflecting a market share of 36.5% for AMD's x86 server CPUs that year, driven by these economic advantages.[84][85]

Specific case studies underscore these benefits: Twitter (now X) reported a 25% TCO reduction after deploying 2nd-generation EPYC processors across its data centers in 2019, primarily from enhanced virtualization efficiency and a reduced hardware footprint.[86] In virtualization environments, EPYC has enabled up to 42% lower VMware licensing costs through superior density, yielding CapEx payback periods of approximately two months.[87] AMD's TCO estimator tools further quantify potential savings, showing EPYC systems offsetting costs via reduced energy consumption and emissions compared to Intel equivalents, with full data center builds potentially paying for themselves through efficiency gains.[88]

For AI workloads, EPYC processors enhance economics by providing a scalable CPU foundation that optimizes GPU utilization, data preparation, and inference serving, often at lower power draw than alternatives. The 4th- and 5th-generation models deliver over 10x better performance in latency-sensitive inference compared to Intel Xeon, balancing CPU-GPU ecosystems to minimize idle resources and operational expenses.[71] This efficiency supports "everyday AI" tasks like analytics and machine learning preprocessing, where EPYC's high memory bandwidth (up to 12 channels of DDR5) and extensive PCIe lanes (up to 128 per socket) reduce the need for additional accelerators, cutting cloud AI costs by improving performance per watt and enabling fewer nodes for training or serving.[89][90] Enterprises report OPEX reductions from EPYC's role in consolidating AI infrastructure, as its core density allows hyperscalers to handle surging demand with optimized power budgets rather than proportional hardware scaling.[91]
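The consolidation arithmetic behind such TCO claims is straightforward. A toy sketch using the 31% figure above with invented fleet sizes and hypothetical node configurations:

```python
import math

old_cores_needed = 10_000                         # assumed legacy estate
new_cores_needed = old_cores_needed * (1 - 0.31)  # 31% fewer cores after refresh

old_cores_per_server = 64    # e.g. older dual-socket 32-core nodes (assumed)
new_cores_per_server = 384   # e.g. dual-socket 192-core Turin nodes

old_servers = math.ceil(old_cores_needed / old_cores_per_server)
new_servers = math.ceil(new_cores_needed / new_cores_per_server)
print(f"servers before: {old_servers}, after: {new_servers}")
# 157 -> 18 nodes in this toy example, with power and rack savings to match.
```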
Variants and adaptations
Embedded and edge computing variants
AMD develops EPYC Embedded processors specifically for applications requiring long product lifecycles, such as industrial control, networking appliances, storage systems, and edge inference, with availability guarantees extending up to 10 years to support embedded deployments.[92] These variants leverage the same Zen-based microarchitectures as mainstream EPYC server processors but incorporate optimizations like configurable TDP, enhanced reliability features, and support for ruggedized systems to meet the demands of non-data-center environments.[93]

The inaugural EPYC Embedded 3000 series, announced in February 2018 and based on the first-generation Zen core, targets single-socket embedded systems with models ranging from 4 to 16 cores, base clocks up to 2.5 GHz, and TDPs from roughly 30 W to 100 W.[93] It supports dual- or quad-channel DDR4-2666 ECC memory up to 1 TB, up to 64 PCIe 3.0 lanes, and integrated features like up to eight 10GbE MACs for networking efficiency, making it suitable for storage controllers and telecom edge nodes.[94] Higher-end models like the 16-core EPYC Embedded 3451, with 32 threads and 32 MB of L3 cache, emphasize power efficiency for industrial applications.[95]

Subsequent embedded variants align with EPYC server generations for scalability. The EPYC Embedded 7002 series (Zen 2, Rome) introduced higher core densities up to 64 cores, PCIe 4.0 support, and improved per-core performance for edge analytics and real-time processing.[92] The fourth-generation EPYC Embedded 9004 series (Zen 4, Genoa) added DDR5 memory channels and up to 96 cores, enhancing bandwidth for AI inference at the edge while maintaining enterprise-grade RAS (reliability, availability, serviceability) features like advanced error correction.[96] In 2025, AMD launched the fifth-generation EPYC Embedded 9005 series (Zen 5, Turin), scaling from 8 to 192 cores with up to 512 MB of L3 cache and 160 PCIe 5.0 lanes, optimized for compute-intensive embedded tasks like industrial AI and high-frequency networking.[97] Complementing this, the EPYC Embedded 4005 series, announced on September 16, 2025, focuses on low-power edge computing with up to 16 cores, energy-efficient designs under 100 W TDP, AM5 socket compatibility for easier integration, and low-latency optimizations for real-time data processing in compact appliances.[98][99]

For broader edge deployments, AMD positions the EPYC 8004 series (Zen 4c, Siena) as a dense, power-optimized option with up to 64 cores in single-socket configurations, delivering cost-effective performance for GPU-accelerated edge workloads while supporting up to 1.152 TB of DDR5 memory across six channels.[100] These variants collectively enable edge computing by providing scalable x86 performance in thermally constrained, space-limited environments, outperforming prior embedded solutions in throughput per watt for tasks like video analytics and 5G baseband processing.[101]
Dense and specialized server variants
AMD EPYC processors feature dense variants tailored for high core-density deployments in scale-out data center environments, prioritizing core count over per-core performance to maximize throughput in virtualized and cloud-native workloads. The Bergamo subfamily within the 4th-generation EPYC 9004 series employs Zen 4c cores, which are physically smaller than standard Zen 4 cores while retaining the same instruction set and microarchitectural features, enabling up to 128 cores and 256 threads per socket on the SP5 platform.[102][103] Launched in 2023, these processors support 12-channel DDR5 memory and 128 PCIe 5.0 lanes, facilitating configurations that consolidate workloads onto fewer nodes, thereby reducing rack space, power draw, and operational costs compared to prior generations.[35]

Specialized server variants extend this with optimizations for memory-intensive or latency-sensitive applications. The Genoa-X processors, also in the EPYC 9004 series, incorporate stacked 3D V-Cache technology to expand L3 cache capacity to over 1 GB per socket—specifically 1,152 MB in models like the EPYC 9684X—accelerating data access in high-performance computing, in-memory databases, and simulation tasks where cache misses dominate bottlenecks.[104] These differ from standard Genoa by stacking additional SRAM atop each compute die, retaining up to 96 cores while delivering enhanced hit rates that AMD claims yield up to 2x performance in select HPC benchmarks.[35] Additionally, frequency-optimized SKUs such as the Genoa-based EPYC 9474F and Turin-based EPYC 9575F raise base clocks into the 3.3–3.6 GHz range, with boosts of up to 4.1 GHz and 5.0 GHz respectively, for low-latency transactional processing while remaining compatible with standard dual-socket platforms.[37]

The EPYC 8004 series (Siena), introduced in 2023, offers specialized dense options for cost-sensitive, power-constrained server designs with Zen 4c cores scaled to a 64-core maximum and TDP ratings from 70 W to 225 W, in single-socket configurations for compact rack deployments in telco or regional data centers.[105] This series uses a reduced I/O die configuration with six DDR5 channels and 96 PCIe 5.0 lanes, targeting efficiency in generalized compute without the full-scale resources of 9004 models, as evidenced by benchmarks showing competitive performance per watt in virtualization tests.[35]

Across these variants, AMD emphasizes chiplet-based scalability, with empirical data from independent tests confirming 20–50% gains in core-density workloads over Intel equivalents at equivalent power envelopes.[106]
Region-specific modifications
AMD's joint venture with Chinese firms, including Hygon Information Technology, enabled the production of region-specific EPYC-compatible processors under license for the Chinese market. These Dhyana-series CPUs, such as the Hygon Dhyana C86-7395, replicate the core Zen 1 microarchitecture of first-generation EPYC (Naples) processors but incorporate modifications, primarily in the integrated I/O and cryptographic blocks, to utilize domestically sourced components and navigate U.S. export restrictions on advanced semiconductor technology.[107][108] The alterations ensure compliance with local manufacturing mandates while maintaining pin-compatibility with the SP3 socket, allowing deployment in standard EPYC server platforms without hardware changes.

This approach differs from global EPYC offerings by prioritizing self-reliance in I/O subsystems, potentially at the cost of optimized interconnect performance compared to AMD's standard designs. Production began around 2018, targeting enterprise and government sectors restricted from importing high-performance U.S.-made chips.[107] Subsequent developments have seen limited updates to these licensed variants, with Hygon largely adhering to Zen 1 equivalents due to licensing constraints, while Chinese firms pivot toward indigenous architectures for newer server needs. No equivalent hardware modifications exist for other regions, such as Europe or Asia-Pacific markets outside China, where standard EPYC SKUs prevail without regional adaptations.[108]
Criticisms and challenges
Hardware errata and reliability issues
AMD EPYC processors feature comprehensive reliability, availability, and serviceability (RAS) mechanisms, including advanced error correction and fault isolation, contributing to low field failure rates comparable to competing Intel Xeon processors in data center environments.[109][110] However, like other high-density server CPUs, EPYC generations include documented silicon errata—deviations from specifications that can lead to hangs, resets, or reduced reliability under specific conditions.[111] These are detailed in AMD's official revision guides, with most addressed via BIOS, firmware, or software workarounds rather than silicon fixes, as no hardware revisions are planned for production parts.[112]

In the first-generation EPYC (Naples, Zen 1), systems could experience hangs or crashes after approximately 1,044 days of uptime due to a core failing to exit low-power states properly, similar to issues in later generations.[113] Production errata were less publicly highlighted than in successors, though early prototypes faced booting challenges resolved prior to volume shipment.[114]

Second-generation EPYC (Rome, Zen 2) processors exhibited several errata prone to system hangs or resets. Erratum 1474 causes a core to fail to exit the CC6 low-power state roughly 1,044 days after the last reset, potentially hanging the system depending on spread-spectrum clocking and workload; mitigation involves periodic reboots or disabling CC6 via MSR programming.[111][115] Other issues include Erratum 1140, where Data Fabric transaction loss leads to hangs (mitigated by fabric register programming); Erratum 1290, causing GMI link hangs from retraining failures after CRC errors; and Erratum 1315, triggering hangs in dual-socket 3-link configurations.[111] These primarily affect I/O and interconnect reliability in multi-socket setups.

Third-generation EPYC (Milan, Zen 3) introduced errata such as ID 1446, where improper on-die regulator initialization during power-up results in permanent boot failure, rendering the processor inoperable.[112] ID 1431 permits core hangs during bus locks with SMT enabled, potentially causing watchdog-induced resets, while ID 1441 risks DMA write data corruption in memory.[112] ID 1462 hinders reboot or shutdown after fatal errors, exacerbating recovery in error-prone scenarios.[112] Some Milan systems reported random OS shutdowns or soft resets, attributed to underlying hardware sensitivities.[116]

Fourth-generation EPYC (Genoa, Zen 4) errata include hangs from CXL.mem transaction timeouts (no workaround) and system instability with poisoned PCIe data lacking error logs.[117] Erratum 1560 risks hangs when Data Fabric C-states interact with CXL Type 1 devices (mitigated by disabling DF C-states), and Erratum 1483 generates unexpected fatal errors on uncorrectable DRAM ECC faults.[117] Claims of a needed systemic memory-subsystem redesign were refuted by AMD, with no confirmed widespread reliability degradation.[118]

Fifth-generation EPYC (Turin, Zen 5) processors have fewer publicized functional errata to date, though an RDSEED instruction flaw affects random number generation reliability for cryptographic seeding, potentially impacting security-dependent workloads.[119] Overall, EPYC errata do not indicate higher aggregate failure rates than peers, with third-party testing confirming robust long-term stability in enterprise deployments.[120][109]
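Because the documented mitigation for uptime-window errata like Rome's 1474 is a scheduled reset (or disabling CC6), fleet operators can watch uptime directly. A toy Linux monitoring sketch; the 30-day warning margin is an arbitrary choice:

```python
LIMIT_DAYS, MARGIN_DAYS = 1044, 30  # erratum window and assumed safety margin

with open("/proc/uptime") as f:     # first field: seconds since boot
    uptime_days = float(f.read().split()[0]) / 86_400

if uptime_days > LIMIT_DAYS - MARGIN_DAYS:
    print(f"WARNING: up {uptime_days:.0f} days; schedule a reboot window")
else:
    print(f"OK: up {uptime_days:.0f} days")
```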
Security vulnerabilities
AMD EPYC processors have been affected by several hardware-level security vulnerabilities, primarily side-channel attacks exploiting speculative execution and microarchitectural features, as well as flaws in virtualization technologies like Secure Encrypted Virtualization (SEV). These issues, common to modern x86 CPUs, enable potential data leakage across processes or virtual machines, though exploitation often requires local access or specific privileges. AMD has addressed most through microcode updates and firmware patches, distributed via BIOS vendors or operating systems, with varying performance impacts.[121]

A notable early vulnerability was Zenbleed (CVE-2023-20593), disclosed on July 24, 2023, affecting second-generation EPYC Rome processors based on the Zen 2 architecture. The flaw stems from improper handling of vector (YMM) registers during speculative execution, allowing cross-process leakage of register contents and potentially exposing sensitive data like passwords or keys. Researchers from Google demonstrated practical exploitation, prompting AMD to release a microcode patch (tracked under bulletin AMD-SB-7008); applying mitigations reduced performance by up to 15% in vector-heavy workloads on affected EPYC systems.[122][123]

EPYC platforms are also vulnerable to Spectre variants (e.g., Spectre v1 and v2), which leverage branch prediction and speculative execution for unauthorized memory access, though AMD processors are not susceptible to the original Meltdown attack due to architectural differences. Mitigations, including retpoline and microcode updates, were rolled out starting January 2018, with ongoing refinements; EPYC users in data centers were advised to apply them to prevent kernel-to-user data leaks in virtualized environments.[121][124]

In August 2024, the Sinkclose vulnerability (CVE-2023-31315) was revealed, affecting multiple AMD architectures including EPYC by bypassing System Management RAM (SMRAM) protections via a flawed caching mechanism, enabling code execution in the privileged System Management Mode. Exploitation requires prior kernel-level access but poses lasting firmware-implant risks in server environments; AMD mitigated it via microcode and firmware updates.[125]

Server-specific concerns include SEV-related flaws: in February 2025, a firmware bug (AMD-SB-3009) allowed privileged attackers to read unencrypted guest memory, compromising confidential computing isolation on EPYC with SEV enabled, fixed via updated SEV firmware. Another issue (AMD-SB-3019), due to improper microcode signature verification, permitted malicious microcode loading, potentially undermining encryption guarantees; patches were issued concurrently.[126][127]

July 2025 brought the Transient Scheduler Attack (TSA) vulnerabilities (CVEs TBD), disclosed by Microsoft researchers and affecting EPYC across generations via timing side channels in scheduler operations, enabling chained information disclosure akin to Spectre/Meltdown. Individually low-severity, their exploitation could leak data in multi-tenant servers; AMD recommended software mitigations and promised microcode fixes.[128][129]

August 2025 advisories (AMD-SB-3014) highlighted IOMMU and SEV-SNP weaknesses in EPYC platforms, potentially allowing DMA attacks or nested-paging bypasses in virtualized setups, with patches focusing on enhanced memory isolation.
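On Linux hosts, the kernel reports mitigation status for several of these vulnerability classes through sysfs, which is a quick first check during patch audits. A minimal sketch (coverage depends on kernel version, and not every advisory above has an entry):

```python
from pathlib import Path

# Print the kernel's view of known CPU vulnerability classes and their
# mitigations (Spectre variants and similar) on the current host.
vuln_dir = Path("/sys/devices/system/cpu/vulnerabilities")
for entry in sorted(vuln_dir.iterdir()):
    print(f"{entry.name}: {entry.read_text().strip()}")
```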
AMD maintains a product security incident response team (PSIRT) for ongoing disclosures, emphasizing that while no widespread exploits have been reported for EPYC-specific cases, virtualization-heavy deployments warrant prompt patching to preserve trust in server ecosystems.[130]
Ecosystem and compatibility limitations
Despite its adherence to the x86 instruction set architecture, ensuring binary compatibility with software developed for Intel processors, the AMD EPYC platform presents ecosystem challenges stemming from its multi-chiplet design and historical market position. The architecture divides each socket into multiple NUMA (Non-Uniform Memory Access) domains—configurable via BIOS settings like Nodes per Socket (NPS) modes (NPS1, NPS2, or NPS4)—which can lead to suboptimal performance in workloads not explicitly tuned for such topologies. For instance, untuned applications may incur higher latency due to remote memory access across chiplets, necessitating manual optimizations such as NUMA-aware scheduling or pinning, as outlined in AMD's tuning guides for EPYC 9004 series processors.[74] Dell Technologies documentation highlights that while NPS configurations mitigate inter-domain bandwidth penalties, they require workload-specific validation to avoid up to 20-30% performance degradation in latency-sensitive tasks compared to monolithic designs.[131]

In virtualization environments, EPYC's NUMA complexity exacerbates compatibility hurdles. Hypervisors like VMware ESXi initially faced NUMA-related inefficiencies on early EPYC generations, such as improper vNUMA exposure leading to VM migration failures or reduced throughput, though patches like adjusted locality/weight affinity settings resolved many issues by ESXi 6.7.[132] Mixed AMD-Intel clusters encounter live-migration incompatibilities due to differing CPU models and microarchitectures; for example, in Epic EHR deployments on vSphere, migrating VMs between EPYC and Xeon hosts is unsupported, complicating high-availability setups and requiring homogeneous clusters.[133] Similar constraints appear in Proxmox VE, where assigning EPYC-specific CPU models to VMs can trigger startup errors unless host passthrough or custom topologies are configured.[134]

Enterprise software certification lags contribute to adoption barriers, as many legacy applications—optimized over decades for Intel's ecosystem—undergo delayed validation for EPYC. Independent analyses note that while EPYC supports all major OSes and hypervisors post-certification, Intel maintains an edge in "certified for Intel" badges for thousands of business-critical apps, simplifying procurement for risk-averse IT departments and reducing validation timelines.[135] For Red Hat Enterprise Linux 7, AMD EPYC Zen 3 (Milan) processors lack official support, forcing upgrades to RHEL 8 or later despite binary compatibility.[136]

Hardware ecosystem limitations include fewer validated third-party peripherals at launch compared to Xeon, though AMD's PCIe validation program has expanded compatibility; early generations saw sporadic issues with add-in cards assuming single-die topologies.[137] These factors, while diminishing with EPYC's market share growth—reaching over 30% in servers by 2024—persist in conservative sectors prioritizing seamless Intel interoperability over EPYC's core-count advantages.[138]
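A practical first step when validating these NUMA concerns is simply to inspect what topology the BIOS NPS setting actually exposes to the OS. A minimal Linux sketch:

```python
from pathlib import Path

# List each NUMA node and its CPU ranges as seen by the kernel. Under NPS1
# a socket appears as one node; under NPS4 it splits into four smaller ones.
for node in sorted(Path("/sys/devices/system/node").glob("node[0-9]*")):
    cpulist = (node / "cpulist").read_text().strip()
    print(f"{node.name}: CPUs {cpulist}")
```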
Processor generations
First generation (Naples, Zen 1)
The first-generation AMD EPYC processors, designated the 7001 series and codenamed Naples, marked AMD's re-entry into the server CPU market following a decade-long absence. Built on the Zen 1 microarchitecture and fabricated on GlobalFoundries' 14 nm process node, these processors were officially launched on June 20, 2017, after an announcement at Computex in May 2017.[139][140] They utilized Socket SP3 and supported dual-socket configurations, targeting data center workloads with an emphasis on core density and I/O bandwidth.[141]

Naples employed a multi-chip module (MCM) design consisting of four "Zeppelin" dies—each containing two four-core core complexes (CCXs) for a total of eight cores per die—interconnected via AMD's Infinity Fabric for on-package communication.[140] This chiplet-like approach enabled scalability to 32 cores and 64 threads per socket, with 16 MB of L2 cache and 64 MB of L3 cache distributed across the dies, though it introduced non-uniform memory access (NUMA) domains that could impact latency-sensitive applications.[140] Each processor supported eight channels of DDR4-2666 memory (up to 2 TB total capacity) and 128 lanes of PCIe 3.0, providing substantial bandwidth for storage and networking peripherals.[142]

The launch lineup spanned a dozen SKUs, from entry-level 8-core models like the EPYC 7251 (base clock 2.1 GHz, TDP 120 W) to high-end 32-core variants such as the EPYC 7601 (base 2.2 GHz, boost up to 3.2 GHz, TDP 180 W), plus single-socket-optimized parts like the EPYC 7551P.[141] Thermal design power ranged from 120 W to 180 W across launch models, with pricing starting at around $475 for lower-end parts and reaching $4,200 for flagship 32-core units.[142] These processors competed directly with Intel's Xeon Scalable lineup by offering higher core counts at lower per-core costs, though initial benchmarks revealed mixed results in single-threaded performance due to Zen 1's IPC limitations compared to contemporary Intel architectures.[140]
Second generation (Rome, Zen 2)
The second-generation AMD EPYC processors, codenamed Rome, utilize the Zen 2 microarchitecture and were released on August 7, 2019.[143] These server CPUs employ a multi-chiplet design comprising up to eight 7 nm compute chiplet dies (CCDs), each with eight cores, interconnected via a central 14 nm I/O die that handles memory controllers, PCIe lanes, and inter-die communication through Infinity Fabric links.[144][145] This architecture enables scalability to 64 cores and 128 threads per socket while maintaining compatibility with the SP3 socket used in the prior Naples generation.[146]

The EPYC 7002 series supports eight channels of DDR4-3200 memory, accommodating up to 4 TB of capacity, and provides 128 lanes of PCIe 4.0, doubling the per-lane bandwidth of PCIe 3.0 in the first generation.[31] The cache hierarchy includes up to 256 MB of shared L3 cache across chiplets, with each core featuring 512 KB of L2 cache alongside Zen 2's improved branch prediction and floating-point units.[147] Thermal design power (TDP) ranges from 120 W to 225 W for most models, with select HPC-oriented variants such as the EPYC 7H12 rated at 280 W.[143]

Relative to the Zen 1-based Naples processors, Rome delivers enhanced single-threaded performance through approximately 15-20% higher instructions per clock (IPC) on average, with AMD reporting up to 29% uplift in specific integer and floating-point workloads at iso-frequency, alongside better power efficiency from the smaller compute node.[148][149] The shift to PCIe 4.0 and faster memory reduces I/O bottlenecks, enabling up to 2x overall socket performance in bandwidth-sensitive tasks like HPC simulations and database operations.[150] Security enhancements include AMD Infinity Guard, featuring a hardware root of trust and memory encryption capabilities.[31]

The lineup comprises 19 models, spanning 8 to 64 cores, such as the 64-core EPYC 7742 at 2.25 GHz base (boost to 3.4 GHz) and the 32-core EPYC 7502 at 2.5 GHz base (boost to 3.35 GHz), targeting diverse workloads from cloud virtualization to technical computing.[151][152] Independent benchmarks confirmed leadership in multi-threaded throughput, with Rome systems outperforming Intel counterparts in SPEC CPU2017 rates by up to 50% in certain configurations.[153]
Third generation (Milan, Zen 3)
The third-generation AMD EPYC processors, codenamed Milan and based on the Zen 3 microarchitecture, were launched on March 15, 2021.[154] These processors maintained the maximum of 64 cores and 128 threads per socket from the prior Rome generation while introducing architectural enhancements such as a unified 32 MB L3 cache per eight-core chiplet, improved branch prediction, and an instructions per clock (IPC) uplift of approximately 19%.[155] Manufactured on TSMC's 7 nm process, Milan processors support eight-channel DDR4-3200 memory and 128 lanes of PCIe 4.0, with thermal design power (TDP) ratings ranging from 155 W to 280 W depending on the model.[156]

Key models in the EPYC 7003 series include the flagship EPYC 7763 with 64 cores at a 2.45 GHz base frequency and 3.50 GHz boost, alongside options like the EPYC 7713 (64 cores, 2.00 GHz base, 3.67 GHz boost, 225 W TDP) and lower-core variants such as the frequency-optimized 8-core EPYC 72F3.[157] Independent benchmarks demonstrated average performance gains of 14-17.5% over equivalent Rome models in compute-intensive workloads, attributed to per-core efficiency rather than core count increases.[157] In dual-socket configurations, Milan delivered up to 1.43x better results in fluid dynamics simulations compared to contemporary Intel counterparts.[158]

A variant line, the EPYC 7003X "Milan-X" series, extended the architecture with stacked 3D V-Cache technology, adding up to 768 MB of L3 cache per socket for enhanced bandwidth in memory-sensitive applications; the top-end EPYC 7773X features 64 cores and demonstrated 20% average improvements over standard Milan in cache-dependent tasks.[159] These processors emphasized security features like AMD Infinity Guard, including hardware-based memory encryption, while maintaining compatibility with SP3 sockets and existing EPYC ecosystems.[154]

| Model | Cores/Threads | Base/Boost Freq. (GHz) | L3 Cache (MB) | TDP (W) |
|---|---|---|---|---|
| EPYC 7763 | 64/128 | 2.45/3.50 | 256 | 280 |
| EPYC 7713 | 64/128 | 2.00/3.67 | 256 | 225 |
| EPYC 7773X (Milan-X) | 64/128 | 2.20/3.50 | 768 | 280 |