
EPYC


EPYC is a brand of multi-core x86-64 server microprocessors designed and marketed by Advanced Micro Devices (AMD) for enterprise data centers, cloud computing, and high-performance computing applications, utilizing the company's Zen microarchitecture in a chiplet-based design that enables high core counts and scalability.
Introduced in 2017 with the first-generation 7001 series (codename Naples) based on Zen cores, EPYC processors have evolved through multiple generations: the second-generation 7002 series (Rome) in 2019, third-generation 7003 series (Milan) in 2021, fourth-generation 9004 series (Genoa) in 2022, and fifth-generation 9005 series (Turin) launched in October 2024. Each iteration has incorporated advancements such as core counts up to 192 per socket, boost clocks reaching 5 GHz, support for DDR5 memory with up to 12 channels, and expanded PCIe lanes for enhanced I/O connectivity.
Key defining characteristics include AMD's Infinity Fabric interconnect for multi-chiplet coherence, which facilitates cost-effective scaling of compute resources while maintaining performance, and optimizations for workloads like AI inference, virtualization, and technical computing, where EPYC has demonstrated leadership in benchmarks with over 250 world records in areas such as throughput, energy efficiency, and total cost of ownership reductions compared to prior generations and competitors.

History

Origins and development

The origins of AMD EPYC trace back to the Zen microarchitecture project, initiated in the early 2010s as a response to the commercial and performance shortcomings of AMD's prior Bulldozer and Piledriver architectures, which had eroded market share against Intel's offerings. AMD rehired CPU architect Jim Keller in August 2012 to lead the effort, resulting in a ground-up redesign emphasizing higher instructions per clock through features like wider execution units, improved branch prediction, and a 4-wide out-of-order pipeline. Lisa Su's ascension to CEO in October 2014 marked a pivotal refocus: AMD divested non-core assets and channeled resources into Zen's completion amid financial pressures, including near-bankruptcy risks. This commitment yielded the initial Zen-based consumer launches in March 2017, validating the architecture's roughly 52% IPC uplift over the preceding Excavator cores in independent tests. EPYC's server-specific development diverged by adopting a multi-chip module (MCM) design from inception, motivated by yield limitations on large monolithic dies at GlobalFoundries' 14 nm node and the need for scalable core counts beyond 8-16. Each EPYC "Naples" processor combined up to four 8-core "Zeppelin" dies linked by Infinity Fabric across the package substrate, enabling configurations from 8 to 32 cores at TDPs up to 180 W. This design, shaped by server demands such as NUMA-aware scheduling and abundant PCIe 3.0 connectivity, culminated in the EPYC 7001 series launch on June 20, 2017.

Initial launch and market entry

The first-generation AMD EPYC processors, codenamed Naples and branded as the EPYC 7000 series, were previewed by AMD on March 7, 2017, as part of its re-entry into the high-performance server market following years of diminished presence against Intel's Xeon dominance. These processors utilized the Zen microarchitecture, fabricated on GlobalFoundries' 14 nm process, and supported up to 32 cores per socket, with dual-socket configurations enabling up to 64 cores in a system. AMD positioned EPYC as offering superior per-socket core density and the Infinity Fabric interconnect for scalable multi-chip performance compared to contemporaneous Intel offerings. Official availability began on June 20, 2017, with initial pricing ranging from approximately $400 for entry-level 8-core models like the EPYC 7251 (2.1 GHz base, 120 W TDP) to over $4,000 for high-end 32-core variants such as the EPYC 7601 (2.2 GHz base, 180 W TDP). Launch-day support came from major original equipment manufacturers (OEMs), including Hewlett Packard Enterprise (HPE) and other leading server vendors, which introduced compatible platforms emphasizing EPYC's advantages in virtualization, database, and HPC workloads. Independent benchmarks at launch highlighted EPYC's competitiveness, with single-socket systems outperforming dual-socket Xeon Scalable equivalents in SPEC integer-rate tests by up to 50% in some configurations, though real-world adoption was tempered by ecosystem maturity and Intel's entrenched position exceeding 95% share in x86 servers. Market entry faced challenges from limited initial validation cycles and supply constraints, but EPYC gained traction in cost-sensitive segments such as cloud and hosting, with early adopters citing lower total cost of ownership (TCO) due to higher core-per-dollar ratios. By late 2017, volume shipments ramped through OEM channels and hyperscalers, marking AMD's first meaningful server revenue resurgence since the Opteron era, though it captured under 5% market share in the initial year amid Intel's response with new Xeon Scalable launches.

Evolution through generations

The first-generation AMD EPYC processors, codenamed Naples and marketed as the EPYC 7001 series, launched on June 20, 2017. Built on the Zen microarchitecture using a 14 nm node, these processors supported up to 32 cores and 64 threads per socket on the SP3 socket, with eight DDR4 memory channels and 128 PCIe 3.0 lanes. This debut marked AMD's re-entry into the server market after the Opteron line, emphasizing a multi-die design to scale core counts cost-effectively compared to monolithic dies. The second generation, the EPYC 7002 series codenamed Rome, arrived on August 7, 2019. Shifting to the Zen 2 microarchitecture with 7 nm compute chiplets (CCDs) atop a 14 nm I/O die, it doubled maximum core counts to 64 cores and 128 threads per socket while retaining the SP3 socket. Key advancements included higher clock speeds, an improved Infinity Fabric interconnect for better multi-chiplet coherence, and sustained support for eight DDR4 channels alongside an upgrade to 128 PCIe 4.0 lanes, yielding up to 1.8 times the performance of Naples in certain workloads. Third-generation EPYC 7003 processors, codenamed Milan, launched March 15, 2021, incorporating the Zen 3 microarchitecture on refined 7 nm CCDs. Retaining SP3 compatibility, they maintained up to 64 cores but delivered a 19% instructions-per-clock (IPC) uplift over Zen 2, alongside a unified L3 cache per core complex (CCX) for reduced latency. A variant, Milan-X, introduced in 2022 with 3D V-Cache stacking up to 768 MB of L3 per socket, targeted cache-sensitive applications like databases and HPC simulations.
| Generation | Codename | Launch Date | Microarchitecture | Max Cores/Threads | Socket | Key Process Nodes |
|---|---|---|---|---|---|---|
| 1st | Naples | June 20, 2017 | Zen 1 | 32/64 | SP3 | 14 nm |
| 2nd | Rome | August 7, 2019 | Zen 2 | 64/128 | SP3 | 7 nm CCD / 14 nm IOD |
| 3rd | Milan | March 15, 2021 | Zen 3 | 64/128 | SP3 | 7 nm |
| 4th | Genoa/Bergamo | November 10, 2022 (Genoa); June 13, 2023 (Bergamo) | Zen 4 / Zen 4c | 96/192 (Genoa); 128/256 (Bergamo) | SP5 | 5 nm CCD / 6 nm IOD |
| 5th | Turin | October 10, 2024 | Zen 5 | 192/384 | SP5 | 4 nm / 3 nm variants |
The fourth generation transitioned to the EPYC 9004 series with a new SP5 socket, launching Genoa on November 10, 2022, using Zen 4 on 5 nm CCDs and a 6 nm I/O die for up to 96 cores and 12 DDR5 channels. Bergamo, released June 13, 2023, employed compact Zen 4c cores for density-optimized cloud workloads, achieving 128 cores per socket. These models expanded to 128 PCIe 5.0 lanes and added instruction support useful for AI inference. The fifth-generation EPYC 9005 series, codenamed Turin, debuted October 10, 2024, leveraging Zen 5 for up to 16% IPC gains and scaling to 192 cores using denser Zen 5c configurations on advanced nodes, including 4 nm and 3 nm elements. Supporting 12 DDR5 channels and an enhanced Infinity Fabric, Turin emphasizes AI, cloud, and HPC efficiency, with top models reaching 5 GHz boosts. Each iteration has refined the chiplet paradigm, prioritizing scalable parallelism, lower power per core, and in-generation socket compatibility over raw monolithic scaling.

Architecture and design

Core microarchitecture

The core of AMD EPYC processors is derived from the company's Zen family of designs, optimized for server workloads with emphases on per-core performance, multi-threading via simultaneous multithreading (SMT), and scalability in multi-socket configurations. Each generation introduces iterative improvements in instructions per clock (IPC), branch prediction, cache hierarchies, and execution pipelines, enabling higher throughput for compute-intensive tasks such as virtualization, databases, and high-performance computing (HPC). First-generation EPYC processors (7001 series, codenamed Naples), introduced in June 2017, employed the initial Zen microarchitecture on a 14 nm process node, supporting up to 32 cores and 64 threads per socket with 512 KB of L2 cache per core and 8 MB of shared L3 cache per four-core complex. These cores featured a 4-wide decode/rename stage, four integer ALUs alongside four 128-bit floating-point pipes, and a 192-entry reorder buffer, marking AMD's return to competitiveness in server CPUs through balanced integer and FP performance. Second-generation EPYC (7002 series, Rome), launched in August 2019, adopted Zen 2 cores fabricated on a 7 nm node, doubling L3 cache to 16 MB per core complex and introducing chiplet-based scaling for up to 64 cores per socket. Zen 2 enhanced IPC by approximately 15% over Zen 1 via a wider dispatch stage, improved TAGE-based branch prediction, and doubled floating-point throughput with dual 256-bit FMA units, while supporting PCIe 4.0 for better I/O integration. Third-generation EPYC (7003 series, Milan), released in March 2021, utilized Zen 3 cores on a refined 7 nm node, unifying the L3 within each chiplet into a single 32 MB cache per eight-core complex for reduced latency in cross-core access. Key advancements included reorganized integer and floating-point schedulers, a doubled L1 branch target buffer of 1K entries, and up to 19% IPC uplift, enabling configurations up to 64 cores with sustained boosts exceeding 3.5 GHz in high-core-count variants.
Fourth-generation EPYC (9004 series, Genoa), announced in November 2022, incorporated Zen 4 cores on a 5 nm node, supporting up to 96 cores per socket, with variants including dense Zen 4c cores featuring reduced cache per core for higher core density in cost-sensitive workloads. Zen 4 delivered around 13% IPC gains through front-end improvements such as a larger op cache and better branch prediction, AVX-512 support executed on double-pumped 256-bit datapaths, and VNNI/BF16 instructions for AI inferencing, alongside full PCIe 5.0 compatibility. Fifth-generation EPYC (9005 series, Turin), unveiled in October 2024, employs Zen 5 cores on TSMC's 4 nm process, achieving up to 17% IPC improvement for general workloads and up to 37% for specific AI/HPC tasks via optimizations in the out-of-order engine, a wider 8-wide dispatch, and full 512-bit datapaths for AVX-512. Dense Zen 5c variants scale to 192 cores per socket, trading cache capacity for density, while maintaining compatibility with Infinity Fabric interconnects and prioritizing efficiency in large-scale cloud and AI deployments.
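The per-generation uplifts quoted above compound multiplicatively, which is how a string of 13-19% steps becomes a large cumulative gain. A quick sketch using the approximate marketing figures from this section (not measured results):

```python
def compound_ipc(uplifts):
    """Multiply per-generation IPC gains into a cumulative factor vs. the baseline."""
    factor = 1.0
    for u in uplifts:
        factor *= 1.0 + u
    return factor

# ~15% (Zen 2), ~19% (Zen 3), ~13% (Zen 4), ~17% (Zen 5), per the text above
zen_uplifts = [0.15, 0.19, 0.13, 0.17]
print(f"Zen 5 vs. Zen 1 IPC: ~{compound_ipc(zen_uplifts):.2f}x")  # ~1.81x
```

The geometric compounding (rather than adding the percentages, which would give only ~1.64x) is why cumulative IPC roughly 1.8x that of Zen 1 is plausible from these individual steps.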

Chiplet-based multi-chip module

AMD EPYC processors from the second generation (Rome) onward utilize a chiplet-based multi-chip module (MCM) architecture, integrating multiple specialized dies into a single package (SP3, later SP5). This design separates compute logic into core complex dies (CCDs) and system interfaces into a dedicated I/O die (IOD), connected via AMD Infinity Fabric links. The approach enables scalable core counts by adding CCDs, reaching up to 16 CCDs in fifth-generation Turin processors (or 12 denser Zen 5c CCDs in 192-core configurations). Each CCD typically houses eight full Zen cores with 32 MB of L3 cache in standard variants, or denser Zen 4c/5c cores with reduced cache per core for higher thread density, fabricated on leading-edge nodes such as TSMC's 5 nm for the fourth generation or 3 nm-class for Turin's dense variants. The central IOD, produced on more mature processes such as 12 nm (Rome/Milan) or 6 nm (Genoa/Turin), integrates the DDR memory controllers (twelve DDR5 channels in current generations), up to 128 PCIe 5.0 lanes, and CXL connectivity, exposing these resources uniformly across the package to minimize NUMA penalties. Inter-die communication relies on the Global Memory Interconnect (GMI), a subset of Infinity Fabric, with each CCD linking directly to the IOD via dedicated high-bandwidth ports, ensuring low-latency access to shared resources. This modular structure improves manufacturing yields by allowing defective CCDs to be discarded without scrapping the entire processor, since smaller dies have a higher probability of being defect-free. It also facilitates heterogeneous integration, mixing process technologies to optimize cost and performance: advanced nodes for core density, cost-effective mature nodes for I/O, resulting in processors like the 128-core EPYC 9755 at a 500 W TDP. Deployment experience shows this yields superior scalability over monolithic designs, enabling 96-core and larger parts at yields unattainable in single-die equivalents.
The design's advantage stems from disaggregating functions to exploit specialized fabrication, though it introduces modest overheads in inter-chiplet traffic, mitigated by the IOD's central switching role and clock-domain synchronization.
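The yield argument above can be illustrated with a standard Poisson defect model, where the probability of a defect-free die falls exponentially with area. The die areas and defect density below are hypothetical, chosen only to show why several small CCDs out-yield one large monolithic die:

```python
import math

def die_yield(area_mm2, defects_per_mm2):
    """Poisson model: probability that a die of the given area has zero defects."""
    return math.exp(-area_mm2 * defects_per_mm2)

D0 = 0.001        # hypothetical defect density, defects per mm^2
CCD_AREA = 72.0   # hypothetical chiplet area in mm^2

mono = die_yield(8 * CCD_AREA, D0)   # one 576 mm^2 monolithic compute die
ccd = die_yield(CCD_AREA, D0)        # one small CCD

print(f"monolithic yield: {mono:.1%}")  # a single defect scraps all 8 CCDs' worth
print(f"per-CCD yield:    {ccd:.1%}")   # a bad CCD is discarded individually
```

With these illustrative numbers, a monolithic die yields around 56% while each chiplet yields around 93%, and a bad chiplet costs only one small die rather than the whole processor.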

Interconnect and fabric technology

AMD EPYC processors employ Infinity Fabric, a scalable interconnect that provides high-bandwidth, low-latency data transfer and coherency across chiplets within a single socket and between sockets in multi-processor configurations. A successor to earlier technologies like HyperTransport, the fabric replaces traditional on-die buses with a modular die-to-die linking system split into a control plane (the Scalable Control Fabric) and a data plane (the Scalable Data Fabric). In EPYC's chiplet-based design, Infinity Fabric connects up to 12 or more Core Complex Dies (CCDs), each housing multiple cores and shared L3 cache, to a central I/O Die (IOD) responsible for memory controllers, PCIe interfaces, and other peripherals, creating a unified NUMA domain per socket by default. Links operate with asymmetric bandwidth, providing 32 bytes per cycle for reads and 16 bytes for writes at the Infinity Fabric clock (FCLK) frequency, which typically ranges from 1.0 to 1.8 GHz depending on configuration and cooling. The intra-socket topology is star-like, with the IOD as the hub, limiting data access between CCDs to at most two hops. Early implementations in 1st-generation EPYC (Naples, 2017) delivered point-to-point die bandwidth of 10.65 GB/s per link, scaling to aggregate throughputs of 41.4 GB/s per die in dual-socket setups. Subsequent generations enhanced performance: 2nd-generation EPYC (Rome, Zen 2) maintained similar per-link specs but optimized FCLK-to-memory-clock ratios for better efficiency; the 3rd generation (Milan, Zen 3) introduced Infinity Fabric 3.0 with support for coherent GPU integration; the 4th generation (Genoa, Zen 4) doubled CPU-to-CPU connectivity speeds to up to 36 Gb/s per link (using PCIe Gen 5 physical layers with custom protocols) and reduced NUMA latency variance compared to predecessors; the 5th generation (Turin, Zen 5) further boosts inter-die throughput with dual GMI links per CCD in optimized models, doubling aggregate per-die bandwidth.
For multi-socket scalability, external Infinity Fabric links (xGMI) enable direct socket-to-socket communication without full PCIe protocol overheads, with up to four 32 GT/s xGMI links per socket in 4th- and 5th-generation models, supporting configurations such as 2P systems with 512 GB/s of theoretical inter-socket bandwidth across four links. These advancements prioritize workload balance in data centers, though effective bandwidth depends on NUMA-aware software tuning to mitigate remote-access penalties.
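The 512 GB/s figure follows from straightforward link arithmetic. The sketch below assumes x16-wide xGMI links at 32 GT/s per lane, counted across both directions; treat the widths and rates as illustrative assumptions rather than AMD-published specifications:

```python
def xgmi_bandwidth_gbs(links, lanes_per_link=16, gtps_per_lane=32, duplex=True):
    """Raw theoretical inter-socket bandwidth in GB/s (ignores encoding overhead)."""
    per_link_one_way = lanes_per_link * gtps_per_lane / 8  # GB/s, one direction
    return links * per_link_one_way * (2 if duplex else 1)

print(xgmi_bandwidth_gbs(4))                # four links, both directions -> 512.0
print(xgmi_bandwidth_gbs(4, duplex=False))  # one direction only -> 256.0
```

Real links also lose a few percent to line encoding and protocol framing, so delivered bandwidth sits below this raw ceiling.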

Memory subsystem and I/O capabilities

The memory subsystem of EPYC processors uses controllers integrated in the processor package (centralized in the I/O die from the second generation onward) to provide high-bandwidth access via multiple channels per socket, optimized for memory-intensive server workloads such as in-memory databases and analytics. First- through third-generation models (EPYC 7001, 7002, and 7003 series, based on Zen, Zen 2, and Zen 3 cores respectively) support eight channels of DDR4 at speeds up to 3200 MT/s, with maximum capacities of 4 TB per socket using registered DIMMs (RDIMMs) or load-reduced DIMMs (LRDIMMs). This configuration delivers aggregate bandwidth exceeding 200 GB/s in balanced populations, with each channel supporting up to two DIMMs for flexibility in 1DPC (one DIMM per channel) or 2DPC setups. Fourth-generation EPYC 9004 series processors (Genoa, Zen 4-based) advanced to twelve channels of DDR5-4800 memory, substantially boosting per-socket bandwidth over the prior DDR4 setups and supporting up to 6 TB of capacity, including 3D-stacked (3DS) RDIMM options. Fifth-generation EPYC 9005 series processors (Turin, Zen 5-based, launched October 2024) further enhance this with DDR5-6400 speeds on the same twelve channels, enabling up to 9 TB per socket and aggregate bandwidths approaching 500 GB/s in optimal configurations, while maintaining compatibility with prior DDR5 RDIMMs, LRDIMMs, and NVDIMMs. Across generations, AMD recommends balanced population across channels to minimize latency and maximize throughput, with BIOS options for fine-tuning NUMA domains and prefetch behaviors. I/O capabilities center on the processor's extensive PCIe connectivity, providing up to 128 lanes per socket for attachment of GPUs, NVMe storage, and networking devices, with bifurcation down to x1 for flexible endpoint allocation. First-generation processors offered 128 PCIe 3.0 lanes at 8 GT/s, while second- and third-generation models upgraded to PCIe 4.0 at 16 GT/s for doubled bandwidth per lane. Fourth- and fifth-generation processors deliver 128 PCIe 5.0 lanes at 32 GT/s, with dual-socket systems accessing up to 160 lanes via additional fabric links, enabling high-density accelerator deployments.
Starting with the fourth generation, support for Compute Express Link (CXL) 1.1+ allows up to 48 dedicated lanes for coherent memory pooling and fabric-attached devices, facilitating disaggregated memory expansion beyond local limits. Smaller lines like the EPYC 4004 and 8004 series scale down to 28-96 lanes while retaining PCIe Gen5 compatibility for edge and dense server use cases.
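The aggregate-bandwidth figures above follow from channel count times transfer rate times the 8-byte DDR bus width. A minimal sketch of the theoretical peaks (sustained real-world bandwidth is lower, which is why roughly 500 GB/s is cited against a 614 GB/s peak for Turin):

```python
def dram_peak_gbs(channels, mega_transfers_per_s, bus_bytes=8):
    """Theoretical peak DRAM bandwidth in GB/s for a given memory population."""
    return channels * mega_transfers_per_s * bus_bytes / 1000

print(dram_peak_gbs(8, 3200))    # 1st-3rd gen: 8ch DDR4-3200  -> 204.8 GB/s
print(dram_peak_gbs(12, 4800))   # 4th gen:     12ch DDR5-4800 -> 460.8 GB/s
print(dram_peak_gbs(12, 6400))   # 5th gen:     12ch DDR5-6400 -> 614.4 GB/s
```

The 204.8 GB/s figure matches the "exceeding 200 GB/s" claim for the DDR4 generations, and the jump to twelve DDR5 channels accounts for most of the generational bandwidth gain.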

Performance and efficiency

Benchmark comparisons

The AMD EPYC processor family has shown marked advantages over Intel Xeon processors in multi-threaded benchmarks, driven by higher core counts, improved IPC via successive Zen architectures, and efficient chiplet designs. In SPEC CPU 2017 integer rate tests, the 5th-generation EPYC 9965 (192 cores) achieves a peak score of over 1,000 in multi-threaded configurations, surpassing comparable Xeon 6 series results submitted to SPEC. Similarly, the EPYC 9654 from the 4th-generation lineup set records with integer rate peaks around 1,780, representing a 2.8-fold improvement over Intel's prior Ice Lake baselines in equivalent setups. Phoronix Test Suite evaluations, encompassing compilation, encoding, simulation, and database workloads, further highlight EPYC's edge. The EPYC 9965 Turin variant delivers geometric mean performance uplifts of 30-50% against the Intel Xeon 6980P in over 140 multi-threaded tests on Ubuntu platforms, with particular dominance in AVX-512 accelerated tasks like scientific simulations. In AWS EC2 cloud instances, EPYC Turin-powered m8a configurations outperform Intel Granite Rapids m8i equivalents in CPU-bound Phoronix benchmarks, yielding higher throughput per vCPU given the physical-core-only threading model of those instances.
| Benchmark Suite | EPYC Model (Generation) | Comparable Xeon | Performance Edge (EPYC) |
|---|---|---|---|
| SPEC CPU 2017 Integer Rate (Peak) | 9654 (Genoa, 4th) | Ice Lake Platinum | ~2.8x |
| Phoronix Geometric Mean (Multi-Threaded) | 9965 (Turin, 5th) | 6980P (Granite Rapids) | 30-50% |
| AWS EC2 CPU-Bound Tasks | m8a (Turin, 5th) | Granite Rapids m8i | Higher throughput per core |
These results stem from EPYC's scalable core counts and Infinity Fabric interconnect, though Intel retains leads in select single-threaded or latency-sensitive scenarios not emphasized in aggregate server benchmarks. Earlier generations such as 3rd-gen Milan also exceeded Intel's 3rd-gen Xeon Scalable by up to 50% in SPEC integer rates, establishing a trend of generational leadership in core-heavy environments. TPC-style transaction benchmarks remain less commonly published for direct head-to-heads, but EPYC's multi-core prowess aligns with superior TPC-C throughput in vendor-submitted configurations.
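Composite scores such as the Phoronix multi-test result and SPECrate aggregates are geometric, not arithmetic, means, so a single outlier test cannot dominate the composite. A minimal sketch with made-up per-test speedups:

```python
import math

def geomean(ratios):
    """Geometric mean of per-test speedup ratios, as SPEC and Phoronix report."""
    return math.exp(sum(math.log(r) for r in ratios) / len(ratios))

# Hypothetical per-test uplifts of one CPU over another (illustrative only)
speedups = [1.30, 1.52, 1.18, 1.45, 1.39]
print(f"composite uplift: {geomean(speedups):.2f}x")
```

Because the geometric mean multiplies ratios, it is the appropriate way to summarize "X is N times faster" results across heterogeneous tests; an arithmetic mean of ratios would overstate the composite.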

Power consumption and thermal characteristics

AMD EPYC processors exhibit thermal design power (TDP) ratings that scale with core count and generational advancements, generally ranging from 65 W in low-power variants like the EPYC 4004 series to 500 W in high-end 5th-generation models such as those in the 9005 lineup. Earlier generations, including the 4th-generation 9004 series, feature TDPs from around 200 W for mid-range SKUs up to 400 W for dense configurations, while the related 8004 series (for example, the 64-core EPYC 8534P) targets lower envelopes. These ratings represent the maximum sustained dissipation under typical workloads, informing power budgeting and cooling design. Actual power draw often deviates from TDP, with benchmarks revealing average consumption under load ranging from 221 W to peaks exceeding 355 W for models like the EPYC 9554 in performance-oriented modes. Idle power consumption is notably higher than in consumer AMD Ryzen counterparts, typically 50–110 W for EPYC systems depending on generation, BIOS settings, and peripheral load, a consequence of the server-oriented design prioritizing I/O richness and responsiveness over minimal quiescent draw. Successive generations demonstrate improved performance-per-watt metrics; for instance, 5th-generation EPYC processors achieve up to 37% higher IPC in HPC workloads compared to 4th-generation equivalents, enabling lower overall energy use for equivalent throughput. Thermal management relies on AMD's Infinity Power Management system, integrated via the System Management Unit (SMU), which dynamically adjusts voltage, frequency, and power allocation across chiplets while monitoring die temperatures to prevent throttling. Operating junction temperatures are designed to sustain up to 95°C under load for standard models, with extended-temperature variants like the EPYC 8004PN series rated for harsher ambient conditions in edge deployments. Cooling requirements accommodate air-cooled heatsinks for most configurations, as validated by third-party tests with Noctua TR4-SP3 compatible coolers maintaining sub-throttle temperatures on 1st- and later-generation EPYC dies.
However, high-TDP SKUs approaching or exceeding 400 W, particularly in dense multi-socket setups, may necessitate liquid cooling to manage heat densities over 700 W per node when paired with accelerators. Future iterations, such as the anticipated EPYC Venice generation, could surpass 1,000 W per socket, pushing reliance on direct-to-chip liquid cooling beyond traditional air-cooling limits.
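The gap between the ~221 W and ~355 W load averages cited above translates directly into operating cost. A rough sketch, where the $0.10/kWh electricity rate and 1.5 PUE (power usage effectiveness, the facility-overhead multiplier) are illustrative assumptions rather than measured figures:

```python
def annual_energy_cost_usd(avg_watts, usd_per_kwh=0.10, pue=1.5):
    """Yearly electricity cost for one server, with cooling overhead folded in via PUE."""
    kwh_per_year = avg_watts * 24 * 365 / 1000
    return kwh_per_year * pue * usd_per_kwh

for watts in (221, 355):
    print(f"{watts} W average -> ${annual_energy_cost_usd(watts):,.2f}/year")
```

Multiplied across thousands of nodes, differences of this size are why performance-per-watt, not just peak performance, drives server procurement.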

Workload-specific optimizations

AMD EPYC processors feature architectural enhancements and BIOS-configurable parameters tailored to high-performance computing (HPC) workloads, including support for up to 192 cores per socket in the 9005 series, which delivers a geometric mean instructions-per-clock (IPC) uplift of 1.369x over prior generations across select HPC benchmarks. These optimizations leverage Infinity Fabric interconnects for low-latency scaling across chiplets, enabling efficient parallelism in simulations and scientific modeling, with BIOS settings like NUMA nodes per socket (NPS) configured to NPS1 for compute-bound tasks to minimize remote memory access latency. For artificial intelligence (AI) and inference, frequency-optimized variants such as the EPYC 9575F prioritize clock speeds over core density, achieving over 10x lower latency when serving AI models compared to competing processors in latency-constrained environments. Integration with accelerators via up to 160 PCIe Gen5 lanes supports GPU offloading, while ZenDNN libraries enhance vectorized operations for deep-learning inference tasks; simultaneous multithreading (SMT) can boost throughput by 30-60% in thread-parallel inference workloads when enabled in BIOS. Database and analytics workloads benefit from high memory bandwidth via 12-channel DDR5 support and large last-level caches, with V-Cache-equipped models (denoted by an 'X' suffix) providing up to 768 MB of L3 per socket to reduce cache misses in query-intensive operations. BIOS tuning recommends NPS4 for memory-bound databases to align NUMA domains with data locality, improving throughput in OLTP and OLAP scenarios by optimizing data placement across chiplet-based dies. Virtualization environments leverage EPYC's high core counts and Secure Encrypted Virtualization (SEV) for isolated VM execution, with BIOS options enabling maximum performance mode and all cores active to support dense VDI deployments. For network-intensive virtualization, Throughput Intensive profiles adjust Infinity Fabric clocking to sustain high packet rates without dynamic downclocking.
Financial services workloads further benefit from compiler flags targeting AVX-512 utilization and tuned math libraries, yielding measurable gains in risk modeling and related quantitative analyses on 9005 series processors.
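NUMA-aware placement, such as the NPS4 tuning described above, ultimately comes down to keeping threads on cores near their memory. A minimal Linux-only sketch using Python's standard library; the CPU set {0} is just an illustrative stand-in for one NUMA node's core list, which in practice would be read from /sys/devices/system/node:

```python
import os

def pin_to_cpus(cpus):
    """Restrict the current process to the given logical CPUs (Linux only).

    With an EPYC socket configured as NPS4, each NUMA node owns a quarter of
    the socket's cores and memory channels; pinning a process to one node's
    CPUs avoids remote memory hops across the Infinity Fabric.
    """
    os.sched_setaffinity(0, cpus)  # pid 0 = the calling process
    return os.sched_getaffinity(0)

print(pin_to_cpus({0}))  # CPU 0 exists on any system
```

Tools like numactl offer the same control (plus memory-policy binding) without code changes; the point is that the hardware's NPS setting only pays off when software places threads and allocations accordingly.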

Market reception and impact

Adoption by enterprise and hyperscalers

AMD EPYC processors have seen rapid adoption among hyperscalers, driven by their performance in AI inference, cloud workloads, and cost efficiency. Major providers including Meta, Google, Amazon, and Oracle expanded EPYC-based instances by approximately 27% in 2024, exceeding 1,000 instances across their platforms as they scaled data center operations. Meta, a leading adopter, has deployed over 1.5 million EPYC CPUs, utilizing them for training, fine-tuning, and running inference on large models such as its 405-billion-parameter Llama 3.1. In the second quarter of 2025, the largest hyperscalers introduced more than 100 new AMD-powered instances, reflecting sustained momentum in EPYC integration for high-density computing. OVHcloud, a hyperscale cloud provider, leverages EPYC for flexible, high-performance platforms supporting cutting-edge workloads. Enterprise adoption has similarly accelerated, with EPYC enabling efficiency gains in private data centers, virtualization, and AI applications. Kakao Enterprise, a South Korean cloud provider, reduced its data center footprint by 50% while increasing performance by 30% after migrating to EPYC CPUs. A cybersecurity firm integrated 5th Gen EPYC processors across its data security platform in June 2025 to enhance AI-ready cloud deployments. Partnerships with OEMs have broadened access: Dell Technologies offers PowerEdge R6715 and R7715 servers with 5th Gen EPYC, delivering up to 37% more drive capacity; HPE ProLiant Gen11 servers incorporate EPYC for AI, HPC, and virtualization; and Supermicro expanded its MicroBlade portfolio with the EPYC 4005 series in October 2025 for dense edge and enterprise configurations. Governments and organizations worldwide select EPYC-based servers for analytics and secure processing.
This uptake correlates with EPYC capturing a record 41% revenue share of the server CPU market during Q2 2025, according to Mercury Research data, up from near-zero at its 2017 re-entry, reflecting a shift toward chiplet designs offering superior core counts and efficiency for enterprise-scale deployments. Enterprise adoption tripled year-over-year in 2024, fueled by EPYC's optimizations for virtual machines, databases, and hybrid environments.

Competition dynamics with Intel

AMD EPYC processors entered the market in June 2017 with the first-generation Naples series, challenging Intel's longstanding dominance in the x86 server CPU segment, where Intel held over 95% share prior to AMD's re-entry. AMD's Zen-based multi-die architecture enabled higher core counts at competitive prices, disrupting a pricing model Intel had sustained through limited competition. By offering up to 32 cores per socket initially, surpassing Intel's then-maximum of 28, EPYC targeted parallel workloads prevalent in cloud and HPC, where core counts directly correlated with throughput gains. Subsequent generations amplified this advantage: the second generation (Rome, 2019) reached 64 cores, the third (Milan, 2021) added Zen 3 IPC improvements, and the fourth (Genoa, 2022) scaled to 96 cores with Zen 4 efficiency. Intel responded with delayed monolithic designs like Ice Lake (2021, 40 cores max) and Sapphire Rapids (2023, 60 cores), hampered by process node struggles that limited density and power efficiency. AMD's 5 nm-class processes in later EPYC iterations delivered superior performance per watt, often 1.5-2x in multi-threaded benchmarks against equivalent Xeon SKUs, driven by modular chiplets allowing cost-effective scaling without monolithic die yield issues. This forced Intel into pricing adjustments, with the Xeon 6 series (launched late 2024) seeing MSRP cuts of up to 30% by January 2025 to counter EPYC 9005 Turin's 192-core density and lower cost per core. Market share dynamics reflect these technical edges: AMD's server CPU revenue share climbed from under 10% in 2018 to approximately 33% by June 2025, eroding Intel's from over 90% to 62%, per Mercury Research estimates, with hyperscalers favoring EPYC for cost-sensitive, core-heavy deployments. In Q1 2025, AMD hit 39.4% unit share, up 6.5 points quarter-over-quarter, fueled by EPYC's adoption in AI hosting and cloud services where thread-parallelism trumps single-thread performance.
Intel retained leads in latency-critical applications via optimized libraries and broader ecosystem maturity, but AMD's value proposition (more cores at 20-50% lower effective pricing) shifted procurement toward EPYC, prompting Intel's E-core-based density response with Sierra Forest. By mid-2025, both vendors discounted flagship models by up to 50% amid softening demand, underscoring intensified rivalry.

Influence on data center economics and AI workloads

AMD EPYC processors have driven significant cost reductions in data center operations through higher core densities and improved energy efficiency compared to competing offerings, enabling greater workload consolidation and reduced total cost of ownership (TCO). For instance, the chiplet-based design supports up to 192 cores per socket in the 5th-generation EPYC (Turin), allowing operators to achieve equivalent performance with fewer servers, which lowers capital expenditures on hardware and rack space while decreasing power and cooling demands. Independent analyses indicate that EPYC deployments can consolidate workloads such that refreshed servers require 31% fewer cores for the same work, contributing to efficiency improvements and operational savings. Hyperscale providers have accelerated adoption, with over 100 new AMD-powered instances launched in Q2 2025 alone, reflecting a 36.5% share for AMD's x86 server CPUs that year, driven by these economic advantages. Specific case studies underscore these benefits: Twitter (now X) reported a 25% TCO reduction after deploying 2nd-generation EPYC processors across its data centers in 2019, primarily from enhanced power efficiency and a reduced hardware footprint. In virtualization environments, EPYC has enabled up to 42% lower software licensing costs through superior core density, yielding CapEx payback periods of approximately two months. AMD's TCO estimator tools further quantify potential savings, showing EPYC systems offsetting costs via reduced energy consumption and emissions compared to Xeon equivalents, with full data center builds potentially paying for themselves through efficiency gains. For AI workloads, EPYC processors enhance economics by providing a scalable CPU foundation that optimizes GPU utilization, data preparation, and model serving, often at lower power draw than alternatives. The 4th- and 5th-generation models deliver over 10x better performance in latency-sensitive inference serving compared to competing processors, balancing CPU-GPU ecosystems to minimize idle resources and operational expenses.
This efficiency supports "everyday AI" tasks like analytics and preprocessing, where EPYC's high memory bandwidth (up to 12 channels of DDR5) and extensive PCIe lanes (up to 128) reduce the need for additional accelerators, cutting cloud costs by improving utilization and enabling fewer nodes for training or serving. Enterprises report OPEX reductions from EPYC's role in consolidating infrastructure, as its core density allows hyperscalers to handle surging demand with optimized budgets rather than proportional hardware scaling.
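The consolidation math behind such TCO claims is simple: apply the cited 31% core reduction to the installed base and divide by the new per-node core count. The fleet sizes below are hypothetical, used only to show the shape of the calculation:

```python
import math

def refreshed_node_count(old_nodes, cores_per_old_node, cores_per_new_node,
                         core_reduction=0.31):
    """Nodes needed after a refresh that does the same work with fewer cores."""
    required_cores = old_nodes * cores_per_old_node * (1 - core_reduction)
    return math.ceil(required_cores / cores_per_new_node)

# Hypothetical fleet: 100 dual-socket 28-core nodes -> single-socket 192-core nodes
print(refreshed_node_count(100, 2 * 28, 192))
```

Shrinking a 100-node fleet to roughly a fifth of its size is where the rack-space, power, and per-server software-licensing savings in this section come from.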

Variants and adaptations

Embedded and edge computing variants

AMD develops EPYC Embedded processors specifically for applications requiring long product lifecycles, such as industrial control, networking appliances, storage systems, and edge inference, with availability guarantees extending up to 10 years to support long-lived deployments. These variants leverage the same Zen-based microarchitectures as mainstream EPYC server processors but incorporate optimizations like configurable TDP, enhanced reliability features, and support for ruggedized systems to meet the demands of non-data-center environments. The inaugural EPYC Embedded 3000 series, based on the first-generation Zen core, targets single-socket systems with models ranging from 4 to 16 cores, base clocks up to 2.14 GHz, and configurable TDPs from 45 W to 180 W. It supports dual- or quad-channel DDR4-2666 up to 1 TB, up to 128 PCIe 3.0 lanes, and integrated features like dual 10GbE MACs for networking efficiency, making it suitable for storage controllers and edge nodes. Later revisions in 2020 added models like the 16-core EPYC Embedded 3451 with 32 threads and 64 MB of L3 cache, emphasizing power efficiency for industrial applications. Subsequent embedded variants align with EPYC server generations for scalability. The EPYC Embedded 7002 series (Rome, Zen 2) introduced higher core densities up to 64 cores, PCIe 4.0 support, and improved per-core performance for edge analytics and real-time processing. The fourth-generation EPYC Embedded 9004 series (Genoa, Zen 4) added DDR5 memory channels and up to 96 cores, enhancing bandwidth for inference at the edge while maintaining enterprise-grade RAS (reliability, availability, serviceability) features like advanced error correction. In 2025, AMD launched the fifth-generation EPYC Embedded 9005 series (Turin, Zen 5), scaling from 8 to 192 cores with up to 512 MB of L3 cache and 160 PCIe 5.0 lanes, optimized for compute-intensive embedded tasks like industrial control and high-frequency networking.
Complementing this, the EPYC Embedded 4005 series, announced on September 16, 2025, focuses on low-power designs with up to 16 cores, energy-efficient operation under 100W TDP, AM5 socket compatibility for easier integration, and low-latency optimizations for real-time data processing in compact appliances. For broader edge deployments, AMD positions the EPYC 8004 series (Zen 4c, Siena) as a dense, power-optimized option with up to 64 cores in single-socket configurations, delivering cost-effective performance for GPU-accelerated edge workloads while supporting over 1 TB of DDR5 memory across six channels. These variants collectively enable edge computing by providing scalable x86 performance in thermally constrained, space-limited environments, outperforming prior embedded solutions in throughput per watt for tasks like video analytics and baseband processing.

Dense and specialized server variants

AMD EPYC processors feature dense variants tailored for high core-density deployments in scale-out environments, prioritizing core count over per-core performance to maximize throughput in virtualized and cloud-native workloads. The Bergamo subfamily within the 4th-generation EPYC 9004 series employs Zen 4c cores, which are physically smaller than standard Zen 4 cores while retaining the same instruction set and features, enabling up to 128 cores and 256 threads per socket on the SP5 platform. Launched in 2023, these processors support 12-channel DDR5 memory and 128 PCIe 5.0 lanes, facilitating configurations that consolidate workloads onto fewer nodes, thereby reducing rack space, power draw, and operational costs compared to prior generations. Specialized server variants extend this with optimizations for memory-intensive or latency-sensitive applications. The Genoa-X processors, also in the EPYC 9004 series, incorporate stacked 3D V-Cache technology to expand L3 capacity to over 1 GB per socket—specifically up to 1,152 MB in models like the EPYC 9684X—accelerating data access in HPC, in-memory databases, and simulation tasks where cache misses dominate bottlenecks. These differ from standard Genoa parts by trading some frequency headroom for cache density, with up to 96 cores but enhanced hit rates that AMD claims deliver up to 2x performance in select HPC benchmarks. Additionally, frequency-optimized SKUs such as the EPYC 9474F and the later 9575F raise base and boost clocks for low-latency transactional processing while maintaining compatibility with dense dual-socket configurations. The EPYC 8004 series (Siena), introduced in 2023, offers specialized dense options for cost-sensitive, power-constrained server designs with Zen 4c cores scaled to 64 cores maximum and TDP ratings from 70 W to 225 W, in single-socket configurations for compact deployments in edge or regional data centers.
This series uses a reduced I/O die configuration with six DDR5 channels and 96 PCIe 5.0 lanes, targeting efficiency in general-purpose compute without the full-scale resources of 9004 models, as evidenced by benchmarks showing competitive performance per watt. Across these variants, AMD emphasizes chiplet-based scalability, with empirical data from independent tests confirming 20–50% gains in core-density workloads over competing parts in comparable power envelopes.

Region-specific modifications

AMD's joint venture with Chinese firms, including Hygon Information Technology, enabled the production of region-specific EPYC-compatible processors under license for the Chinese market. These Dhyana-series CPUs, such as the Hygon Dhyana C86-7395, replicate the core Zen 1 microarchitecture of first-generation EPYC (Naples) processors but incorporate modifications primarily in the integrated I/O die to utilize domestically sourced components and circumvent U.S. restrictions on advanced technology. The alterations ensure compliance with local manufacturing mandates while maintaining pin-compatibility with the SP3 socket, allowing deployment in standard EPYC server platforms without hardware changes. This approach differs from global EPYC offerings by prioritizing domestic supply chains in I/O subsystems, potentially at the cost of optimized interconnect performance compared to AMD's standard designs. Production began around 2018, targeting government and enterprise sectors restricted from importing high-performance U.S.-made processors. Subsequent developments have seen limited updates to these licensed variants, with Hygon largely adhering to Zen 1 equivalents due to licensing constraints, while Chinese firms pivot toward indigenous architectures for newer server needs. No equivalent hardware modifications exist for other regions, such as Europe or Asia-Pacific markets outside China, where standard EPYC SKUs prevail without regional adaptations.

Criticisms and challenges

Hardware errata and reliability issues

AMD EPYC processors feature comprehensive reliability, availability, and serviceability (RAS) mechanisms, including advanced error correction and fault isolation, contributing to low field failure rates comparable to competing Intel Xeon processors in data center environments. However, like other high-density server CPUs, EPYC generations include documented silicon errata—deviations from specifications that can lead to hangs, resets, or reduced reliability under specific conditions. These are detailed in AMD's official revision guides, with most addressed via BIOS, firmware, or software workarounds rather than silicon fixes, as no hardware revisions are planned for production parts. In the first-generation EPYC (Naples, Zen 1), systems could experience hangs or crashes after approximately 1044 days of uptime due to a core failing to exit low-power states properly, similar to issues in later generations. Production errata were less publicly highlighted compared to successors, though early prototypes faced challenges resolved prior to volume shipment. Second-generation EPYC (Rome, Zen 2) processors exhibited several errata prone to system hangs or resets. Erratum 1474 causes a core to fail exiting the CC6 low-power state after roughly 1044 days from last reset, potentially hanging the system depending on clocking and workload; mitigation involves periodic reboots or disabling CC6 via MSR programming. Other issues include Erratum 1140, where Data Fabric transaction loss leads to hangs (mitigated by fabric register programming), Erratum 1290 causing GMI link hangs from retraining failures after CRC errors, and Erratum 1315 triggering hangs in dual-socket 3-link configurations. These primarily affect I/O and interconnect reliability in multi-socket setups. Third-generation EPYC (Milan, Zen 3) introduced errata such as ID 1446, where improper on-die regulator initialization during power-up results in permanent boot failure, rendering the processor inoperable.
ID 1431 permits core hangs during bus locks, potentially causing watchdog-induced resets, while ID 1441 risks DMA write data corruption in memory. ID 1462 hinders reboot or shutdown after fatal errors, exacerbating recovery in error-prone scenarios. Some Milan systems reported random OS shutdowns or soft resets, attributed to underlying hardware sensitivities. Fourth-generation EPYC (Genoa, Zen 4) errata include hangs from CXL.mem transaction timeouts (no workaround) and system instability with poisoned PCIe data lacking error logs. Erratum 1560 risks hangs when Data Fabric C-states interact with CXL Type 1 devices (mitigated by disabling DF C-states), and Erratum 1483 generates unexpected fatal errors on uncorrectable DRAM ECC faults. Claims of systemic memory subsystem redesign needs were refuted by AMD, with no confirmed widespread reliability degradation. Fifth-generation EPYC (Turin, Zen 5) processors have fewer publicized functional errata to date, though an RDSEED instruction flaw affects the reliability of cryptographic seeding, potentially impacting security-dependent workloads. Overall, EPYC errata do not indicate higher aggregate failure rates than peers, with third-party testing confirming robust long-term reliability in deployments.
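Because the Rome-era CC6 erratum manifests only after roughly 1044 days of continuous operation, fleet operators can treat uptime itself as a health metric. A minimal sketch (function names and the 30-day safety margin are mine; only the ~1044-day threshold comes from the erratum):

```python
CC6_HANG_THRESHOLD_DAYS = 1044  # approximate window from AMD erratum 1474 (Zen 2)

def days_until_cc6_window(uptime_seconds: float, margin_days: int = 30) -> float:
    """Days remaining before the ~1044-day erratum window, minus a safety margin."""
    uptime_days = uptime_seconds / 86400
    return CC6_HANG_THRESHOLD_DAYS - margin_days - uptime_days

def needs_reboot(uptime_seconds: float) -> bool:
    """Flag hosts that should be rebooted (or have CC6 disabled) proactively."""
    return days_until_cc6_window(uptime_seconds) <= 0

print(needs_reboot(1000 * 86400))  # → False (14 days of margin still left)
```

On Linux the current uptime in seconds is the first field of /proc/uptime; feeding that into `needs_reboot` gives a simple fleet-wide check. Disabling CC6 instead, as AMD's workaround describes, avoids the reboot cadence entirely.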

Security vulnerabilities

AMD EPYC processors have been affected by several hardware-level security vulnerabilities, primarily side-channel attacks exploiting speculative execution and microarchitectural features, as well as flaws in technologies like Secure Encrypted Virtualization (SEV). These issues, common to modern x86 CPUs, enable potential data leakage across processes or virtual machines, though exploitation often requires local access or specific privileges. AMD has addressed most through microcode updates and patches, distributed via vendors or operating systems, with varying performance impacts. A notable early vulnerability was Zenbleed (CVE-2023-20593), disclosed on July 24, 2023, affecting second-generation EPYC Rome processors based on the Zen 2 architecture. This flaw stems from improper clearing of vector registers (YMM) during speculative execution, allowing cross-process leaks of stale vector register contents, potentially exposing sensitive data like passwords or encryption keys. Researchers from Google Project Zero demonstrated practical exploitation, prompting AMD to release a microcode patch (AMD-SB-7008); however, applying it reduced performance by up to 15% in vector-heavy workloads on affected EPYC systems. EPYC platforms are also vulnerable to Spectre variants (e.g., Spectre v1 and v2), which leverage branch prediction and speculative execution for unauthorized memory access, though AMD chips show lower exploitability for Meltdown compared to Intel due to architectural differences. Mitigations, including retpoline and microcode updates, were rolled out starting January 2018, with ongoing refinements; EPYC users in data centers were advised to apply them to prevent kernel-to-user data leaks in virtualized environments. In August 2024, the Sinkclose vulnerability (CVE-2023-31315) was revealed, affecting multiple architectures including EPYC by bypassing System Management Mode (SMM) memory protections via flawed caching mechanisms, enabling deep code execution in privileged regions.
Exploitation required kernel-level access but posed risks in server maintenance scenarios; AMD mitigated it via firmware updates and recommended restricting privileged access. Server-specific concerns include SEV-related flaws: in February 2025, a bug (AMD-SB-3009) allowed privileged attackers to read unencrypted guest memory, compromising isolation on EPYC with SEV enabled, fixed via updated SEV firmware. Another SEV issue (AMD-SB-3019), due to improper signature verification, permitted malicious microcode loading, potentially undermining attestation guarantees; patches were issued concurrently. July 2025 brought Transient Scheduler Attack (TSA) vulnerabilities, disclosed by researchers and affecting EPYC across generations via combined timing side-channels in scheduler operations, enabling chained information disclosure akin to Spectre/Meltdown. Individually low-severity, their combined exploitation could leak data in multi-tenant servers; AMD recommended software mitigations and promised microcode fixes. August 2025 advisories (AMD-SB-3014) highlighted IOMMU and SEV-SNP weaknesses in EPYC platforms, potentially allowing DMA attacks or nested paging bypasses in virtualized setups, with patches focusing on enhanced memory isolation. AMD maintains a Product Security Incident Response Team (PSIRT) for ongoing disclosures, emphasizing that while no widespread exploits have been reported for EPYC-specific cases, virtualization-heavy deployments warrant prompt patching to preserve trust in server ecosystems.
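Advisories like Zenbleed identify affected parts by CPUID family and model, so inventory tooling often starts by decoding CPUID leaf 1 EAX. A minimal sketch (function names are mine; EPYC Rome reports family 17h, model 31h, while the full Zenbleed-affected list also spans client Zen 2 models not checked here):

```python
def decode_fms(eax: int) -> tuple[int, int, int]:
    """Decode CPUID leaf 1 EAX into (family, model, stepping) per x86 convention:
    extended family/model fields apply when the base family is 0xF."""
    stepping = eax & 0xF
    base_model = (eax >> 4) & 0xF
    base_family = (eax >> 8) & 0xF
    ext_model = (eax >> 16) & 0xF
    ext_family = (eax >> 20) & 0xFF
    if base_family == 0xF:
        return base_family + ext_family, base_model | (ext_model << 4), stepping
    return base_family, base_model, stepping

def is_epyc_rome(family: int, model: int) -> bool:
    """Rome (Zen 2 EPYC, in scope for CVE-2023-20593) is family 17h, model 31h."""
    return family == 0x17 and model == 0x31

# A Rome part reports EAX = 0x00830F10 in CPUID leaf 1
fam, mod, _ = decode_fms(0x00830F10)
print(is_epyc_rome(fam, mod))  # → True
```

On Linux the same family/model values appear directly in /proc/cpuinfo (`cpu family` and `model` lines), so the raw decode is only needed when working from bare CPUID dumps.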

Ecosystem and compatibility limitations

Despite its adherence to the x86 instruction set architecture, ensuring binary compatibility with software developed for Intel processors, the AMD EPYC platform presents ecosystem challenges stemming from its multi-chiplet design and historical market position. This architecture divides each socket into multiple NUMA (Non-Uniform Memory Access) domains—configurable via BIOS settings like Nodes per Socket (NPS) modes (1, 2, or 4)—which can lead to suboptimal performance in workloads not explicitly tuned for such topologies. For instance, untuned applications may incur higher latency due to remote memory access across chiplets, necessitating manual optimizations such as NUMA-aware scheduling or core pinning, as outlined in AMD's tuning guides for EPYC 9004 series processors. AMD documentation highlights that while NPS configurations mitigate inter-domain bandwidth penalties, they require workload-specific validation to avoid up to 20-30% performance degradation in latency-sensitive tasks compared to monolithic designs. In virtualization environments, EPYC's NUMA complexity exacerbates compatibility hurdles. Hypervisors like VMware ESXi initially faced NUMA-related inefficiencies on early EPYC generations, such as improper vNUMA exposure leading to VM migration failures or reduced throughput, though patches that adjusted locality/weight affinity settings resolved many issues by ESXi 6.7. Mixed AMD-Intel clusters encounter incompatibilities due to differing CPU models and microarchitectures; for example, in Epic EHR deployments on vSphere, migrating VMs between EPYC and Xeon hosts is unsupported, complicating high-availability setups and requiring homogeneous clusters. Similar constraints appear in Proxmox VE, where assigning EPYC-specific CPU models to VMs can trigger startup errors unless host passthrough or custom topologies are configured.
Enterprise software certification lags contribute to adoption barriers, as many legacy applications—optimized over decades for Intel's ecosystem—undergo delayed validation for EPYC. Independent analyses note that while EPYC supports all major OSes and hypervisors post-certification, Intel maintains an edge in "certified for" badges for thousands of business-critical apps, simplifying procurement for risk-averse IT departments and reducing validation timelines. On Red Hat Enterprise Linux 7, newer EPYC generations lack official support, forcing upgrades to RHEL 8 or later despite binary compatibility. Hardware ecosystem limitations include fewer validated third-party peripherals at launch compared to Intel platforms, though AMD's PCIe validation program has expanded coverage; early generations saw sporadic issues with add-in cards assuming single-die topologies. These factors, while diminishing with EPYC's growth—reaching over 30% server market share by 2024—persist in conservative sectors prioritizing seamless compatibility over EPYC's core-count advantages.
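The NPS modes discussed above partition a socket into equal NUMA domains, and NUMA-aware pinning amounts to keeping a thread on cores belonging to the domain that owns its memory. A minimal sketch of that mapping (function name is mine, and the contiguous core layout is an assumption that matches common EPYC BIOS enumeration but not every platform):

```python
def numa_node_of_core(core: int, total_cores: int, nps: int) -> int:
    """Map a physical core index to its NUMA domain under an NPS-style split.

    Simplifying assumption: cores are enumerated contiguously across `nps`
    equal-sized domains (node 0 gets the first total_cores/nps cores, etc.).
    """
    assert total_cores % nps == 0, "NPS must evenly divide the core count"
    cores_per_node = total_cores // nps
    return core // cores_per_node

# 64-core socket in NPS=4: cores 0-15 -> node 0, 16-31 -> node 1, ...
print(numa_node_of_core(40, 64, 4))  # → 2
```

In practice the authoritative mapping comes from the OS (e.g., `numactl --hardware` on Linux), and pinning is then applied with tools like `numactl --cpunodebind=N --membind=N` so compute and memory stay in the same domain.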

Processor generations

First generation (Naples, Zen 1)

The first-generation AMD EPYC processors, designated the 7001 series and codenamed Naples, marked AMD's re-entry into the server CPU market following a decade-long absence. Built on the Zen microarchitecture and fabricated on GlobalFoundries' 14 nm process node, these processors were officially launched on June 20, 2017, after an announcement in May 2017. They utilized Socket SP3 and supported dual-socket configurations, targeting datacenter workloads with emphasis on core density and I/O bandwidth. Naples employed a multi-chip module (MCM) design consisting of four "Zeppelin" dies—each containing two four-core core complexes (CCXs) for a total of eight cores per die—interconnected via AMD's Infinity Fabric for on-package communication. This chiplet-like approach enabled scalability to 32 cores and 64 threads per socket, with 16 MB of L2 cache and 64 MB of L3 cache distributed across the dies, though it introduced non-uniform memory access (NUMA) domains that could impact latency-sensitive applications. Each processor supported eight channels of DDR4-2666 memory (up to 2 TB total capacity) and 128 lanes of PCIe 3.0, providing substantial connectivity for storage and networking peripherals. The lineup spanned models from the entry-level 8-core EPYC 7251 (base clock 2.1 GHz, 120 W) to high-end 32-core variants such as the EPYC 7601 (base 2.2 GHz, boost up to 3.2 GHz, 180 W) and single-socket-optimized P-series parts like the EPYC 7551P. TDPs ranged from 120 W to 180 W across models, with pricing reaching $4,200 for flagship 32-core units at launch. These processors competed directly with Intel's Xeon Scalable lineup by offering higher core counts at lower per-core costs, though initial benchmarks revealed mixed results in single-threaded performance due to Zen 1's limitations compared to contemporary Intel architectures.

Second generation (Rome, Zen 2)

The second-generation AMD EPYC processors, codenamed Rome, utilize the Zen 2 microarchitecture and were released on August 7, 2019. These server CPUs employ a multi-chiplet design comprising up to eight 7 nm compute chiplet dies (CCDs), each with eight cores, interconnected via a central 14 nm I/O die that handles memory controllers, PCIe lanes, and inter-die communication through Infinity Fabric links. This architecture enables scalability to 64 cores and 128 threads per socket while maintaining compatibility with the SP3 socket used in the prior Naples generation. The EPYC 7002 series supports eight channels of DDR4-3200 memory, accommodating up to 4 TB capacity, and provides 128 lanes of PCIe 4.0, doubling the per-lane bandwidth of PCIe 3.0 in the first generation. Cache includes up to 256 MB of shared L3 across chiplets, with each core featuring 512 KB of L2 and improved branch prediction and floating-point units in Zen 2. Thermal design power (TDP) ranges from 120 W to 225 W for most models, with select dual-socket variants rated up to 280 W. Relative to the Zen 1-based Naples processors, Rome delivers enhanced single-threaded performance through approximately 15-20% higher instructions per clock (IPC) on average, with AMD reporting up to 29% uplift in specific integer and floating-point workloads at iso-frequency, alongside better power efficiency from the smaller 7 nm compute node. The shift to PCIe 4.0 and faster memory reduces I/O bottlenecks, enabling up to 2x overall socket performance in bandwidth-sensitive tasks like HPC simulations and database operations. Security enhancements include AMD Infinity Guard, featuring a hardware root of trust and memory encryption capabilities. The lineup comprises 19 models, spanning 8 to 64 cores, such as the 64-core EPYC 7742 at 2.25 GHz base (boost to 3.4 GHz) and the 32-core EPYC 7502 at 2.5 GHz base (boost to 3.35 GHz), targeting diverse workloads from cloud virtualization to technical computing.
Independent benchmarks confirmed leadership in multi-threaded throughput, with Rome systems outperforming Intel counterparts in SPEC CPU2017 rates by up to 50% in certain configurations.
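The PCIe 4.0 doubling mentioned above is easy to quantify: PCIe 3.0 and later use 128b/130b line coding, so usable per-lane bandwidth is the transfer rate × 128/130 ÷ 8 bits per byte. A minimal sketch (function names are mine; 8 GT/s and 16 GT/s are the Gen3 and Gen4 signaling rates):

```python
def pcie_lane_bandwidth_gbs(gts: float) -> float:
    """Per-lane usable bandwidth in GB/s for PCIe 3.0+ (128b/130b encoding)."""
    return gts * 1e9 * (128 / 130) / 8 / 1e9

def socket_pcie_bandwidth_gbs(lanes: int, gts: float) -> float:
    """Aggregate one-direction bandwidth across all lanes of a socket."""
    return lanes * pcie_lane_bandwidth_gbs(gts)

# Naples and Rome both expose 128 lanes; Gen3 runs at 8 GT/s, Gen4 at 16 GT/s
print(round(socket_pcie_bandwidth_gbs(128, 8.0), 1))   # → 126.0 (GB/s, Gen3)
print(round(socket_pcie_bandwidth_gbs(128, 16.0), 1))  # → 252.1 (GB/s, Gen4)
```

These are raw link ceilings per direction; protocol overhead (TLP headers, flow control) reduces achievable throughput further in practice.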

Third generation (Milan, Zen 3)

The third-generation AMD EPYC processors, codenamed Milan and based on the Zen 3 microarchitecture, were launched on March 15, 2021. These processors maintained the maximum of 64 cores and 128 threads per socket from the prior Rome generation while introducing architectural enhancements such as a unified 32 MB L3 cache per eight-core complex, improved branch prediction, and an instructions per clock (IPC) uplift of approximately 19%. Manufactured on TSMC's 7 nm process, Milan processors support eight-channel DDR4-3200 memory and 128 lanes of PCIe 4.0, with thermal design power (TDP) ratings ranging from 155 W to 280 W depending on the model. Key models in the EPYC 7003 series include the flagship EPYC 7763 with 64 cores at a 2.45 GHz base frequency and 3.50 GHz boost, alongside options like the EPYC 7713 (64 cores, 2.00 GHz base, 3.67 GHz boost, 225 W TDP) and lower-core variants such as the EPYC 72F3 (8 cores, optimized for frequency). Independent benchmarks demonstrated average performance gains of 14-17.5% over equivalent Rome models in compute-intensive workloads, attributed to per-core efficiency rather than core count increases. In dual-socket configurations, Milan delivered up to 1.43x better results in fluid dynamics simulations compared to contemporary Intel counterparts. A variant line, the EPYC 7003X "Milan-X" series, extended the lineup with 3D V-Cache technology, adding up to 768 MB of L3 cache per socket for enhanced performance in memory-sensitive applications; the top-end EPYC 7773X features 64 cores and demonstrated 20% average improvements over standard Milan in cache-dependent tasks. These processors emphasized security features like AMD Infinity Guard, including hardware-based memory encryption, while maintaining compatibility with SP3 sockets and existing EPYC ecosystems.
Model                  Cores/Threads   Base/Boost Freq. (GHz)   L3 Cache (MB)   TDP (W)
EPYC 7763              64/128          2.45/3.50                256             280
EPYC 7713              64/128          2.00/3.67                256             225
EPYC 7773X (Milan-X)   64/128          2.20/3.50                768             280

Fourth generation (Genoa, Bergamo, Siena; Zen 4/Zen 4c)

The fourth-generation AMD EPYC processors, launched between late 2022 and 2023, are built on the Zen 4 microarchitecture and its density-optimized variant, Zen 4c, utilizing TSMC's 5 nm process for compute cores and supporting DDR5 memory with up to 12 channels in high-end models. These processors emphasize scalability for data center workloads, featuring up to 128 cores and 256 threads, PCIe 5.0 support with 128 lanes, and AVX-512 instructions for enhanced vector processing efficiency. Independent benchmarks indicate Zen 4 delivers approximately 13% higher instructions per clock (IPC) over Zen 3 in integer workloads, with floating-point gains up to 96% in optimized scenarios, though real-world uplift varies by application. The Genoa series (EPYC 9004) represents the flagship variant, announced on November 10, 2022, and featuring up to 96 cores across 12 compute dies (CCDs), each with 8 cores. Configurations range from 16 to 96 cores, with thermal design powers (TDPs) from 155 W to 400 W, and base clocks of 2.4 GHz for high-core-count models like the EPYC 9654, boosting to 3.7 GHz. Genoa supports dual-socket SP5 platforms and excels in general-purpose computing, with reviews showing up to 2x throughput over prior generations in memory-bound tasks due to doubled bandwidth from 12-channel DDR5-4800. However, latency-sensitive applications may experience variability from the chiplet design's NUMA topology, mitigated by configurable NUMA-per-socket (NPS) modes. Bergamo (the EPYC 97x4 subset of the 9004 series) introduces Zen 4c cores, a compact derivative with an instruction set identical to Zen 4 but roughly 35% smaller die area per core, enabling 128 cores via eight 16-core CCDs at lower clock speeds (base up to 2.25 GHz, boost to 3.1 GHz) and TDPs up to 360 W. Launched in June 2023, it targets cloud-native and high-density workloads, offering up to 2.8x improvement in Java-based applications compared to third-generation EPYC, per AMD benchmarks, while maintaining power efficiency through reduced L3 cache per core (1 MB vs. 4 MB in Zen 4).
Bergamo avoids hybrid "big.LITTLE" designs by preserving full feature parity, including AVX-512, but prioritizes core count over per-core frequency for scalable throughput in containerized environments. Siena (EPYC 8004 series), released in September 2023, adapts Zen 4c for edge and space-constrained deployments on the new single-socket SP6 platform, with up to 64 cores, 6 DDR5 channels, and TDPs from 70 W to 200 W. Models like the EPYC 8434P deliver performance comparable to competing 32-core Xeon parts in multi-threaded tasks at lower power, suitable for telecom and edge systems, with NEBS-compliant variants offering extended temperature ranges. The SP6 socket reduces footprint and cost versus SP5, supporting fewer PCIe lanes (96) but retaining full Zen 4c capabilities for efficient networking and general compute.

Fifth generation (Turin, Grado; Zen 5/Zen 5c)

The fifth-generation AMD EPYC processors, released starting in late 2024, incorporate the Zen 5 and Zen 5c microarchitectures to deliver enhanced instructions per clock, with the Zen 5 cores providing up to 17% higher IPC for enterprise and cloud workloads relative to the prior generation. The lineup includes the high-end EPYC 9005 series (codenamed Turin) for datacenter-scale deployments on the SP5 socket and the entry-level EPYC 4005 series (codenamed Grado) for smaller server environments on the AM5 socket. These processors emphasize scalability for AI inference, with up to 2x throughput improvement over previous generations in certain workloads, alongside support for dense core configurations via Zen 5c variants optimized for area efficiency and higher thread counts. The EPYC 9005 Turin series, launched on October 10, 2024, supports up to 128 cores using standard Zen 5 chiplet dies or up to 192 cores with Zen 5c for high-density applications, enabling configurations with as many as 384 threads per socket. Representative models include the EPYC 9965 (192 Zen 5c cores, 384 threads, 2.25 GHz base clock, 3.70 GHz max boost, 500 W TDP, 384 MB L3 cache) for maximum density and the EPYC 9755 (128 Zen 5 cores, 256 threads, 2.70 GHz base, 4.10 GHz boost, 500 W TDP, 512 MB L3 cache) for balanced performance; high-frequency options like the EPYC 9575F offer 64 cores at up to 5.00 GHz boost within a 400 W TDP. TDP ratings span 125 W to 500 W across the family, with support for 12 DDR5-6400 memory channels (up to 6 TB capacity in 2DPC configurations) and up to 128 PCIe 5.0 lanes in single-socket setups. The architecture maintains compatibility with the SP5 platform from prior generations, facilitating upgrades while introducing optimizations for AI data preprocessing and inference acceleration.
The EPYC 4005 Grado series, introduced on May 13, 2025, targets cost-sensitive entry-level servers with up to 16 cores and 32 threads per processor, using the consumer-oriented AM5 socket for easier integration into compact systems. These models, such as the 16-core variants, operate at TDPs around 65 W to 115 W, supporting dual-channel DDR5 memory and emphasizing power efficiency for small to medium business workloads like web hosting and file serving. Unlike the Turin series, Grado processors lack the multi-chiplet scaling of SP5 but leverage the same IPC gains for strong per-core performance in lighter-duty environments.

References

  1. [1]
    AMD EPYC™ Processors
    AMD EPYC Server CPUs are the Best CPU for Enterprise AI ... In the cloud and on-premises, in large and small deployments, AMD EPYC Server CPUs offer power- ...EPYC 9005 (5th Gen)EPYC 9004 (4th Gen)
  2. [2]
    AMD EPYC™ Processor Overview
    AMD EPYC processors are a family of server CPUs for every level of business, from starting infrastructure to large enterprises.
  3. [3]
    2nd Gen AMD EPYC™ Processors Set New Standard for the ...
    Aug 7, 2019 · AMD EPYC™ 7002 Series processors set 80 performance world records1, provide 2X the performance compared to the previous generation2 and ...
  4. [4]
    Offering Unmatched Performance, Leadership Energy Efficiency and ...
    Nov 10, 2022 · 4 th Gen AMD EPYC processors are designed to deliver optimizations across market segments and applications, while helping businesses free data center resources.
  5. [5]
    AMD Launches 5th Gen AMD EPYC CPUs, Maintaining Leadership ...
    Oct 10, 2024 · The new 5 th Gen AMD EPYC processors provide leading performance and capabilities for the broad spectrum of server workloads driving business IT today.
  6. [6]
    [PDF] 5th Gen AMD EPYC Processor Architecture
    Oct 10, 2024 · The 5th Gen AMD EPYC architecture addresses increased demand for enterprise applications and the impact of AI, with optimized performance and ...
  7. [7]
    AMD EPYC™ Servers are the Foundation for Data Center AI
    AMD EPYC servers offer high performance, efficiency, and 3x inference throughput, enabling workload consolidation and supporting GPUs for AI.Ai Technologies Bring Broad... · Amd Powers The Full Range Of... · Advancing Ideal Ai SolutionsMissing: key | Show results with:key
  8. [8]
    Zen - Microarchitectures - AMD - WikiChip
    Zen (family 17h) is the microarchitecture developed by AMD as a successor to both Excavator and Puma. Zen is an entirely new design, built from the ground ...
  9. [9]
    AMD on Why Chiplets—And Why Now - The Next Platform
    Jun 9, 2021 · AMD's lead architects to reroute around the once-expected cadence of new technology development by pursuing a chiplet approach.
  10. [10]
  11. [11]
    AMD EPYC™ Datacenter Processor Launches with Record-Setting ...
    Jun 20, 2017 · “The launch of the AMD EPYC processor signifies an important milestone in the industry,” said Victor Peng, chief operating officer at Xilinx.
  12. [12]
    [PDF] AMD CHIPLET ECOSYSTEM
    Dec 9, 2024 · In 2019, AMD's 2.5D chiplet technology was introduce with the AMD Ryzen and AMD EPYC processors. In 2023, AMD released the Instinct MI300X AI ...
  13. [13]
    AMD Previews "Naples" High-Performance Server Processor ...
    Mar 7, 2017 · The first processors are scheduled to be available in Q2 2017, with volume availability building in the second half of the year through OEM and ...
  14. [14]
    AMD muscles in on Xeon's turf as it unveils Epyc - Ars Technica
    Jun 20, 2017 · AUSTIN—Today, AMD unveiled the first generation of Epyc, its new range of server processors built around its Zen architecture.
  15. [15]
    Competition Returns To X86 Servers In Epyc Fashion
    Jun 20, 2017 · AMD has successfully got its first X86 server chip out the door with the launch of the “Naples” chip, the first in a line of processors that will carry the ...
  16. [16]
    AMD EPYC™ Datacenter Processor Launches with Record-Setting ...
    “The AMD EPYC processor powered one-socket server can significantly increase our datacenter computing efficiency, reduce TCO and lower energy ...
  17. [17]
    AMD EPYC Server CPU Launches With Broad OEM, ODM, ISV, IHV ...
    Jun 21, 2017 · AMD's new EPYC datacenter processors range between 8 CPU cores and 2.1 GHz at 120W TDP (EPYC 7251) to 32 Cores and 2.2 GHz at 180W TDP (EPYC ...
  18. [18]
    AMD Sets Launch Date for Next-Generation Processor | TOP500
    May 17, 2017 · AMD has revealed that servers based on its future x86 datacenter processors, codenamed “Naples,” will be available in June. The company has ...<|separator|>
  19. [19]
    AMD Launches Epyc Rome, First 7nm CPU - HPCwire
    Aug 8, 2019 · The new AMD Epyc 7002 series is a follow-on to the first-gen 14nm Epyc Naples CPUs, released in June 2017. The announcement marks a significant ...
  20. [20]
  21. [21]
  22. [22]
    AMD EPYC 7763 Specs - CPU Database - TechPowerUp
    It is part of the EPYC lineup, using the Zen 3 (Milan) architecture with Socket SP3. ... Release Date: Mar 15th, 2021. Launch Price: $7890. Part#:, 100 ...
  23. [23]
  24. [24]
    AMD Launches 4th Gen EPYC "Genoa" Zen 4 Server Processors
    Nov 10, 2022 · ... AMD is readying a different class of processor, codenamed "Bergamo," which is plans to launch later. In 2023, the company will launch the "Genoa ...
  25. [25]
  26. [26]
    AMD EPYC Bergamo Launched 128 Cores Per Socket and 1024 ...
    Jun 13, 2023 · AMD EPYC Bergamo Launched 128 Cores Per Socket. Setting the stage, the AMD EPYC “Bergamo” or the EPYC 97×4 series uses a core called “Zen 4c”.
  27. [27]
    AMD Launches 5th Gen AMD EPYC CPUs, Maintaining Leadership ...
    Oct 10, 2024 · The 5th Gen AMD EPYC CPUs use "Zen 5" architecture, offer 8-192 cores, up to 5GHz boost, 12 DDR5 channels, and up to 17% better IPC for ...
  28. [28]
    AMD EPYC 'Turin' 9005 Series - we benchmark 192-core Zen 5 chip ...
    Oct 10, 2024 · AMD launched its fifth-gen EPYC 'Turin' processors here in San Francisco at its Advancing AI 2024 event, whipping the covers off the deep-dive ...Specs and Pricing · Our Turin vs Intel Granite... · HPC and Scalability Benchmarks
  29. [29]
    AMD's Turin: 5th Gen EPYC Launched - Chips and Cheese
    Oct 11, 2024 · The 9575F can pull around 52 GB/s of memory read bandwidth, 48 GB/s of memory write bandwidth, and 95 GB/s of memory add (Read-Modify-Write) bandwidth.<|separator|>
  30. [30]
    AMD EPYC - Thomas-Krenn-Wiki-en
    Aug 5, 2025 · EPYC processors are based on the zen microarchitecture and were initially introduced in June 2017. The processors are especially suitable for ...<|separator|>
  31. [31]
    AMD EPYC™ 7002 Series Processors
    AMD EPYC™ 7002 Series Processors, featuring the “Zen 2” core, deliver optimized performance per-watt, large L3 cache for low latency access to data, and ...
  32. [32]
    [PDF] 4th Gen AMD EPYC Processor Architecture
    the 'Zen 4' and 'Zen 4c' core design goals Both cores use 256-bit data paths internally, including 256-bit floating-point units, helping to reduce core size ...
  33. [33]
    AMD EPYC™ 7003 Series Processors
    AMD EPYC 7003 series offers proven performance, energy efficiency, security with Infinity Guard, and cost-effective solutions for data centers.AMD EPYC™ 7773X · AMD EPYC™ 7763 · AMD EPYC™ 7713
  34. [34]
    Overview of AMD EPYC 7003 Series Processors Microarchitecture ...
    Overview of AMD EPYC 7003 Series Processors Microarchitecture (70619) - Document 70619 - 70619. overview-amd-epyc7003-series-processors-microarchitecture.pdf.
  35. [35]
    AMD EPYC™ 4th Gen 9004 & 8004 Series Server Processors
    These processors include up to 128 “Zen 4” or “Zen 4c” cores with exceptional memory bandwidth and capacity.
  36. [36]
    4th Gen AMD EPYC™ Processor Architecture
    This white paper describes the processor architecture that supports 4th Gen AMD EPYC™ processors and future enhancements that enable you to branch out and ...
  37. [37]
    5th Generation AMD EPYC™ Processors
    Purpose built to accelerate data center, cloud, and AI workloads, the AMD EPYC 9005 series of processors are driving new levels of enterprise computing ...
  38. [38]
    [PDF] AMD EPYC™ 9005 PROCESSOR ARCHITECTURE OVERVIEW
    This generation of AMD EPYC processors feature AMD's latest “Zen 5” based compute cores, next-generation I/O Die, enhanced security features, and increased ...
  39. [39]
    [PDF] AMD EPYC™ 8004 Series Architecture Overview
    This document provides a high-level technical overview of the 4th Gen AMD EPYC™ 8004 Series Processor architecture, including internal IP, processor layout, ...
  40. [40]
    Interconnect Design for Heterogeneous Integration of Chiplets in the ...
    AMD introduced such multidie/chiplet-based SoC designs in our EPYC server processors, enabling successive deployments of 32, 64, 96, and now even 128-core ...
  41. [41]
    Pushing AMD's Infinity Fabric to its Limits - Chips and Cheese
    Nov 24, 2024 · CCX-es access the rest of the system through AMD's Infinity Fabric, a flexible interconnect that lets AMD adapt system topology to their needs.
  42. [42]
    [PDF] 4th Gen AMD EPYC Processor Architecture
    Improvements over the 'Zen 3' core include 1 MB L2 private cache per core ... In AMD EPYC 7001 Series processors, memory controllers were located on ...
  43. [43]
    The Heart Of AMD's Epyc Comeback Is Infinity Fabric
    Jul 12, 2017 · Infinity Fabric is a coherent high-performance fabric that uses sensors embedded in each die to scale control and data flow from die to socket to board-level.
  44. [44]
    AMD Rome Processors - HECC Knowledge Base
    Jul 12, 2022 · Zen 2 microarchitecture: The EPYC 7742 Rome processor has a base CPU clock of 2.25 GHz and a maximum boost clock of 3.4 GHz. · Hybrid multi-die ...
  45. [45]
    Configuring AMD xGMI Links on the Lenovo ThinkSystem SR665 V3 ...
    Nov 16, 2023 · AMD EPYC 9004 Series Processors support up to four xGMI links with speeds up to 32Gbps.
  46. [46]
    [PDF] AMD EPYC 7003 Processors (Data Sheet)
    fast, inexpensive DDR4 memory and up to 128 lanes of high-throughput PCIe Gen 4 I/O. With strong performance across the portfolio and attractive pricing ...
  47. [47]
    [PDF] Memory Population Guidelines for AMD EPYC™ 7003 Series ...
    Memory Bandwidth: EPYC 7003 processors have eight memory channels designated A, B, C, D, E, F, G, and H. Each channel supports up to two DIMMs. Systems can be ...
  48. [48]
    [PDF] AMD EPYC 9005 Series Processors
    All processors in the series have support for up to 12 DDR5-6400 memory channels, 128 PCIe® Gen 5 I/O lanes (up to 160 in 2-socket servers), and the ...
  49. [49]
    [PDF] AMD EPYC™ 9004 Series Memory Population Recommendations
    The memory subsystem of AMD EPYC processors plays a significant role in overall server performance. Proper configuration maximizes bandwidth and minimizes ...
  50. [50]
    [PDF] amd epyc™ 4005 series processors
    Each processor supports two DDR5-5600 channels, up to 28 PCIe® Gen 5 lanes, and an AMD Secure Processor. ... I/O capacity and memory bandwidth to balance the ...
  51. [51]
    SPEC CPU®2017 Integer Rate Result
    Benchmark result graphs are available in the PDF report. Hardware · CPU Name: AMD EPYC 9965. Max MHz: 3700. Nominal: 2250.
  52. [52]
    [PDF] Leadership SPEC CPU® 2017 Performance on AMD EPYC™ 9754 ...
    Jun 13, 2023 · Published results are available at https://www.spec.org/cpu2017/results/. The 2P Ampere results presented in this. Performance Brief are ...
  53. [53]
    SPEC CPU 2017 Results - SPEC.org
    The following are sets of available results since the announcement of the benchmark in June 2017. All Results All SPEC CPU 2017 results published by SPEC.
  54. [54]
  55. [55]
    AMD EPYC Turin vs. Intel Xeon 6 Granite Rapids vs. Graviton4 Benchmarks With AWS M8 Instances Review - Phoronix
    Summary of Benchmark Results: AMD EPYC Turin vs. Intel Xeon Granite Rapids vs. Graviton4 on AWS EC2 M8 Instances
  56. [56]
    AMD EPYC Genoa Gaps Intel Xeon in Stunning Fashion
    Nov 10, 2022 · AMD EPYC 9004 "Genoa" gaps Intel Xeon in stunning fashion with 96 cores, PCIe Gen5, and DDR5 and massive performance gains.
  57. [57]
    AMD EPYC Genoa/Genoa-X & Bergamo vs. Intel Xeon Sapphire ...
    Nov 22, 2023 · The EPYC 9554 delivered better performance-per-Watt than the Xeon Platinum 8490H and the other Bergamo and Genoa(X) parts were only slightly behind the SPR CPU ...
  58. [58]
    5th Gen AMD EPYC™ Processors Lead Enterprise and Cloud ...
    Oct 10, 2024 · The 5th Gen AMD EPYC lineup presents a wide array of options, featuring processors with 8 to 192 cores and TDP ratings ranging from 155 to 500 ...
  59. [59]
    Driving Growth: AMD EPYC™ 4004 Series Processors Deliver High ...
    Oct 4, 2024 · Notably, these processors are designed with low TDP ratings, ranging from 65 to 170 watts. All AMD EPYC 4004 models utilize the reliable AM5 ...
  60. [60]
    4th Gen AMD EPYC™ Processors – Architectures Optimized for ...
    Nov 20, 2023 · AMD EPYC 9684X processors can deliver up to a 2.44x performance ... 1P servers: EPYC 8534P (64-core, 200W TDP) scoring 27,342 overall ...
  61. [61]
    AMD EPYC 9554 & EPYC 9654 Benchmarks - Phoronix
    Nov 10, 2022 · The EPYC 9554 in its default (performance determinism) mode had an average power draw of 221 Watts with a peak of 355 Watts, compared to the ...
  62. [62]
    Ryzen vs Epyc idle power consumption - Level1Techs Forums
    Jun 25, 2023 · Ryzen CPUs idle around 25-40w, while EPYC can range from 50-110w depending on generation, with some reported around 48-63.5w.
  63. [63]
    Leadership HPC Performance with 5th Generation AMD EPYC ...
    Jan 22, 2025 · 5th Gen AMD EPYC is providing up to 37% average core IPC (instructions per clock) improvement for HPC & AI workloads vs the 4th Generation.
  64. [64]
    Benchmarks: Excellent Power Efficiency With 5th Gen AMD EPYC ...
    Feb 21, 2025 · The combined CPU power consumption of the dual EPYC 9755 128-core processors dropped from a peak of 560~577 Watts down to 535 Watts.
  65. [65]
    [PDF] AMD EPYC 9004 and 8004 Series CPU Power Management
    When you choose AMD EPYC™ processors, you start with leadership performance and efficiency for a broad set of data center workloads.
  66. [66]
  67. [67]
    Cooling AMD EPYC With Noctua Coolers: NH-U9 TR4-SP3, NH ...
    Mar 12, 2018 · To no surprise at all, the NH-U14S TR4-SP3 led to the coolest EPYC 7551 temperatures. Noctua EPYC Cooling. Monitoring the CPU temperature while ...
  68. [68]
    Power Consumption and Thermal Characteristics of AMD EPYC ...
    Total Heat Output: Combined, the EPYC 7742 and A100 SXM5 can generate over 700W under full load, requiring high-efficiency cooling solutions. · Airflow ...
  69. [69]
    AMD Epyc Venice CPUs may exceed the 1000W barrier and require ...
    Aug 29, 2025 · AMD Epyc Venice CPUs may exceed the 1,000W barrier and require advanced cooling. Air cooling is no longer cutting it, so advanced liquid ...
  70. [70]
    [PDF] AMD EPYC™ 9005 BIOS & WORKLOAD TUNING GUIDE
    • “Workload-Specific BIOS Settings” on page 23 presents sample workloads and recommended BIOS settings. ... Tuning Guide discuss the BIOS options as they relate ...
  71. [71]
    Maximizing AI Performance: The Role of AMD EPYC 9575F CPUs in ...
    Jun 12, 2025 · AMD EPYC 9575F CPUs improve latency-constrained inference serving, achieving over 10x better performance than Intel Xeon CPUs, and help balance ...
  72. [72]
    [PDF] AI Inferencing with AMD EPYC Processors
    AMD EPYC processors are efficient for AI inferencing, used in areas like computer vision, and accelerated by the ZenDNN plug-in.
  73. [73]
    Realizing an “Untapped Value” of AMD EPYC Processors with SMT
    Feb 18, 2025 · SMT allows a single core to execute multiple threads, boosting performance by 30-60% in some workloads, and can be toggled on/off.
  74. [74]
    [PDF] tuning guide - amd epyc 9004
    AMD EPYC 9004 Series Processors support up to 4 xGMI (or G-links) with speeds up to 32Gbps. The IOD exposes DDR5 memory channels, PCIe® Gen5, CXL 1.1+, and ...
  75. [75]
    Making Virtualization Work for Business: AMD EPYC™ Processors
    AMD EPYC™ processors offer a range of solutions that stand ready to solve the problems facing the modern data center, optimizing customer infrastructure for the ...
  76. [76]
    AMD EPYC 7003 Milan Workload Profile NIC Throughput Intensive
    Apr 13, 2023 · The workload profile setting NIC Throughput Intensive in the BIOS can remedy this. This setting deactivates the dynamic adjustment of the Infinity Fabric P- ...
  77. [77]
    [PDF] Financial Services Industry Tuning Guide for AMD EPYC™ 9005 ...
    This section discusses various compiler and library tuning options for improving FSI workload performance on 5th Gen AMD EPYC processors. AMD tested the FSI ...
  78. [78]
    Intel Xeon Prices Fall Sharply to Head Off AMD Epyc | Extremetech
    Jan 30, 2025 · Intel took an axe to Xeon 6 CPU prices, cutting some MSRPs by as much as 30%. That's a startling change for processors that launched late last year.
  79. [79]
    Intel server CPU share shrinks to 62% — AMD still trails, but gap ...
    Jun 27, 2025 · Currently (June 2025), AMD holds about 33% of the server CPU market, and that number continues to grow. In contrast, Intel's share has dropped to around 62%.
  80. [80]
    Server CPU Market Dynamics in 2025 - Fusion Worldwide
    Jun 30, 2025 · In Q1 2025, AMD's server market share reached 39.4%, marking a 6.5% quarter-over-quarter increase. EPYC Milan and Rome series processors ...Missing: 2024 | Show results with:2024
  81. [81]
    AMD EPYC vs Intel Xeon in 2025: Are Intels days numbered for ...
    Aug 16, 2025 · But by 2025, AMD's EPYC line has stepped up in a big way with core counts that make Xeon look lean, while simultaneously cutting power ...
  82. [82]
    Retailers quietly slash prices of AMD's and Intel's latest EPYC and ...
    Aug 23, 2025 · Despite soaring demand for server chips, AMD's EPYC 9005 and Intel's Xeon 6 CPUs are selling at U.S. retailers for up to 50% below their ...
  83. [83]
    [PDF] Harness Increased Performance, Efficiency, and Lower TCO with ...
    Figure 1 | Dell PowerEdge servers and 4th Gen AMD EPYC processors can help consolidate your data center footprint. The refreshed server uses 31% fewer cores ...
  84. [84]
    AMD Rides on Accelerating Data Center Growth: A Sign of More ...
    Sep 2, 2025 · Adoption of EPYC by the largest cloud hyperscalers is increasing significantly. In the second quarter of 2025, more than 100 new AMD-powered ...
  85. [85]
    AMD's Server Market Surge: A Structural Shift in Semiconductor ...
    Aug 6, 2025 · - AMD's x86 server CPU market share surged to 36.5% in 2025, driven ... hyperscalers for 2026. This forward-looking strategy contrasts ...
  86. [86]
    2nd Gen AMD EPYC™ Processors Set New Standard for the ...
    Aug 7, 2019 · Twitter announced it will deploy 2nd Gen AMD EPYC processors across its datacenter infrastructure later this year, reducing TCO by 25%; ...
  87. [87]
    [PDF] Modernize Data Center Virtualization with AMD EPYC processors
    Sep 16, 2024 · VMware licenses, and resulting in a reduction of ~42% in licensing costs, which can lead to a CapEx payback period of only about two months  ...
  88. [88]
    [PDF] Infographic: An AMD Data Center Can Pay for Itself
    May 19, 2025 · The AMD Server & Greenhouse Gas Emissions TCO (total cost of ownership) Estimator Tool - version 1.3, compares the selected AMD EPYC™ and Intel®.
  89. [89]
    AMD Leads with Results in the AI Era
    Sep 23, 2025 · Industry leaders trust AMD EPYC CPUs to deliver industry-leading core density and energy efficiency that help enterprises lower total cost of ...
  90. [90]
    How Global Organisations Depend on AMD EPYC™ Servers for AI ...
    Sep 25, 2025 · Performance is only part of the picture. AMD EPYC™ processors consistently deliver better performance per watt, meaning fewer servers are ...
  91. [91]
    Optimizing Enterprise AI in the Cloud - Paid Program
    Optimize cloud AI by choosing the right CPU, like AMD EPYC, to reduce costs, improve performance, and cut power usage. This can reduce operating expenses.
  92. [92]
    AMD EPYC™ Embedded Family
    Model Specifications: EPYC™ Embedded 7000 Series ("Zen 2" & "Zen 3"), 8 to 64 cores, up to 256 MB cache, up to 128 Gen 4 lanes; EPYC™ Embedded 4005 Series ("Zen 5"), 6 to 16 cores, up to ...
  93. [93]
    AMD EPYC Embedded 3000 Series
    The AMD EPYC Embedded 3000 series uses the 'Zen' CPU, has advanced I/O, up to 1TB memory, and is for networking, storage, and industrial applications.
  94. [94]
    AMD EPYC 3000 Line Gets Updated Adding and Dropping Models
    Mar 19, 2020 · It offers a differentiated feature set that spans somewhere between the Intel Xeon D-1500 / D-1600 and Xeon D-2100 series but with unique twists ...
  95. [95]
    AMD Epyc Embedded 3000-Series Revamped With Configurable ...
    Mar 20, 2020 · The Epyc Embedded 3451 has 16 cores, 32 threads and 32MB of L3 cache. The processor comes with a 2.14 GHz base clock and a 2.45 GHz all-core ...
  96. [96]
    Introducing 4th-Generation AMD EPYC™ Embedded Pr - Advantech
    Nov 1, 2024 · AMD EPYC™ Embedded 9004 Series processors are equipped with the latest and fastest DDR5 memory channels. Enterprise-Class Reliability: Provide ...
  97. [97]
    AMD Launches the EPYC Embedded 9005 "Turin" Family of Server ...
    Mar 11, 2025 · The AMD EPYC Embedded 9005 "Turin" series scales between 8-core to 192-core, with a maximum L3 cache of 512 MB, maximum PCIe Gen 5 lane count of 160, and ...
  98. [98]
    AMD Introduces EPYC™ Embedded 4005 Processors for Low ...
    Sep 16, 2025 · AMD EPYC™ Embedded 4005 Series processors are purpose-built to address the rising demand for real-time compute performance and cost ...
  99. [99]
    AMD Launches Epyc Embedded 4005 Processors for Edge ... - ITdaily.
    Sep 17, 2025 · AMD launches Epyc Embedded 4005 processors for edge apps—low latency, energy-efficient, long lifecycle, AM5 compatibility, up to 16 cores.
  100. [100]
    AMD EPYC 8004 Exxact Platforms | Efficient Edge Solutions
    Sep 24, 2024 · The EPYC 8004 series offers a cost-effective, power-efficient, and space-friendly alternative for general-purpose GPU workloads.
  101. [101]
    AMD introduces 5th Gen EPYC Embedded processors for edge ...
    Mar 14, 2025 · AMD EPYC Embedded 9005 Series CPUs are designed for embedded markets, combining compute performance with features that support product longevity, reliability, ...
  102. [102]
    AMD's EPYC 'Bergamo' and Zen 4c Detailed: Same as Zen 4, But ...
    Jun 7, 2023 · AMD's EPYC 'Bergamo' processor packs 128 cores and sits in the same Socket SP5 as the 96-core EPYC 'Genoa' CPU and has a similar 12-channel DDR5 ...
  103. [103]
    Testing AMD's Bergamo: Zen 4c Spam - Chips and Cheese
    Jun 22, 2024 · To hit those markets, Bergamo uses a density focused variant of AMD's Zen 4 cores. Zen 4's architecture remains untouched, but Zen 4c uses a ...
  104. [104]
    AMD's Data Center Roadmap: EPYC Genoa-X, Siena Announced ...
    Jun 9, 2022 · AMD revealed that it will have a version of its upcoming EPYC Genoa processors that will come armed with up to 1+ GB of L3 cache.
  105. [105]
    AMD Siena Shown at Hot Chips 2023 A Smaller EPYC for Telco and ...
    Aug 28, 2023 · We get up to only 64 cores with 6x DDR5 DRAM channels. Siena is going to scale much lower than Genoa with a 70W to 225W TDP, albeit not as low ...
  106. [106]
    Analysis of the EPYC 145% performance gain in Cloudflare Gen 12 ...
    Oct 15, 2024 · For the 4th generation AMD EPYC Processors, AMD offers three architectural variants: ... AMD EPYC 9754 (Bergamo), AMD EPYC 9684X (Genoa-X). Key ...
  107. [107]
    China Begins Domestic Production of AMD Server CPUs - TOP500
    Jul 9, 2018 · The Chinese-designed "Dhyana" x86 processors are said to be essentially identical with AMD's own Zen-based EPYC processor, differing only in the ...
  108. [108]
    AMD EPYC's Chinese Counterpart, Hygon C86, Takes The No.1 ...
    Oct 22, 2019 · The Hygon Dhyana C86 processor, AMD's Chinese counterpart for the EPYC processor is currently holding the no. 1 position for SiSoft's ...
  109. [109]
    AMD vs. Intel Failure Rate Comparison: Which is More Reliable?-Jtti
    Sep 28, 2025 · Long-term testing by multiple third-party organizations has shown that the failure rates of the latest AMD EPYC 7003 and 9004 series processors ...
  110. [110]
    What is the MTBF of AMD EPYC CPUs compared to Intel Xeon CPUs?
    Both AMD EPYC and Intel Xeon CPUs exhibit very low failure rates in controlled data center conditions. Differences in MTBF are often marginal and context ...
  111. [111]
    [PDF] Revision Guide for AMD Family 17h Models 30h-3Fh Processors
    • AMD EPYC™ 7002 Series Processors ... Occasionally, AMD identifies product errata that cause the processor to deviate from published specifications.
  112. [112]
    [PDF] Revision Guide for AMD Family 19h Models 10h-1Fh Processors
    May 1, 2023 · An erratum is defined as a deviation from the product's specification, and as such may cause the behavior of the processor to deviate from the ...
  113. [113]
    ESXi system may crash or hang after 1044 days uptime
    May 28, 2025 · The issue is caused by AMD erratum 1474; please refer to the Revision Guide for AMD Family 17h Models 30h-3Fh Processors.
  114. [114]
    AMD's struggle with unbootable Epyc Naples and Rome prototype ...
    Mar 2, 2024 · The two execs publicly discussed some early teething troubles with booting Epyc Naples and Rome prototype chips for the first time.
  115. [115]
    AMD's EPYC Rome Chips Crash After 1044 Days of Uptime
    Jun 2, 2023 · AMD's EPYC 7002 chips can hang after 1044 days due to an erratum that AMD posted to its revision guide.
  116. [116]
    AMD-based systems with EPYC 7003 CPU experience random OS ...
    AMD-based systems with AMD EPYC 7003 series Milan processors experience random OS shutdown and system soft reset. Some servers have a Bluescreen of Death ...
  117. [117]
    [PDF] Revision Guide for AMD Family 1Ah Models 00h-0Fh Processors
    Mar 2, 2025 · Product Errata provides a detailed description of product errata, including potential effects on system operation and suggested workarounds. An ...
  118. [118]
    AMD Shoots Down EPYC Genoa Memory Bug Claims, Says Update ...
    Mar 16, 2023 · AMD repudiated a claim that its EPYC Genoa chips suffer from a bug in the memory subsystem that will require a redesign of the processor.
  119. [119]
    Meta Uncovers RDSEED Architectural Issue In AMD Zen 5 CPUs
    Oct 16, 2025 · It turns out the newest AMD EPYC 5th Gen "Turin" processors have a new RDSEED issue. RDSEED is principally used for seeding software pseudo ...
  120. [120]
    Puget Systems Most Reliable Hardware of 2024
    Jan 14, 2025 · AMD EPYC Server-class CPUs ... The storage drives we use are currently fairly reliable, and 2024 only saw a 1.6% failure rate on average.
  121. [121]
    AMD Product Security
    New Variant of Spectre v1 – referred by researchers as a Meltdown variant ... We are advising customers running AMD EPYC™ processors in their data centers ...
  122. [122]
    Cross-Process Information Leak - AMD
    A register in “Zen 2” CPUs may not be written to 0 correctly. This may cause data from another process and/or thread to be stored in the YMM register.
  123. [123]
    AMD Zenbleed Vulnerability Fix Tested: Some Apps Drop 15 ...
    Sep 7, 2023 · The Zenbleed flaw (CVE-2023-20593) spans the entire Zen 2 product stack, including AMD's EPYC data center processors and the relevant Ryzen 3000 ...
  124. [124]
    Microsoft finds Spectre-like flaws in AMD Epyc server CPUs
    Jul 10, 2025 · Microsoft researchers have uncovered a series of security flaws in AMD processors that could allow attackers to access confidential data.
  125. [125]
    'Sinkclose' Flaw in Hundreds of Millions of AMD Chips Allows Deep ...
    Aug 9, 2024 · Researchers warn that a bug in AMD's chips would allow attackers to root into some of the most privileged portions of a computer.
  126. [126]
    AMD Server Processor Vulnerabilities – February 2025
    Feb 11, 2025 · Potential vulnerabilities in the AMD Secure Processor (ASP), AMD Secure Encrypted Virtualization (SEV), AMD Secure Encrypted Virtualization – Secure Nested ...
  127. [127]
    AMD SEV Confidential Computing Vulnerability
    The AMD SEV vulnerability, due to improper signature verification, could allow an attacker to load malicious microcode, potentially causing loss of SEV-based ...
  128. [128]
    AMD discloses new CPU flaws that can enable data leaks via timing ...
    Jul 10, 2025 · Four newly revealed vulnerabilities in AMD processors, including EPYC and Ryzen chips, expose enterprise systems to side-channel attacks.
  129. [129]
    AMD Warns of New Transient Scheduler Attacks Impacting a Wide ...
    Jul 10, 2025 · AMD reveals new Transient Scheduler vulnerabilities in CPUs, exposing sensitive data risks across multiple Ryzen and EPYC models.
  130. [130]
    AMD Server Vulnerabilities – August 2025
    Aug 12, 2025 · Potential vulnerabilities in AMD EPYC™ Processor platforms that affect IOMMU, AMD Secure Encrypted Virtualization – Secure Nested Paging ...
  131. [131]
    [PDF] NUMA Configurations for AMD EPYC 2nd Generation Workloads - Dell
    AMD offers a variety of settings to help limit the impact of NUMA. One of the key options is called Nodes per Socket (NPS).
  132. [132]
    AMD EPYC on ESXi 6.5-6.7 NUMA issues: Mostly Resolved - Reddit
    Oct 12, 2018 · On the host, open the advanced config and change "Numa.LocalityWeightActionAffinity" from the default 180 and set it to 0.
  133. [133]
    Epic Infrastructure: AMD vs. Intel for Healthcare IT - EchoStor
    Apr 30, 2025 · While both AMD and Intel support the x86 architecture, mixing platforms within a virtualized Epic environment can introduce limitations. Live ...
  134. [134]
    Issue with Starting VM When Assigning AMD EPYC CPU Model ...
    Feb 6, 2025 · I'm facing an issue when trying to assign an AMD EPYC processor model to a virtual machine in Proxmox. The VM fails to start and I receive the following error ...
  135. [135]
    AMD EPYC Turin vs Intel Xeon (Granite Rapids & Sierra Forest)
    Jul 2, 2025 · In tests, the 192-core EPYC 9965 drew about 32% more power than the prior-gen Genoa (going from ~208W to 275W average), yet it produced 1.55 ...
  136. [136]
    Is the AMD EPYC Zen3 (Milan) CPU supported on Red Hat ...
    Nov 2, 2023 · AMD EPYC Zen3 (Milan) Processors are not supported on Red Hat Enterprise Linux 7. Systems that have had their CPUs updated from Zen2 (Rome) to ...
  137. [137]
    [PDF] EPYC Offers x86 Compatibility | AMD
    For hardware compatibility, AMD's validation program includes testing a broad range of third-party PCI Express add-in cards. These include cards and host ...
  138. [138]
    AMD EPYC Supported OS and Hypervisor Compatibility Matrix at ...
    Jun 27, 2017 · We take a snapshot of which OSes are supported on AMD EPYC as of its Q2 2017 launch. The list of OS and hypervisors will likely grow over ...
  139. [139]
    AMD EPYC (codenamed 'Naples') server processors to launch June ...
    May 31, 2017 · At the COMPUTEX 2017 press conference today, AMD announced that the EPYC server processors will officially start retailing from June 20th, 2017.
  140. [140]
    AMD EPYC 7000 Series Architecture Overview for Non-CE or EE ...
    Jun 20, 2017 · Using four small die per package allows AMD to keep manufacturing costs low while also delivering 32 cores / 64 threads, 16MB L2 cache and 64MB ...
  141. [141]
    AMD EPYC 7601 Specs - CPU Database - TechPowerUp
    The AMD EPYC 7601 is a server/workstation processor with 32 cores, launched in June 2017. It is part of the EPYC lineup, using the Zen (Naples) architecture ...
  142. [142]
    AMD EPYC 7000 Series Server Processor Lineup Specs, Prices ...
    Jun 15, 2017 · AMD EPYC 7000 Series Server CPU Specifications, Performance and Pricing Detailed – Up To 32 Cores, 2 TB Memory Support, 128 PCIe Lanes and 3.2 ...
  143. [143]
    AMD EPYC 7002 Series Rome Delivers a Knockout - ServeTheHome
    Aug 7, 2019 · The AMD EPYC 7002 series spans 19 public launch SKUs from 8 cores to 64 cores with up to 32 billion transistors · AMD has made its I/O die the ...
  144. [144]
    A Deep Dive Into AMD's Rome Epyc Architecture - The Next Platform
    Aug 15, 2019 · We have been itching to get into the architectural details of the new “Rome” Epyc server chips, which we covered at the launch last week with ...
  145. [145]
    [PDF] AMD Rome review - chpc.utah.edu
    Other features of note include PCI-Express generation 4 support, up to 128 lanes, an eight-channel memory controller on each CPU socket, and DDR4 memory speed up ...
  146. [146]
    Detailed Specifications of the AMD EPYC "Rome" CPUs - Microway
    Aug 7, 2019 · Up to 64 processor cores per socket (with options for 8-, 12-, 16-, 24-, 32-, and 48-cores) · Improved CPU clock speeds up to 3.1GHz (with Boost ...<|separator|>
  147. [147]
    [PDF] AMD EPYC 7002 Series CPU PowerEdge Server Performance - Dell
    This new CPU family features up to 64 cores, 256 MB of last level caching and (8) 3200 MT/s DDR4 memory channels.
  148. [148]
    AMD "Zen 2" IPC 29 Percent Higher than "Zen" | TechPowerUp
    Nov 12, 2018 · The next-generation CPU architecture provides a massive 29 percent IPC uplift over the original "Zen" architecture.
  149. [149]
    AMD clarifies reports of Zen 2's 29% IPC boost over Zen - OC3D
    Nov 14, 2018 · Anyone expecting Zen 2 to offer a 29.4% performance boost in all workloads over Zen at the same clocks is expecting too much, and needs to ...
  150. [150]
    AMD Unveils Zen 2 EPYC 7nm CPU With 2X Performance Per ...
    Nov 6, 2018 · AMD Unveils Zen 2 EPYC 7nm CPU With 2X Performance Per Socket, Zen 3 Set For 2020 - Updated: Benchmarks · 2X Performance per Socket · 4X Floating ...
  151. [151]
    AMD EPYC 7742 Specs - CPU Database - TechPowerUp
    The AMD EPYC 7742 is a server/workstation processor with 64 cores, launched in August 2019. It is part of the EPYC lineup, using the Zen 2 (Rome) architecture ...
  152. [152]
    AMD EPYC 7502 Specs - CPU Database - TechPowerUp
    The AMD EPYC 7502 is a server/workstation processor with 32 cores, launched in August 2019. It is part of the EPYC lineup, using the Zen 2 (Rome) architecture ...
  153. [153]
    AMD Doubles Down – And Up – With Rome Epyc Server Chips
    Aug 7, 2019 · We know that IPC increased by 50 percent moving from the last Opteron core (Excavator) to the first Zen 1 core, and that the jump from Zen 1 in ...
  154. [154]
    AMD EPYC™ 7003 Series CPUs Set New Standard as Highest ...
    Mar 15, 2021 · Available immediately, AMD EPYC 7003 Series Processors have up to 64 “Zen 3” cores per processor and introduce new levels of per-core cache ...
  155. [155]
    The Third Time Charm Of AMD's Milan Epyc Processors
    Mar 15, 2021 · With the Milan design, the core complex is unified and eight Zen3 cores all have their dedicated L2 caches and they all share a single 32 MB L3 ...
  156. [156]
    AMD EPYC™ 7713
    Launch Date: 03/15/2021. Connectivity. PCI Express® Version: PCIe® 4.0 x128. System Memory Type: DDR4. Memory Channels: 8. System Memory Specification: Up to ...
  157. [157]
    AMD EPYC 7003 "Milan" Linux Benchmarks - Superb Performance
    Mar 15, 2021 · The current top-end SKU is the EPYC 7763 that is 64 cores with a 2.45GHz base frequency and 3.50GHz boost frequency while having a 280 Watt TDP.
  158. [158]
    [PDF] amd epyc™ 7003 processors continue ansys® cfx® performance ...
    The 32-core AMD EPYC 75F3 (see Table 1) outperforms the Intel baseline system in all benchmarks by an average of approximately 1.43x across all tests. The 32 ...
  159. [159]
    AMD Epyc 7773X Milan-X review: Zen 3 bows out on a high | Club386
    Aug 16, 2022 · Moving from Zen 2 to Zen 3 affords an extra 20 per cent improvement when evaluated using chips housing the same number of cores and threads. The ...
  160. [160]
    AMD Genoa Detailed – Architecture Makes Xeon Look Like A ...
    Nov 10, 2022 · With Genoa, AMD's per core performance leadership is usually ~50% on integer workloads and as high as 96% on floating point! Much of the ...
  161. [161]
    AMD Officially Confirms 4th Gen EPYC Genoa "Zen 4" CPU Unveil ...
    Oct 24, 2022 · AMD Officially Confirms 4th Gen EPYC Genoa “Zen 4” CPU Unveil on 10th November. AMD has officially confirmed the unveiling of its 4th Gen EPYC ...
  162. [162]
    AMD Epyc 9654 Genoa review: different dimension performance
    Nov 10, 2022 · The initial 4th Generation thrust is provided by standard Genoa using the Zen 4 architecture scaling up to 96 cores and 192 threads.
  163. [163]
    [PDF] AMD EPYC™ 9004 Series Architecture Overview
    AMD 3D Chiplet architecture stacks L3 cache tiles vertically to provide up to 96MB of L3 cache per die (and up to 1 GB L3 Cache per socket) while still ...
  164. [164]
    AMD Finishes Out The Zen 4 Server CPUs With Edgy “Siena”
    Sep 18, 2023 · With today's launch of the “Siena” Epyc 8004 processors, based on the Zen 4c cores, AMD has completed the set, and soon all attention will turn to the “Turin” ...
  165. [165]
    AMD EPYC™ 4005 Series Processors
    AMD EPYC™ processors power a wide range of workloads, from entry-level servers to high-performance computing. AMD EPYC™ 9000, 8000, and 7000 series ...
  166. [166]
    Zen 5 comes to small businesses: AMD unveils EPYC 4005-series ...
    May 13, 2025 · AMD's EPYC 4005-series 'Grado' CPUs come in an AM5 form factor and feature up to 16 cores and 32 threads in a bid to offer maximum performance ...
  167. [167]
    AMD EPYC 4005 Grado is Great and Intel is Exposed
    May 13, 2025 · With Zen 5 the AMD EPYC 4005 "Grado" effectively doubles what Intel Xeon offers in the entry server market now with a 16 core 65W TDP ...
  168. [168]
    Performance & Power Of The Low-Cost EPYC 4005 "Grado" vs ...
    Jul 1, 2025 · The recently-launched AMD EPYC 4005 "Grado" series have been quite fun to benchmark. These Zen 5 processors designed for affordable/entry-level ...