
Graphcore


Graphcore Limited is a British semiconductor company founded in 2016 in Bristol, United Kingdom, by serial entrepreneurs Nigel Toon and Simon Knowles, specializing in the design and production of Intelligence Processing Units (IPUs), a type of parallel processor architected specifically for accelerating artificial intelligence and machine learning workloads.
The company's IPUs emphasize massive on-chip parallelism, with each unit featuring thousands of independent processing cores and integrated memory to handle complex AI models more efficiently than traditional GPUs for certain tasks, supported by the proprietary Poplar software stack for model training and inference.
Graphcore raised significant venture funding, including a $32 million Series A round led by Robert Bosch Venture Capital in October 2016, achieving unicorn status amid the AI hardware boom, before being acquired by SoftBank Group in 2024 as a wholly owned subsidiary to bolster its global AI compute capabilities.
In 2025, Graphcore announced plans to invest up to £1 billion over the next decade in India, establishing an AI Engineering Campus in Bengaluru to create 500 semiconductor jobs and expand research in AI infrastructure.

Founding and Early Development

Inception and Founders

Graphcore was founded on 14 November 2016 in Bristol, United Kingdom, by serial entrepreneurs Nigel Toon and Simon Knowles, who respectively assumed the roles of chief executive officer and chief technology officer. The company emerged from a development phase that began around late 2013, with formal incorporation aimed at creating specialized processors to address the limitations of conventional GPUs and CPUs in machine intelligence workloads. The inception of Graphcore traces to January 2012, when Toon and Knowles met at the Marlborough Tavern in Bath to brainstorm opportunities following the exits from their prior ventures in processor design. Both founders brought extensive experience in processor innovation: Toon had served as CEO of two venture-backed processor firms, picoChip (acquired by Mindspeed Technologies in 2012) and XMOS, focusing on multicore and embedded processing technologies. Knowles, a silicon engineer and entrepreneur with over 40 years in the field, had co-founded and exited two fabless semiconductor companies, including Icera (acquired by Nvidia in 2011), and had contributed to 14 production chips across a career in domain-specific processor design. This partnership leveraged Bristol's engineering heritage, rooted in hardware innovation since the late 1970s, to pioneer the Intelligence Processing Unit (IPU), a processor optimized for machine intelligence inference and training through massive on-chip memory and parallelism. Early funding, including a $32 million Series A round led by Robert Bosch Venture Capital in October 2016, enabled prototyping amid a nascent competitive landscape dominated by general-purpose accelerators.

Initial Technology Focus and Prototyping

Graphcore's initial technology efforts concentrated on designing the Intelligence Processing Unit (IPU), a processor architecture optimized for machine intelligence applications, distinguishing it from graphics processing units (GPUs) by integrating the full machine learning model on-chip to minimize data transfer bottlenecks. Founded in 2016 by hardware engineers Nigel Toon and Simon Knowles—veterans of Icera, which they sold to Nvidia in 2011—the company targeted the inefficiencies of existing processors in managing AI's graph-like, probabilistic computations through a massively parallel, MIMD-based structure comprising thousands of lightweight processing threads. This approach prioritized low-precision arithmetic to accelerate inference and training tasks requiring rapid iteration over vast parameter spaces, rather than high-precision numerical simulations. Prototyping commenced in 2016 following the company's incorporation in Bristol, United Kingdom, with early investments enabling the fabrication of initial IPU silicon to validate the architecture's scalability and performance for machine learning workloads. These prototypes emphasized on-chip memory hierarchies and interconnects to support synchronous parallelism across processing elements, addressing the bandwidth issues inherent in holding models off-chip on GPUs. By mid-2017, this work culminated in the announcement of the Colossus GC2, Graphcore's inaugural IPU—a 16 nm device with 1,216 independent processor tiles delivering mixed-precision floating-point operations at scale. Concurrently, the team co-developed the Poplar software stack to facilitate model mapping onto the hardware, ensuring prototypes could demonstrate end-to-end acceleration.

Core Technology

Intelligence Processing Unit Architecture

The Intelligence Processing Unit (IPU) employs a massively parallel, MIMD (multiple instruction, multiple data) architecture comprising thousands of independent processing tiles, each integrating compute and memory to minimize the data movement latency inherent in traditional designs. Unlike GPUs, which rely on hierarchical caches and global off-chip memory, the IPU distributes SRAM directly within tiles, enabling explicit, high-bandwidth data exchange without implicit caching overhead. This tile-based structure supports bulk synchronous parallel (BSP) execution, sequencing compute phases with collective synchronization and exchange operations across the fabric. Each tile features a single multi-threaded processor capable of running up to six worker threads alongside a supervisor thread for scheduling, with vectorized floating-point units and dedicated matrix multiply engines delivering 64 multiply-accumulate operations per cycle in half precision. In the second-generation IPU (GC200), the chip integrates 1,472 such tiles, providing nearly 9,000 parallel threads and 900 MB of aggregate In-Processor-Memory (on-chip SRAM) at 624 KB per tile, yielding aggregate bandwidths exceeding 45 TB/s for local access with latencies around 3.75 ns at 1.6 GHz clock speeds. First-generation IPUs (GC2) featured 1,216 tiles with 304 MB of total on-chip memory, scaling performance to 124.5 TFLOPS in mixed precision. The IPU's exchange hierarchy facilitates all-to-all communication via an on-chip exchange fabric with 7.7 TB/s throughput and sub-microsecond latencies for operations like gathers (0.8 µs across the IPU), enabling efficient handling of irregular, graph-like data flows common in machine learning models. Off-tile scaling occurs through IPU-Links (64 GB/s bidirectional) and host interfaces, supporting multi-IPU clusters without relying on PCIe bottlenecks. This contrasts with GPU SIMT models, where thread divergence and memory coalescing limit efficiency on non-uniform workloads; IPUs excel in fine-grained parallelism and small-batch inference by partitioning models across tiles with explicit messaging, achieving up to 3-4x speedups over GPUs in graph neural networks.
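
The per-tile figures above compose directly into the chip-level totals; the following minimal Python sketch, using only the tile counts and per-tile capacities quoted in this section, illustrates the arithmetic.

```python
# Back-of-envelope check of GC200 chip-level figures from per-tile specs
# (numbers taken from the figures quoted above; treat as illustrative).

TILES_GC200 = 1472           # independent processing tiles per second-gen IPU
WORKER_THREADS_PER_TILE = 6  # worker threads per tile (plus one supervisor)
SRAM_PER_TILE_KB = 624       # In-Processor-Memory per tile

total_threads = TILES_GC200 * WORKER_THREADS_PER_TILE
total_sram_mb = TILES_GC200 * SRAM_PER_TILE_KB / 1024

print(f"Worker threads across the chip: {total_threads}")        # 8832, i.e. "nearly 9,000"
print(f"Aggregate In-Processor-Memory: {total_sram_mb:.0f} MB")   # ~897 MB, i.e. "900 MB"
```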

Key Innovations in Parallel Processing

Graphcore's Intelligence Processing Unit (IPU) introduces a tile-based architecture optimized for machine intelligence workloads, featuring 1,472 independent processing tiles per second-generation (MK2) IPU, each capable of executing multiple threads. This design enables nearly 9,000 concurrent independent program threads, supporting a multiple instruction, multiple data (MIMD) execution model in which tiles operate with autonomous control flows, contrasting with the more rigid SIMD paradigms in traditional GPUs. A core innovation lies in the bulk synchronous parallel (BSP) execution model, which structures computation into discrete phases of local tile processing, global synchronization, and inter-tile data exchange via an on-chip all-to-all fabric. This approach minimizes communication overhead in highly parallel AI tasks, such as graph-based computations, by enforcing synchronous execution across all tiles per step while allowing thread scheduling within tiles to hide latencies. Complementing this, each tile integrates local SRAM (624 KB per tile, totaling approximately 900 MB of In-Processor-Memory across the IPU), which colocates compute and data to drastically reduce the memory access bottlenecks inherent in architectures dependent on off-chip memory. Further enhancements include specialized hardware for vectorized floating-point operations (e.g., FP16 and FP32 multiply-accumulate units performing 64 operations per cycle) and high-bandwidth collective communication primitives, enabling efficient scaling to pod-level systems interconnecting up to 64,000 IPUs. Microbenchmarking reveals that this parallelism yields superior throughput for irregular, data-intensive workloads such as graph processing, though performance is bounded by exchange fabric contention under unbalanced loads. These elements address the parallelism demands of large-scale models by prioritizing fine-grained, graph-oriented parallelism over sequential bottlenecks.
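
To make the BSP cycle concrete, the following is a purely illustrative Python simulation of the compute, synchronize, and exchange phases described above; it is not Poplar code, and the tile count and message pattern are arbitrary assumptions chosen for readability.

```python
# Toy simulation of bulk synchronous parallel (BSP) execution across tiles.
# Each superstep: (1) every tile computes on its local data, (2) all tiles
# synchronize, (3) tiles exchange data before the next superstep begins.

NUM_TILES = 8  # arbitrary; a real MK2 IPU has 1,472 tiles

def compute_phase(tile_id, local_value):
    """Local work performed independently on each tile (MIMD: any code per tile)."""
    return local_value + tile_id  # stand-in for real per-tile computation

def exchange_phase(values):
    """Simulated exchange: each tile passes its value to its neighbour."""
    return [values[(i - 1) % NUM_TILES] for i in range(NUM_TILES)]

def run_bsp(supersteps=3):
    values = [0] * NUM_TILES
    for step in range(supersteps):
        # Phase 1: independent local compute on every tile.
        values = [compute_phase(t, v) for t, v in enumerate(values)]
        # Phase 2: global synchronization barrier (implicit here, since all
        # tiles finish the list comprehension before execution continues).
        # Phase 3: inter-tile exchange over the (simulated) fabric.
        values = exchange_phase(values)
        print(f"after superstep {step}: {values}")
    return values

if __name__ == "__main__":
    run_bsp()
```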

Software Stack and Ecosystem

Graphcore's software stack is anchored by the Poplar SDK, a comprehensive toolchain co-designed with the Intelligence Processing Unit (IPU) to facilitate graph-based programming for machine intelligence workloads. Released as the world's first complete tool chain dedicated to IPU software, Poplar encompasses a graph compiler, runtime environment, and supporting libraries that map computational graphs onto IPU tiles, enabling fine-grained parallelism across thousands of processing elements. Developers can program directly in C++ or Python, expressing algorithms as directed acyclic graphs that leverage IPU-specific features like in-memory computation and bulk synchronous parallelism.

The SDK integrates with established frameworks to broaden accessibility. It provides IPU-enabled backends for PyTorch (including PyTorch Geometric for graph neural networks) and TensorFlow/Keras, allowing users to train and infer models with minimal code modifications through IPU-specific configuration and model wrappers. PopART, a core component, supports ONNX import/export for model portability, while the PopLibs libraries deliver optimized, low-level operations such as tensor manipulations and custom kernels. These integrations have been updated iteratively, with Poplar SDK 3.1 (December 2022) adding PyTorch 1.13 support and enhanced sparse tensor handling.

Complementary tools enhance development and optimization. The PopVision suite includes the Graph Analyser for visualizing IPU graph execution, tile-level performance metrics, and memory usage, alongside the System Analyser for host-IPU interaction profiling. These enable profiling and debugging of large-scale models distributed across IPU-POD systems. The stack supports containerized environments through Docker Hub images, certified under Docker's Verified Publisher Program since November 2021, facilitating reproducible deployments.

The ecosystem fosters scalability via third-party integrations and community resources. Partnerships, such as UbiOps' IPU support added in July 2023, enable dynamic scaling of training jobs in cloud-like setups. Open-source contributions on GitHub, including PopLibs for reusable primitives, encourage custom extensions, though adoption has been critiqued for demanding expert-level tuning to achieve peak efficiency compared to GPU alternatives. Post-SoftBank acquisition in 2024, the stack remains centered on Poplar, with ongoing emphasis on large-model support such as efficient training and inference of billion-parameter transformers.
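
As an illustration of the PyTorch integration path, the following is a hedged sketch of a typical PopTorch workflow: the wrapper class, input shapes, and hyperparameters are hypothetical, and exact APIs vary across Poplar SDK releases, so it should be read as indicative rather than canonical.

```python
# Indicative PopTorch sketch (assumes the Poplar SDK and poptorch are installed).
# PopTorch compiles a standard torch.nn.Module for the IPU; the loss is computed
# inside forward() so the whole training step can be lowered to the device.
import torch
import poptorch

class ClassifierWithLoss(torch.nn.Module):
    """Hypothetical wrapper returning (predictions, loss), as PopTorch expects."""
    def __init__(self, in_features=128, num_classes=10):
        super().__init__()
        self.net = torch.nn.Sequential(
            torch.nn.Linear(in_features, 256),
            torch.nn.ReLU(),
            torch.nn.Linear(256, num_classes),
        )
        self.loss_fn = torch.nn.CrossEntropyLoss()

    def forward(self, x, labels=None):
        out = self.net(x)
        if labels is not None:
            return out, self.loss_fn(out, labels)
        return out

model = ClassifierWithLoss()
opts = poptorch.Options()  # device and iteration options (defaults shown)
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)

# Wrap the model for IPU training; the graph is compiled on first call.
training_model = poptorch.trainingModel(model, options=opts, optimizer=optimizer)

# Dummy batch purely for illustration.
x = torch.randn(16, 128)
labels = torch.randint(0, 10, (16,))
_, loss = training_model(x, labels)
print(float(loss))
```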

Products and Hardware Offerings

IPU Generations and Evolution

Graphcore's first-generation Intelligence Processing Unit (IPU), prototyped in 2016 and commercially launched in 2018, introduced a novel architecture designed specifically for machine intelligence workloads, featuring over a thousand independent processing tiles interconnected via a custom exchange fabric to hold entire models in on-chip memory, eschewing the data movement bottlenecks of traditional GPUs. This initial design emphasized synchronous parallelism across 1,216 tiles, each running multiple hardware threads, enabling high throughput for the graph-based computations central to deep learning. In July 2020, Graphcore unveiled its second-generation IPU, the GC200, integrated into systems like the IPU-Machine M2000, which roughly tripled on-chip memory to 900 MB per IPU and boosted compute density through refined tile interconnects and enhanced bulk memory management, delivering up to 250 teraFLOPS of 16-bit floating-point performance per chip while supporting scalable pods for exascale training. These advancements addressed limitations in the first generation by improving memory capacity and scalability for large models, with each IPU-Machine housing four IPUs connected via 100 GbE fabric for distributed processing, marking a shift toward production-scale deployments in data centers. The evolution culminated in the Bow IPU, announced in March 2022 and entering shipment shortly thereafter, which applied TSMC's wafer-on-wafer 3D stacking technology to bond the second-generation GC200 die face-to-face with a dedicated power-delivery die, enabling 40% higher clock speeds, reduced power consumption, and denser integration without redesigning the underlying processor logic. Bow systems, such as the four-IPU Bow-2000 machine aggregating 5,888 cores and 1.4 petaFLOPS of compute, extended the architecture's efficiency for hyperscale applications, though adoption remained constrained by ecosystem maturity relative to GPU incumbents. This packaging innovation represented Graphcore's focus on incremental hardware refinements amid competitive pressures, prior to its 2024 acquisition by SoftBank, which redirected resources toward integrated AI infrastructure rather than standalone generational leaps.

Scale-Up Systems like Colossus

Graphcore's scale-up systems, exemplified by rack-scale clusters of Colossus IPUs, enable datacenter-scale deployment of Intelligence Processing Units (IPUs) through rack-integrated IPU-POD architectures designed for efficient large-scale training and inference. Introduced in December 2018, the initial rackscale IPU-POD utilized first-generation Colossus Mk1 IPUs to deliver over 16 petaFLOPS of mixed-precision compute per 42U rack, with systems of 32 such pods scaling to more than 0.5 exaFLOPS. These systems leverage IPU-Link interconnects for low-latency, high-bandwidth communication, minimizing data movement overhead compared to traditional GPU clusters reliant on PCIe or NVLink. The second-generation systems, launched in July 2020, advanced with the IPU-Machine M2000—a 1U blade housing four Colossus Mk2 GC200 IPUs, providing 1 petaFLOP of compute, up to 900 MB of In-Processor-Memory per IPU, and support for up to 450 GB of Exchange Memory with 180 TB/s bandwidth. Rack-scale examples include the IPU-POD64, comprising 16 M2000 units for 64 IPUs, and the IPU-POD128 with 32 M2000 units for 128 IPUs, 8.2 TB of total memory, and enhanced scale-out via 100 GbE fabrics. These configurations support disaggregated host-to-IPU ratios, allowing flexible integration with standard servers from partners such as Dell and HPE, and extend to datacenter-scale clusters of up to 64,000 IPUs. Key features of these scale-up systems emphasize massive parallelism for large models, with first-generation Colossus Mk1 supporting up to 4,096 IPUs and optimized topologies for graph-based workloads via the Poplar software stack. Power efficiency is highlighted in configurations like 16 Mk2 IPUs delivering 4 petaFLOPS at 7 kW in a 4U unit, though real-world deployment depends on cooling and interconnect density. By 2021, expanded POD designs like the IPU-POD128 facilitated training of models exceeding GPT scale, with memory bandwidth exceeding 10 PB/s in projected ultra-scale systems.
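
A minimal Python sketch, using only the per-unit figures quoted above (four GC200 IPUs and roughly 1 petaFLOP per M2000 blade), shows how the pod-level totals compose; the deliverable performance of a shipping pod depends on configuration and is not implied here.

```python
# Compose IPU-POD totals from the per-M2000 figures stated above (illustrative).

IPUS_PER_M2000 = 4
PFLOPS_PER_M2000 = 1.0   # approx. FP16 compute per 1U M2000 blade
IPM_PER_IPU_MB = 900     # In-Processor-Memory per GC200

def pod_summary(num_m2000):
    ipus = num_m2000 * IPUS_PER_M2000
    pflops = num_m2000 * PFLOPS_PER_M2000
    on_chip_gb = ipus * IPM_PER_IPU_MB / 1024
    return ipus, pflops, on_chip_gb

for name, blades in [("IPU-POD64", 16), ("IPU-POD128", 32)]:
    ipus, pflops, on_chip_gb = pod_summary(blades)
    print(f"{name}: {ipus} IPUs, ~{pflops:.0f} PFLOPS, ~{on_chip_gb:.0f} GB on-chip SRAM")
# IPU-POD64: 64 IPUs, ~16 PFLOPS, ~56 GB on-chip SRAM
# IPU-POD128: 128 IPUs, ~32 PFLOPS, ~112 GB on-chip SRAM
```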

Integration with Cloud and Software Tools

Graphcore's Poplar SDK serves as the primary software interface for its Intelligence Processing Units (IPUs), enabling integration with popular machine learning frameworks such as TensorFlow (versions 1 and 2, with support for TensorFlow XLA compilation) and PyTorch. This co-designed stack facilitates efficient mapping of computational graphs to IPU hardware, supporting features like in-processor memory streaming and parallel execution optimized for AI workloads. Developers can access pre-optimized models and datasets through partnerships, including Hugging Face's Transformers library adapted for IPU acceleration as of May 2022. Containerization support enhances deployment flexibility, with official SDK images available on Docker Hub since November 2021, verified under Docker's Verified Publisher Program. These images include tools for interacting with IPUs and running applications in isolated environments. Kubernetes integration is provided for orchestration in scale-up systems like IPU-PODs, allowing automated provisioning and management of IPU clusters alongside schedulers such as Slurm. Additional ecosystem expansions, such as UbiOps platform support added in July 2023, enable dynamic scaling of IPU jobs for training and inference. For cloud deployment, Graphcore IPUs have been accessible via Microsoft Azure since at least 2020, permitting users to provision IPU instances without on-premises hardware. G-Core Labs launched an IPU cloud service in June 2022, bundling Poplar SDK access for rapid prototyping and production-scale AI tasks. Partnerships with infrastructure providers such as Atos for AI-HPC solutions and Pure Storage for data infrastructure further extend IPU usability in hybrid cloud environments, though adoption has remained limited compared to GPU-centric alternatives.
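
For the TensorFlow path mentioned above, the following is a hedged sketch of what targeting an IPU from Graphcore's TensorFlow 2 port typically looks like; the module paths and configuration attributes follow the commonly documented pattern but may differ between Poplar SDK releases, so treat them as assumptions.

```python
# Hedged sketch of running a Keras model on an IPU via Graphcore's TensorFlow port.
import tensorflow as tf
from tensorflow.python import ipu  # assumed module path; varies by SDK release

# Configure the IPU system: request one IPU from the host's attached devices.
config = ipu.config.IPUConfig()
config.auto_select_ipus = 1
config.configure_ipu_system()

# IPUStrategy places variables and the training loop on the IPU, mirroring how
# other tf.distribute strategies target their respective accelerators.
strategy = ipu.ipu_strategy.IPUStrategy()
with strategy.scope():
    model = tf.keras.Sequential([
        tf.keras.layers.Dense(128, activation="relu", input_shape=(784,)),
        tf.keras.layers.Dense(10),
    ])
    model.compile(
        optimizer="adam",
        loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
        steps_per_execution=8,  # batch several steps per host-IPU round trip
    )
    # model.fit(train_dataset, epochs=1)  # dataset omitted in this sketch
```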

Funding Trajectory and Financial Challenges

Major Investment Rounds

Graphcore secured its Series B funding round of $30 million on July 20, 2017, led by Atomico with participation from investors including Samsung Catalyst Fund, Dell Technologies Capital, Amadeus Capital Partners, Foundation Capital, Pitango Venture Capital, C4 Ventures, and Robert Bosch Venture Capital. This round supported the development of its Intelligence Processing Unit (IPU) technology for machine intelligence applications. The company followed with a Series C round of $50 million in November 2017, led by Sequoia Capital and including Dell as a participant. In December 2018, Graphcore closed a $200 million Series D round, achieving unicorn status with a post-money valuation of $1.7 billion; key investors included Microsoft, BMW i Ventures, Sofina, Merian Global Investors (now Chrysalis Investments), and Draper Esprit. This funding accelerated IPU production scaling and partnerships for AI hardware deployment. Graphcore extended its Series D with an additional $150 million raised on February 25, 2020, from investors including Baillie Gifford, Mayfair Equity Partners, and Chrysalis Investments, bringing the total for the round to approximately $350 million and elevating the valuation to $1.95 billion. The final major venture round was the Series E, closing at $222 million on December 29, 2020, led by the Ontario Teachers' Pension Plan Board with support from Fidelity International, Schroders, and existing backers, resulting in a $2.77 billion valuation. Across these rounds, together with its 2016 Series A, Graphcore raised nearly $700 million in total equity funding to fuel R&D and market expansion amid competition in AI accelerators.

Revenue Realities Versus Valuation Hype

Graphcore's valuation surged amid the AI hardware boom, reaching a post-money valuation of $2.77 billion in December 2020 following a $222 million funding round led by the Ontario Teachers' Pension Plan Board and others, positioning it as a high-profile challenger to Nvidia in specialized AI processing. This peak reflected investor enthusiasm for its Intelligence Processing Unit (IPU) technology, with earlier rounds including a $200 million Series D in 2018 that elevated it to unicorn status at approximately $1.7 billion. However, these valuations were driven more by speculative promise than operational traction, as the company invested heavily in R&D and scaling without commensurate commercial uptake. In stark contrast, Graphcore's revenue remained negligible relative to its funding and hype. For the year ended December 31, 2022—the most recent full-year figures publicly available pre-acquisition—revenue totaled just $2.7 million, a 46% decline from 2021, amid broader challenges in AI chip adoption beyond dominant GPU ecosystems. Pre-tax losses ballooned to $205 million that year, reflecting high operational burn rates from a headcount of around 500 and expansive hardware development, with cash reserves strained despite nearly $700 million raised cumulatively. These figures underscored a core disconnect: while Graphcore marketed IPUs as superior for certain workloads via massive on-chip memory and parallelism, customer inertia toward established CUDA software stacks limited deployments, resulting in revenue that equated to mere fractions of a percent of its valuation. The valuation-revenue mismatch culminated in SoftBank's 2024 acquisition for an estimated $500-600 million—less than a quarter of the 2020 peak—effectively a down-round that wiped out significant gains and highlighted over-optimism in early-stage bets. Pre-acquisition filings revealed ongoing struggles to convert pilot programs into scalable revenue, with adoption stymied by CUDA ecosystem lock-in and software immaturity, prompting headcount reductions of over 20% by late 2022. This trajectory exemplifies how venture funding in AI semiconductors often prioritized technological novelty over proven market fit, leading to hype-fueled multiples unsupported by fundamentals.

Acquisition and Strategic Shifts

SoftBank Takeover in 2024

On July 11, 2024, SoftBank Group Corp. announced the acquisition of Graphcore, the UK-based developer of Intelligence Processing Units (IPUs) for AI workloads, converting it into a wholly owned subsidiary. The financial terms were not officially disclosed, though reports indicated a purchase price ranging from approximately $400 million to over $600 million, a sharp decline from Graphcore's peak valuation of roughly $2.8 billion in 2020. This transaction followed months of speculation, as Graphcore had been seeking buyers since at least February 2024 amid competitive pressures in an AI chip market dominated by Nvidia and ongoing financial strains, including just $4 million in revenue for 2023 despite nearly $700 million in cumulative investments. Graphcore's CEO Nigel Toon described the deal as a "positive outcome" that would enable accelerated development of next-generation AI compute infrastructure under SoftBank's resources, emphasizing continuity in operations and integration with SoftBank's broader ambitions, including potential synergies with its chip-design subsidiary Arm. SoftBank, led by Masayoshi Son, positioned the acquisition as part of its strategic push toward artificial general intelligence (AGI), leveraging Graphcore's IPU technology for scalable training and inference systems. The move marked SoftBank's second major semiconductor investment, following its 2016 purchase of Arm for $32 billion, and reflected a pattern of acquiring distressed hardware innovators to bolster its AI ecosystem amid global chip shortages and escalating demand for alternatives to GPU-centric architectures. The acquisition faced no major regulatory hurdles and closed promptly, with Graphcore retaining its Bristol headquarters and commitment to UK-based R&D, though it highlighted broader challenges for European AI startups in scaling against US incumbents. Industry analysts noted that while Graphcore's MIMD-based IPUs offered theoretical advantages in certain parallel processing tasks over Nvidia's SIMD GPUs, persistent ecosystem lock-in and slower market adoption had eroded its standalone viability, making SoftBank's deep pockets essential for survival.

Post-Acquisition Expansions and Plans

Following its acquisition by SoftBank Group Corp. on July 11, 2024, Graphcore announced intentions to expand hiring in the United Kingdom and globally to bolster its engineering and research capabilities. This included a renewed recruitment drive starting in November 2024, targeting roles in hardware development and software optimization to align with SoftBank's broader artificial intelligence infrastructure goals. A key post-acquisition initiative materialized in October 2025, when Graphcore, as a SoftBank subsidiary, committed £1 billion (approximately $1.3 billion) to India over the next decade. The investment focuses on scaling AI chip engineering and research, including the establishment of an AI Engineering Campus in Bengaluru as Graphcore's first office in the country. This expansion aims to create up to 500 semiconductor-related jobs, emphasizing design, fabrication support, and integration of Intelligence Processing Units (IPUs) for AI workloads. The plans integrate with SoftBank's global AI compute strategy, which includes large-scale commitments to advanced compute resources, positioning Graphcore's IPU technology as a complementary asset to GPU-dominant ecosystems. No further large-scale geographic expansions or product roadmap shifts have been publicly detailed as of October 2025, though the acquisition has enabled Graphcore to leverage SoftBank's resources for sustained R&D amid prior commercial challenges.

Competitive Landscape

Rivalry with Nvidia and GPU Dominance

Graphcore positioned its Intelligence Processing Units (IPUs) as a direct architectural alternative to Nvidia's graphics processing units (GPUs), emphasizing massive on-chip memory (up to 900 MB of SRAM per IPU) and fine-grained parallelism tailored for machine learning training and inference, in contrast with Nvidia's reliance on high-bandwidth memory (HBM) and tensor cores. In benchmarks published by Graphcore in December 2020, the IPU-M2000 (four MK2 IPUs) demonstrated up to 60x higher throughput and 16x lower latency than a single A100 GPU in specific low-latency inference tasks. Independent evaluations, including a 2021 study on cosmological simulations, showed mixed results: Graphcore's MK1 IPU outperformed Nvidia's V100 GPU in some deep learning scenarios but lagged in others due to software immaturity. These claims highlighted potential IPU advantages in memory-bound workloads, yet Graphcore's self-reported metrics often compared multi-IPU clusters to single GPUs, drawing skepticism over apples-to-oranges equivalency. Nvidia maintained overwhelming dominance in the AI accelerator market, capturing an estimated 86% share of AI GPU deployments by 2025, driven by its CUDA software ecosystem, which locked in developers through optimized libraries, vast community support, and seamless integration with frameworks like PyTorch and TensorFlow. This moat proved insurmountable for Graphcore, whose Poplar SDK required significant porting effort from CUDA codebases, limiting adoption among enterprises reliant on Nvidia's mature tooling and scale. By 2023-2024, Graphcore's revenue remained well under $100 million annually despite nearly $700 million in funding, contrasting with Nvidia's trillions of dollars in market capitalization fueled by AI demand, as customers prioritized ecosystem maturity over raw specifications. The rivalry underscored GPU dominance as a barrier to IPU penetration: while Graphcore targeted niches like sparse models or edge inference with claims of 11x better price-performance versus Nvidia's DGX A100 in its 2020 announcements, real-world scalability issues and Nvidia's iterative GPU advancements (e.g., the H100's tensor performance leaps) eroded these edges. Post-2024 SoftBank acquisition, Graphcore pivoted toward hybrid IPU-GPU integrations, implicitly acknowledging Nvidia's entrenched position rather than outright displacement. This dynamic reflected broader causal factors in AI hardware: software inertia and network effects favored incumbents, rendering even superior architectures secondary without equivalent developer mindshare.

Performance Benchmarks and Claims

Graphcore has asserted superior performance for its Intelligence Processing Units (IPUs) in specific workloads, particularly those benefiting from massive parallelism and sparsity handling via its MIMD architecture. In December 2020, the company claimed its IPU-M2000 system delivered up to 18x higher training throughput and 600x higher inference throughput than A100 GPUs in select models such as EfficientNet and ResNet-50, based on in-house optimizations with its Poplar SDK. These assertions emphasized IPU advantages in on-chip memory and tile-based processing for irregular computations, contrasting with Nvidia's SIMT GPU approach. Participation in standardized MLPerf training benchmarks provided more verifiable data. In MLPerf v1.1 (December 2021), Graphcore reported the fastest single-server time-to-train for BERT at 10.6 minutes using an IPU-POD system, while its IPU-POD16 completed ResNet-50 in 28.3 minutes, edging out a comparable A100 submission's 29.1 minutes—a result Graphcore said was roughly 24% faster than its own v1.0 submission, attributing the gain to software refinements in its TensorFlow and PopART frameworks. Earlier, in MLPerf v1.0 (June 2021), results were less favorable, with Graphcore's ResNet-50 time at 32.12 minutes versus Nvidia's 28.77 minutes on a DGX A100.
| MLPerf Benchmark | Graphcore Configuration | Graphcore Time-to-Train | Nvidia Time-to-Train | Notes |
| --- | --- | --- | --- | --- |
| ResNet-50 (v1.0) | IPU-POD (unspecified scale) | 32.12 minutes | 28.77 minutes | Closed division; Nvidia faster despite similar power envelopes. |
| ResNet-50 (v1.1) | IPU-POD16 | 28.3 minutes | 29.1 minutes | Graphcore edge attributed to software gains (~24% versus its own v1.0 submission); single-server closed division. |
| BERT (v1.1) | IPU-POD (single-server) | 10.6 minutes | Not directly compared (Nvidia multi-node faster overall) | Graphcore's claimed fastest single-server result. |
Independent scrutiny reveals limitations in these claims. A 2021 SemiAnalysis evaluation of MLPerf v1.0 data compared 16 IPUs (totaling roughly 13,000 mm² of 7 nm silicon) against 8 A100s (roughly 6,600 mm²), finding inferior training performance, performance per dollar (a 1.3-1.6x deficit), and efficiency per mm² for Graphcore, despite matched power consumption (roughly 6-7 kW per server)—issues linked to poor scaling beyond small pods and immature software versus CUDA's maturity. Nvidia consistently led MLPerf overall, with up to 2.2x gains in subsequent rounds via ecosystem optimizations. Later studies confirm mixed outcomes. A 2024 arXiv evaluation of IPUs alongside GPUs and other accelerators noted IPU strengths in flexibly mapping SIMD/SIMT-style workloads but no broad throughput superiority in standard inference or training, where GPUs excelled in optimized scenarios. In graph algorithms, a 2024 paper found IPUs outperforming GPUs in heterogeneous parallel execution times due to independent core control. Absent consistent post-2021 MLPerf submissions, claims of IPU parity or edges remain confined to niche cases, undermined by Nvidia's dominance in scalable, general-purpose acceleration via software maturity and market inertia.
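
For readers comparing the time-to-train figures in the table above, a small Python helper converts them into relative percentage differences; it uses only the numbers quoted here and makes no claim about efficiency per dollar or per watt.

```python
# Convert the MLPerf time-to-train figures quoted above into relative differences.

def pct_faster(time_a_min, time_b_min):
    """How much faster A is than B, as a percentage of B's time."""
    return (time_b_min - time_a_min) / time_b_min * 100

# MLPerf v1.0 ResNet-50: Nvidia DGX A100 vs. Graphcore IPU-POD submission
print(f"v1.0: Nvidia faster by {pct_faster(28.77, 32.12):.1f}%")   # ~10.4%

# MLPerf v1.1 ResNet-50: Graphcore IPU-POD16 vs. comparable A100 submission
print(f"v1.1: Graphcore faster by {pct_faster(28.3, 29.1):.1f}%")  # ~2.7%
```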

Market Adoption Barriers

Graphcore's IPUs encountered substantial market adoption barriers stemming from the immaturity of its software ecosystem relative to Nvidia's CUDA platform, which boasts extensive libraries, frameworks, and developer familiarity accumulated over nearly two decades. Porting workloads to Graphcore's Poplar SDK often necessitated significant code refactoring and optimization, deterring enterprises reliant on established CUDA-optimized tools and native GPU implementations. This friction was compounded by Poplar's focus on IPU-specific features, such as fine-grained parallelism and sparsity handling, which provided advantages in niche inference tasks but lagged in seamless integration for broad AI pipelines. Architectural divergence from conventional GPUs represented another key impediment, as the IPUs' MIMD (multiple instruction, multiple data) design and on-chip memory model required developers to abandon GPU-centric mental models, leading to steeper learning curves and higher initial deployment costs. Early adopters reported challenges in achieving consistent performance across diverse workloads, particularly in large-scale training where IPU scaling to thousands of units exposed bottlenecks in inter-chip communication and software maturity. Independent benchmarks occasionally highlighted IPU edges in memory-bound operations, but these were insufficient to overcome ecosystem lock-in, with major hyperscalers prioritizing Nvidia's plug-and-play compatibility for trillion-parameter models. Customer acquisition hurdles further stalled penetration, as Graphcore initially targeted research-oriented and edge AI segments, missing timely traction in high-volume cloud and datacenter markets dominated by Nvidia partnerships. High-profile setbacks, including the reported loss of a strategic deal with Microsoft, eroded confidence among potential buyers wary of risks without proven hyperscale viability. These dynamics manifested in tepid revenue—just $2.7 million in 2022 against a prior $2.8 billion valuation—reflecting limited commercial deployments beyond pilot programs.

Controversies and Criticisms

In early 2024, cloud provider HyperAI filed a lawsuit against Graphcore, alleging breach of contract over a failed partnership to develop cloud services powered by Graphcore's Intelligence Processing Units (IPUs). The dispute stemmed from initial discussions in February 2021, when HyperAI approached Graphcore to integrate its Bow POD16 hardware into a cloud platform, paying €121,000 via an intermediary for the hardware, licenses, and three years of support. By February 2022, the parties had agreed to collaborate toward a formal cloud partnership, but delays arose from misconfigurations in the shipped hardware (ordered in April 2022 and delivered in August 2022), pushing HyperAI's platform launch to December 2, 2022. HyperAI claimed Graphcore abruptly withdrew three days after the launch, on December 5, 2022, reneged on exclusivity commitments, and denied the validity of the sale despite delivery, rendering HyperAI's investment worthless and halting its operations. HyperAI CEO Andrew Foe attributed these actions to Graphcore's pivot to an exclusive deal with G-Core Labs and internal issues like layoffs, describing the behavior as a betrayal that exhausted his personal savings. Graphcore, facing its own financial pressures—including 2022 revenue of $2.7 million (down 46% year-over-year) and losses of $204.6 million—responded by stating it "vigorously disputes HyperAI's meritless claims" and declined further comment on the pending litigation. The case highlighted tensions in early AI hardware partnerships amid Graphcore's struggles to scale against competition from Nvidia, though no resolution has been publicly reported as of mid-2024, coinciding with Graphcore's acquisition by SoftBank. No other significant legal disputes with partners were identified in public records.

Management and Strategic Errors

Graphcore's management faced criticism for architectural decisions that prioritized a novel MIMD-based Intelligence Processing Unit (IPU) design, featuring massive on-chip SRAM but lacking high-bandwidth memory (HBM), rendering it ill-suited for the memory-intensive workloads, such as large language model training, that became prevalent after 2020. In 2021 benchmarks, systems with 16 IPUs—roughly twice the total silicon of an 8x A100 setup, the GC200 and A100 dies being similar in per-chip area (823 mm² vs. 826 mm²)—underperformed in MLPerf training tasks such as ResNet-50 and BERT even after hand-tuning, while matching the 8x A100 system's power draw at a higher cost per unit of performance. This stemmed from scalability limitations and an underdeveloped software stack, contrasting with Nvidia's mature CUDA ecosystem, which executives including CEO Nigel Toon acknowledged required substantial investment but failed to match in adoption. Commercially, leadership erred in pivoting repeatedly between targeting hyperscalers like Microsoft—losing a major 2021 deal due to buggy software and abrupt announcements without post-mortems—and smaller startups, leading to inventory mismanagement and sales confusion among staff. Partnership disputes exacerbated issues; in 2023, cloud provider HyperAI accused Graphcore of reneging on a 2021 agreement by prioritizing an undisclosed exclusive arrangement with G-Core Labs, delaying POD16 system deliveries ordered in April 2022, and withdrawing support after layoffs in January 2023, prompting legal action. Such decisions contributed to talent drain, including key executives departing by 2023, amid low morale from unfulfilled hype around the company's billing as a "Nvidia rival." Financially, overambitious pursuits like $120 million "brain-scale" supercomputer plans strained resources without commensurate revenue, yielding just $2.7 million in 2022 (a 46% drop from 2021) against $204.6 million in losses, necessitating layoffs that cut headcount from roughly 620 to 418 by late 2023. Inability to secure pension fund backing or additional funding rounds despite a $2.8 billion peak valuation in 2020 culminated in a July 2024 SoftBank acquisition for approximately $500 million—below total funding raised—wiping out employee share value and signaling a failure to validate the core IPU technology against ecosystem lock-in. These missteps reflected broader executive shortcomings in aligning technological innovation with market realities dominated by Nvidia's execution.

Broader Impact

Applications in Specific AI Workloads

Graphcore's Intelligence Processing Units (IPUs) have been applied to natural language processing (NLP) tasks, enabling efficient training and inference of transformer models through integrations with frameworks like Hugging Face's Optimum library. In 2022, Graphcore expanded support for a broader range of NLP modalities and tasks, including text classification and generation, by optimizing pre-trained models for IPU execution. Providers such as NLP Cloud deployed IPU-hosted models for AI-as-a-service in 2023, leveraging partners like Gcore for scalable inference. In computer vision workloads, IPUs facilitate accelerated image processing and model scaling, as demonstrated by Graphcore's 2021 implementation of EfficientNet on IPU-POD systems, achieving training completion in under two hours for large-scale datasets. This architecture supports higher-accuracy vision models by exploiting IPU parallelism for convolutional operations, outperforming GPUs in memory-bound scenarios according to independent evaluations. Graph Neural Networks (GNNs), used in areas such as recommendation systems, benefit from the IPU's fine-grained parallelism and MIMD execution model, enabling breakthroughs in sparse graph computations; applications extend to drug discovery, where GNNs model molecular interactions for target identification. Bioinformatics workloads, including DNA and protein sequence analysis, see significant speedups on IPUs; a 2023 study reported 10x gains over leading GPUs for these tasks, attributed to the IPU's high throughput on such algorithms. In drug discovery, biotech firm LabGenius utilized IPU-accelerated models in 2022 to reduce experiment turnaround from months to weeks, accelerating development of treatments for cancer and inflammatory conditions. Genome assembly pipelines also leverage IPUs for faster analysis of protein and DNA sequences, as reported in published research. IPUs support hybrid AI-HPC simulations by using machine learning surrogate models to replace compute-intensive bottlenecks, transforming traditional high-performance computing in fields like physics. In particle physics, early evaluations showed IPU potential for event reconstruction and simulation due to efficient handling of irregular data patterns. These applications highlight IPU strengths in workloads requiring massive parallelism and low-latency memory access, though adoption remains limited by ecosystem maturity compared to GPU alternatives.
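
To illustrate the Hugging Face integration path referenced above, the following is a hedged sketch of fine-tuning a transformer on IPUs with the optimum-graphcore package; the class names follow the pattern from Graphcore's published examples, but the hub identifier, toy dataset, and argument values are placeholders, and exact APIs may have changed across releases.

```python
# Indicative sketch: fine-tuning a Hugging Face model on IPUs via optimum-graphcore.
from transformers import AutoTokenizer, AutoModelForSequenceClassification
from optimum.graphcore import IPUConfig, IPUTrainer, IPUTrainingArguments
from datasets import Dataset

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained("bert-base-uncased", num_labels=2)

# Tiny toy dataset, tokenized up front (placeholder for a real corpus).
raw = Dataset.from_dict({"text": ["great chip", "slow software"], "label": [1, 0]})
def tokenize(batch):
    return tokenizer(batch["text"], padding="max_length", truncation=True, max_length=32)
train_ds = raw.map(tokenize, batched=True)

# IPU-specific execution settings (pipelining, replication, etc.) are packaged
# as an IPUConfig; "Graphcore/bert-base-ipu" is an assumed hub identifier.
ipu_config = IPUConfig.from_pretrained("Graphcore/bert-base-ipu")

args = IPUTrainingArguments(
    output_dir="./bert-ipu-checkpoints",
    per_device_train_batch_size=2,
    num_train_epochs=1,
)
trainer = IPUTrainer(model=model, ipu_config=ipu_config, args=args, train_dataset=train_ds)
trainer.train()
```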

Contributions Versus Overstated Promises

Graphcore's development of the Intelligence Processing Unit (IPU) represented a significant architectural departure in AI acceleration, introducing a massively parallel design with up to 1,472 independent cores per chip, 900 MB of on-chip SRAM, and specialized support for sparse computations and irregular memory access patterns, which enabled more efficient handling of certain operations compared to GPU architectures reliant on high-bandwidth memory hierarchies. This design facilitated advancements in workloads like graph algorithms and surrogate modeling in high-performance computing (HPC), where IPUs demonstrated superior execution times over GPUs in heterogeneous environments. Additionally, Graphcore contributed to the open-source ecosystem by integrating IPU support into frameworks such as PyTorch and TensorFlow, enabling developers to port and optimize models for its hardware without full rewrites. Despite these technical merits, Graphcore's assertions of broad superiority—such as claims of 11x price-performance gains over Nvidia's DGX A100 systems in scaled configurations—proved overstated in practice, as IPUs underperformed in large-scale training dominated by dense matrix operations, where Nvidia's mature ecosystem and software optimizations maintained dominance. Independent evaluations highlighted IPU strengths in niche tasks like sparse or skewed matrix multiplications but revealed limitations in general scaling, contributing to limited commercial traction beyond specialized applications. The company's peak valuation of approximately $2.8 billion in 2020 contrasted sharply with its commercial trajectory, marked by revenue shortfalls and an inability to secure major contracts, ultimately leading to acquisition by SoftBank Group on July 11, 2024, for a reported $500 million—less than cumulative investor funding—amid struggles to compete in a GPU-centric market. This outcome underscored how Graphcore's innovations, while pushing boundaries in parallelism and efficiency for targeted AI and HPC use cases, were hampered by ecosystem immaturity and a failure to disrupt entrenched incumbents, rendering early hype about revolutionizing AI compute unfulfilled.

References

  1. [1]
    Graphcore - Crunchbase Company Profile & Funding
    Legal Name Graphcore Limited ; Operating Status Active ; Company Type For Profit ; Founders Nigel Toon, Simon Knowles.Missing: achievements | Show results with:achievements
  2. [2]
    Graphcore's staff, investors and mission
    Graphcore is officially founded as a company, with its headquarters in Wine Street, Bristol, UK. Graphcore raises $32m in Series A funding round, led by Bosch.Missing: official sources
  3. [3]
    IPU Processors - Graphcore
    The IPU, or Intelligence Processing Unit, is a highly flexible, easy-to-use parallel processor designed from the ground up for AI workloads.
  4. [4]
    Graphcore IPU Technology Partnership | Pure Storage
    Graphcore helps innovators create new breakthroughs in machine intelligence with their Intelligence Processing Unit hardware and Poplar® software.
  5. [5]
    The Unicorn Shaking Up the AI Semiconductor Industry - Graphcore
    Graphcore provides a hardware solution and software stack that will let companies realise and deploy a new wave of AI algorithms.
  6. [6]
    Graphcore to invest £1bn in India, creating 500 semiconductor jobs
    Graphcore was founded in Bristol, UK in 2016 to develop a complete AI compute stack, including silicon, datacenter infrastructure and software, and was acquired ...Missing: official | Show results with:official<|separator|>
  7. [7]
    Graphcore - a SoftBank Group company - to invest £1bn in India ...
    Oct 9, 2025 · 9, 2025 /PRNewswire/ -- Graphcore, a wholly owned subsidiary of SoftBank Group, is opening a new AI Engineering Campus in Bengaluru ...
  8. [8]
    UK chipmaker Graphcore valued at $2.8bn after it raises $222m
    Dec 30, 2020 · Plans for Graphcore were first started in late 2013 by its chief executive, Nigel Toon, and its chief technology officer, Simon Knowles, both of ...Missing: Marlborough Tavern
  9. [9]
    Nvidia's Arm deal opposed by Graphcore in filing to CMA regulator
    Feb 5, 2021 · The pair formed the initial idea for Graphcore in a small pub called the Marlborough Tavern in Bath in January 2012.
  10. [10]
    Japanese takeover will set up Bath-born tech 'unicorn' Graphcore to ...
    Jul 15, 2024 · The idea for Graphcore was born in 2012 when co-founder Nigel Toon and Simon Knowles met in Bath's Marlborough Tavern to discuss their next ...
  11. [11]
    Nigel Toon, CEO and co-founder, Graphcore | Events | 1E9.community
    Nigel has a background as a technology business leader, entrepreneur and engineer having been CEO at two successful VC-backed processor companies XMOS and ...
  12. [12]
    Simon Knowles - Graphcore
    Simon is co-founder, CTO and EVP Engineering of Graphcore and is the original architect of the “Colossus” IPU. He has been designing original processors for ...
  13. [13]
    Mr Simon Knowles FRS - Fellow Detail Page | Royal Society
    Simon Knowles is a silicon engineer and entrepreneur. His work is in domain-specific processors, yielding 14 production chips over 40 years.
  14. [14]
    Simon Knowles - CTO & Founder @ Graphcore - Crunchbase
    Simon Knowles is the CTO and Founder of Graphcore. He previously worked at Icera and attended the University of Cambridge.
  15. [15]
    Inside Graphcore, the UK unicorn that's about to become the Intel of AI
    Aug 27, 2019 · In fact, Bristol has a strong history as a hub for hardware engineering, which can be traced back to 1978 and £50m of seed investment ...Missing: inception | Show results with:inception
  16. [16]
    Graphcore's AI chips now backed by Atomico, DeepMind's Hassabis
    Jul 21, 2017 · Bristol, UK based Graphcore has just closed a $30 million Series B round, led by Atomico, fast-following a $32M Series A in October 2016. It's ...
  17. [17]
    In Fierce AI Race, Everybody Wants This New Human-Like Chip
    Jun 4, 2019 · Unlike the number-crunching alternatives, British startup Graphcore has developed a brain for computers that excels at guesswork.<|separator|>
  18. [18]
    Move over CPUs and GPUs, the Intelligence Processing Unit is the ...
    Jun 25, 2018 · Graphcore's new chip, an intelligence processing unit (IPU), emphasises graph computing with massively parallel, low-precision floating ...
  19. [19]
    Graphcore pre-IPO - dizraptor.app
    According to Pitchbook, Graphcore's revenue grew by more than 8 times in 2019 and was $10.1M. Nigel Toon, the founder and CEO of Graphcore, told Techcrunch ...
  20. [20]
    2. IPU hardware overview — IPU Programmer's Guide
    The IPU is based on a highly parallel architecture designed to accelerate machine learning applications. It provides very high floating-point performance.
  21. [21]
    Dissecting the Graphcore IPU Architecture via Microbenchmarking
    Dec 7, 2019 · This report focuses on the architecture and performance of the Intelligence Processing Unit (IPU), a novel, massively parallel platform ...
  22. [22]
    Poplar® Software - Graphcore
    The Poplar SDK is a complete software stack, which was co-designed from scratch with the IPU, to implement our graph toolchain in an easy to use and flexible ...
  23. [23]
    1. Introduction — Poplar SDK Overview - Graphcore Documents
    The Poplar® SDK is the world's first complete tool chain specifically designed for creating graph software for machine intelligence applications. Poplar ...
  24. [24]
    Poplar SDK 3.1 now available - Graphcore
    Dec 20, 2022 · Poplar SDK 3.1 brings a range of updates, including support for PyTorch 1.13 and further sparse computation features.
  25. [25]
    PopVision™ Tools - Graphcore
    PopVision tools include Graph Analyser for IPU performance and System Analyser for host-side code analysis, helping to understand IPU performance.
  26. [26]
    Graphcore Poplar SDK Container Images Now Available on Docker ...
    Nov 12, 2021 · Graphcore's Poplar® SDK is available for developers to access through Docker Hub, with Graphcore joining Docker's Verified Publisher Program ...
  27. [27]
    Graphcore expands AI tools ecosystem as UbiOps adds IPU support
    Jul 26, 2023 · Graphcore's software stack is fully integrated with major AI frameworks, including PyTorch and PyTorch Geometric, Tensorflow, Keras and ...
  28. [28]
    graphcore/poplibs: Poplar libraries - GitHub
    The Poplar SDK is a complete software stack for graph programming on the IPU. It includes the graph compiler and supporting libraries.
  29. [29]
    Graphcore Looks Like A Complete Failure In Machine Learning ...
    Jul 1, 2021 · It has a poor software stack and requires a load of hand tuning by very skilled programmers. Even with the selective cherry-picking of ...<|separator|>
  30. [30]
    Supporting Scalable Machine Intelligence Systems with the Poplar ...
    Learn how to maximise the performance and efficiency of your AI applications at scale with Graphcore's IPU and Poplar software stack.<|separator|>
  31. [31]
    Introducing 2nd Generation IPU Systems for AI at Scale - Graphcore
    Jul 15, 2020 · The 2nd gen IPU has greater processing power, more memory, 8x performance increase, 1 PetaFlop per blade, and can scale to 16 ExaFlops.
  32. [32]
    Graphcore Launches Wafer-on-Wafer 'Bow' IPU - HPCwire
    Mar 3, 2022 · Graphcore introduced its AI-focused, PCIe-based Intelligent Processing Units (IPUs) six years ago. Since then, the company has done anything ...Missing: prototype | Show results with:prototype<|control11|><|separator|>
  33. [33]
    Introducing Graphcore Bow IPU - Exxact Corporation
    Apr 14, 2022 · To complement the IPU systems, Graphcore is also offering a complete toolchain designed for machine intelligence software called Poplar SDK.
  34. [34]
    Graphcore Bow IPU Introduces TSMC 3D Wafer-on-Wafer Processor
    Mar 3, 2022 · According to Graphcore, the new Bow Pod systems packing the Bow IPU ... launch by 2024. The Good Computer will be powered by a next-gen Bow ...
  35. [35]
    Introducing the Graphcore Rackscale IPU-POD™
    Dec 4, 2018 · A single 42U rack IPU-Pod delivers over 16 Petaflops of mixed precision compute and a system of 32 IPU-Pods scales to over 0.5 Exaflops of mixed precision ...
  36. [36]
    Graphcore Introduces Larger-Than-Ever IPU-Based Pods - HPCwire
    Oct 22, 2021 · The newly announced IPU-POD128 has 128 Graphcore GC200 IPUs across 32 Graphcore M2000 compute blades and includes 8.2TB of memory, while the ...
  37. [37]
    [PDF] Graphcore IPU Based Systems With Weka Data Platform
    The system was configured in a 6+2 protection scheme, reserving 19 CPU cores for WEKA, utilizing 6 NVMe drives per server and using a single network interface ...<|separator|>
  38. [38]
    [PDF] SCALABLE MACHINE INTELLIGENCE SYSTEMS
    SCALE OUT SUPPORT UP TO A MAXIMUM OF 4096 IPUS WITH FIRST GENERATION COLOSSUS MK1. • IPU LINK™ CAN BE EXTENDED ACROSS DOMAINS. • SUPPORTS OPTIMIZED IPU LINK ...<|separator|>
  39. [39]
    GraphCore details power figures for Mk2 chip and AI system ...
    Aug 27, 2021 · GraphCore has detailed the power performance of its 7nm Mk2 Colossus chip, with 16 chips in a 4U unit providing 4Pflop/s at 7kW.Missing: scale- | Show results with:scale-
  40. [40]
    Graphcore announces roadmap to Ultra Intelligence AI Supercomputer
    Mar 3, 2022 · Over 10 Exa-Flops of AI floating point compute · Up to 4 Petabytes of memory with a bandwidth of over 10 Petabytes/second · Support for AI model ...
  41. [41]
    Graphcore and Hugging Face launch new lineup of IPU-ready ...
    May 26, 2022 · On the hardware front, the Bow IPU—announced in March and now shipping to customers—is the first processor in the world to use Wafer-on-Wafer ( ...<|control11|><|separator|>
  42. [42]
    Graphcore Poplar SDK Container Images Now Available on Docker ...
    Nov 9, 2021 · Graphcore's Poplar SDK is available for developers to access through Docker Hub, with Graphcore joining Docker's Verified Publisher Program.
  43. [43]
    G-Core Labs IPU Cloud service now available - Graphcore
    Jun 22, 2022 · Unleash the IPU advantage today. Customers beginning their IPU journey can do so quickly and easily, using the included Graphcore Poplar SDK.Missing: AWS Azure Google tools
  44. [44]
    Atos and Graphcore Partner to Deliver Advanced AI HPC Solutions ...
    Atos and Graphcore today announce that they have signed a partnership to accelerate performance ...Missing: ecosystem | Show results with:ecosystem<|separator|>
  45. [45]
    Big names in machine intelligence join Graphcore's new $30 million ...
    Graphcore announces Series B $30 million funding round, led by Atomico with several AI pioneers backing the IPU as a transformational machine learning
  46. [46]
    AI chip startup Graphcore closes $200M Series D, adds BMW and ...
    Dec 18, 2018 · So Graphcore's total funding raised to date is circa $312M. In a blog post announcing the Series D, co-founder Nigel Toon says interest in the ...Missing: details | Show results with:details<|control11|><|separator|>
  47. [47]
    Graphcore secures additional $150 million in new capital
    Feb 25, 2020 · This brings the total investment in Graphcore to date to over \$450 million with the most recent valuation at \$1.95 billion.
  48. [48]
    Graphcore raises $222 million in Series E Funding Round
    Graphcore announces Series E funding round of $222m from Ontario Teachers' Pension Fund, Schroders and Fidelity International as well as existing investors.
  49. [49]
    SoftBank buys Graphcore, the British chipmaking startup, to fuel its ...
    Jul 11, 2024 · Graphcore Ltd., the UK-based artificial intelligence chip developer that's hoping to take on Nvidia Corp., has been acquired by the Japanee technology giant ...
  50. [50]
    AI chipmaker Graphcore raises $222M at a $2.77B valuation and ...
    Dec 28, 2020 · The funding, Toon said, gives Graphcore $440 million in cash on the balance sheet and a post-money, $2.77 billion valuation to start 2021. “We' ...
  51. [51]
    Graphcore Stock Price, Funding, Valuation, Revenue & Financial ...
    Graphcore's 2022 revenue was $2.7M. Graphcore's most recent revenue is from 2020. Sign up for a free demo to see revenue data from 2023, 2020 and more.
  52. [52]
    SoftBank Acquires Chip Designer Graphcore On 'Journey' To ... - CRN
    Jul 12, 2024 · As a result, Graphcore's 2022 revenue declined 46 percent to $2.7 million and it slashed head count by 21.7 percent to 494 employees that year, ...
  53. [53]
    SoftBank buys Graphcore - Jon Peddie Research
    Jul 12, 2024 · Ultimately, we only really know that it is a fraction of the company's peak valuation (>$2.5 billion) and a notable loss for existing investors.
  54. [54]
    SoftBank acquires British AI chip designer Graphcore - DCD
    Jul 12, 2024 · In October 2023, Graphcore's 2022 financial statements revealed the company had made pre-tax losses of £161 million ($204m), with revenue ...Missing: performance | Show results with:performance
  55. [55]
  56. [56]
    Graphcore joins SoftBank Group to build next generation of AI ...
    Jul 11, 2024 · Graphcore today announced that the company has been acquired by SoftBank Group Corp. Under the deal, Graphcore becomes a wholly owned subsidiary of SoftBank.
  57. [57]
    Japan's SoftBank acquires British AI chipmaker Graphcore | Reuters
    Jul 11, 2024 · Japan's SoftBank Group has bought artificial intelligence chipmaker Graphcore for an undisclosed sum, ending long-running speculation over ...
  58. [58]
    Graphcore acquired by SoftBank after months of speculation - Sifted
    Jul 11, 2024 · Sources tell Sifted that the sale price of the company is less than the nearly $700m invested in the company.
  59. [59]
    AI Chip Startup Graphcore Acquired by SoftBank - EE Times
    Jul 11, 2024 · Graphcore developed three generations of its IPU chip. The third generation, launched in 2022, was the first processor to be built using 3D ...
  60. [60]
    Graphcore — bought by SoftBank for $600m+ — made $4m ... - Sifted
    bought by SoftBank for $600m+ — made $4m in revenue in 2023. The company's latest financial accounts state that 2023 was a ...
  61. [61]
    SoftBank acquires UK AI chipmaker Graphcore - TechCrunch
    Jul 11, 2024 · U.K. chip company Graphcore has been formally acquired by Japan's SoftBank; terms of the deal were not disclosed.Missing: performance pre-
  62. [62]
  63. [63]
    MoFo Advises SoftBank on its Acquisition of Graphcore
    Jul 12, 2024 · A global deal team from MoFo represented SoftBank throughout the transaction, as Graphcore becomes a wholly owned subsidiary of SoftBank.Missing: details | Show results with:details
  64. [64]
    Graphcore: From Unicorn to SoftBank Acquisition – What Happened?
    Aug 9, 2024 · In 2013, the Graphcore project began in stealth mode, with an official launch in 2016 in Bristol, UK. This place is sometimes called a “Deep ...
  65. [65]
    SoftBank's Graphcore Plans $1.3 Billion Chip Investment in India
    Oct 9, 2025 · Graphcore will invest £1 billion to build out infrastructure in India over the next decade, including a new research hub. The research facility ...
  66. [66]
    Graphcore is hiring again after its SoftBank acquisition - TechRadar
    Nov 18, 2024 · Graphcore has announced plans for a fresh hiring drive just months after its landmark acquisition by SoftBank.<|control11|><|separator|>
  67. [67]
    Softbank-owned chipmaker Graphcore will invest $1.3 billion in ...
    Oct 9, 2025 · Graphcore, a UK chip firm, will invest $1.3 billion in India over the next decade. The company is opening its first office in Bengaluru, hiring ...
  68. [68]
    Graphcore - a SoftBank Group company - to invest £1bn in India ...
    Oct 9, 2025 · Since acquiring Graphcore in 2024, SoftBank Group has announced a series of AI compute infrastructure initiatives, including the $500bn ...<|separator|>
  69. [69]
    SoftBank-owned chipmaker Graphcore plans $1.3b India expansion
    Oct 9, 2025 · Graphcore's plans reportedly include hiring up to 500 people in India over five years. SoftBank Group Corp. acquired Graphcore in 2024 after the ...
  70. [70]
    [News] Softbank Acquired Graphcore, Hinting at a Battle between ...
    Jul 16, 2024 · Although both IPU and GPU can be used in the AI domain, they differ a lot in several aspects, such as computational architecture and memory ...
  71. [71]
    Graphcore sets new AI Performance Standards with MK2 IPU Systems
    Dec 9, 2020 · New performance results for Graphcore MK2 IPU systems with orders of magnitude improvements for some models, outperforming Nvidia A100 DGX ...
  72. [72]
    Comparison of Graphcore IPUs and Nvidia GPUsfor cosmology ...
    Jun 4, 2021 · It presents the benchmark between a Nvidia V100 GPU and a Graphcore MK1 (GC2) IPU on three cosmological use cases: a classical deep neural ...<|separator|>
  73. [73]
    Graphcore Challenges Nvidia With In-House Benchmarks - EE Times
    Dec 20, 2020 · The majority of Graphcore's benchmarks compare the IPU-M2000, a system with four IPU-MK2 chips, against a single Nvidia A100 GPU. The company ...
  74. [74]
    AI Chip Statistics 2025: Funding, Startups & Industry Giants
    Oct 7, 2025 · NVIDIA maintains its industry lead with an estimated 86% share in the AI GPU segment for 2025. Edge AI chips are forecast to reach $13.5 billion ...
  75. [75]
    Nvidia's Dominance in the AI Chip Market - MarketsandMarkets
    Sep 16, 2024 · NVIDIA holds a dominant position in the AI chip market, thanks to its powerful graphics processing units (GPUs), particularly the A100 and H100 models.
  76. [76]
    Top 20+ AI Chip Makers: NVIDIA & Its Competitors
    AI chips that enable parallel computing capabilities are increasingly in demand. This article provides information on 10 popular AI chip makers.
  77. [77]
    [D] Graphcore claims 11x increase in price-performance compared ...
    Aug 13, 2020 · The GC200 is the successor to their initial IPU chip. This new chip has 900MB of on-chip memory and 1,472 cores vs 300MB and 1,216 cores on ...
  78. [78]
    NVIDIA's Dominance: Why Every AI Company is Rushing to Buy
    Jul 4, 2025 · Why NVIDIA Chips Are in High Demand: 1. Best-in-Class Performance for AI Training; 2. CUDA Ecosystem: The Real Moat; 3. Data Center Demand Is ...
  79. [79]
    Graphcore IPU vs. Nvidia GPUs: How They're Different
    Dec 22, 2020 · Graphcore is claiming significant performance advantages for its second-generation IPU versus state-of-the-art Nvidia GPUs.
  80. [80]
    Performance at Scale: Graphcore's Latest MLPerf Training Results
    Dec 1, 2021 · In fact, in this MLPerf 1.1 training round, Graphcore delivered the fastest single-server time-to-train result for BERT at 10.6 minutes. Fastest ...
  81. [81]
    Graphcore Celebrates a Stunning Loss at MLPerf Training v1.0
    Jul 2, 2021 · “Graphcore highlights its 32.12 minutes to complete the benchmark while the NVIDIA DGX A100 takes 28.77 minutes which is the result 1.0-1059.”
  82. [82]
    Nvidia Dominates Latest MLPerf Results but Competitors ... - HPCwire
    Dec 1, 2021 · MLCommons today released its fifth round of MLPerf training benchmark results with Nvidia GPUs again dominating.
  83. [83]
    Performance Evaluation of Parallel Graphs Algorithms Utilizing ...
    The Graphcore IPU outperforms other devices for the studied heterogeneous algorithms and, currently, provides best-in-class execution time results.
  84. [84]
    Graphcore. A Cautionary Tale For Would-be NVIDIA Challengers
    Jun 11, 2023 · At first blush, the results look reasonable. For example, on ResNet-50, Graphcore clocks up 37.12 mins compared to NVIDIA's 28.77 mins, but at ...
  85. [85]
    A Look At Graphcore's AI Software - Forbes
    May 26, 2020 · The Graph Compiler lies at the heart of run-time deployment and optimization and has been in development for over 5 years. Graphcore designed ...
  86. [86]
    Graphcore Was the UK's AI Champion—Now It's Scrambling to Survive
    Oct 6, 2023 · Zavrel says that Graphcore may have struggled because its technology is significantly different from the Nvidia GPUs that users are familiar ...
  87. [87]
    Graphcore is struggling — what's gone wrong for the once 'NVIDIA ...
    Oct 6, 2023 · Former employees tell Sifted that the company's woes have resulted from a mix of bad luck and bad strategy, including targeting the wrong customers.
  88. [88]
    Why Etched (probably) won't beat Nvidia - zach's tech blog
    Apr 5, 2024 · Sequoia wrote off its stake in Graphcore after the chip startup lost a major deal with Microsoft. Now, Graphcore is reportedly up for sale.
  89. [89]
    Nvidia Is Soaring. AI Chip Rival Graphcore Can Barely Get Off the ...
    May 31, 2023 · Graphcore's own investors are reportedly skeptical. This year, Sequoia Capital wrote down its internal valuation of the startup to zero, ...
  90. [90]
    Inside HyperAI's lawsuit against Graphcore - Sifted
    Mar 25, 2024 · HyperAI has sued UK chipmaking startup Graphcore over breach of contract, accusing the British startup of walking away from a cloud partnership deal.
  91. [91]
    HyperAI CEO accuses Graphcore of “absurd” behavior during ...
    Mar 11, 2024 · Cloud company HyperAI has accused Graphcore of unfair play, claiming that the British chip designer has reneged on partnership deals and denied selling HyperAI ...
  92. [92]
    My personal story on how HyperAI got cheated. - Graphcore - LinkedIn
    Mar 1, 2024 · HyperAI's decision to choose Graphcore's Bow POD16 over Nvidia was based on innovation promise and partnership. This optimism faded when Gautier ...
  93. [93]
    After Arm adventure, SoftBank acquires British AI firm Graphcore
    Jul 12, 2024 · In March, problems for Graphcore intensified. Dutch cloud provider HyperAI filed a lawsuit for a breach of contract. In 2021, its CEO Andrew Foe ...
  94. [94]
    NLP Cloud adding IPU-powered models to its AI-as-a-Service platform
    Jun 9, 2023 · The current implementation makes use of IPUs hosted by Gcore, a Graphcore cloud compute partner. NLP Cloud joins a growing list of AIaaS ...
  95. [95]
    Accelerating Computer Vision: scaling EfficientNet to large IPU-Pod ...
    Dec 16, 2021 · Graphcore Research demonstrate faster image processing at scale, accelerating EfficientNet on hyperscale IPU-POD systems in under 2 hours.
  96. [96]
    Accelerate AI/ML with the new IPU-based cloud platform by G-Core ...
    The Graphcore IPU was tested not just with MLPerf applications but also run through a range of Natural Language Processing, higher-accuracy Computer Vision and Graph ...
  97. [97]
    Studying the Potential of Graphcore® IPUs for Applications in ...
    Mar 17, 2021 · This paper presents the first study of Graphcore's Intelligence Processing Unit (IPU) in the context of particle physics applications.
  98. [98]
    and why Graphcore IPUs are great at GNNs
    Jan 31, 2023 · Using Graphcore IPUs, the NUS team achieved speedups of between 3-4X compared to state-of-the-art GPUs – a level of performance that Professor ...
  99. [99]
    IPU delivers 10X acceleration for DNA and protein sequence ...
    Nov 7, 2023 · For the computationally challenging task of DNA and protein sequence alignment, the IPU delivered a 10X speedup over leading GPUs and 4.65X acceleration ...
  100. [100]
    LabGenius speeds up AI-based drug discovery with Graphcore IPUs
    Apr 20, 2022 · LabGenius researchers are using BERT, running on Graphcore IPUs, to identify new treatments for conditions like cancer and inflammatory ...
  101. [101]
    Graphcore IPU 'faster for AI-based drug discovery' than GPUs
    Apr 21, 2022 · "With Graphcore, we reduced the turnaround time to about two weeks, so we can experiment much more rapidly, and we can see the results quicker," ...
  102. [102]
    Processor made for AI speeds up genome assembly
    Nov 1, 2023 · A hardware accelerator initially developed for artificial intelligence operations successfully speeds up the alignment of protein and DNA molecules.
  103. [103]
    AI for Simulation: how Graphcore is helping transform traditional HPC
    Mar 9, 2022 · AI, using surrogate models, speeds up HPC by replacing bottlenecks with machine learning, and Graphcore's IPU is designed for this, enhancing ...
  104. [104]
    (PDF) Studying the Potential of Graphcore IPUs for Applications in ...
    This paper presents the first study of Graphcore's Intelligence Processing Unit (IPU) in the context of particle physics applications.
  105. [105]
    [PDF] On Performance Analysis of Graphcore IPUs: Analyzing Squared ...
    Sep 30, 2023 · The Graphcore Intelligence Processing Unit (IPU) is a highly parallel processor and has been especially designed for a large amount of ...
  106. [106]
    Graphcore Joins the PyTorch Foundation as a General Member
    Sep 6, 2023 · Graphcore has contributed to the PyTorch ecosystem by developing integrations to run on their IPU hardware. These integrations enable ...