Supermicro
Super Micro Computer, Inc. (Supermicro) is a multinational information technology company specializing in the design, development, and manufacture of high-performance, energy-efficient servers, storage systems, and networking solutions tailored for enterprise data centers, cloud computing, artificial intelligence infrastructure, high-performance computing, and edge applications.[1] Founded in 1993 by Charles Liang in San Jose, California—its current global headquarters—the company pioneered modular Server Building Block Architecture to enable customizable, scalable IT systems based on open standards.[1] With manufacturing facilities spanning over 6 million square feet across Silicon Valley, Taiwan, and the Netherlands, Supermicro operates in more than 100 countries and achieved $22 billion in revenue for fiscal year 2025.[1] Ranked as the third-largest server supplier worldwide by IDC, it has become a key enabler of the AI boom, deploying over 100,000 liquid-cooled GPUs to support large-scale AI factories and training workloads.[1] Supermicro's growth has been propelled by demand for its application-optimized systems integrating advanced processors from partners like AMD and NVIDIA, positioning it as a critical supplier for hyperscale data centers and on-premises AI deployments.[2] The company joined the Fortune 500 and S&P 500 indices amid this expansion, reflecting its transition from a niche innovator to a major player in green computing and 5G/edge infrastructure.[1] In 2024, it encountered challenges including the resignation of auditor Ernst & Young over internal control concerns and a delayed annual filing prompted by short-seller allegations of accounting irregularities, though Supermicro asserted no prior financial restatements were required and completed the filing with a new auditor.[3][4][5]
History
Founding and Early Development
Super Micro Computer, Inc., commonly known as Supermicro, was founded in September 1993 by Charles Liang in San Jose, California.[6] Liang, a Taiwanese-born electrical engineer who immigrated to the United States, established the company to develop high-performance server motherboards and systems, capitalizing on the emerging demand for scalable computing solutions in Silicon Valley during the mid-1990s technology expansion.[7][8] The initial operation consisted of a small team of five employees, including Liang's wife, Chiu-Chu Liu (also known as Sara Liu), who served as the company's treasurer.[9]
In its early years, Supermicro prioritized customization and reliability in server designs, focusing on energy-efficient architectures and modular components to differentiate from larger competitors like Dell and Hewlett-Packard.[10] The company achieved profitability from its first year of operations, a rarity for startups in the hardware sector, by leveraging Liang's expertise in rapid prototyping and supply chain optimization.[11] Co-founder Yih-Shyan Liaw joined as Senior Vice President of Business Management, contributing to early engineering and operational strategies.[12] By the late 1990s, Supermicro had expanded its product offerings to include complete server systems tailored for enterprise and data center applications, establishing a reputation for quick adaptation to processor advancements from Intel and AMD.[1]
A key milestone in early development occurred in 1998 with the opening of a manufacturing facility in the Netherlands, aimed at reducing lead times and serving the European market more effectively amid growing global demand for high-density computing.[13] This move reflected Supermicro's strategy of vertical integration, controlling design, assembly, and testing to maintain quality and cost advantages, while navigating the dot-com boom's volatility without over-reliance on speculative financing.[14] Throughout this period, the company avoided significant debt, sustaining growth through reinvested earnings and partnerships with semiconductor suppliers.[11]
Expansion into Global Markets
Super Micro Computer established its first international manufacturing presence in Taiwan in 1996, leveraging the region's supply chain efficiencies for component sourcing and assembly.[15] This move supported early growth in Asia-Pacific markets, where the company expanded operations to include the Supermicro Taiwan Science and Technology Park.[1] Following its 2007 initial public offering, Supermicro accelerated global distribution, establishing subsidiaries and logistics centers to penetrate European and other regional markets.[16] In Europe, Supermicro opened a subsidiary in the Netherlands to handle system integration, assembly, and distribution, enhancing service responsiveness amid rising demand for high-performance servers.[1]
The company now operates manufacturing facilities across the United States (Silicon Valley), Taiwan, and the Netherlands, with total global campus space exceeding 3 million square feet following expansions adding over 1 million square feet in recent years.[1] These sites enable localized production, reducing supply chain lead times for enterprise and data center customers. Over 4,000 employees support operations spanning more than 100 countries, with particular emphasis on growth in the EMEA and APAC regions.[1]
Recent expansions have been driven by surging demand for AI infrastructure. In 2024, Supermicro announced plans for three additional manufacturing facilities in Silicon Valley and internationally to scale production of liquid-cooled rack-scale solutions.[17] By mid-2025, the company had committed to broadening European capacity beyond its existing Dutch operations, including enhanced local assembly, service, and logistics to meet AI server needs, amid reports of rapidly growing continental demand.[18] This includes support for NVIDIA Blackwell-based systems tailored for European AI factories.[19]
Geographically disaggregated revenue reflects this international focus: for the fiscal year ended June 30, 2025, net sales totaled approximately $22 billion, with the United States contributing $13.05 billion, Asia $5.49 billion, Europe $2.73 billion, and other regions $698 million.[20] While the U.S. remains the dominant market, international segments have shown accelerated growth, underscoring the strategic value of global facilities in diversifying revenue and mitigating regional supply risks.[21]
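As a rough illustration of that geographic mix, the short sketch below derives each region's share of fiscal 2025 net sales from the figures cited above; the percentages are computed here for illustration and are not figures reported by the company.

```python
# Regional net sales for fiscal year 2025 (ended June 30, 2025), in billions of
# US dollars, as cited above. Shares are derived here purely for illustration.
regional_sales = {
    "United States": 13.05,
    "Asia": 5.49,
    "Europe": 2.73,
    "Other": 0.698,
}

total = sum(regional_sales.values())  # ~21.97, consistent with the ~$22B total
for region, sales in regional_sales.items():
    print(f"{region}: ${sales:.2f}B ({sales / total:.1%} of net sales)")
```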
Adaptation to AI and Cloud Demands
Supermicro responded to escalating demand for AI infrastructure by prioritizing the development of GPU-dense server platforms optimized for machine learning training, inference, and high-performance computing workloads. These systems, often incorporating multiple NVIDIA Blackwell GPUs per node, enable efficient scaling for large AI clusters, as demonstrated in deployments by partners such as Lambda Labs for production-ready AI factories.[22][23] The company's modular building-block approach allows rapid customization and deployment of rack-scale solutions, supporting hyperscalers and enterprises transitioning to AI-driven cloud environments.[24]
To address thermal challenges in power-intensive AI deployments, Supermicro introduced direct liquid cooling technologies integrated into its server designs, capable of handling densities beyond traditional air-cooling limits while reducing data center energy consumption. This adaptation aligns with the requirements of next-generation AI hardware, where GPU power draws can surpass 1,000 watts per unit, necessitating advanced thermal management for sustained performance.[25] Collaborations with cooling specialists and chipmakers such as NVIDIA and AMD further refined these solutions, enabling Supermicro to deliver pre-integrated systems for edge AI and cloud repatriation scenarios.[26][27]
Strategic partnerships have been central to this evolution, including close integration with NVIDIA for validated AI reference architectures and with AMD for EPYC processor-based Instinct GPU servers, broadening compatibility across AI frameworks.[28] These efforts contributed to substantial revenue expansion, with fiscal year 2025 net income reaching $1.0 billion amid AI-fueled demand, though quarterly guidance adjustments have reflected supply chain variability in component sourcing.[29][30] By mid-2025, Supermicro's global manufacturing expansions, including new facilities to support AI system assembly, positioned it to capture a larger share of the projected multi-trillion-dollar AI infrastructure market.[30]
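The scale of the thermal problem can be illustrated with a back-of-envelope estimate; the per-GPU wattage, node overhead, and rack density used below are assumed example values rather than Supermicro specifications.

```python
# Back-of-envelope rack power estimate showing why dense GPU nodes exceed
# typical air-cooling limits. All values below are assumed examples, not
# Supermicro specifications.
GPU_POWER_W = 1000      # per-GPU draw; next-generation accelerators can exceed this
GPUS_PER_NODE = 8       # assumed HGX-class node
NODE_OVERHEAD_W = 3000  # assumed CPUs, memory, NICs, fans, and conversion losses
NODES_PER_RACK = 8      # assumed dense rack configuration

node_kw = (GPU_POWER_W * GPUS_PER_NODE + NODE_OVERHEAD_W) / 1000
rack_kw = node_kw * NODES_PER_RACK
print(f"~{node_kw:.0f} kW per node, ~{rack_kw:.0f} kW per rack")
# ~11 kW per node and ~88 kW per rack, far above the 10-20 kW per rack that
# conventional air cooling handles comfortably -- hence direct liquid cooling.
```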
Products and Technologies
Core Hardware Offerings
Supermicro's core hardware offerings center on modular server building blocks designed for data centers, enterprise, and high-performance computing environments. These include rackmount servers supporting single- or dual-socket configurations with Intel Xeon or AMD EPYC processors, available in form factors from 1U to 4U, optimized for density and scalability.[31][32] The company's Data Center Building Block Solutions (DCBBS) enable customizable systems for cloud, AI, and edge applications, featuring innovations like the Twin family for dual-node efficiency in limited space.[33]
GPU-accelerated servers form a key segment, engineered for AI, machine learning, and HPC workloads, supporting NVIDIA GPUs such as the Blackwell HGX B200 and GB200 series for enhanced performance in liquid-cooled racks.[23][34] Storage solutions encompass JBOD, all-flash NVMe systems, and hybrid configurations with up to 90 bays in 4U chassis, targeting enterprise data management and big data analytics.[35]
Motherboards and chassis provide foundational components, with high-end boards supporting up to 144-core CPUs, DDR5 memory up to 1TB per socket, and NVMe drives for workstations and servers.[36] Networking hardware, including Super I/O Modules (SIOM), offers flexible Ethernet options from 1Gb/s to 100Gb/s, reducing I/O costs by up to 50% in blade and rack systems.[37] These offerings emphasize energy efficiency and rapid deployment, with quick-ship options for mainstream servers.[38]
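For a sense of the storage density such chassis imply, the sketch below computes raw capacity for a 90-bay 4U system at a few assumed drive sizes; the drive capacities are illustrative and do not describe a specific Supermicro configuration.

```python
# Raw-capacity illustration for a top-loading 90-bay 4U storage chassis such as
# the one described above. The per-drive capacities are assumed for the example.
BAYS = 90
for drive_tb in (18, 22, 30):            # assumed HDD sizes in terabytes
    raw_pb = BAYS * drive_tb / 1000      # raw (unformatted, no-redundancy) petabytes
    print(f"{BAYS} x {drive_tb} TB -> {raw_pb:.2f} PB raw in 4U")
```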
Specialized Solutions for High-Performance Computing
Supermicro provides a range of high-performance computing (HPC) solutions, including optimized servers, storage systems, and networking infrastructure designed for demanding workloads such as scientific simulations, AI/ML training, and financial modeling.[39] These solutions emphasize rack-scale integration, supporting clusters from hundreds to thousands of CPU cores with plug-and-play deployment to enhance scalability and efficiency.[39] Key server architectures include the SuperBlade® multi-node blade systems, FlexTwin™ dual-processor platforms, and GPU-accelerated SuperServers, which integrate processors from Intel (e.g., 4th Gen Xeon Scalable) and AMD (e.g., EPYC series) alongside accelerators like NVIDIA H100 GPUs with NVLink interconnects or AMD Radeon Instinct GPUs.[39] For instance, SuperBlade systems power large-scale deployments such as Osaka University's SQUID supercomputer, which comprises 27 racks with 1,520 blades and over 120,000 cores for advanced research.[39]
GPU servers form a cornerstone of Supermicro's HPC portfolio, supporting configurations with up to eight NVIDIA GPUs per node in 4U or 8U chassis optimized for parallel processing in HPC and AI applications.[23] Recent models incorporate NVIDIA B300 GPUs with up to 21 TB of HBM3e memory and 17 TB of LPDDR5X system memory, enabling high-throughput computation for deep learning and scientific visualization.[23] Networking options leverage high-speed fabrics like Intel Omni-Path Architecture to minimize latency in distributed environments, while storage solutions provide petabyte-scale capacity with redundancy for data-intensive HPC tasks.[39] Examples include Lawrence Livermore National Laboratory's Corona system, which utilizes over 1,000 AMD Radeon Instinct GPUs to achieve 11 petaflops of performance for simulations.[39]
Innovations in cooling and efficiency are integral to Supermicro's HPC offerings, particularly rack-scale direct liquid cooling (DLC) solutions that reduce power consumption by up to 40% compared to air cooling, addressing thermal challenges in dense GPU deployments.[40] The FlexTwin architecture, for example, features liquid-cooled multi-node designs purpose-built for HPC at scale, supporting workloads in scientific research and complex modeling.[41] In October 2024, Supermicro introduced a complete liquid cooling ecosystem, including coolant distribution units (CDUs), cold plates, and manifolds, tailored for AI and HPC with integration for NVIDIA HGX B200 8-GPU systems.[42] By May 2025, the DLC-2 platform extended this with in-rack CDUs and modular blocks for rapid deployment in energy-efficient data centers.[43] Complementary software for life-cycle monitoring and stress testing ensures reliability, with custom services via Supermicro's Datacenter Professional Services for optimized cluster builds.[39]
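A rough reading of the Corona figures above: dividing the quoted ~11 petaflops by the quoted count of roughly 1,000 GPUs gives an average per-accelerator throughput, as sketched below; this is a crude average that ignores CPU contribution and scaling losses.

```python
# Crude per-accelerator average implied by the Corona figures above: ~11
# petaflops spread across roughly 1,000 AMD Radeon Instinct GPUs. This ignores
# CPU contribution and scaling efficiency; it is an illustration, not a benchmark.
system_petaflops = 11
gpu_count = 1000  # lower bound of the "over 1,000" figure quoted above

per_gpu_tflops = system_petaflops * 1000 / gpu_count
print(f"~{per_gpu_tflops:.0f} TFLOPS per GPU on average")  # ~11 TFLOPS
```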
Innovations in Efficiency and Scalability
Supermicro has developed modular server architectures, such as the MicroBlade platform, which enable high-density computing with significant reductions in cabling, space, and energy use. The 6U MicroBlade system, introduced in October 2025 and powered by AMD EPYC 4005 series processors, achieves up to 95% cable reduction, 70% space savings, and 30% energy savings compared to traditional rack servers, facilitating scalable deployments in data centers.[44] This building-block design supports rapid scaling for cloud service providers by integrating centralized management and built-in networking, minimizing deployment complexity while optimizing resource utilization.[45]
In parallel, Supermicro's direct liquid cooling (DLC) technologies address thermal efficiency challenges in high-performance computing (HPC) and AI workloads. The DLC-2 solution, launched on May 14, 2025, reduces data center power consumption by up to 40% relative to air-cooled systems, alongside cuts in water usage, noise, and space, lowering total cost of ownership (TCO) by up to 20%.[46] As of October 2024, the company's rack-scale liquid cooling production capacity could support over 100,000 GPUs per quarter, with implementations that integrate modular cooling towers using energy-efficient EC fan technology for quick deployment.[47] By disaggregating compute, storage, and networking components, Supermicro's green computing designs further enhance energy efficiency, drawing on more than two decades of server optimization to minimize environmental impact without compromising performance.[48]
For scalability in AI and HPC, Supermicro's SuperCluster architecture supports expansion to thousands of GPU nodes, leveraging NVIDIA accelerated computing for large language model (LLM) training and generative AI inference.[49] This rack-scale approach, including AI-optimized storage solutions introduced in January 2024, accelerates data pipelines by integrating high-throughput storage with compute resources, reducing implementation risks and enabling faster model training.[50] Complementary platforms like SuperBlade provide density-optimized multi-node systems for AI analytics and HPC, with redesigned X14 series servers introduced in September 2024 offering workload-specific form factors for edge-to-cloud scaling.[51] These innovations prioritize open, standards-based interoperability to allow seamless horizontal and vertical scaling, addressing the exponential demands of AI-driven data centers.[39]
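The practical impact of the quoted "up to 40%" power reduction can be illustrated as follows; the baseline rack power and electricity price are assumptions chosen only for the example.

```python
# Illustrative annual energy savings from the "up to 40% lower power" DLC claim
# quoted above. The baseline rack power and electricity price are assumptions.
baseline_rack_kw = 80      # assumed air-cooled AI rack, including cooling overhead
power_reduction = 0.40     # upper-bound reduction cited for DLC-2
price_per_kwh = 0.10       # assumed electricity cost in USD

saved_kw = baseline_rack_kw * power_reduction
annual_usd = saved_kw * 24 * 365 * price_per_kwh
print(f"~{saved_kw:.0f} kW saved per rack, ~${annual_usd:,.0f} per year at "
      f"${price_per_kwh}/kWh (upper bound, per the vendor's 40% figure)")
```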
Operations and Supply Chain
Manufacturing Facilities and Processes
Supermicro operates manufacturing facilities in the United States and Taiwan, emphasizing vertical integration to enable rapid customization and testing of server systems. The company shifted production away from China in 2019 amid supply chain risks, expanding its San Jose, California, and Taiwanese sites to bolster capacity for AI and high-performance computing demands.[52]
In Silicon Valley, Supermicro maintains multiple campuses in San Jose, its headquarters since the company's founding in 1993. A third campus expansion, announced on February 28, 2025, is projected to reach approximately three million square feet to support U.S.-based manufacturing of complete rack-scale solutions. These facilities incorporate 20 megawatts of power capacity, enabling production of more than 1,500 direct liquid-cooled racks per month; under the Made-in-the-USA program, systems undergo strict inspections, including X-ray scanning, by vetted domestic personnel. In June 2024, plans for three additional Silicon Valley facilities were announced to accelerate liquid-cooled AI deployments.[53][54][55][17]
Taiwan hosts key production sites, including the Supermicro Science & Technology Park at No. 1899, Xingfeng Road, in Taoyuan City's Bade District, developed through successive expansions since 2012. Ground was broken on an 800,000-square-foot facility there in 2019 to enhance R&D and manufacturing efficiency. In December 2024, Supermicro partnered with Taiwan's Guo Rui on a green computing center powered solely by renewables, aligning with sustainable data center builds in New Taipei City. These sites support modular assembly for edge-to-cloud solutions.[56][57][58]
Manufacturing processes leverage in-house vertical integration, from component sourcing to full rack-scale assembly and validation, facilitating first-to-market innovations in air- and liquid-cooled architectures. Global capacity reaches 5,000 fully tested air-cooled or 2,000 liquid-cooled racks per month, with direct-to-chip cooling integrated at the rack level for AI workloads. This approach minimizes lead times through building-block modularity, allowing tailored HPC systems while maintaining traceability and quality control.[59][60][30]
Supply Chain Management and Sourcing
Super Micro Computer, Inc. (Supermicro) primarily sources key components such as motherboards from suppliers in Taiwan and China, outsourcing production to firms including Wistron, Pegatron, and Universal Scientific Industrial.[61][52] In 2019, amid U.S.-China trade tensions, the company relocated certain production activities away from China to mitigate tariffs and geopolitical risks.[52]
To manage supply chain vulnerabilities, Supermicro operates manufacturing and assembly facilities in the United States (primarily Silicon Valley), Taiwan, and the Netherlands, enabling localized production and reduced exposure to regional disruptions.[62] In February 2025, the company expanded its U.S. footprint with a third Silicon Valley campus dedicated to advanced IT solutions, alongside announcements of additional facilities to meet surging demand for AI-optimized servers.[60] These expansions aim to enhance scalability and resilience, particularly for rack-scale liquid-cooled systems.[17]
Supermicro's sourcing strategy includes supplier audits, conducted on a random basis, to enforce compliance with anti-trafficking and anti-slavery standards and to verify adherence to company policies.[63] The firm also integrates security protocols across the supply chain, from component sourcing through end-of-life management, prioritizing hardware integrity amid global semiconductor dependencies.[64] This approach supports rapid integration of processors and accelerators from partners such as Nvidia, with reports that the company shipped over 100,000 AI GPUs in 2024 while leveraging diversified logistics.[65]
Corporate Governance
Leadership and Board Composition
Charles Liang has served as founder, president, chief executive officer, and chairman of the board of Super Micro Computer, Inc. (Supermicro) since its inception on September 1, 1993.[66] Before founding Supermicro, Liang held engineering roles at Micro Center Computer Inc., Chips & Technologies, Inc., and Suntek Information International Group; he holds an M.S. in electrical engineering from the University of Texas at Arlington.[66]
The executive leadership team features several long-tenured members alongside operational specialists. David E. Weigand serves as senior vice president, chief financial officer, company secretary, and chief compliance officer.[12] Chiu-Chu Sara Liu, Liang's wife and co-founder, holds the position of senior vice president.[67] Yih-Shyan Liaw (also known as Wally Liaw), a founding member, is senior vice president of business development and a director.[68] Additional key executives include George Kao as senior vice president of operations and Don Clegg as senior vice president of worldwide sales. In fiscal year 2025, Liang's total compensation was $442,000, comprising primarily non-salary elements.[69]
Supermicro's board of directors emphasizes insider continuity with limited independent oversight. As of mid-2025, members included Chairman Charles Liang, Sara Liu, Yih-Shyan Liaw, Robert Blair, and Judy L. Lin.[70] Scott Angel, an independent director with more than 37 years in audit and assurance at Deloitte, joined on March 31, 2025.[71] The board's committee structure assigns Tally Liu, Robert Blair, and Scott Angel to the audit committee, reflecting a mix of internal, family-linked directors and external financial expertise.[72] Neither Charles Liang nor Sara Liu serves on the boards of affiliated entities such as Ablecom Technology, Inc., despite familial ties such as Steve Liang (Charles's brother) leading Ablecom.[73] This composition has drawn scrutiny in regulatory filings over potential conflicts of interest, though the company maintains that operations are kept separate.[67]
Financial Performance and Metrics
Super Micro Computer reported revenue of $21.97 billion for fiscal year 2025, ended June 30, 2025, marking a 46.6% increase from $14.99 billion in fiscal year 2024, fueled by surging demand for AI-optimized servers and rack-scale systems.[74][29] This growth followed a 110.4% year-over-year revenue surge in fiscal year 2024, reflecting the company's expansion amid hyperscale data center builds.[74]
Net income for fiscal year 2025 totaled $1.05 billion, down 9.0% from $1.15 billion in fiscal year 2024, with diluted earnings per share at $1.68 compared to $1.92.[29][75] Profit margins compressed to 4.8% from 7.7% year-over-year, attributed to higher operating expenses, including rising research and development costs to support product innovation, and gross margins averaging around 11% amid component cost fluctuations and scaling production.[75][76] Trailing twelve-month return on assets stood at 6.57% as of October 2025, indicating moderate efficiency in converting assets into earnings.[77]
In the fourth quarter of fiscal year 2025, revenue reached $5.76 billion, up from $5.31 billion in the prior quarter, with net income of $195 million.[78] For the first quarter of fiscal year 2026, ended September 30, 2025, preliminary revenue came in at $5.0 billion, falling short of the company's prior guidance of $6.0–$7.0 billion due to delays in customer project deliveries and supply chain constraints.[79][80] Despite the shortfall, management reiterated full-year fiscal 2026 revenue guidance of at least $33 billion, citing a $12 billion design-win pipeline and sustained AI infrastructure demand.[79][81]
| Fiscal Year Ending June 30 | Revenue ($B) | YoY Growth (%) | Net Income ($B) | Profit Margin (%) |
|---|---|---|---|---|
| 2024 | 14.99 | 110.4 | 1.15 | 7.7 |
| 2025 | 21.97 | 46.6 | 1.05 | 4.8 |
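The growth and margin columns follow directly from the revenue and net income figures in the table; a minimal check of that arithmetic:

```python
# Recomputing the year-over-year growth and profit margins from the table above.
fy2024 = {"revenue": 14.99, "net_income": 1.15}   # $ billions
fy2025 = {"revenue": 21.97, "net_income": 1.05}

growth = (fy2025["revenue"] - fy2024["revenue"]) / fy2024["revenue"]
margins = {yr: d["net_income"] / d["revenue"] for yr, d in
           {"FY2024": fy2024, "FY2025": fy2025}.items()}

print(f"FY2025 revenue growth: {growth:.1%}")                            # ~46.6%
print(", ".join(f"{yr} margin: {m:.1%}" for yr, m in margins.items()))   # ~7.7%, ~4.8%
```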