TPU is an initialism with several meanings. The most common arise in science and technology; other uses are listed below.
Science and technology
Tensor Processing Unit
The Tensor Processing Unit (TPU) is an application-specific integrated circuit (ASIC) developed by Google and optimized for the tensor operations that form the core computations of neural networks in machine learning workloads.[1][2] Designed to accelerate both the training and inference phases of deep learning models, TPUs emphasize high throughput for matrix multiplications and convolutions while minimizing latency and power consumption relative to general-purpose processors such as CPUs and GPUs.[1] This specialization makes them particularly effective for large-scale AI applications in data centers, where they handle the massive parallel computations required by models with billions of parameters.[3]

Google initiated TPU development in response to the growing demands of neural networks in its services; the first generation was deployed internally in 2015 to power inference in products such as Search, Ads, Photos, and Translate.[3][4] The technology was publicly announced at Google I/O in May 2016, marking a shift toward custom hardware for AI acceleration.[4] Subsequent generations added training capability, with public availability through Google Cloud starting in 2018.[3] Over the years, TPUs have evolved through multiple iterations, each improving performance, memory bandwidth, and scalability via larger pods of interconnected chips.

Key architectural features include a systolic array, which performs matrix multiplications efficiently by streaming data through a grid of processing elements, reducing data-movement overhead and enabling high peak throughput (a toy sketch of this dataflow follows the table below).[5] From the second generation onward, TPUs have paired their compute cores with high-bandwidth memory (HBM) for fast access to weights and activations, and later models add advanced interconnects such as optical circuit switches for pod-scale communication.[2][6] TPUs are tightly integrated with the TensorFlow framework, which compiles models into TPU-optimized executables via XLA (Accelerated Linear Algebra), supporting both training and inference workflows.[1]

The evolution of TPU versions reflects Google's focus on scaling AI capabilities; representative specifications are summarized below:
| Version | Release year | Key features and performance |
| --- | --- | --- |
| v1 | 2015 | 8-bit integer operations for inference; 92 TOPS peak; 40 TOPS/W efficiency; 24 MiB on-chip unified buffer (with off-chip DDR3 rather than HBM); 256×256 systolic array (65,536 ALUs).[4][5] |
| v2 | 2017 | Added bfloat16 floating-point support for training; pods of up to 512 chips with custom high-speed interconnects; ~180 TFLOPS in early configurations.[3] |
| v3 | 2018 | Liquid cooling for higher density; 123 TFLOPS (bf16) per chip; 32 GB HBM at 900 GB/s; pods of up to 1,024 chips delivering 126 petaFLOPS. |
| v4 | 2020 | 275 TFLOPS (bf16 or int8) per chip; 32 GB HBM at 1,200 GB/s; optical circuit switches; pods of up to 4,096 chips for 1.1 exaFLOPS.[6] |
| v5e | 2023 | 197 TFLOPS (bf16) per chip; 16 GB HBM at 819 GB/s; optimized for cost-efficient inference and fine-tuning; slices of up to 256 chips.[7] |
| v5p | 2023 | 459 TFLOPS (bf16) per chip; 95 GB HBM3 at 2,765 GB/s; 3D torus interconnect at 4,800 Gbps; pods of up to 8,960 chips for training large models such as PaLM.[8][9] |
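The systolic dataflow described above can be illustrated with a toy simulation. The following sketch (plain Python with NumPy; the names and structure are illustrative, not Google's implementation) models an output-stationary grid of multiply-accumulate cells: operands enter at the array edges with a per-row and per-column skew and shift one cell per cycle, so each matrix element is fetched from memory once and then reused as it streams through the grid.

```python
import numpy as np

def systolic_matmul(A, B):
    """Toy output-stationary systolic array computing C = A @ B.

    Values of A stream left-to-right, values of B top-to-bottom; cell
    (i, j) multiplies whatever pair passes through it each cycle and
    accumulates the product locally, yielding C[i, j].
    """
    n = A.shape[0]
    C = np.zeros((n, n))
    a_reg = np.zeros((n, n))  # horizontal pipeline registers
    b_reg = np.zeros((n, n))  # vertical pipeline registers
    for t in range(3 * n - 2):  # enough cycles to fill and drain the array
        # Each cell hands its operand to its neighbor (right / down).
        a_reg[:, 1:] = a_reg[:, :-1]
        b_reg[1:, :] = b_reg[:-1, :]
        # Inject skewed inputs at the edges: row i / column j lag by i / j cycles.
        for i in range(n):
            k = t - i
            a_reg[i, 0] = A[i, k] if 0 <= k < n else 0.0
        for j in range(n):
            k = t - j
            b_reg[0, j] = B[k, j] if 0 <= k < n else 0.0
        # Every cell performs one multiply-accumulate per cycle.
        C += a_reg * b_reg
    return C

A, B = np.random.rand(4, 4), np.random.rand(4, 4)
assert np.allclose(systolic_matmul(A, B), A @ B)
```

The input skew guarantees that cell (i, j) sees A[i, k] and B[k, j] on the same cycle (t = i + j + k), which is what lets a hardware array of this shape sustain one multiply-accumulate per cell per clock without reloading operands.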
TPUs are primarily used in data centers to accelerate AI training and inference, enabling faster iteration on models for tasks such as natural language processing and computer vision; v1's 40 TOPS/W figure set an early efficiency benchmark for specialized hardware.[4][1] As of 2025, TPUs are widely deployed via Google Cloud, powering multimodal AI models such as Gemini and supporting generative AI inference at scale through recent advances including the sixth-generation Trillium (2024, more than 4.7× the performance of v5e) and the seventh-generation Ironwood (announced April 2025, 4,614 TFLOPS per chip with 192 GB of HBM for exascale pods).[3][10] Compared with Nvidia GPUs, TPUs offer superior speed for tensor-specific operations in optimized workloads but are less versatile for non-AI tasks.[1] This positions TPUs as a key element in the broader trend toward domain-specific AI accelerators.[3]
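As noted above, models reach the TPU through the XLA compiler. A minimal sketch of that workflow in JAX (standard jax APIs only; on a Cloud TPU VM, jax.devices() would report TPU cores, while the identical script also runs on CPU or GPU):

```python
import jax
import jax.numpy as jnp

@jax.jit  # staged out once, then compiled by XLA for the attached backend
def dense_layer(x, w, b):
    # Matrix multiply + bias + ReLU: the pattern TPU matrix units accelerate.
    return jax.nn.relu(x @ w + b)

key_x, key_w = jax.random.split(jax.random.PRNGKey(0))
x = jax.random.normal(key_x, (128, 512), dtype=jnp.bfloat16)  # bf16, as on TPU v2+
w = jax.random.normal(key_w, (512, 256), dtype=jnp.bfloat16)
b = jnp.zeros((256,), dtype=jnp.bfloat16)

y = dense_layer(x, w, b)  # first call triggers XLA compilation; later calls reuse it
print(y.shape, y.dtype, jax.devices())
```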
Thermoplastic polyurethane
Thermoplastic polyurethane (TPU) is a versatile thermoplastic elastomer classified as a linear block copolymer of alternating hard and soft segments. The soft segments, typically derived from long-chain polyols such as polytetramethylene ether glycol (PTMEG), provide flexibility and elasticity, while the hard segments, formed from diisocyanates such as methylene diphenyl diisocyanate (MDI) and short-chain diols, contribute strength and rigidity through physical cross-linking via hydrogen bonding.[11][12]

The development of TPU traces back to the 1950s, when chemists at Bayer in Germany explored polyurethane systems for elastomeric applications, building on Otto Bayer's 1937 invention of polyurethane chemistry. Commercial advances followed in the late 1950s: DuPont introduced spandex fibers based on similar polyurethane elastomers, and BF Goodrich (whose TPU business later passed to Lubrizol) launched the first dedicated TPU under the Estane brand in 1959. Widespread industrial production began in the 1960s, providing melt-processable alternatives to vulcanized rubber.[13][14]

TPU exhibits a combination of properties that bridges plastics and rubbers. Its high elasticity, with Shore hardness typically ranging from 60A to 95A, allows tailoring from soft, gel-like grades to rigid materials. It demonstrates excellent abrasion resistance, tensile strength up to about 50 MPa, and good chemical resistance to oils and solvents, along with biocompatibility suitable for skin contact. As a thermoplastic, TPU can be melted and reprocessed at 180–220 °C without chemical degradation, facilitating efficient manufacturing while maintaining durability under dynamic loads.[15][16]

TPU is synthesized by a polyaddition reaction in which diisocyanates react with polyols and chain extenders, in either a one-shot or a prepolymer process, often via reaction injection molding for complex parts or extrusion for films and profiles. The flexibility of the final material is influenced by polyol chain length, with longer chains enhancing softness and elongation, while the hard-to-soft segment ratio allows customization of mechanical performance.[17][15]

Industrial applications leverage TPU's processability and resilience: filaments for 3D printing of flexible prototypes, midsoles in athletic footwear such as Adidas Boost (which uses expanded TPU for energy return), seals and hoses in automotive components, medical tubing for catheters, and protective coatings for wires and surfaces. Global production exceeds 500,000 tons annually as of 2025, driven by demand in consumer and industrial sectors.[18][19]

Environmentally, TPU supports sustainability through recyclability by melt reprocessing, which reduces waste in manufacturing cycles, though the slow degradation of discarded particles contributes to microplastic pollution in ecosystems. Emerging bio-based variants, incorporating renewable polyols from plant sources, offer improved biodegradability while preserving performance, with research focusing on enzymatic breakdown to mitigate long-term environmental impacts.[20][21]
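The hard-to-soft segment ratio mentioned above is commonly summarized as the hard-segment content, conventionally taken as the weight fraction of diisocyanate plus chain extender in the formulation. A minimal sketch of that bookkeeping (the recipe figures are hypothetical, chosen only to illustrate the arithmetic):

```python
def hard_segment_content(m_diisocyanate, m_chain_extender, m_polyol):
    """Hard-segment content as weight percent of the total formulation."""
    hard = m_diisocyanate + m_chain_extender
    return 100.0 * hard / (hard + m_polyol)

# Hypothetical one-shot recipe (grams): MDI + 1,4-butanediol extender + PTMEG polyol.
hs = hard_segment_content(m_diisocyanate=250.0, m_chain_extender=45.0, m_polyol=705.0)
print(f"hard-segment content: {hs:.1f} wt%")  # 29.5 wt% -> a relatively soft grade
```

Raising the diisocyanate and chain-extender share increases the hard-segment content, which is the lever formulators use to move a grade up the Shore hardness range quoted above.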
Transcranial pulsed ultrasound
Transcranial pulsed ultrasound (TPU), also known as low-intensity transcranial ultrasound stimulation (LITUS), is a non-invasive neuromodulation technique that employs low-intensity, low-frequency ultrasound waves to stimulate specific brain regions through the intact skull. Typically operating at sub-megahertz frequencies (commonly 0.5–0.7 MHz) and at spatial-peak temporal-average intensities (I_spta) below 720 mW/cm², TPU delivers pulsed acoustic energy that modulates neural activity with millimeter-scale resolution.[22][23]

The technique emerged in the early 2010s, building on foundational research in focused ultrasound for neuromodulation. Seminal studies, such as a 2010 demonstration of ultrasound-induced action potentials in mouse brain circuits, paved the way, and human applications advanced from 2014 onward, demonstrating focal neuromodulation without tissue damage.[24][25]

The primary mechanism involves acoustic pressure waves that exert mechanical effects on neuronal membranes and ion channels, potentially through acoustic streaming, cavitation, or direct membrane deformation, thereby modulating neuronal excitability without relying on thermal or ionizing effects. Key parameters include the pulse repetition frequency (PRF), typically 1–10 kHz, which allows precise temporal control of stimulation.[26][27]

TPU has shown promise in experimental treatments for neurological disorders, including Parkinson's disease, where clinical trials report improved motor function and dexterity; depression, by targeting mood-regulating circuits; and stroke recovery, by promoting neuroprotection and rehabilitation. As of 2025, ongoing clinical trials demonstrate its ability to alter blood-oxygen-level-dependent (BOLD) signals observable via functional MRI (fMRI), confirming targeted brain modulation.[28][29][30][31]

Regarding safety, TPU operates via non-thermal mechanisms and adheres to FDA diagnostic ultrasound guidelines, keeping intensities well below thermal-damage thresholds, although skull attenuation requires input intensities up to several times higher than in soft tissue to achieve effective brain stimulation. It holds investigational-device status with the FDA for neuromodulation applications. Recent advances include integration with neuroimaging modalities such as MRI for precise targeting and the development of portable TPU devices for broader clinical and research use, along with potential synergy with AI-driven imaging analysis to optimize stimulation parameters.[32][33][34]
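The pulsing parameters above combine through the duty cycle (tone-burst duration × PRF), which links the pulse-average intensity to the temporal-average intensity that safety limits are written against: I_spta = I_sppa × duty cycle. A small illustrative calculation (the protocol values are hypothetical):

```python
def temporal_average_intensity(i_sppa_w_cm2, pulse_duration_s, prf_hz):
    """I_spta from I_sppa via the duty cycle (fraction of time the transducer is on)."""
    duty_cycle = pulse_duration_s * prf_hz
    assert duty_cycle <= 1.0, "pulses would overlap"
    return i_sppa_w_cm2 * duty_cycle

# Hypothetical protocol: 0.5 ms tone bursts at a 1 kHz PRF, I_sppa = 1.2 W/cm^2.
i_spta = temporal_average_intensity(i_sppa_w_cm2=1.2, pulse_duration_s=0.5e-3, prf_hz=1000)
print(f"duty cycle 50%, I_spta = {1000 * i_spta:.0f} mW/cm^2")  # 600 mW/cm^2, under 720
```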
Other uses
Tomsk Polytechnic University
Tomsk Polytechnic University (TPU), a leading technical institution in Russia, traces its origins to 1896, when Emperor Nicholas II established it as the Tomsk Technological Institute of Practical Engineers, the first engineering higher-education facility east of the Urals.[35] Dmitri Mendeleev, the renowned chemist and creator of the periodic table, contributed to its founding despite initial reservations about establishing a separate technical institute in Siberia.[36] The institution underwent several name changes amid Soviet-era reorganizations, becoming the Tomsk Polytechnic Institute in 1944 to reflect its broadened engineering focus and achieving full university status in 1991.[37] In 2009, TPU was awarded National Research University status, elevating it to a federal research university with enhanced funding for innovation and global collaboration.[37]

The main campus is situated in Tomsk, Siberia, and encompasses 32 academic and laboratory buildings, 15 residence halls, and advanced facilities such as a nuclear research reactor for materials testing and a supercomputer cluster supporting computational simulations in engineering and physics.[38][37] As of 2025, TPU enrolls over 11,500 students, including more than 3,000 international scholars from 39 countries, and employs 1,177 academic staff, 256 of whom hold doctoral degrees.[38] This scale underscores its role as a major hub for technical education in Russia's Asian region, with a community broadened by academic exchanges in 18 countries.[38]

TPU's academic structure comprises 10 research and engineering schools organized into 21 divisions, spanning disciplines such as engineering, physics, information technology, and energy systems.[38] It offers 28 bachelor's, 33 master's, 5 specialist, and 19 postgraduate programs, alongside 395 professional development courses, with flagship offerings in petroleum engineering (ranked 30th globally by QS in 2025) and materials science, emphasizing practical applications in resource extraction and advanced composites.[38][39] These programs integrate hands-on laboratory work and industry partnerships to prepare graduates for high-demand sectors.

Among TPU's key achievements are its more than 100,000 alumni, including the Nobel laureate in chemistry Nikolay Semenov and influential engineers who shaped Soviet and Russian industry.[40][35] The university collaborates closely with state entities such as Rosatom on nuclear technology R&D and engages in international exchanges through programs such as Erasmus+, facilitating student and faculty mobility with over 40 overseas partners.[38] Research productivity is robust, with 1,700 annual publications in Web of Science- and Scopus-indexed journals, including 650 in top-quartile outlets, and over 120 patents granted yearly, contributing to innovations in energy and materials; this cumulative intellectual-property output bolsters Russia's technological self-sufficiency.[38][41][37]

In recent years, TPU has prioritized sustainable energy and AI education through specialized initiatives, such as master's programs in environmentally friendly energy-conversion technologies using hydrogen and renewables, and advanced engineering schools training AI specialists for the fuel and energy sector.[42][43] Enrollment has expanded to over 11,500 students since 2020, driven by new digital and green-technology tracks amid Russia's national innovation priorities.[38] TPU also plays a significant role in training engineers for technology industries, including the AI hardware development essential for computational advances.[44]

Since 2022, geopolitical tensions and Western sanctions have strained international funding and collaborations for Russian universities, including TPU, reducing foreign grants and exchange opportunities.[45] Nonetheless, the institution has proved resilient, securing domestic support through federal programs such as Priority-2030, which allocated nearly one billion rubles in 2025 for research and infrastructure and enabled continued growth in strategic areas.[46]
Terminal Processor Unit
The Terminal Processor Unit (TPU) is a specialized computer-based system for processing terminal flight data in air traffic management (ATM) infrastructure, serving as a core component of systems such as the FAA's Terminal Automation System. It handles radar and surveillance inputs to support air traffic controllers in managing aircraft movements near airports.[47]

Historically, TPUs emerged in the 1970s and 1980s with the deployment of early automated radar terminal systems, such as the Automated Radar Terminal System (ARTS) III, first commissioned in 1971 to process radar data for approach and departure control. These systems evolved significantly in the 2000s through the Standard Terminal Automation Replacement System (STARS), initiated in 1996 to replace aging ARTS hardware and software with modern digital processing capabilities.[48][49]

Key functions of the TPU include real-time tracking of aircraft within roughly 40–50 nautical miles of an airport, conflict detection to prevent mid-air collisions, integration of weather data for advisories, and generation of displays for controller workstations. In high-volume environments, it can manage up to 1,350 simultaneous aircraft tracks, enabling efficient sequencing and separation.[47][50][51]

Technically, TPUs employ redundant server architectures for high availability, ensuring continuous operation during failures, and interface with primary radars such as the Airport Surveillance Radar-11 (ASR-11) as well as multilateration systems for precise positioning. The software supports low-latency data processing to meet the real-time demands of terminal operations.[52]

Deployed at over 500 U.S. facilities by 2025, including 145 Terminal Radar Approach Control (TRACON) centers and 432 air traffic control towers, TPUs enhance aviation safety through automated alerts for potential conflicts and terrain avoidance. International counterparts appear in EUROCONTROL's ATM systems, such as the ARTAS tracking platform, which performs similar terminal data-processing functions.[47]

Recent upgrades focus on integrating Automatic Dependent Surveillance-Broadcast (ADS-B) for next-generation surveillance, improving accuracy and capacity while replacing outdated hardware from 1990s-era ARTS installations. This modernization aligns with broader NextGen initiatives to boost efficiency and reduce delays.[53][54]
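The conflict-detection function described above classically rests on closest-point-of-approach (CPA) geometry: extrapolate two tracks along their current velocity vectors and raise an alert if their minimum separation falls below a threshold within the look-ahead window. A toy two-dimensional sketch under constant-velocity assumptions (the thresholds and track values are hypothetical; operational systems such as STARS use far more elaborate trackers):

```python
import math

def closest_point_of_approach(p1, v1, p2, v2):
    """Time (min) and distance (nmi) of closest approach for two straight-line tracks.

    p* are positions in nautical miles, v* velocities in nmi per minute, as (x, y).
    """
    dx, dy = p2[0] - p1[0], p2[1] - p1[1]    # relative position
    dvx, dvy = v2[0] - v1[0], v2[1] - v1[1]  # relative velocity
    dv2 = dvx * dvx + dvy * dvy
    # Minimize |relative position + t * relative velocity|; clamp to the future.
    t = 0.0 if dv2 == 0.0 else max(0.0, -(dx * dvx + dy * dvy) / dv2)
    return t, math.hypot(dx + dvx * t, dy + dvy * t)

# Two converging tracks; alert if they close within 3 nmi over the next 2 minutes.
t, d = closest_point_of_approach(p1=(0.0, 0.0), v1=(6.0, 0.0),   # ~360 kn eastbound
                                 p2=(10.0, 4.0), v2=(0.0, -2.0)) # ~120 kn southbound
if d < 3.0 and t < 2.0:
    print(f"conflict alert: {d:.2f} nmi separation in {t:.1f} min")
```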