Chip
A chip, also known as an integrated circuit (IC) or microchip, is a compact semiconductor device consisting of interconnected electronic components such as transistors, resistors, and capacitors etched onto a thin wafer of silicon or similar material, enabling the performance of specific electrical functions within a minimal space.[1][2][3] The modern chip traces its origins to the late 1950s, when Jack Kilby at Texas Instruments demonstrated the first prototype in 1958 using germanium, followed by Robert Noyce's silicon-based planar process in 1959 at Fairchild Semiconductor, which facilitated scalable manufacturing and supplanted discrete components in electronics.[4][5] This breakthrough catalyzed the semiconductor industry, powering advancements in computing power through exponential transistor density growth, often quantified by Moore's observation of doubling roughly every two years, driving the proliferation of microprocessors, memory, and system-on-chip designs essential to devices like computers, smartphones, and embedded systems.[4][2] Chips underpin contemporary technology's core capabilities, from data processing in artificial intelligence to signal handling in telecommunications, though fabrication demands ultra-precise lithography and cleanroom environments, with production cycles spanning months and involving multilayer patterning at nanoscale resolutions below 5 nanometers in leading nodes.[2][3] Despite achievements in yield and performance, the field grapples with physical scaling limits, prompting shifts toward three-dimensional stacking, novel materials like gallium nitride, and specialized architectures for energy efficiency.[1][6]
Computing and electronics
Integrated circuits and microchips
An integrated circuit, also known as a microchip or chip, consists of electronic circuits fabricated on a small wafer of semiconductor material, typically silicon, integrating multiple components such as transistors, resistors, and capacitors into a single unit.[2] This monolithic structure enables compact, efficient processing of electrical signals, forming the basis for modern digital and analog electronics.[7] The integrated circuit was independently invented by Jack Kilby at Texas Instruments, who demonstrated the first working prototype on September 12, 1958, using germanium to interconnect components without discrete wires.[8] Robert Noyce at Fairchild Semiconductor followed in 1959, developing a silicon-based planar process that allowed for reliable mass production, with his patent filed on July 30, 1959, and granted on April 25, 1961.[9] These breakthroughs addressed the limitations of discrete transistors, which were bulky and prone to failure in complex assemblies, paving the way for scalable electronic systems.[10] Fabrication of integrated circuits occurs in specialized cleanroom facilities through a sequence of processes starting with silicon wafer preparation, followed by photolithography to pattern circuits, chemical etching to remove material, ion implantation or diffusion for doping semiconductors to create transistors, and metallization to form interconnects.[11] Each step builds layers of circuitry, often hundreds in advanced chips, with yields optimized through precise control of temperature, pressure, and contamination to achieve densities exceeding billions of transistors per chip.[12] Advancements in microchip density have followed Moore's Law, formulated by Gordon Moore in 1965, which originally predicted the number of components per integrated circuit would double annually, revised to every two years as manufacturing scaled.[13] This exponential growth, driven by reductions in transistor feature sizes from micrometers to nanometers via extreme ultraviolet lithography and high-k metal gate technologies, has increased computing power by over a trillion times since the 1960s, enabling devices from mainframes to smartphones.[14] Microchips underpin computing by allowing vast computational capacity in minimal space, reducing costs per transistor from dollars in early designs to fractions of a cent today, and facilitating innovations in processors, memory, and sensors critical to digital infrastructure.[15]
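Moore's Law is a compound-growth observation, so projected transistor counts follow directly from the doubling period. The following Python sketch illustrates the arithmetic; the Intel 4004 baseline figure is a widely cited approximation, and a strict two-year doubling is an idealization rather than a physical law.

```python
def projected_transistor_count(base_count: float, base_year: int,
                               target_year: int, doubling_years: float = 2.0) -> float:
    """Project transistor count assuming a doubling every `doubling_years` years."""
    periods = (target_year - base_year) / doubling_years
    return base_count * 2 ** periods

# Illustrative baseline: the Intel 4004 (1971) integrated roughly 2,300 transistors.
# A strict two-year doubling projects ~77 billion transistors by 2021, the same
# order of magnitude as the largest commercial processors of that year.
print(f"{projected_transistor_count(2300, 1971, 2021):.2e}")  # ~7.72e+10
```

Food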
Potato chips and similar snacks
Potato chips, also known as crisps in some regions, consist of thin slices of potato that are deep-fried or baked until crisp and then typically salted or flavored. The snack originated in the United States, with the most widely attributed invention occurring in 1853 at Moon's Lake House restaurant in Saratoga Springs, New York, where chef George Crum, of African-American and Native American descent, reportedly sliced potatoes extremely thin and fried them in response to a customer's complaint that french fries were too thick and soggy.[16][17] Although the anecdote has been characterized as legendary rather than definitively proven, Crum's "Saratoga chips" gained local popularity and were served in baskets at tables, marking an early commercialization of the product.[18] Industrial production began in the late 19th century, with brands like Lay's emerging in the 1930s, scaling output through mechanized slicing and continuous frying.[19] The manufacturing process starts with selecting high-starch potatoes, which are washed, peeled, and sliced into uniform thicknesses using straight blades for flat chips or serrated blades for ridged varieties to increase surface area and crunch. Slices are then blanched to remove excess starch, fried in vegetable oils at temperatures around 350–375°F (177–190°C) for 1–3 minutes to achieve low moisture content (typically under 2%), and seasoned with salt or flavorings like barbecue or cheese powder applied via oil or dry methods.[20][21] Approximately four tonnes of raw potatoes yield one tonne of finished chips, owing chiefly to water lost during frying. Baked variants reduce oil use by oven-drying slices, though they comprise a smaller market share.[20] Similar snacks include tortilla chips, made by cutting corn tortillas into triangles and frying them, which gained prominence in the mid-20th century as a staple in Mexican-American cuisine and Tex-Mex dishes, often paired with dips like salsa or guacamole. Corn chips, such as Fritos introduced in 1932, derive from masa dough extruded and fried, offering a denser texture distinct from potato-based crisps. Other analogs encompass plantain chips, sliced and fried from green plantains for a sweeter profile prevalent in Latin American and Caribbean markets, and root vegetable chips from beets or sweet potatoes, which mimic the frying process but substitute other tubers and roots for potatoes to vary nutrition and flavor.[22] Global consumption reflects widespread appeal as a convenience food, with the potato chips market valued at approximately USD 56.23 billion in 2025 and projected to reach USD 76.82 billion by 2030, driven by demand for flavored and portion-controlled packs in regions like North America and Europe. Per capita intake in the U.S. averages about 23 calories daily from chips, contributing to their status as a high-volume snack category. Nutritionally, a standard 28-gram serving delivers around 150–160 calories, 10 grams of fat (mostly unsaturated from frying oils but including trans fats in some formulations), and 150–200 milligrams of sodium, alongside negligible fiber or vitamins, rendering them energy-dense with minimal satiety. Frequent consumption is linked to elevated risks of weight gain, hypertension, and acrylamide exposure (a potential carcinogen formed during high-heat frying) due to their caloric density and processing.[23][24][25]
Other culinary uses
In British and Irish cuisine, "chips" denote thick-cut, deep-fried potato strips, distinct from the thin, crisp potato chips categorized as snacks. These chips form a core component of the dish fish and chips, typically featuring battered and fried white fish such as cod or haddock served with the potatoes, often accompanied by malt vinegar or tartar sauce. The combined dish emerged in England during the mid-19th century, with the earliest documented fish and chip shops opening around 1860 in London and coastal areas, evolving from separate traditions of fried fish introduced by Jewish immigrants from Portugal and Spain in the 16th century and local potato frying practices.[26][27] Chocolate chips consist of small, uniform morsels of semi-sweet or other varieties of chocolate, engineered to retain shape during baking rather than fully melting. Invented in the 1930s by Ruth Wakefield at the Toll House Inn in Whitman, Massachusetts, they were initially created by chopping solid chocolate bars for her butter drop dough cookie recipe, which gained popularity after publication in a 1938 cookbook. Nestlé began mass-producing dedicated chocolate chips in 1939 following a licensing agreement with Wakefield, standardizing their use in cookies, muffins, and other baked goods to distribute chocolate evenly without altering dough consistency.[28][29] Wood chips, derived from hardwoods like hickory, apple, or mesquite, serve in smoking techniques to infuse meats, vegetables, and cheeses with aromatic flavors during low-heat cooking. These small wood fragments, typically 1–2 inches in size, are soaked in water for 30 minutes to an hour before use to prolong smoldering and smoke production rather than rapid combustion; a handful generates smoke for 20–30 minutes in grills or smokers. Common since the mid-20th century in American barbecue traditions, their selection influences flavor profiles (milder fruitwoods for poultry, stronger varieties for red meats), enhancing taste through phenolic compounds released in the smoke.[30][31]
Games and sports
Gaming tokens
Gaming tokens, commonly known as chips, are small discs used primarily in gambling games such as poker, blackjack, and roulette to represent monetary value during play. These tokens facilitate efficient wagering by allowing players to bet standardized amounts without handling cash at the table, reducing handling time and enabling quicker game progression.[32] In casinos, chips are exchanged for currency at the cage or tables, with denominations typically color-coded (white or blue for $1, red for $5, green for $25, black for $100, and purple for $500 or higher), though exact schemes vary by establishment.[33] The origins of poker chips trace to early 19th-century American card games, where players initially used improvised items like bone, wood, or ivory counters due to the lack of standardized currency in frontier settings. By the 1880s, commercial production emerged with clay-based chips molded under high pressure and temperature to create durable, uniform tokens resistant to counterfeiting.[34] Ancient precursors existed in gambling practices dating back millennia, involving items like coins or gold dust, but modern casino chips evolved specifically to deter fraud through intricate edge designs, embedded metals, and, since the 2000s, radio-frequency identification (RFID) technology for value verification and theft prevention.[35][36] Materials for chips prioritize tactile appeal, weight, and security; traditional clay composites blend clay, sand, and binders for a heft of 10–14 grams per chip and a satisfying "clack" sound, while ceramic variants offer similar feel without fragility.[37][38] Standard dimensions are approximately 39 mm in diameter and 3–3.3 mm thick, with plastic used for inexpensive home sets and metal cores added for premium durability.[39] Beyond gambling, poker-style chips serve as versatile components in board games, substituting for paper money or resource trackers in titles like Monopoly or Settlers of Catan to enhance handling and immersion, though they lack the regulatory standards of casino tokens.[40] Collectible chips from historic casinos, often made by mints like the Franklin Mint in the 1960s, have developed a secondary market valued for rarity and design.[41]
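Because each color maps to a fixed denomination, valuing a stack is a simple lookup-and-sum. A minimal Python sketch using the common scheme described above (actual values vary by establishment):

```python
# Common casino color scheme described above; real schemes vary by venue.
CHIP_VALUES = {"white": 1, "red": 5, "green": 25, "black": 100, "purple": 500}

def stack_value(stack: dict[str, int]) -> int:
    """Total monetary value of a chip stack, given counts per color."""
    return sum(CHIP_VALUES[color] * count for color, count in stack.items())

# Example: 25 white + 15 red + 4 green + 1 black = $300.
print(stack_value({"white": 25, "red": 15, "green": 4, "black": 1}))  # 300
```

Sports techniques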
In golf, a chip shot is a short approach played from off the green, characterized by low trajectory, quick landing, and significant roll toward the hole, distinguishing it from higher-flying pitches. The technique emphasizes a putting-like stroke with minimal wrist hinge, relying primarily on shoulder turn for a one-lever motion that promotes contact with the ball's lower half using a wedge, ensuring the club skims the turf rather than digging in.[42][43] This approach minimizes air time, prioritizing run over rise, to control distance on varying green speeds and lies, such as tight fringes or light rough, where improper execution risks chunking or skulling the ball. Professional instruction highlights ball position back in the stance, weight forward, and a descending blow to compress the ball against the turf for clean contact.[44] In association football, chipping involves lofting the ball via a precise, under-struck contact, typically with the inside or instep of the foot against the ball's lower-middle, to arc it over defenders or goalkeepers, often imparting backspin for a soft drop and reduced forward momentum post-landing. The technique requires planting the non-kicking foot beside the ball, leaning slightly back, and accelerating through impact while lifting the kicking foot upward to generate height without excessive power, making it effective for close-range finishes or evading pressure in tight spaces.[45][46] Players execute chips from static positions or with a short run-up, aiming for a parabolic flight that exploits goalkeeper dives, as seen in notable goals where the ball clears outstretched arms before dipping into the net.[47] Accuracy demands timing to avoid over-hitting, which propels the ball too far, or under-lifting, resulting in blocked efforts. In gridiron football, a chip refers to a blocking technique where a receiver or tight end delivers a quick, glancing blow to a linebacker or edge rusher to slow pursuit without full engagement, allowing the blocker to release into a route. This "chip block" uses shoulder or forearm contact to alter the defender's path momentarily, preserving offensive line protection during pass plays. Separately, a "chip-shot field goal" denotes a short, low-pressure kick from close range, typically under 30 yards, where success rates exceed 95% in the NFL due to minimal distance and wind interference.
People and fiction
Real individuals
Chip Wilson (born 1956) founded Lululemon Athletica in 1998, initially as an apparel company focused on yoga wear, opening its first store in Vancouver in 2000 and expanding the company into a global athleisure brand; he served as chairman until 2015 and remains a major shareholder.[48] In October 2025, Wilson publicly criticized Lululemon's board and management for diluting the brand's identity, stating the company had "lost its soul" amid efforts to replace directors.[49] Chip Kelly (born November 25, 1963) is an American football coach known for pioneering a fast-paced, spread-option offense during his tenure at the University of Oregon, where he led the team to a 46-7 record from 2009 to 2012, including a BCS National Championship game appearance in January 2011.[50] After head coaching stints with the Philadelphia Eagles (2013–2015, 26-21 record) and San Francisco 49ers (2016, 2-14 record), he coached UCLA from 2018 to 2024 before joining the Las Vegas Raiders as offensive coordinator in February 2025 under head coach Pete Carroll.[51] Chip Ganassi (born 1958) is an American racing team owner who founded Chip Ganassi Racing in 1989, securing 23 championships and over 250 victories across series like IndyCar, where the team won five Indianapolis 500s, and endurance racing with eight Rolex 24 at Daytona triumphs.[52] The team competes in NTT IndyCar Series events, emphasizing high-performance engineering and driver development.[53] Chip Reese (March 28, 1951 – December 4, 2007) was a professional poker player renowned for cash game dominance in high-stakes mixed games and seven-card stud, earning induction into the Poker Hall of Fame in 1991 at age 40, the youngest inductee at the time.[54] He won three World Series of Poker bracelets, including the 1988 $1,500 Seven Card Stud Split event, and amassed over $3.9 million in tournament earnings, though his true legacy lies in private cash games estimated to have yielded tens of millions.[55]
Fictional characters
Chip is one of the two anthropomorphic chipmunk protagonists in the Disney duo Chip 'n Dale, debuting in the 1943 short film Private Pluto as antagonists to Pluto.[56] Chip is characterized by a black nose, a logical and organized demeanor, and a leadership role within the pair, often wearing a propeller beanie; he contrasts with his brother Dale, who has a red nose and a more impulsive personality.[57] The characters evolved into recurring rivals of Donald Duck and starred in the 1989–1990 animated series Chip 'n Dale: Rescue Rangers, where Chip leads a detective agency solving crimes.[56] Chip Potts appears as a supporting character in Disney's 1991 animated film Beauty and the Beast, portrayed as the young son of housekeeper Mrs. Potts who is enchanted into a chipped teacup by the same curse affecting the castle's staff.[58] Voiced by Bradley Michael Pierce, Chip is named for the chip in the teacup's rim, symbolizing his vulnerability and curiosity; he aids Belle and plays a role in pivotal scenes, such as sneaking her into the West Wing.[58] The character reappears in the 2017 live-action remake, voiced by Nathan Mack, maintaining his role as an inquisitive enchanted object restored to human form at the film's resolution.[58] In the 1987 Disney Channel television movie Not Quite Human, Chip Carson is depicted as an advanced android teenager created by scientist Dr. Jonas Carson and portrayed by Jay Underwood.[59] Designed to mimic human behavior for social integration, Chip navigates high school challenges, friendships, and ethical dilemmas about artificial intelligence, blending humor with early explorations of robotics in family entertainment.[59]
Biology and medicine
Biochips and lab applications
Biochips, also known as lab-on-a-chip (LOC) devices, consist of miniaturized platforms that integrate biological probes, such as DNA or proteins, onto microfabricated substrates to enable parallel execution of biochemical assays. These systems leverage microfluidics and surface chemistry to manipulate small fluid volumes, typically in the microliter to nanoliter range, facilitating high-throughput analysis with reduced reagent consumption compared to traditional lab methods.[60] Development traces back to foundational work on sensor integration in the 1990s, evolving into versatile tools for point-of-care (POC) and laboratory diagnostics by the 2000s.[60] In laboratory settings, biochips primarily support genomics and proteomics applications, including DNA hybridization for gene expression profiling and protein microarrays for biomarker detection. For instance, DNA biochips enable simultaneous interrogation of thousands of genetic variants, as seen in microarray-based sequencing precursors that accelerated genome-wide association studies.[61] Protein biochips, by contrast, facilitate enzyme-linked immunosorbent assay (ELISA) equivalents on chip surfaces, quantifying analytes with sensitivities down to picomolar levels.[62] These platforms have been instrumental in microbial monitoring, where biochip arrays detect pathogen-specific nucleic acids in environmental or clinical samples, achieving results in hours rather than days.[61] Cell-based biochip applications extend to tissue engineering and drug screening, with microfluidic channels coated in endothelial cells supporting the differentiation of skin or organoid models.[60] Organ-on-a-chip variants simulate physiological microenvironments, such as lung or liver tissues, to evaluate drug toxicity; a 2020 review documented over 100 such models tested for metabolic responses, correlating chip data with in vivo outcomes in 70-80% of cases.[60] In diagnostics, integrated biochips perform microscale polymerase chain reaction (PCR) amplification, enabling portable detection of viral loads, as demonstrated in disposable chips for influenza assays with 95% specificity.[63] Challenges in lab adoption include fabrication scalability and biofouling, though printed circuit board (PCB)-based LOCs have advanced integration since 2017, supporting electrochemical and optical readouts for multiplexed assays.[64] Fully integrated systems minimize manual intervention, processing samples from lysis to detection autonomously, which has streamlined cancer classification by profiling tumor gene signatures across hundreds of patients in cohort studies.[62][61] Patient stratification for clinical trials benefits from biochip-derived pharmacogenomic data, identifying responders to therapies like tyrosine kinase inhibitors with predictive accuracies exceeding 85%.[61]
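Gene expression results from DNA biochips are commonly summarized as log-ratios of probe intensities between conditions. A minimal Python sketch with hypothetical fluorescence values, not any specific platform's pipeline:

```python
import math

def log2_fold_change(treated: float, control: float, pseudocount: float = 1.0) -> float:
    """Log2 ratio of probe intensities between two conditions; the small
    pseudocount guards against division by zero for dark probes."""
    return math.log2((treated + pseudocount) / (control + pseudocount))

# Hypothetical probe intensities in arbitrary fluorescence units.
probes = {"PROBE_A": (5200.0, 640.0), "PROBE_B": (310.0, 295.0)}
for name, (treated, control) in probes.items():
    print(f"{name}: log2FC = {log2_fold_change(treated, control):+.2f}")
# PROBE_A: log2FC = +3.02  (strongly up-regulated)
# PROBE_B: log2FC = +0.07  (essentially unchanged)
```

Implantable microchips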
Implantable microchips refer to small, passive radiofrequency identification (RFID) or near-field communication (NFC) devices, typically measuring a few millimeters in length, surgically inserted under the skin to store and transmit data wirelessly when scanned by a compatible reader.[65] These chips contain a unique identifier or limited data, such as medical records or payment credentials, powered by the reader's electromagnetic field without requiring an internal battery.[66] The implantation procedure involves a minor incision, usually in the hand between the thumb and index finger, performed by trained professionals under local anesthesia.[67] The first documented human implantation occurred in 1998, when British professor Kevin Warwick received an RFID chip to demonstrate remote control of devices and track his location within a university building.[67] In 2004, the U.S. Food and Drug Administration approved the VeriChip system for medical identification, allowing storage of patient data like allergies and emergency contacts, though the company discontinued human sales in 2010 amid low adoption and privacy backlash.[68] Adoption has since shifted toward consumer and workplace convenience, with Sweden leading in voluntary implants; by 2018, approximately 3,000-4,000 Swedes had chips for accessing offices, gyms, or public transport.[69] Global estimates place the number of chipped individuals between 50,000 and 100,000 as of 2024, primarily in Europe and North America, driven by biohacking communities.[65][70] Applications include keyless entry to buildings, contactless payments via linked bank accounts, and authentication for computers or apps, reducing reliance on physical cards or keys.[71] Companies such as Dangerous Things and BioTeq supply sterile NFC chips like the xNT or NExT models, which users or clinics implant for these purposes.[72][73] In 2017, U.S. firm Three Square Market became the first American employer to offer voluntary implants to its roughly 80 employees for vending machine purchases and door access, with about 50 opting in.[74] Walletmor, a British-Polish company, launched consumer payment chips in 2021, enabling transactions at NFC terminals after linking to Visa or Mastercard.[71] Market projections indicate growth to USD 2.56 billion by 2033, fueled by demand for seamless digital integration.[75] Health risks include infection at the insertion site, chip migration within tissues, and rare allergic reactions to the biocompatible casing, typically glass or polymer.[66] Animal studies have linked implants to tumor formation; a 1997 German study of 4,279 mice found sarcomas at about 1% of implant sites, attributed to chronic inflammation from the foreign body, though human extrapolation remains uncertain due to species differences and a lack of large-scale longitudinal data.[76][77] No definitive causal evidence of cancer in humans exists, but surveillance concerns persist, as chips could enable tracking if data is compromised via hacking or unauthorized scanning.[78] Privacy advocates highlight risks of data breaches or employer surveillance, prompting legislative bans in U.S. states like Nevada and Wisconsin on mandatory implants.[70] Despite these concerns, proponents argue that benefits outweigh risks for voluntary users, citing low complication rates under sterile conditions.[65]
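Data exchanged with such passive tags is framed under protocols like ISO/IEC 14443, which appends a 16-bit checksum (CRC_A for Type A tags) to each frame so readers can detect corrupted over-the-air transfers. A minimal Python sketch of that published checksum, included purely to illustrate the frame-integrity step; the payload bytes are hypothetical:

```python
def crc_a(data: bytes) -> bytes:
    """CRC_A from ISO/IEC 14443-3 (CCITT polynomial, LSB-first processing,
    initial value 0x6363), returned little-endian as it appears in frames."""
    crc = 0x6363
    for byte in data:
        byte ^= crc & 0xFF
        byte ^= (byte << 4) & 0xFF
        crc = ((crc >> 8) ^ (byte << 8) ^ (byte << 3) ^ (byte >> 4)) & 0xFFFF
    return crc.to_bytes(2, "little")

# Hypothetical 4-byte tag payload; the two CRC bytes are appended
# to the frame before transmission.
payload = bytes.fromhex("01020304")
print((payload + crc_a(payload)).hex(" "))
```

Finance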
Smart card technology
Smart card technology embeds an integrated circuit, typically a microprocessor or memory chip, into a plastic card to enable secure data storage, processing, and transaction authentication, primarily revolutionizing financial payments by replacing vulnerable magnetic stripes with dynamic cryptographic verification.[79] In payment applications, the chip generates unique transaction codes using algorithms like those in the EMV standard, reducing fraud risks compared to static data on magnetic stripes, as each authorization involves cardholder verification methods such as PIN entry or signature.[80] This contrasts with passive magnetic stripe cards, where data replay attacks were prevalent, leading to estimated annual losses exceeding $1 billion in the U.S. before widespread chip adoption.[81] The foundational patent for smart card concepts emerged in 1968 from German inventors Jürgen Dethloff and Helmut Gröttrup, who envisioned programmable cards with integrated circuits for secure transactions, though practical implementation lagged until the 1970s.[82] French engineer Roland Moreno advanced the technology with a 1974 patent for a memory-based "portable memory device," enabling rudimentary data storage without external power, which laid groundwork for financial uses like prepaid cards.[83] By 1977, Michel Ugon developed the first microprocessor-equipped smart card at Honeywell Bull, incorporating processing capabilities for on-card computations essential for payment authentication.[84] Commercial deployment began in France in the early 1980s with bank cards from BNP, marking the shift toward chip-based debit systems that required terminal interaction for validation.[83] International standards govern interoperability: ISO/IEC 7816 defines physical and electrical interfaces for contact-based smart cards, specifying eight contact points for power, ground, clock, reset, and bidirectional data lines to facilitate secure communication at speeds up to 9600 baud initially, scalable higher in modern implementations.[85] For contactless variants used in finance, ISO/IEC 14443 outlines proximity coupling with radio frequency fields up to 10 cm, enabling near-field communication (NFC) for tap-to-pay without physical insertion.[86] The EMV specification, jointly developed by Europay, Mastercard, and Visa in 1994, integrates these standards for payment-specific protocols, mandating chip cryptographic functions like RSA or elliptic curve digital signatures to authenticate transactions and prevent counterfeiting.[87] EMVCo, the managing body formed in 1999, has certified over 3 billion EMV chips annually by the 2010s, driving global standardization.[88] Adoption in finance accelerated post-2000 to combat rising skimming fraud; Europe's mandatory chip-and-PIN rollout by 2005 cut counterfeit losses by up to 80% in participating countries, per industry reports.[89] In the U.S., a 2012 liability shift announcement incentivized issuers to migrate, with over 90% of cards chip-enabled by 2018, though full terminal compliance lagged until the 2015 deadlines.[81] Contactless EMV, leveraging NFC, surged during the COVID-19 pandemic, with transaction volumes exceeding 50% of in-person payments in regions like the UK by 2022, supported by chips handling multiple applications like loyalty programs on the same card.[88] Despite benefits, challenges include higher costs (chip cards cost 10-20 times more to produce than magnetic stripe cards) and backward compatibility issues in legacy systems.[89] The global smart card market for financial ICs reached $3.88 billion in 2025, projected to grow at a 3.5% CAGR through 2034, driven by secure element demands in mobile wallets and tokenization.[90]
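Under ISO/IEC 7816-4, every terminal-to-chip command travels as an APDU: a class byte, an instruction byte, two parameter bytes, and optional length and data fields. A minimal Python sketch of the SELECT command a payment terminal issues to choose a card application; the application identifier (AID) shown is hypothetical, not a real issuer's:

```python
def build_select_apdu(aid: bytes) -> bytes:
    """Build an ISO/IEC 7816-4 SELECT-by-AID command APDU."""
    header = bytes([
        0x00,  # CLA: interindustry class
        0xA4,  # INS: SELECT
        0x04,  # P1: select by DF name (application identifier)
        0x00,  # P2: first or only occurrence
    ])
    return header + bytes([len(aid)]) + aid + b"\x00"  # Lc + data + Le

aid = bytes.fromhex("A0000000000001")  # hypothetical AID for illustration
print(build_select_apdu(aid).hex(" "))
# 00 a4 04 00 07 a0 00 00 00 00 00 01 00
```

Organizations and programs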
CHIPS and Science Act
The CHIPS and Science Act of 2022 (Pub. L. 117-167) is a United States federal statute enacted to strengthen domestic semiconductor manufacturing, research, and broader scientific innovation amid concerns over supply chain vulnerabilities and competition from China.[91] Signed into law by President Joe Biden on August 9, 2022, the legislation allocates approximately $52.7 billion specifically for semiconductor-related initiatives, including $39 billion in direct incentives for fabrication facilities and supply chain enhancements, $13 billion for research and development, and a 25% investment tax credit for advanced manufacturing equipment.[92] Beyond semiconductors, the act authorizes over $280 billion across five years for federal science agencies, such as the National Science Foundation (NSF), to fund research in areas like quantum computing, artificial intelligence, and biotechnology, effectively doubling NSF's budget if fully appropriated through fiscal year 2027.[93] Provisions also restrict recipients from expanding certain manufacturing in China or other countries deemed national security risks, aiming to prevent subsidizing foreign competitors.[94] The semiconductor incentives, administered by the Department of Commerce's CHIPS Program Office, prioritize construction or expansion of fabrication plants (fabs) for leading-edge chips (nodes of 10 nanometers or smaller) and workforce development.[92] By June 2025, the office had awarded over $33.7 billion in preliminary funding across multiple projects, including up to $8.5 billion to Intel for facilities in Arizona, Ohio, and New Mexico; up to $6.6 billion to TSMC for three Arizona fabs; and $1.5 billion to GlobalFoundries for New York and Vermont expansions.[95] Smaller awards in 2025 included $32 million to Corning for materials production in New York and $53 million to HP for packaging technology in Oregon, supporting over 90 projects nationwide that have leveraged nearly $450 billion in private investment.[96][97] These funds require matching private commitments and compliance with domestic content rules, though implementation has faced delays due to rigorous national security reviews and environmental permitting.[95] Early outcomes indicate accelerated private sector commitments, with announcements of over $50 billion in additional investments shortly after passage, contributing to new fabs in states like Arizona, Texas, and Ohio.[98] However, effectiveness remains debated: proponents credit it with reducing U.S. reliance on Asian imports, which accounted for 92% of advanced chips pre-act, while critics argue it exemplifies costly industrial policy with uncertain long-term self-sufficiency gains, potential for corporate welfare, and risks of funding inefficiency given historical government R&D successes versus direct subsidies.[99][100] In 2025, President Donald Trump described the program as "horrible," signaling potential revisions or clawbacks, which could disrupt ongoing projects despite evidence of boosted investment.[101][99] Workforce challenges persist, with NSF-funded training grants addressing a projected shortage of 67,000 semiconductor jobs by 2030, though scalability depends on sustained appropriations.[93][102]
Payment and research systems
The CHIPS and Science Act authorizes approximately $52.7 billion in federal funding for semiconductor-related incentives, primarily disbursed through grants, loans, and tax credits administered by the Department of Commerce's CHIPS Program Office.[103] Grants cover 5% to 15% of eligible project costs for constructing or expanding semiconductor fabrication facilities, with individual awards capped at $3 billion absent special approval, and require matching private investments.[103] As of November 2024, the program had announced $33.7 billion in grants and up to $28.8 billion in loans across 32 projects involving 20 companies, prioritizing domestic manufacturing capacity to reduce reliance on foreign supply chains.[104] Additionally, a 25% investment tax credit applies to qualified expenditures on advanced manufacturing equipment, claimed through the Internal Revenue Service.[102]

| Funding Mechanism | Description | Approximate Allocation |
|---|---|---|
| Direct Grants | For fabrication facilities and supply chain enhancements | $39 billion[105] |
| Loans | Low-interest financing for eligible projects | Up to $75 billion authorized[103] |
| Investment Tax Credits | Refundable credits for capital investments | 25% of qualified costs[102] |
| R&D Grants | For research, workforce, and technology development | $13 billion[103] |
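Taken together, these mechanisms imply a rough upper bound on direct federal support for any one project. A small Python sketch of that arithmetic for a hypothetical fab, using only the shares and cap cited above and assuming the full project cost qualifies for the credit (not an official award formula):

```python
def estimate_incentives(project_cost: float, grant_share: float = 0.15,
                        itc_rate: float = 0.25, grant_cap: float = 3e9) -> dict[str, float]:
    """Rough upper-bound estimate: direct grant (5-15% of cost, ~$3B cap
    absent special approval) plus the 25% investment tax credit."""
    grant = min(project_cost * grant_share, grant_cap)
    tax_credit = project_cost * itc_rate  # assumes all costs are qualified
    return {"grant": grant, "tax_credit": tax_credit, "total": grant + tax_credit}

# Hypothetical $20 billion fabrication facility:
print(estimate_incentives(20e9))
# {'grant': 3000000000.0, 'tax_credit': 5000000000.0, 'total': 8000000000.0}
```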
Other uses
Material fragments and processes
In manufacturing, a chip refers to a small fragment of material detached from a workpiece during mechanical removal processes such as cutting, milling, or turning, typically consisting of excess material like metal shavings generated as a byproduct.[109] These fragments arise from the interaction between the cutting tool and the workpiece, where forces cause localized deformation and separation of material.[110] Chip formation occurs through a shearing mechanism in a primary deformation zone ahead of the tool, involving plastic flow and fracture, followed by secondary deformation as the chip slides along the tool face.[111] The process depends on factors including workpiece material ductility, cutting speed, feed rate, and tool geometry; for instance, ductile metals like steel at moderate speeds (e.g., 100-200 m/min) promote continuous shearing, while brittle materials like cast iron under low feeds (e.g., 0.1 mm/rev) lead to cracking and fragmentation.[110][112] Common types of chips include the following (a worked example of the underlying shear geometry appears after the list):
- Continuous chips: Ribbon-like, unbroken forms produced from ductile materials under high cutting speeds and low feeds, minimizing fracture but risking tool entanglement if unmanaged.[113][112]
- Discontinuous chips: Short, irregular segments from brittle or work-hardened materials at low speeds and high feeds, resulting from repeated fracture along shear planes.[113][112]
- Continuous chips with built-up edge: Formed when workpiece material adheres to the tool edge during cutting of ductile substances at low speeds, creating a temporary layer that alters effective rake angle and surface finish.[113][112]
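The shear-plane geometry that governs these chip types can be estimated from a simple chip-thickness measurement. A minimal Python sketch of the classical orthogonal-cutting relation; the feed, measured chip thickness, and rake angle below are hypothetical values chosen for illustration:

```python
import math

def shear_angle_deg(t0: float, tc: float, rake_deg: float) -> float:
    """Shear-plane angle from the orthogonal-cutting model:
    tan(phi) = r*cos(alpha) / (1 - r*sin(alpha)),
    where r = t0/tc is the chip thickness ratio (uncut thickness t0
    over measured chip thickness tc) and alpha is the tool rake angle."""
    r = t0 / tc
    alpha = math.radians(rake_deg)
    return math.degrees(math.atan(r * math.cos(alpha) / (1 - r * math.sin(alpha))))

# Hypothetical turning pass: 0.2 mm uncut thickness producing a 0.45 mm
# chip with a +10 degree rake tool; thicker chips (smaller r) indicate
# a lower shear angle and more severe plastic deformation.
print(f"{shear_angle_deg(0.2, 0.45, 10.0):.1f} degrees")  # ~25.4
```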