
GPT

Generative Pre-trained Transformer (GPT) is a family of large language models developed by OpenAI, utilizing a decoder-only transformer architecture trained via unsupervised pre-training on massive text corpora, followed by task-specific fine-tuning, to generate coherent, contextually relevant, human-like text. The inaugural model, GPT-1, released in 2018 with 117 million parameters, demonstrated the efficacy of this semi-supervised approach for natural language understanding tasks, achieving competitive zero-shot performance without domain-specific training data. Subsequent iterations scaled computational resources exponentially: GPT-2 (2019) expanded to 1.5 billion parameters, enabling emergent capabilities like coherent text continuation with reduced fine-tuning needs; GPT-3 (2020) reached 175 billion parameters, pioneering in-context learning, where models adapt to tasks via prompts alone, and powering a wave of downstream applications. GPT-4 (2023) introduced multimodal input, processing both text and images to produce outputs rivaling human experts on benchmarks like bar exams and licensing tests, while later variants such as GPT-4o (2024) and GPT-5 (2025) enhanced efficiency, reduced hallucinations, and improved instruction-following for complex reasoning and coding. These models underpin ChatGPT, which amassed over 700 million weekly users by mid-2025, catalyzing advances in automated content creation, scientific simulation, and personalized assistance, though empirical evaluations reveal limitations in reasoning and factual accuracy beyond memorized patterns. Despite these milestones, GPT models have sparked controversies rooted in their training paradigms, including lawsuits from publishers such as The New York Times alleging systematic copyright infringement through ingestion of protected works without permission or compensation, raising questions about fair use in AI development. Ethical concerns encompass biases inherited from uncurated training data, often skewed toward institutional sources exhibiting systemic left-leaning tilts, manifesting as outputs that disproportionately favor certain ideological framings over empirical neutrality. Additional scrutiny involves opaque labor practices, such as outsourcing content moderation to Kenyan workers earning under $2 per hour to filter toxic training material, and risks of misuse for deception, as evidenced by the models' vulnerability to prompt injection and by errors in high-stakes domains. These issues underscore ongoing debates about transparency and the causal mechanisms driving model behaviors beyond correlative pattern-matching.

Generative Pre-trained Transformer

Definition and Core Concepts

Generative Pre-trained Transformer (GPT) denotes a class of large language models developed by OpenAI, characterized by a transformer-based architecture optimized for natural language processing tasks through unsupervised pre-training followed by supervised fine-tuning. The foundational GPT model, introduced in June 2018, employs a decoder-only variant to process sequential data autoregressively, predicting subsequent tokens conditioned on preceding context. This approach leverages vast unlabeled text corpora, such as the roughly 800-million-word BooksCorpus, to learn linguistic patterns without task-specific supervision during initial training. At its core, GPT's pre-training phase involves a generative objective function, in which the model minimizes loss by generating plausible continuations of input sequences, enabling emergent capabilities in coherence, factual recall, and syntactic structure. Fine-tuning then adapts the pre-trained weights to downstream tasks such as classification or textual entailment by incorporating labeled data and task-aware input transformations, such as concatenating inputs with delimiter tokens, which facilitates transfer with minimal architecture changes. This two-stage paradigm contrasts with contemporaneous models like BERT, which use bidirectional masking, by prioritizing left-to-right generation suited for open-ended text production. Key architectural elements include stacked decoder layers with causal self-attention mechanisms, byte-pair encoding for tokenization, and optimizations such as layer normalization and residual connections to handle long-range dependencies efficiently. Subsequent iterations scaled parameters from 117 million in GPT-1 to hundreds of billions, and reportedly trillions, in later versions, amplifying performance via increased model size, data volume, and compute, though compute costs and data quality constraints have been noted in scaling analyses. These concepts underpin GPT's versatility in generating human-like text, powering applications from chatbots to coding assistants, while raising questions about emergent behaviors arising from statistical patterns rather than explicit reasoning.
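The generative pre-training objective described above can be made concrete with a short sketch. The following Python snippet is illustrative only: `model_logits` is a hypothetical stand-in for a decoder-only transformer that returns one logit vector per input position, and the 10-token vocabulary is invented for the example. It computes the average next-token cross-entropy that pre-training minimizes.

```python
import numpy as np

def next_token_loss(model_logits, token_ids):
    """Average cross-entropy of predicting token i+1 from positions <= i."""
    logits = model_logits(token_ids[:-1])        # shape: (n-1, vocab_size)
    targets = token_ids[1:]                      # the "next" token at each step
    # Numerically stable log-softmax over the vocabulary.
    logits = logits - logits.max(axis=-1, keepdims=True)
    log_probs = logits - np.log(np.exp(logits).sum(axis=-1, keepdims=True))
    return -log_probs[np.arange(len(targets)), targets].mean()

# Toy usage with a random "model" over a 10-token vocabulary.
rng = np.random.default_rng(0)
fake_model = lambda ids: rng.normal(size=(len(ids), 10))
print(next_token_loss(fake_model, np.array([1, 4, 2, 7, 3])))
```

In an actual GPT model the logits come from the transformer's final linear layer, and the same loss is averaged over billions of tokens during pre-training.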

Historical Development

The concept of the Generative Pre-trained Transformer (GPT) originated in OpenAI's June 2018 research paper "Improving Language Understanding by Generative Pre-Training," which introduced GPT-1 as a decoder-only model with 117 million parameters, pre-trained without supervision on the BooksCorpus dataset of approximately 800 million words and fine-tuned on a range of downstream tasks, achieving state-of-the-art results on several benchmarks at the time. This approach demonstrated that generative pre-training on unlabeled text could effectively transfer to downstream supervised tasks without task-specific architectures, building on the transformer architecture introduced by Vaswani et al. in 2017. In February 2019, OpenAI announced GPT-2, a significantly larger model scaling to 1.5 billion parameters in its full version, trained on 40 gigabytes of internet text filtered into the WebText dataset, and capable of generating coherent long-form text from minimal prompts. OpenAI initially withheld the full model weights, citing risks of misuse for generating misleading or harmful content, releasing only smaller variants (117M and 345M parameters) for research; the complete 1.5-billion-parameter model was made available in November 2019 alongside tools for detecting AI-generated text. OpenAI released GPT-3 on June 11, 2020, with 175 billion parameters, the largest language model at the time, pre-trained on a diverse corpus including Common Crawl, WebText2, Books1, Books2, and English Wikipedia, totaling about 570 gigabytes of text, and accessible exclusively via a paid API to control deployment and mitigate risks. Unlike its predecessors, GPT-3 emphasized few-shot and zero-shot learning, exhibiting emergent abilities such as arithmetic, translation, and question answering without fine-tuning, which spurred widespread adoption in commercial applications despite the model's proprietary nature and high computational demands. GPT-4 was introduced on March 14, 2023, as OpenAI's first major multimodal model, processing both text and images while outputting text, with parameter counts undisclosed but widely estimated to far exceed GPT-3's; it showed improved performance on professional exams, reasoning tasks, and safety alignment via techniques such as reinforcement learning from human feedback (RLHF). Variants followed, including GPT-4 Turbo for efficiency and GPT-4o in May 2024, which integrated real-time voice, vision, and faster inference while maintaining or surpassing prior capabilities at lower cost. On August 7, 2025, OpenAI launched GPT-5, a further scaled model integrating advanced reasoning akin to chain-of-thought processes, enhanced tool-calling, and end-to-end task execution, outperforming prior variants on benchmarks for coding, planning, and multimodal understanding; it became the default model for ChatGPT users, reflecting continued emphasis on proprietary development and safety mitigations amid competitive pressures. This progression from GPT-1 to GPT-5 illustrates a pattern of exponential increases in model size, data volume, and architectural refinement, driven by empirical scaling laws in which gains correlated with compute investment, though it has raised ongoing debates about the transparency and verifiability of internal advancements.

Technical Architecture

The GPT (Generative Pre-trained Transformer) models employ a decoder-only variant of the transformer architecture, which consists of a stack of identical layers designed for autoregressive text generation. Each layer includes a multi-head self-attention mechanism with causal masking, ensuring that predictions for a token depend only on preceding tokens, followed by a position-wise feed-forward network, with layer normalization and residual connections around each sub-layer. This structure enables the model to process input sequences in parallel during training while maintaining the unidirectional dependency required for next-token prediction. Input to the model begins with token embeddings combined element-wise with learned positional embeddings, allowing the network to capture both semantic content and sequence order without relying on recurrence. The self-attention sub-layer computes scaled dot-product attention across query, key, and value projections of the input, with masking to prevent future information leakage, typically using 12 to 96 attention heads depending on model scale. The feed-forward component applies two linear transformations with a GELU activation in between, expanding the hidden dimension by a factor of four before projecting back, which introduces non-linearity and capacity for complex pattern learning. Output generation involves a final linear layer mapping the top-layer hidden states to the vocabulary size, followed by a softmax producing a probability distribution over tokens. Pre-training occurs via unsupervised learning on massive text corpora, optimizing the model to maximize the likelihood of next-token prediction under a cross-entropy loss, without task-specific supervision initially. This objective fosters emergent capabilities like in-context learning in larger variants. Architectural hyperparameters vary by version: GPT-1 features 12 layers, a hidden size of 768, and 117 million parameters; GPT-2 scales to 1.5 billion parameters with modifications such as moving layer normalization to the input of each sub-block; GPT-3 reaches 175 billion parameters across 96 layers, emphasizing scaling laws over structural changes. Subsequent models like GPT-4 retain the decoder-only core but incorporate extensions for image inputs via integrated encoders, though exact parameter counts remain undisclosed. Inference employs autoregressive decoding, sampling tokens sequentially conditioned on prior outputs, often with techniques such as nucleus sampling to balance coherence and diversity. Training leverages massive parallelism across GPUs or TPUs, with optimizations such as mixed-precision arithmetic and gradient checkpointing to handle memory constraints. While foundational, this architecture has proven efficient for generative tasks but incurs quadratic computational cost in sequence length due to self-attention, prompting research into approximations like sparse attention in derivatives.
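To illustrate the causal scaled dot-product self-attention described above, the following NumPy sketch implements a single attention head with a causal mask. The dimensions and random weights are illustrative assumptions; production models add multiple heads, output projections, residual connections, layer normalization, and the feed-forward sub-layer.

```python
import numpy as np

def causal_self_attention(x, w_q, w_k, w_v):
    """Single-head causal scaled dot-product attention over a sequence.

    x            : (seq_len, d_model) input embeddings
    w_q, w_k, w_v: (d_model, d_head) learned projection matrices
    Returns      : (seq_len, d_head) outputs where position i only attends
                   to positions <= i, as in GPT's decoder blocks.
    """
    q, k, v = x @ w_q, x @ w_k, x @ w_v
    d_head = q.shape[-1]
    scores = q @ k.T / np.sqrt(d_head)                 # (seq_len, seq_len)
    # Causal mask: forbid attention to future positions (strict upper triangle).
    mask = np.triu(np.ones_like(scores, dtype=bool), k=1)
    scores = np.where(mask, -1e9, scores)
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)     # row-wise softmax
    return weights @ v

# Toy usage: 5 tokens, 16-dim embeddings, 8-dim head.
rng = np.random.default_rng(0)
x = rng.normal(size=(5, 16))
out = causal_self_attention(x, *(rng.normal(size=(16, 8)) for _ in range(3)))
print(out.shape)  # (5, 8)
```

The masking step is what makes the block autoregressive: each row of the attention matrix sums to one over current and past positions only, so the model can be trained on all positions in parallel yet generate text one token at a time.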

Major Model Releases

OpenAI released the inaugural GPT-1 model on June 11, 2018, introducing the generative pre-training architecture with 117 million parameters trained on the BooksCorpus dataset. This model demonstrated improvements in transfer learning for natural language tasks compared to prior task-specific architectures, achieving state-of-the-art results on benchmarks by combining unsupervised pre-training with supervised fine-tuning. GPT-2 followed on February 14, 2019, with initial models ranging from 124 million to 774 million parameters, and the full 1.5 billion parameter version released on November 5, 2019, after a staged rollout due to concerns over potential misuse in generating deceptive content. Scaled up from GPT-1 using larger datasets, notably WebText, GPT-2 excelled in zero-shot text generation, producing coherent paragraphs from prompts, though OpenAI initially withheld the largest variant to study societal impacts before full publication of weights and code. In June 2020, GPT-3 launched with 175 billion parameters, a more than 100-fold increase over GPT-2, enabling unprecedented few-shot and one-shot learning across diverse tasks without task-specific fine-tuning. Accessed initially via a private beta API, it powered applications in text completion, translation, and question-answering, trained on a massive corpus drawn from Common Crawl and other sources filtered for quality. GPT-3.5, a series including variants like text-davinci-003, completed training in early 2022 and underpinned ChatGPT's public debut on November 30, 2022, incorporating reinforcement learning from human feedback (RLHF) to enhance conversational coherence and safety. This iteration introduced browsing capabilities in April 2023 for Plus users, extending access while maintaining the core 175 billion parameter scale of its GPT-3 base. GPT-4 debuted on March 14, 2023, as OpenAI's first model accepting text and image inputs while outputting text, with an undisclosed parameter count estimated to far exceed GPT-3's based on compute scaling trends. It outperformed predecessors on professional exams such as the Uniform Bar Exam and SAT, though it remained prone to hallucinations, and was integrated into ChatGPT Plus alongside variants like GPT-4 Turbo in November 2023 for longer context windows up to 128,000 tokens. GPT-4o, released May 13, 2024, was optimized for speed and cost at half the price of GPT-4 Turbo with a 128,000-token context, natively handling audio, vision, and text in real-time interactions. Subsequent updates included GPT-4.1 mini on May 14, 2025, a compact variant replacing GPT-4o mini for efficient deployment. By August 7, 2025, OpenAI introduced GPT-5 as the new flagship, supplanting GPT-4o in defaults with enhanced reasoning and multimodal capabilities, rolling out initially to Team users via ChatGPT. Parameter details remained undisclosed, but it incorporated advances in chain-of-thought processing from interim models like o1, emphasizing reliability in complex problem-solving. A subsequent update in the GPT-5.2 series featured the Pro variant achieving 90.5% on the ARC-AGI-1 benchmark, the first model to surpass 90% on this abstract reasoning evaluation.
Model | Release Date | Parameters | Key Innovations
GPT-1 | June 11, 2018 | 117 million | Unsupervised pre-training + supervised fine-tuning for benchmarks
GPT-2 | February 14, 2019 (full: November 5, 2019) | 1.5 billion (largest) | Zero-shot generation; staged release for safety evaluation
GPT-3 | June 11, 2020 | 175 billion | Few-shot learning; API access for broad applications
GPT-3.5 | Early 2022 (ChatGPT: November 30, 2022) | ~175 billion | RLHF for dialogue; public chatbot interface
GPT-4 | March 14, 2023 | Undisclosed | Multimodal (text and image) input; advanced reasoning on exams
GPT-4o | May 13, 2024 | Undisclosed | Real-time multimodality (audio, vision, text); cost-efficient scaling
GPT-5 | August 7, 2025 | Undisclosed | Default in ChatGPT; improved chain-of-thought

Capabilities and Applications

GPT models excel in natural language understanding and generation, enabling tasks such as answering questions, explaining complex concepts, and producing coherent text across diverse domains. For instance, GPT-4 achieves performance comparable to humans on standardized exams such as the bar exam and GRE, scoring in the 90th percentile or higher on several benchmarks, while GPT-4o improves reasoning across text, audio, and vision inputs with 88.7% accuracy on the MMLU benchmark. Later iterations, such as GPT-5, released on August 7, 2025, further enhance coding proficiency, leading SWE-bench Verified at 74.9% for resolving real-world software issues. These models process vast contexts, up to 128,000 tokens in GPT-4 variants, supporting long-form analysis and synthesis. Multimodal capabilities, introduced prominently in GPT-4o on May 13, 2024, extend to reasoning over audio, vision, and text inputs, facilitating applications like image description and voice interaction. In software development, GPT models generate, debug, and optimize code across programming languages, with GPT-5 beating prior models at front-end web development 70% of the time in internal tests and GPT-4.5 achieving 88.6% on HumanEval for functional code generation. Translation and summarization are also strengths, where models produce fluent outputs in multiple languages and condense lengthy documents while preserving key details, as demonstrated in evaluations of GPT-4's handling of non-English tasks. Applications span content creation, where GPT models draft articles, marketing copy, and creative narratives; education, aiding personalized tutoring and concept explanation; and customer service, powering chatbots for query resolution. In research, they accelerate drug discovery by analyzing literature for target identification and synthesis planning, with LLMs like GPT variants processing biomedical corpora to hypothesize molecular interactions. Industrial uses include workflow optimization, automated report generation, and decision support via text-based data interpretation. Code assistance tools, such as those integrated with GPT-4.1, support developers across large repositories and polyglot code diffs, achieving 52.9% accuracy on diff benchmarks spanning diverse languages and formats.
  • Healthcare: Summarizing patient records and supporting clinical decision-making, though requiring human oversight to mitigate errors in diagnosis.
  • Finance: Automating compliance reporting and extracting insights from financial document corpora.
  • Software Development: Generating boilerplate code and refactoring, reducing development time in benchmarks by up to 50% for routine tasks.
Despite these strengths, capabilities remain probabilistic, with outputs dependent on prompt quality and prone to hallucinations in factual recall without retrieval augmentation.

Societal and Economic Impact

Generative Pre-trained Transformer (GPT) models, particularly those powering tools like ChatGPT, have driven substantial productivity gains across knowledge-based sectors. A McKinsey analysis estimates that generative AI, including GPT architectures, could contribute $2.6 trillion to $4.4 trillion annually to the global economy through enhanced productivity in tasks such as writing, coding, and customer operations. Wharton projections indicate that AI advancements will boost U.S. productivity and GDP by 1.5% by 2035, rising to 3.7% by 2075, primarily via task-level efficiencies rather than wholesale job replacement. Empirical studies, however, show limited labor disruption to date; one working paper found no significant effects on earnings or hours worked from chatbot adoption as of mid-2025, ruling out impacts exceeding 1%. Some forecasts project a baseline 6-7% job displacement from AI automation, concentrated in routine cognitive roles, though offset by new opportunities in oversight and complementary human-AI workflows. Industry data reveals that AI-exposed industries exhibit three times higher revenue growth per employee, underscoring augmentation over substitution in high-adoption firms. On the societal front, the rapid diffusion of GPT models, reaching 700 million weekly users and roughly 18 billion messages per week by July 2025, has amplified concerns over misinformation propagation. Evaluations indicate that ChatGPT fabricates or repeats false claims in up to 80% of tested scenarios, stemming from training data limitations and the models' tendency to generate plausible-sounding continuations. These systems inherit and potentially exacerbate societal biases present in vast internet-sourced datasets, leading to skewed outputs in areas like hiring recommendations. In education, over-reliance on GPT tools correlates with diminished critical-thinking skills, as students report reduced engagement in independent analysis when using AI for assignments. Broader inequities arise from the digital divide, where access disparities widen outcomes; lower-income or underrepresented groups face amplified biases from unrepresentative training data, per analyses of AI's dual-edged role in learning environments. Despite the upsides, unchecked deployment risks eroding trust in information ecosystems, with peer-reviewed warnings highlighting risks of bias and intellectual property infringement as persistent challenges. Causal evidence suggests these impacts hinge on deployment scale and safeguards, with no widespread job displacement observed but regulatory gaps persisting into 2025.

Controversies and Criticisms

Critics have highlighted political biases in GPT models, with analyses indicating a left-leaning tendency in responses to contentious issues. A 2023 Brookings Institution study found ChatGPT exhibited clear left-leaning political bias across various prompts, attributing it partly to training data from internet sources dominated by progressive viewpoints. A 2025 Stanford study confirmed that both Republicans and Democrats perceive large language models like GPT as having a left-leaning slant on political topics, though models can be prompted toward neutrality. OpenAI has acknowledged inherent biases stemming from training data and reported a 30% reduction in political bias metrics for GPT-5 compared to GPT-4o in October 2025 evaluations. Legal challenges over training data have intensified, with multiple lawsuits alleging copyright infringement through the use of protected works without authorization. The New York Times filed suit against OpenAI and Microsoft in December 2023, claiming GPT models were trained on millions of its articles, enabling regurgitation of copyrighted content. By October 2025, at least 51 copyright lawsuits targeted AI firms including OpenAI, focusing on whether scraping web data for training constitutes fair use. Courts have ordered OpenAI to disclose training data details in some cases, such as authors' suits in September 2024, amid debates over transformative use versus infringement. Safety risks and potential misuse remain prominent concerns, including the generation of harmful or misleading content. GPT models have demonstrated vulnerabilities to prompt injection attacks, where adversarial inputs override safeguards to produce unsafe outputs such as malicious code or scripts. A 2023 study warned of risks in providing safety-related information, noting that populations with lower literacy and education are at greater risk of harm from incorrect outputs. OpenAI's pre-deployment red teaming for models like InstructGPT identified misuse vectors, including biased or toxic responses, though critics argue safeguards are insufficient against jailbreaking techniques. Environmental impacts from high energy demands have drawn scrutiny, with training alone consuming substantial resources. Training GPT-3 required 1,287 megawatt-hours of electricity, equivalent to about 550 metric tons of CO2 emissions, akin to 1.2 million miles driven by a gasoline-powered car. Estimates for GPT-4 suggest training costs exceeding $100 million and up to 50 gigawatt-hours of electricity, exacerbating data center water and power strains. Inference per ChatGPT query uses about 0.0029 kWh, scaling significantly with billions of daily uses, though OpenAI reports efficiencies reducing per-query water use to 0.000085 gallons. Reliability issues, including hallucinations and sycophantic tendencies, undermine trust in GPT outputs. Models frequently fabricate facts or adjust responses to flatter users, as noted in a 2025 analysis of ChatGPT and similar systems echoing user views over accuracy. GPT-5 faced backlash in 2025 for underwhelming performance gains, slow inference times averaging over 100 seconds for complex tasks, and failure to deliver promised breakthroughs despite hype. These limitations, attributed to scaling laws plateauing, have led skeptics such as Gary Marcus to question the trajectory of GPT architectures toward general intelligence.
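As a rough illustration of how the per-query inference figures cited above scale, the following back-of-the-envelope Python calculation combines the reported 0.0029 kWh per query with the roughly 18 billion weekly messages mentioned earlier in this article; the result is an order-of-magnitude estimate under those assumptions, not a measured figure.

```python
# Rough scaling of the inference-energy figures quoted above (assumptions only):
# ~18 billion ChatGPT messages per week (mid-2025) at ~0.0029 kWh per query.
weekly_messages = 18e9
kwh_per_query = 0.0029

daily_kwh = weekly_messages / 7 * kwh_per_query
print(f"~{daily_kwh / 1e6:.1f} GWh per day")  # on the order of 7-8 GWh/day
```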

Biological and Medical Uses

Alanine Aminotransferase (ALT/GPT)

Alanine aminotransferase (ALT), also designated glutamate pyruvate transaminase (GPT) or serum glutamic-pyruvic transaminase (SGPT), is a pyridoxal phosphate-dependent enzyme encoded by the GPT gene on chromosome 8q24.3 that catalyzes the reversible transfer of an amino group from L-alanine to α-ketoglutarate, yielding pyruvate and L-glutamate. This reaction facilitates alanine's role in the glucose-alanine cycle, enabling the transport of nitrogen from peripheral tissues to the liver for gluconeogenesis and urea synthesis. The enzyme exists in cytosolic and mitochondrial isoforms, with the cytosolic form (ALT1) predominant in hepatocytes. Predominantly localized in hepatocytes, ALT is also present at lower concentrations in renal, cardiac, skeletal muscle, and pancreatic cells, reflecting its broader involvement in amino acid and intermediary metabolism. Physiologically, ALT supports hepatic conversion of protein breakdown products into energy substrates during fasting or stress, contributing to metabolic homeostasis. Genetic polymorphisms at the GPT locus influence baseline activity, with variants associated with enzyme levels in erythrocytes and potential links to disease susceptibility, though clinical relevance remains under investigation. In clinical practice, ALT serves as a sensitive biomarker for hepatocellular injury, as damaged liver cells release the enzyme into circulation. The test, typically performed via spectrophotometric assay on serum as part of a liver function panel, quantifies activity in international units per liter (U/L). Reference intervals vary by laboratory, assay method, age, and sex, but generally range from 4-36 U/L for adults, with upper limits around 29-33 U/L for men and lower thresholds for women proposed in some standardized protocols, though many laboratories use cutoffs of 40-50 U/L; values exceeding twice the upper limit warrant evaluation for acute liver pathology. Elevated ALT (greater than 3-fold the upper limit) correlates with conditions causing hepatocyte necrosis or inflammation, including viral hepatitis (e.g., hepatitis B or C), non-alcoholic fatty liver disease, alcoholic liver injury, drug-induced hepatotoxicity (e.g., acetaminophen overdose), ischemia, or autoimmune disorders; enzyme patterns often distinguish intrahepatic from extrahepatic causes when paired with aspartate aminotransferase (AST). Mild elevations (1.5-3-fold) may signal chronic metabolic issues such as obesity-related steatosis, while isolated high ALT without symptoms prompts screening for occult liver disease. Decreased levels are rare and typically insignificant, potentially linked to pyridoxine (vitamin B6) deficiency or advanced end-stage liver disease with depleted enzyme stores. Factors influencing results include analytical variability, non-fasting states, strenuous exercise, or medications such as statins, necessitating clinical correlation. Historically recognized as SGPT since the mid-20th century amid advancements in enzymatic assays, ALT's diagnostic primacy stems from its greater liver specificity relative to AST.

Business and Organizational Uses

GPT Group and Similar Entities

The GPT Group is an Australian real estate investment trust (REIT) publicly listed on the Australian Securities Exchange (ASX) under the ticker GPT since April 1971. It operates as a diversified property owner, manager, and developer, with assets under management totaling approximately AUD 34 billion as of December 31, 2024, spanning office, retail, logistics, and student accommodation sectors across Australia. The company's portfolio includes high-quality assets such as regional shopping centers, premium office towers, and industrial facilities, emphasizing active management to generate income and capital growth for investors. Founded as one of Australia's pioneering property trusts, GPT has evolved through decades of portfolio diversification, including expansion into logistics and funds management after the 2000s, while maintaining a focus on sustainability and tenant partnerships. As of 2025, it manages over 10 major shopping centers and a network of office and logistics properties, serving more than 35,000 investors with a market capitalization reflecting its status as a leading ASX-listed entity. The group prioritizes environmental, social, and governance (ESG) integration, such as green building certifications, in its asset strategies. Similar entities include other major Australian diversified property groups and REITs, such as Mirvac Group, which focuses on integrated property development across residential, office, and retail; Dexus, a specialist in premium office and industrial assets; and Stockland, emphasizing community-oriented retail and residential projects. These competitors operate comparable models of owning and managing large-scale commercial portfolios, often listed on the ASX, and compete in sectors such as office and retail property amid Australia's post-pandemic property market shifts. Goodman Group stands out as a logistics-focused peer, managing industrial warehouses and data centers with international exposure beyond Australia's domestic market.

Other Uses

Miscellaneous Acronyms and Applications

In computing, GPT refers to the GUID Partition Table, a standard for organizing partitions on physical storage devices such as hard disk drives and solid-state drives. Introduced as part of the Extensible Firmware Interface (EFI) specification and later formalized in the UEFI standard, GPT enables support for disks exceeding 2 terabytes in size, surpassing the limits of the legacy Master Boot Record (MBR) partitioning scheme, which is restricted to 2^32 sectors, or approximately 2.2 terabytes with 512-byte sectors. The GPT structure begins with a protective MBR for backward compatibility, followed by a primary partition table header at logical block address (LBA) 1, which includes metadata such as disk size, partition entry location, and CRC32 checksums for integrity verification. Partition entries, stored starting at LBA 2, each consist of a 128-byte record containing a partition type GUID, a unique partition GUID, starting and ending LBAs, attribute flags (e.g., for read-only or hidden partitions), and a descriptive name in UTF-16. A backup header at the end of the disk provides redundancy, mitigating risks from corruption of the primary header, while embedded CRC checks ensure data integrity across the table. This design theoretically accommodates up to 9.4 zettabytes of storage and 128 partition entries by default, though operating systems may impose further limits. GPT gained prominence with the adoption of UEFI firmware, which replaced the legacy BIOS in systems from the mid-2000s onward, enabling secure boot and larger address spaces. Windows first supported booting from GPT disks in Windows Vista (released January 30, 2007) on x64 UEFI systems, with GPT support for data disks over 2 terabytes available in server editions as early as 2003. Modern operating systems including Windows, macOS (which adopted GPT with Mac OS X 10.4 Tiger during the Intel transition), and Linux distributions universally support GPT via tools like parted or gdisk, making it the preferred scheme for new installations on large-capacity drives. Applications include enterprise storage arrays, embedded systems, and consumer SSDs, where its resilience to single-point failures enhances reliability in RAID configurations and high-capacity environments. Beyond computing, the acronym appears in niche statistical contexts, such as extreme-value models used in risk analysis of events like financial crashes, though these lack the widespread application of the partition table format. No other broadly significant acronyms for GPT were identified in recent technical literature outside established domains such as artificial intelligence, medicine, and corporate entities.
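The header and partition-entry layout described above can be made concrete with a short parsing sketch. The following Python snippet unpacks the primary GPT header at LBA 1 and a single 128-byte partition entry from a raw disk image; the field offsets follow the UEFI specification, while the file name and the 512-byte logical sector size are assumptions for the example (4096-byte-sector disks shift the offsets).

```python
import struct
import uuid

SECTOR = 512  # assumed logical sector size for this example

def parse_gpt_header(buf):
    """Unpack the 92-byte GPT header found at LBA 1 (fields per the UEFI spec)."""
    (sig, rev, hdr_size, hdr_crc, cur_lba, bak_lba, first_use, last_use,
     disk_guid, entries_lba, n_entries, entry_size, entries_crc) = struct.unpack(
        "<8s4sII4xQQQQ16sQIII", buf[:92])
    if sig != b"EFI PART":
        raise ValueError("not a GPT header")
    return {
        "disk_guid": str(uuid.UUID(bytes_le=disk_guid)),
        "backup_header_lba": bak_lba,
        "partition_entries_lba": entries_lba,
        "num_entries": n_entries,
        "entry_size": entry_size,
    }

def parse_entry(buf):
    """Unpack one 128-byte partition entry: type GUID, unique GUID, LBAs, attributes, name."""
    type_guid, part_guid, first, last, attrs, name = struct.unpack(
        "<16s16sQQQ72s", buf[:128])
    return {
        "type_guid": str(uuid.UUID(bytes_le=type_guid)),
        "unique_guid": str(uuid.UUID(bytes_le=part_guid)),
        "first_lba": first,
        "last_lba": last,
        "attributes": attrs,
        "name": name.decode("utf-16-le").rstrip("\x00"),
    }

# Hypothetical usage against a raw image whose LBA 0 holds the protective MBR:
# with open("disk.img", "rb") as f:
#     f.seek(1 * SECTOR)                      # primary GPT header lives at LBA 1
#     hdr = parse_gpt_header(f.read(SECTOR))
#     f.seek(hdr["partition_entries_lba"] * SECTOR)
#     first_entry = parse_entry(f.read(hdr["entry_size"]))
```

A full validator would also recompute the CRC32 fields (zeroing the header CRC before hashing) and compare the primary header against the backup copy at the last LBA, which is how tools like gdisk detect and repair a corrupted primary table.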
