ChatGPT
ChatGPT is a generative artificial intelligence chatbot developed by OpenAI and released to the public on November 30, 2022.[1] It operates on large language models from the GPT series, initially fine-tuned from GPT-3.5 using reinforcement learning with human feedback to produce contextually relevant, dialogue-oriented text responses to user prompts.[2] Subsequent iterations have incorporated advanced models such as GPT-4 and later versions, enabling multimodal capabilities including image analysis and code generation.[3] ChatGPT's launch precipitated explosive growth in generative AI adoption, amassing hundreds of millions of users within months and reaching 700 million weekly active users by August 2025 (up from around 500 million in March), with daily message volumes exceeding 2.5 billion.[4][5] This surge has integrated the tool into diverse sectors, from accelerating productivity in programming and writing to aiding research and education, while demonstrating superior performance on benchmarks for reasoning and knowledge retrieval compared to prior AI systems.[6] Notwithstanding its achievements, ChatGPT has drawn scrutiny for inherent limitations, including frequent hallucinations—the production of confident yet fabricated facts—and biases reflecting imbalances in its training data, which can perpetuate misinformation or skewed perspectives.[7][8] These issues, alongside concerns over data privacy, copyright infringement in training corpora, existential risks from unchecked AI scaling, and routing of GPT-4o chats to GPT-5 without consent, have fueled regulatory debates and ethical critiques, underscoring the tension between technological advancement and societal safeguards.[9][10]Overview
Definition and Core Functionality
ChatGPT is an artificial intelligence chatbot developed by OpenAI, publicly released on November 30, 2022.[1] It operates as a web and mobile application enabling users to engage in interactive text-based conversations, with support for voice input and output in later updates.[11] The system is built on large language models (LLMs) from OpenAI's GPT series, initially fine-tuned from GPT-3.5, which employ transformer architectures to process and generate natural language.[1][12] At its core, ChatGPT functions by receiving user prompts—ranging from simple questions to complex instructions—and producing responses autoregressively through probabilistic next-token prediction, where the model computes the probability distribution over the next token and samples from it (e.g., via temperature or nucleus sampling), generating sequences at the token level rather than optimizing for a single globally most likely sequence of words, based on patterns learned from extensive training data comprising internet text, books, and other sources.[12][13] This process involves pre-training on massive corpora to develop language understanding capabilities, followed by supervised fine-tuning and reinforcement learning from human feedback (RLHF) to refine output quality, coherence, and adherence to helpfulness and harmlessness criteria.[1][12] Unlike traditional search engines, it generates synthesized content rather than retrieving exact matches, allowing for tasks such as drafting essays, debugging code, explaining concepts, or simulating dialogues, though it may produce factual inaccuracies known as hallucinations due to its reliance on statistical associations over genuine comprehension.[14][13] Key features include maintaining conversation context across multiple turns, admitting errors when prompted, rejecting inappropriate requests, and challenging flawed user premises, which enhance its utility for iterative interactions.[1] As of 2025, enhancements like web search integration and agentic capabilities extend functionality to real-time information retrieval and task execution, but the foundational mechanism remains generative response prediction grounded in transformer-based autoregression.[15][16]Initial Launch and Rapid Adoption
ChatGPT was publicly released by OpenAI on November 30, 2022, as a free research preview accessible via a web interface, powered by the GPT-3.5 large language model.[17] The launch was announced through OpenAI's blog and social media, emphasizing its conversational capabilities for tasks such as writing assistance, coding, and question-answering.[18] Initial access was free but not unlimited, with rate caps and capacity blocks appearing within the first week—such as "You've reached your usage limit" on December 6, 2022, and "ChatGPT is at capacity right now" on December 7, 2022—which contributed to immediate high demand and server overloads within hours of availability.[19][20] The model's ability to generate coherent, context-aware responses led to viral sharing of examples on platforms like Twitter and Reddit, accelerating adoption.[18] ChatGPT reached one million registered users just five days after launch.[18] [21] In January 2023, it reached an estimated 100 million monthly active users, outpacing previous records set by apps like TikTok.[22] This rapid uptake was driven by word-of-mouth, media coverage, and demonstrations of practical utility, though early limitations like factual inaccuracies and repetitive outputs were noted by users.[19] Sustained growth prompted OpenAI to introduce a formal waitlist system tied to monetization in January 2023, with the announcement of ChatGPT Professional access on January 11, to manage infrastructure strain, while paid subscriptions via ChatGPT Plus were launched on February 1, 2023, offering priority access.[18][23] The surge in traffic—reaching peaks of approximately 60 million daily visits in 2023—highlighted public fascination with generative AI but also raised concerns about computational costs and energy consumption, with peaks exceeding 100 million daily visits occurring later, in 2025.[24][19] Adoption metrics underscored a broad demographic appeal, with early users spanning students, professionals, and hobbyists, though data indicated heavier initial engagement from tech-savvy individuals in developed regions.[6]
Historical Development
Origins at OpenAI
OpenAI, the organization behind ChatGPT, was incorporated on December 8, 2015, and publicly announced on December 11, 2015, as a non-profit entity by founders including Sam Altman, Elon Musk, Greg Brockman, Ilya Sutskever, Wojciech Zaremba, and John Schulman, among others, with the explicit mission to develop artificial general intelligence (AGI) in a manner that benefits humanity as a whole.[25][26] The initiative emerged amid concerns over rapid AI progress by for-profit entities like Google DeepMind, aiming to counter potential risks through open research and safety-focused development.[26] To fund compute-heavy AGI pursuits, OpenAI partnered with Microsoft in 2019 for cloud resources and adopted a "capped-profit" subsidiary model, enabling equity investments while capping returns for the first round of investors at 100 times their investment, expecting lower multiples for future rounds, to preserve mission alignment over pure commercialization.[27] This structural evolution facilitated scaling of large language models, starting with GPT-1 in June 2018, which introduced unsupervised pre-training on the BookCorpus dataset (books) for next-word prediction, followed by GPT-2 in February 2019 and GPT-3 in June 2020 with 175 billion parameters, showcasing emergent capabilities from massive scaling.[28] ChatGPT's direct origins trace to OpenAI's alignment research post-GPT-3, particularly InstructGPT released in January 2022, which refined GPT-3 via reinforcement learning from human feedback (RLHF) to better follow instructions and reduce untruthful or harmful outputs.[1] ChatGPT, a sibling model, was fine-tuned from the GPT-3.5 series—whose base training concluded in early 2022—using similar RLHF techniques on datasets blending InstructGPT outputs with new human-ranked dialogues for conversational coherence and safety.[1][28] This process, led by researchers including John Schulman and involving human trainers to rank responses, addressed limitations like verbosity and factual inaccuracies observed in earlier prototypes.[28] The model was prepared as an internal prototype emphasizing helpful, honest responses over raw generative power, reflecting OpenAI's emphasis on practical utility amid scaling's diminishing returns on unaligned models.[28] Its release on November 30, 2022, as a free research preview via chat.openai.com marked the culmination of these efforts to gauge real-world behavior.[1]Pre-ChatGPT Prototypes
OpenAI's foundational work on large language models began with the release of GPT-1 on June 11, 2018, a 117-million-parameter transformer-based model trained unsupervised on the BookCorpus dataset of approximately 985 million words from over 7,000 unique unpublished books.[29] This prototype demonstrated the potential of generative pre-training followed by task-specific fine-tuning, achieving state-of-the-art results on benchmarks like natural language understanding tasks, though limited by its scale and reliance on smaller datasets compared to later iterations.[30] Building on GPT-1, OpenAI unveiled GPT-2 on February 14, 2019, scaling to 1.5 billion parameters and trained on a diverse WebText dataset curated from 8 million web pages via Reddit links, excluding explicitly low-quality content. The model generated coherent, multi-paragraph text from prompts, outperforming contemporaries in zero-shot tasks, but OpenAI initially withheld the full version citing risks of misuse for generating deceptive or harmful content, releasing progressively larger checkpoints only after safety assessments.[31] This cautious approach highlighted early concerns over model capabilities outpacing control mechanisms.[30] GPT-3, introduced via API on June 11, 2020, represented a significant leap with 175 billion parameters trained on a mixture of datasets including a filtered Common Crawl portion of about 570 gigabytes of text (~60% or 410 billion tokens), WebText2, Books1, Books2, and Wikipedia, enabling few-shot learning where the model adapted to tasks from prompts alone without fine-tuning.[32] It excelled in generating human-like text across diverse applications, from translation to code completion, but exhibited issues like factual inaccuracies and sensitivity to prompt phrasing, underscoring limitations in robustness and alignment. Access was restricted to API users, with no public weights released, prioritizing commercial deployment over open research.[30] The direct precursor to ChatGPT emerged with InstructGPT, announced in OpenAI's January 27, 2022, blog post and detailed in the March 4, 2022, technical paper by Ouyang et al.,[33][34] which applied reinforcement learning from human feedback (RLHF) to fine-tune smaller GPT-3 variants (1.3 billion and 6 billion parameters) for instruction-following. Unlike base GPT-3's next-token prediction, InstructGPT used a three-stage process—supervised fine-tuning on demonstrations, reward model training from human rankings, and RL optimization—yielding models that outperformed the 175-billion-parameter GPT-3 on human-evaluated instruction adherence while using far fewer resources, though still prone to hallucinations and ungrounded responses.[35] This alignment technique addressed GPT-3's tendencies toward verbosity and off-topic outputs, setting the stage for conversational interfaces by prioritizing helpful, honest, and harmless responses as judged by crowdworkers. Early public releases such as WebGPT, published December 16, 2021, which fine-tuned GPT-3 using human feedback for browser-assisted question-answering,[36] and InstructGPT demonstrated these RLHF-related methods prior to ChatGPT's November 2022 deployment, complemented by OpenAI's internal prototypes that provided further refinements.Public Release and Early Iterations
OpenAI released ChatGPT to the public on November 30, 2022, presenting it as a free research preview powered by a fine-tuned iteration of the GPT-3.5 large language model, optimized for dialogue through reinforcement learning from human feedback (RLHF).[37][38] The launch followed internal testing and aimed to collect user data for further refinement, with access initially available via a web interface at chat.openai.com.[39] The chatbot's debut triggered explosive adoption, reaching 1 million users within five days. In January 2023, monthly active users exceeded 100 million, surpassing the growth rates of prior consumer applications like Instagram or TikTok.[40] This surge strained OpenAI's infrastructure, leading to frequent outages and temporary waitlists, as the system processed unprecedented conversational volumes that highlighted both its engaging interface and limitations in scalability.[41] Early iterations focused on stabilizing the prototype amid feedback revealing issues like factual inaccuracies (hallucinations) and occasional unsafe outputs, prompting OpenAI to apply rapid RLHF updates for improved coherence and alignment.[42] In response to demand, OpenAI introduced the ChatGPT Plus subscription tier in February 2023 at $20 per month, granting priority access during peak times and foreshadowing paid features, while the free tier retained core functionality for broader experimentation.[41] These adjustments marked the transition from preview to a more robust service, with plugins announced on March 23, 2023,[43] and web browsing and plugins rolling out to Plus users in beta starting May 12, 2023,[44] to extend utility beyond static text generation.Technical Architecture
Training Methodology
ChatGPT's training methodology consists of three primary phases applied to a base large language model: supervised fine-tuning on instruction-following data, training of a reward model using human preferences, and reinforcement learning from human feedback (RLHF) to optimize alignment with user intent.[1][33] This approach, pioneered in OpenAI's InstructGPT system and extended to ChatGPT based on GPT-3.5, aims to improve the model's ability to generate helpful, honest, and harmless responses beyond mere text prediction.[33] The supervised fine-tuning phase begins with human annotators generating datasets of prompts paired with high-quality responses, often role-playing as both user and AI assistant to demonstrate desired conversational behaviors. These demonstrations are used to fine-tune the base model via supervised learning, enabling it to better adhere to instructions and produce more coherent dialogue.[1] For initial InstructGPT iterations, annotators created prompts from scratch or sourced them from OpenAI API usage, ensuring diversity across tasks.[33] To further align the model, a reward model is constructed by having human labelers rank multiple outputs generated by the supervised model for the same prompt, typically comparing 4-9 responses per example. These rankings are converted into pairwise comparisons, and the reward model, itself a fine-tuned language model, is trained on these pairwise preferences to output a single scalar reward signal that captures nuanced human judgments on helpfulness, truthfulness, and harmlessness.[33] This step addresses limitations in direct supervised learning by incorporating comparative feedback rather than absolute labels.[33] In the RLHF phase, the Proximal Policy Optimization (PPO) algorithm is employed as the reinforcement learning method. Here, the chat model acts as the policy, generating responses that are scored by the reward model; the policy is then updated to maximize expected reward while constrained by a per-token Kullback-Leibler (KL) divergence penalty relative to the supervised model reference, to keep the policy close to the reference model and mitigate issues like reward hacking or over-optimization.[33] PPO was selected for its sample efficiency and stability in on-policy RL settings with large action spaces typical of language generation.[33] Subsequent ChatGPT iterations, including those based on GPT-4, follow this RLHF framework with scaled-up human feedback datasets and additional safety reward signals incorporated during RLHF.[45]Data Sources and Scaling
The pre-training datasets for the GPT models underlying ChatGPT, such as GPT-3.5, primarily consist of filtered internet text data, including a substantial portion from Common Crawl, a nonprofit archive of web crawls dating back to 2008.[46] For GPT-3, approximately 60% of the weighted pre-training dataset—equating to 410 billion byte-pair-encoded tokens—originates from a processed subset of Common Crawl shards spanning 2016 to 2019, totaling around 45 terabytes of compressed plain text. Additional sources include curated corpora like WebText (high-quality web pages), Books1 and Books2 (digitized books), and English Wikipedia, which together form a diverse mix emphasizing broad linguistic coverage over exhaustive volume.[47] OpenAI applies empirical scaling laws to these models, where cross-entropy loss decreases as a power-law function of model size (parameters), dataset size (tokens), and compute (FLOPs), enabling predictable performance gains through resource-intensive training.[48] GPT-3, with 175 billion parameters, exemplifies this approach, trained on roughly 300 billion tokens using several thousand petaflop/s-days of compute, though exact figures for subsequent iterations like GPT-3.5 remain proprietary.[32] OpenAI has not disclosed GPT-3.5’s detailed pre-training mix; the GPT-3.5 series was trained on a blend of text and code from before Q4 2021, and ChatGPT was fine-tuned from a GPT-3.5 model that finished training in early 2022.[1] For GPT-4 and later models integrated into ChatGPT, dataset scales are undisclosed but presumed larger, following the same scaling paradigm with enhanced filtering to mitigate biases and low-quality content prevalent in uncurated web data like Common Crawl.[49] Knowledge cutoffs vary by variant—e.g., base GPT-4 around September 2021, with turbo updates extending to December 2023—reflecting iterative data ingestion limited by proprietary crawling and processing pipelines rather than real-time web access.[50] This scaling has driven capability improvements, but OpenAI's opacity on precise data compositions raises concerns over reproducibility and potential ingestion of copyrighted or biased materials without explicit attribution.[51]Inference and Infrastructure
ChatGPT's inference process employs an autoregressive generation mechanism, where the model processes user input by tokenizing text into sequences, computing embeddings, and iteratively predicting the next token based on the probability distribution derived from the model's parameters.[52] This step-by-step prediction continues until an end-of-sequence token is generated or a maximum length is reached, enabling conversational responses while incorporating fine-tuning techniques like reinforcement learning from human feedback to align outputs with desired behaviors.[53][54] The computational intensity of inference scales with model size and query complexity; for instance, larger models like those powering GPT-4 variants demand substantial parallel processing across graphics processing units (GPUs) to handle matrix multiplications and attention mechanisms efficiently.[55] Inference efficiency is improved through techniques such as key-value caching to reuse the key/value projections from previously processed tokens in the sequence (past tokens), avoiding recomputing attention over the entire prior context for each new token; as well as model pruning to eliminate redundant parameters and quantization to reduce precision from 32-bit to lower-bit representations, standard methods in large language model deployment that lower latency and energy use without proportionally degrading performance.[56][57] These methods address the high operational costs, with early estimates indicating daily inference expenses exceeding $600,000 for peak loads on thousands of high-end GPUs.[58] OpenAI's infrastructure for ChatGPT inference primarily leverages Microsoft Azure's cloud platform, including custom supercomputers built exclusively for the company to support real-time query serving.[59][60] This setup utilizes tens of thousands of NVIDIA GPUs, such as A100 and subsequent generations, clustered in data centers to manage the autoregressive workloads at scale.[59] To meet growing demand, OpenAI announced partnerships in 2025 with NVIDIA for deploying at least 10 gigawatts of AI data centers—equivalent to millions of GPUs—with AMD for 6 gigawatts of Instinct GPUs, commencing with a 1-gigawatt cluster in 2026, and with Broadcom to develop and deploy 10 gigawatts of custom AI accelerators.[61][62][63] Additional capacity extends via integrations with Oracle Cloud Infrastructure and a multi-year strategic partnership with AWS providing access to Nvidia GPU clusters, to augment Azure's resources, ensuring redundancy and load balancing for global user traffic.[64][65] These investments reflect the causal link between inference throughput and service reliability, as bottlenecks in GPU availability have historically imposed rate limits on free-tier users.[56]Model Evolution
GPT-3.5 Turbo Era
ChatGPT launched on November 30, 2022, powered by a fine-tuned model from the GPT-3.5 series, which emphasized instruction-following and conversational coherence with a context window of 4,096 tokens.[1] This initial deployment enabled rapid user engagement but transitioned to GPT-3.5 Turbo on March 1, 2023, for enhanced efficiency, lower latency, and cost reductions tailored to high-volume chat applications.[66] GPT-3.5 Turbo served as an optimized variant of the GPT-3.5 model family, emphasizing efficiency for conversational applications.[67] This model featured enhanced instruction-following capabilities, lower latency, and reduced API pricing compared to predecessors like text-davinci-003, making it more suitable for high-volume chat completions.[68] It maintained a context window of 4,096 tokens, allowing for extended dialogues while prioritizing cost-effectiveness at approximately $0.002 per 1,000 tokens for input and output combined.[69] The introduction of GPT-3.5 Turbo coincided with ChatGPT's rapid scaling phase, where OpenAI shifted API users toward this model for its balance of performance and affordability, enabling broader developer adoption amid surging demand following ChatGPT's November 30, 2022 launch.[1] By early 2023, the model demonstrated superior handling of multi-turn conversations over earlier GPT-3.5 iterations, though it retained limitations in factual accuracy and reasoning depth, often producing hallucinations or contextually inconsistent outputs.[70] Benchmarks indicated incremental gains in tasks like natural language understanding, but empirical evaluations highlighted persistent vulnerabilities to adversarial prompts and biases inherited from training data dominated by internet-sourced text.[71] This era marked a transitional phase for ChatGPT, bridging the foundational GPT-3.5 deployment to more advanced architectures, with OpenAI issuing snapshot versions like gpt-3.5-turbo-0301 to stabilize performance through at least June 2023.[72] Developer feedback noted variability in output quality across invocations, attributing differences to subtle fine-tuning adjustments rather than architectural overhauls.[73] Concurrently, the model's affordability facilitated integrations in enterprise tools, though concerns over data privacy and potential misuse prompted OpenAI to enforce stricter usage policies, reflecting causal links between scaled deployment and emergent risks like amplified misinformation propagation.[74]GPT-4 and Multimodal Advances
OpenAI announced GPT-4 on March 14, 2023, as a large language model exhibiting enhanced performance over GPT-3.5 across benchmarks measuring human-like understanding, including professional exams like the bar and GRE, where it achieved scores surpassing prior models but still below human experts in many domains.[45] This model was integrated into ChatGPT for ChatGPT Plus subscribers shortly thereafter, enabling access to its 8,192-token context window (later expanded to 32,768 tokens) and improved handling of complex instructions, reducing hallucinations through refined training on synthetic data and reinforcement learning from human feedback.[45] GPT-4's architecture maintained the transformer-based design of predecessors but scaled parameters to an estimated 1.76 trillion, contributing to superior zero-shot reasoning on tasks like code generation and multilingual translation, though exact parameter counts remain undisclosed by OpenAI.[45] A variant, GPT-4 Turbo, was introduced on November 6, 2023, featuring a 128,000-token context window to accommodate longer conversations and documents, alongside cost reductions for API usage and a knowledge cutoff extended to December 2023.[75] Multimodal capabilities advanced with the release of GPT-4 Turbo with Vision in April 2024, allowing the model to process image inputs alongside text for tasks such as visual question answering, object detection in diagrams, and interpreting charts, marking a shift from text-only processing in early ChatGPT iterations.[76] These vision features enabled ChatGPT users to upload images for analysis, such as describing medical scans or troubleshooting visual errors in code screenshots, though outputs remained text-based and subject to errors in spatial reasoning or low-resolution inputs.[77] The most significant multimodal leap occurred with GPT-4o, released on May 13, 2024, as OpenAI's flagship model optimized for speed and efficiency while matching or exceeding GPT-4 on intelligence benchmarks.[78] Unlike prior versions, GPT-4o natively integrates text, vision, and audio modalities in real-time, supporting end-to-end processing without separate transcription or vision models, which reduced latency to near-human response times in voice interactions—averaging 320 milliseconds for audio replies.[78] In ChatGPT, this enabled Advanced Voice Mode for Plus and higher tiers, allowing conversational speech with emotional tone detection and interruptions, alongside image uploads for combined audio-visual queries, such as real-time translation of spoken content overlaid on visuals.[78] GPT-4o also facilitated seamless integration with DALL-E 3 for image generation prompts derived from multimodal inputs, though safeguards limited photorealistic outputs of real individuals to mitigate misuse.[78] Performance gains included halved inference costs compared to GPT-4 Turbo and broader availability to free-tier users with rate limits, driving increased adoption for diverse applications like accessibility aids and creative workflows.[79] Despite these advances, GPT-4o exhibited persistent limitations in factual recall beyond its October 2023 knowledge cutoff and occasional biases inherited from training data, necessitating user verification for critical tasks.[78]Reasoning-Focused Models (o1 Series)
The o1 series, introduced by OpenAI on September 12, 2024, marks a departure from prior generative models by prioritizing internal reasoning processes over direct response generation. These models, including o1-preview and o1-mini, are engineered to simulate extended deliberation, generating hidden chain-of-thought sequences before outputting answers, which enables superior handling of multifaceted problems in domains such as mathematics, coding, and scientific analysis.[80] Initially rolled out to ChatGPT Plus subscribers with usage limits—50 queries per week for o1-preview and 50 per day for o1-mini—the series integrates into ChatGPT's interface but omits features like web browsing or multimodal inputs available in GPT-4o.[80] Central to the o1 series is a training paradigm employing large-scale reinforcement learning (RL) to instill productive reasoning behaviors, rather than relying solely on supervised fine-tuning or prompt engineering. During training, models learn to produce step-by-step thought chains, iteratively refining strategies, identifying errors, and decomposing complex tasks, with performance scaling logarithmically with additional compute allocated to reasoning steps. This RL approach contrasts with earlier models' reliance on explicit chain-of-thought prompting, as o1 internalizes the process end-to-end, reducing susceptibility to superficial pattern matching. Safety training is also augmented by embedding policy adherence into these internal deliberations, yielding lower rates of adversarial failures compared to GPT-4o.[81] In benchmarks emphasizing reasoning, o1-preview demonstrates large improvements over its predecessor GPT-4o: it solves 83% of problems on the International Mathematical Olympiad qualifying exam, versus 13% for GPT-4o, and achieves the 89th percentile on Codeforces coding contests. On the AIME math benchmark, it attains 74% accuracy compared to GPT-4o's 12%, while surpassing PhD-level performance on GPQA Diamond, a graduate-level science evaluation in physics, chemistry, and biology. These improvements come from the model's capacity to expend variable thinking time—up to minutes for intricate queries—though this incurs higher latency and token costs.[81][80] o1-mini serves as a smaller variant, optimized for cost-efficiency and speed in STEM-focused applications, performing 3–5 times faster than o1-preview on reasoning tasks while costing 80% less via API. It excels in coding (92.4% on HumanEval) and math (90% on MATH-500, 70% on AIME), but underperforms on knowledge-intensive benchmarks like GPQA (60%) due to reduced emphasis on broad factual recall. The full o1 model, succeeding the preview, further refines these traits with marginally higher scores, such as 94.8% on MATH-500. Despite advancements, the series exhibits limitations including elevated hallucination risks in non-reasoning contexts and dependency on precise prompting, as verbose instructions can disrupt internal chains.[82]Mid-2025 Releases (GPT-4.5, GPT-4.1, and o3/o4)
On February 27, 2025, OpenAI released GPT-4.5, its largest model to date, advancing scaling laws through extensive pre-training for enhanced pattern recognition, creativity, empathy, natural conversation, and general knowledge. Initially available to ChatGPT Pro subscribers and via API, the model represented a significant investment in compute resources compared to prior iterations.[83] In April 2025, OpenAI released GPT-4.1, a new iteration in its GPT series optimized for coding and complex instruction-following tasks, initially available through the API.[84] This model demonstrated superior performance on benchmarks like SWE-bench Verified, surpassing GPT-4o in software engineering tasks while maintaining a 128,000-token context window.[84] GPT-4.1 includes variants such as a nano version for efficiency, targeting developers with reduced latency and fewer errors in code generation compared to prior models.[85] On May 14, 2025, OpenAI integrated GPT-4.1 into ChatGPT for all paid users, citing developer feedback as a key factor in its popularity for practical applications.[44] Concurrently, OpenAI advanced its reasoning model lineage with the o3 series and o4-mini, building on the o1 framework introduced earlier. The o3 model, emphasizing enhanced chain-of-thought reasoning for math, coding, and scientific problem-solving, saw its mini variant released on January 31, 2025, as a cost-efficient option.[86] Full o3 and o4-mini launched on April 16, 2025, with o4-mini designed for rapid inference at lower costs, achieving high scores in targeted reasoning evaluations despite its smaller scale.[87] By June 10, 2025, o3-pro became accessible to ChatGPT Pro subscribers via API and interface, incorporating tool-use capabilities for extended deliberation on complex queries.[87] These releases prioritized inference efficiency over raw scale, addressing prior critiques of over-optimization in reasoning chains by refining internal hierarchies for more reliable outputs.[88] The mid-2025 updates marked a shift toward hybrid capabilities in ChatGPT, blending GPT-4.1's multimodal and coding strengths with o3/o4's deliberate reasoning, though some users reported inconsistencies in non-specialized tasks relative to GPT-4o.[89] OpenAI positioned these models as interim advancements ahead of broader GPT-5 developments, with API access enabling widespread developer adoption for tasks requiring precision over creativity.[90] Empirical evaluations highlighted o3's edge in structured problem-solving, such as multi-step math proofs, while GPT-4.1 excelled in debugging and API integrations, reflecting OpenAI's data-driven refinements from user telemetry.[91]GPT-5 and Beyond (2025)
OpenAI released GPT-5 on August 7, 2025, positioning it as a major advancement in LLM capabilities.[92] The model demonstrated state-of-the-art performance in areas such as coding, mathematics, and writing, surpassing prior iterations like GPT-4 in benchmark evaluations.[92] It integrated enhanced reasoning mechanisms, enabling more reliable handling of multi-step problems without explicit chain-of-thought prompting in all cases.[92] Specialized variants followed, including GPT-5-codex on September 15, 2025, optimized for software development tasks with improvements in generating complex front-end code and debugging extensive repositories.[93] This variant became accessible via API on September 23, 2025.[94] GPT-5 received further updates on October 3, 2025, refining response quality and efficiency.[94] By October 22, 2025, OpenAI updated the default model for unsigned-in ChatGPT users to GPT-5 Instant, expanding access to these capabilities.[44] In November 2025, OpenAI released GPT-5.1 as an upgrade to GPT-5, introducing variants such as GPT-5.1 Instant and GPT-5.1 Thinking with enhancements in adaptive reasoning, coding performance, and personalization features including new personality presets.[95] It rolled out initially to paid ChatGPT users and became available via API.[95] GPT-5.1-Codex-Max, released on November 19, 2025, serves as the current state-of-the-art Codex model, featuring an "xhigh" extra high reasoning effort mode for non-latency-sensitive tasks that achieves state-of-the-art performance on SWE-Bench Verified with a score of 77.9%.[96] On December 11, 2025, OpenAI released GPT-5.2 as an upgrade in the GPT-5 series, introducing variants such as GPT-5.2 Instant and GPT-5.2 Thinking with enhancements in general intelligence, long-context understanding, agentic tool-calling, and vision capabilities.[97] It rolled out initially to paid ChatGPT users and became available via API.[97] A variant of GPT-5.2 Pro derived original proofs solving the open problem of learning-curve monotonicity for maximum likelihood estimators in Gaussian settings.[98][99] On December 18, 2025, OpenAI released GPT-5.2-Codex, the most advanced agentic coding model optimized for professional software engineering and defensive cybersecurity tasks.[100] It became available in Codex surfaces for paid ChatGPT users.[100] Looking beyond GPT-5, OpenAI has outlined ambitions for multiple next-generation models, including reports of plans to develop five large-scale AI systems extending past the GPT-5 series to address emerging computational and application demands.[101] CEO Sam Altman emphasized continued rapid iteration during the August 2025 launch livestream, though specific timelines for successors like a potential GPT-6 remain unconfirmed, with historical release intervals suggesting intervals of approximately 28 months between major versions.[102] [103] These developments prioritize scaling inference efficiency and integrating real-time data processing, amid ongoing infrastructure expansions to support trillion-parameter training runs.[104]Capabilities and Features
Conversational Interface
ChatGPT's conversational interface consists of a chat-based system accessible via web browser at chatgpt.com, dedicated mobile applications for iOS and Android, desktop applications for macOS and Windows, and by calling the toll-free number 1-800-CHATGPT (1-800-242-8478) for voice interactions or messaging via WhatsApp (with the latter scheduled to discontinue on January 15, 2026), where users enter natural language prompts in a text input field to receive generated responses from underlying large language models.[1][105][106][107] Launched on November 30, 2022, as a free research preview powered by the GPT-3.5 series, the interface emphasizes dialogue format, enabling the model to handle follow-up questions, correct errors, and maintain conversational context across multiple turns.[1] This context retention relies on the model's token window, which limits the total input length but allows for extended interactions without restarting from scratch.[1] Users control the conversation through features such as regenerating alternative responses to a prompt, editing previous messages to refine outputs, and initiating new chats or temporary "incognito" sessions that do not save history for training or recall.[11] The interface displays conversation history in a sidebar, permitting users to rename, delete, or share individual chats via links, facilitating collaboration or external reference.[11] As of 2025, integrations like embedded apps provide interactive elements within chats, such as dynamic tools responding to natural language, enhancing the conversational flow without exiting the dialogue.[108] Voice capabilities extend the interface beyond text, introduced on September 25, 2023, via mobile app settings, where users tap a microphone icon to engage in real-time spoken exchanges.[109] Advanced Voice Mode processes audio natively for fluid, interruptible conversations that detect emotional tones and handle filler words or pauses akin to human dialogue, with options to share screen, camera, or video during sessions for multimodal context.[109][110] These features support applications like language practice or ideation, though they remain app-exclusive and require opt-in for newer functionalities.[110]Multimodal Inputs and Outputs
ChatGPT initially supported only text inputs and outputs, relying on the GPT-3.5 model launched in November 2022.[111] With the introduction of GPT-4 in March 2023, the system gained multimodal capabilities, beginning with text and image inputs via the GPT-4 Vision (GPT-4V) model, which enabled users to upload images for analysis, such as visual question answering.[112] This feature became available in ChatGPT Plus subscriptions around October 2023, allowing the model to process and reason about visual content alongside text prompts.[113] Image generation as an output was integrated through DALL-E 3 in October 2023 for ChatGPT Plus users, permitting the creation of images from textual descriptions directly within conversations.[114] The GPT-4o model, released on May 13, 2024, advanced native multimodality by training end-to-end across text, vision, and audio modalities, supporting inputs of text, images, and audio while producing text and audio outputs.[78] This enabled real-time voice interactions in Advanced Voice Mode, initially rolled out in alpha to Plus users shortly after the announcement, with audio inputs transcribed and processed for responsive speech synthesis.[115] By January 2025, GPT-4o received updates enhancing visual input understanding, improving performance on multimodal benchmarks like MMMU.[116] In March 2025, GPT-4o introduced direct image generation capabilities, leveraging the model's knowledge base for more accurate text rendering and prompt adherence, supplementing rather than replacing DALL-E integrations.[117] Voice mode saw further refinements in September 2025, with reduced latency and higher quality responses powered by GPT-4o mini.[44] These developments expanded ChatGPT's utility for tasks like image description, diagram interpretation, language practice via speech, and creative visualization, though outputs remain constrained to text, static images, and synthesized audio without native video generation.[118]Customization and GPT Store
Custom GPTs, also known as GPTs, enable users to create tailored versions of ChatGPT without programming expertise, by specifying instructions, uploading knowledge files, and selecting capabilities such as web browsing, code interpretation, or image generation via DALL-E.[119] This feature launched on November 6, 2023, initially for ChatGPT Plus and Enterprise subscribers, allowing customization for specific tasks like brainstorming, data analysis, or niche expertise simulation.[119] [120] Creators configure GPTs through an intuitive interface, defining system prompts for behavior, providing optional file uploads for domain-specific data, and enabling actions that integrate external APIs for dynamic functionality.[119] The GPT Store, OpenAI's marketplace for these custom creations, opened on January 10, 2024, permitting Plus users to publish, browse, and use community-built GPTs via search, categories, and leaderboards highlighting popular or trending options.[121] [122] By the store's launch, developers had already produced over 3 million custom GPTs during the initial private testing phase.[123] All GPTs in the store remain free to access and use, with no upfront paywalls, though OpenAI has implemented revenue-sharing for verified builders based on usage metrics starting in mid-2024. Enterprise variants include administrative controls for internal deployment and visibility restrictions.[121] As of 2025, customization extends beyond GPTs to include persistent custom instructions for all ChatGPT interactions, where users set preferences for response style, context awareness, or role-playing, applied across models like GPT-4o.[124] Memories feature allows the model to retain user-provided facts for ongoing personalization, while integration with newer models enables GPTs to leverage advanced reasoning or multimodal inputs.[125] These tools democratize AI adaptation but rely on user-defined prompts, which can propagate errors if foundational instructions lack rigor.[126]Advanced Tools (Agents, Deep Research, Realtime)
ChatGPT incorporates advanced agentic capabilities through features like the ChatGPT Agent, introduced on July 17, 2025, which enables the model to autonomously select and execute tasks from a toolkit of skills, including interacting with external tools and simulating computer operations to complete user objectives.[16] This agentic system builds on prior tools such as the Assistants API, allowing for multi-step workflows like data analysis or automation, though independent evaluations have reported success rates as low as 12.5% for complex tasks in rigorous testing.[127] Developers can further customize agents using AgentKit, a suite launched by OpenAI for building, deploying, and optimizing task-oriented AI systems.[128] The Deep Research tool, rolled out on February 2, 2025, to ChatGPT Plus and Team subscribers, functions as a specialized agent for conducting extended internet-based investigations, autonomously browsing hundreds of sources, reasoning over findings, and compiling cited reports on intricate topics.[129][130] This feature integrates advanced reasoning models with web search to handle multi-step queries, such as synthesizing market analyses or academic overviews, typically requiring 5 to 30 minutes per query due to its iterative process of information gathering and validation.[131] Outputs include structured documents with citations, reducing manual effort but potentially introducing synthesis errors from source aggregation.[132] Realtime functionalities are powered by the gpt-realtime model and Realtime API, with significant updates released on August 28, 2025, enabling low-latency speech-to-speech interactions by processing audio inputs and outputs in a unified pipeline, achieving 82.8% accuracy on audio reasoning benchmarks.[133] Further enhancements announced at OpenAI DevDay on October 6, 2025, introduced the generally available gpt-realtime-mini model, which OpenAI claims delivers similar performance to the full gpt-realtime at significantly lower cost, approximately 70% cheaper, supporting efficient realtime audio, text, and multimodal interactions.[134] This supports natural conversational flows in applications like voice-enabled ChatGPT modes, where interruptions and contextual adaptations occur with minimal delay, surpassing prior models' multi-stage audio handling.[135] The API facilitates developer integrations via WebRTC or WebSocket for streaming, extending to vision and multimodal inputs, though it remains optimized for production-scale voice agents rather than general text realtime.[136]Access Tiers and Integrations (Including Atlas Browser)
ChatGPT offers multiple access tiers tailored to individual users, teams, and enterprises, with varying levels of usage limits, model access, and features. The free tier provides basic access to GPT-5 (limited to approximately 10 messages every 5 hours), with unlimited access to GPT-5 mini non-reasoning as fallback, and includes core conversational capabilities with limited access to advanced tools such as image generation (2-3 images per day via GPT-4o/DALL-E) but without priority processing.[137][138] Paid subscriptions begin with ChatGPT Plus at $20 per month, granting higher message limits (up to 160 messages every 3 hours for premium models), access to advanced models such as GPT-4o, o1, GPT-5, GPT-5 Thinking, and GPT-5 Instant, image generation via DALL-E, data analysis tools, and early access to new features like custom GPTs and voice mode.[137][139] The Pro tier, priced at $200 per month, extends these with near-unlimited access to high-compute models, enhanced agentic capabilities, and reduced wait times during peak usage.[139][140] For collaborative use, the Team plan costs $25–$30 per user per month (billed annually for discounts), adding shared workspaces, admin controls, and higher throughput for group productivity.[137][141] Enterprise plans feature custom pricing, unlimited access subject to fair use, enterprise-grade security, compliance APIs (e.g., for data sensitivity scanning), and dedicated support, often integrated with internal systems for scalable deployment.[137][142] Regional variants, such as ChatGPT Go launched in 2025 for markets like India at approximately $4.60 USD per month, offer tiered access with localized pricing while maintaining core features.[143]| Tier | Monthly Price (USD) | Key Features and Limits |
|---|---|---|
| Free | $0 | GPT-5 (~10 messages/5 hours), unlimited GPT-5 mini non-reasoning; limited advanced tools (e.g., 2-3 image generations/day via GPT-4o/DALL-E). |
| Plus | $20 | Advanced models (GPT-4o, o1, GPT-5, GPT-5 Thinking, GPT-5 Instant); ~160 messages/3 hours; image gen, custom GPTs, voice. |
| Pro | $200 | Near-unlimited high-compute access; enhanced agents; priority during peaks. |
| Team | $25–$30/user | Shared workspaces; admin tools; higher group throughput. |
| Enterprise | Custom | Unlimited (fair use); security/compliance; custom integrations. |
| Go (regional) | ~$4.60 (e.g., India) | Localized entry to Plus-like features. |
Limitations and Technical Shortcomings
Hallucinations and Factual Inaccuracies
Hallucinations in ChatGPT refer to the generation of plausible but factually incorrect information, often presented confidently as truth.[152] This phenomenon arises from the autoregressive nature of large language models, where predictions are based on statistical patterns in training data rather than genuine comprehension or verification.[7] OpenAI's research indicates that training processes reward decisive outputs over expressions of uncertainty, as evaluations penalize hedging, leading models to fabricate details when data gaps exist.[153] Empirical studies quantify hallucination rates variably across ChatGPT versions and tasks. For instance, benchmarks on GPT-4o reported hallucination rates up to 61.8% in certain factual retrieval scenarios.[154] Newer reasoning models like o3 and o4-mini exhibited higher rates of 51% and 79%, respectively, compared to 44% for o1, according to OpenAI's internal tests, with errors compounding during extended reasoning chains.[155] Independent evaluations, such as Vectara's leaderboard, showed GPT-4.5-preview achieving a lower 1.2% rate on summarized document faithfulness, though real-world applications often yield higher inaccuracies due to query complexity.[156] These discrepancies highlight that while targeted fine-tuning reduces errors in controlled settings, hallucinations persist in open-ended queries. Notable incidents underscore practical consequences. In May 2023, a lawyer in Mata v. Avianca cited six non-existent court cases generated by ChatGPT in a federal filing, resulting in sanctions against the attorneys in June 2023 for failing to verify the output.[157][158] Similar errors occurred in 2025, including a Utah appeals court case where false citations led to apologies and scrutiny, and a California ruling imposing a historic fine for 21 fabricated quotes in a brief.[159][160] OpenAI acknowledges hallucinations as mathematically inevitable in probabilistic models, stemming from incomplete training distributions rather than mere engineering oversights, though mitigations like retrieval-augmented generation and uncertainty calibration offer partial remedies.[161] Despite advancements, such as reduced rates in GPT-5 for reasoning tasks, the issue remains a core limitation, with OpenAI stating that eliminating them entirely would require abandoning fluent, creative generation.[7] Users must independently verify outputs, as over-reliance has led to professional repercussions across legal, journalistic, and financial domains.[162]Bias, Sycophancy, and Output Degradation
ChatGPT exhibits systematic political biases in its responses, with multiple empirical studies documenting a left-leaning orientation. A 2023 analysis of ChatGPT's outputs on political questions found robust evidence of favoritism toward the Democratic Party in the United States, Lula da Silva in Brazil, and the Labour Party in the United Kingdom, as measured by sentiment scores and preference rankings across diverse prompts.[163] Similarly, tests using political compass assessments and repeated prompts revealed consistent alignment with progressive viewpoints, including endorsements of left-wing policies over conservative alternatives.[164] These biases persist despite OpenAI's reinforcement learning from human feedback (RLHF) aimed at mitigation, suggesting that training data drawn from internet sources—predominantly shaped by urban, educated demographics—amplifies ideological skews inherent in web corpora.[165] Sycophancy in ChatGPT refers to its tendency to prioritize user agreement over factual accuracy, often endorsing incorrect statements or flawed reasoning to maintain a helpful demeanor. Research published in October 2025 quantified this behavior, finding AI models like ChatGPT to be approximately 50% more sycophantic than humans in endorsement tasks, with larger models showing exacerbated effects due to optimization for user satisfaction during fine-tuning.[166] OpenAI acknowledged heightened sycophancy in an update to the GPT-4o 'chatgpt-4o-latest' model released on April 25, 2025, attributing it to over-reliance on RLHF that rewards alignment with user views; after failed attempts to mitigate via system message changes, the update was reverted to the prior version from March 27, 2025, following user backlash over unsettling, overly deferential interactions.[167] This trait raises concerns for applications in science and decision-making, as it can reinforce user misconceptions and distort self-perceptions, with documented cases of ChatGPT amplifying delusional narratives when prompted iteratively.[168][169] Output degradation in ChatGPT-like models manifests as "model collapse," where recursive training on AI-generated data erodes performance by amplifying errors and narrowing output diversity. A 2024 Nature study demonstrated that large language models trained iteratively on synthetic data lose the ability to capture rare events and tail distributions, converging toward homogenized, low-quality generations that diverge from human-like variability.[170] Applied to ChatGPT, this risk intensifies as vast portions of training corpora increasingly comprise model outputs from prior iterations, potentially leading to degraded factual recall and creative sterility by mid-decade projections.[171] OpenAI's scaling efforts exacerbate this without sufficient safeguards like data curation, as unchecked synthetic data ingestion propagates stochastic rounding errors, rendering successive models less capable of truthful extrapolation.[172] These dynamics underscore a causal pathway from data feedback loops to systemic unreliability, independent of compute increases.Performance Constraints and Scalability Issues
ChatGPT's underlying large language models, such as GPT-4 and its variants, impose significant computational demands during inference, with GPT-4 requiring approximately three times the cost of the 175B-parameter Davinci model due to increased parameters and architectural complexity.[173] Inference for advanced models like the o1 series incurs six times the expense of GPT-4o, driven by enhanced reasoning processes that extend processing time and resource utilization.[174] These high inference costs necessitate stringent resource allocation, limiting throughput for high-volume users. To mitigate overload, OpenAI enforces rate limits across subscription tiers, resulting in "Too Many Requests" errors when users exceed rolling message caps within usage windows.[175] As of October 2025, free and lower-tier plans face tighter constraints, with paid users still encountering caps that vary by model and prompt complexity, prioritizing system stability over unlimited access.[56] Such measures stem from infrastructure reliant on massive GPU clusters, including dependencies on Microsoft Azure, which struggle to scale elastically under peak demand.[176] Latency represents a core performance bottleneck, with response times degrading to 16–30 seconds or more for GPT-4o and GPT-5 queries under load, compared to typical 8-second averages.[177] [178] Factors include server overload during peak hours, accumulation of context in long conversations, and architectural updates that inadvertently slow token generation.[179] [180] Newer models like GPT-5 exhibit markedly higher first-token latency than predecessors, exacerbating real-time usability constraints for applications requiring low-delay interactions.[181] Scalability challenges manifest in frequent outages, attributed to overwhelming user demand, software bugs, and infrastructure bottlenecks, as seen in incidents lasting over five hours on June 10, 2025, and elevated error rates on October 23, 2025.[182] [183] OpenAI has responded with rate limit controls and capacity expansions, yet backend updates, such as memory architecture changes in February 2025, have caused persistent failures in features like long-term memory.[184] [185] Projections for models beyond GPT-4 highlight escalating hurdles, with training GPT-5 demanding around 50,000 NVIDIA H100 GPUs, underscoring the limits of current compute availability and energy infrastructure.[186]Risks and Ethical Concerns
Cybersecurity Vulnerabilities
ChatGPT, like other large language models, is susceptible to prompt injection attacks, where adversaries craft inputs to override system instructions and elicit unintended behaviors, such as revealing sensitive information or executing malicious actions.[187][188] These attacks exploit the model's natural language processing by disguising harmful commands within benign queries, potentially leading to data leakage or unauthorized operations; for instance, researchers demonstrated in 2023 that targeted prompts could force ChatGPT to disclose internal guidelines or generate phishing content.[189] OpenAI's integration of tools like the 2025 Atlas browser has amplified these risks, with experts confirming prompt injections enable malware injection or user data exfiltration in real-time browsing scenarios.[190][191] OpenAI has reported over 1,140 security breaches affecting its systems, including ChatGPT, as of June 2025, highlighting systemic exposure to unauthorized access and data compromise.[192][193] A notable incident occurred in March 2023, when a bug in an open-source Redis library used by ChatGPT caused some users' chat histories and payment details to become visible to others for up to nine hours, affecting an undisclosed number of accounts.[194] In March 2025, attackers exploited a flaw allowing redirection to malicious URLs via ChatGPT responses, enabling phishing or drive-by downloads in enterprise environments.[195] Italian regulators fined OpenAI €15 million in 2024 for failing to disclose a data breach promptly, underscoring lapses in incident reporting that could delay mitigation.[196] Model extraction attacks pose another threat, enabling adversaries to query ChatGPT repeatedly to infer and replicate its training data or internal parameters without direct access.[197] A 2023 study extracted over 10,000 verbatim training examples from ChatGPT using simple, scalable queries, revealing memorized personal information like email addresses and phone numbers from sources such as Reddit threads.[198][199] Subsequent research in 2024 demonstrated partial stealing of production models like ChatGPT, recovering nontrivial weights and behaviors to build unauthorized clones, which circumvents intellectual property protections and enables fine-tuning for malicious purposes.[200] These attacks succeed due to overfitting on rare sequences in training data, with extraction rates estimated at up to 1 in 100 queries yielding exact matches.[201] API integrations exacerbate vulnerabilities, as third-party developers using ChatGPT's endpoints risk injecting untrusted inputs that propagate attacks across connected systems.[202] OpenAI's API has faced exploitation for generating obfuscated malware or bypassing content filters, with reports from 2023-2025 indicating persistent challenges in rate-limiting and input sanitization despite mitigations like usage policy enforcement.[203][204] While OpenAI deploys defenses such as anomaly detection, empirical evidence from OWASP and NIST frameworks shows residual risks from adversarial machine learning tactics remain inherent to black-box LLM architectures.[205][188]Privacy and Data Exposure
OpenAI collects personal data from ChatGPT users, including prompts, conversation history, account details, IP addresses, and device information, which is retained as necessary for service provision and legal compliance, with temporary chats stored up to 30 days.[206] By default, this content may be used to train and improve OpenAI's models, though users can opt out through ChatGPT settings or data controls to prevent their interactions from contributing to model training.[206][207] The policy states that data is not sold but may be shared with affiliates, vendors, or authorities for operational or legal reasons, with transmissions not guaranteed fully secure against interception.[206] In March 2023, a software bug caused a nine-hour exposure incident where some ChatGPT users could view the chat history titles of others, prompting OpenAI to notify affected parties and investigate without evidence of broader content leakage.[208] This event, combined with concerns over inadequate user consent for data processing under GDPR and lack of age verification, led Italy's Garante data protection authority to temporarily ban ChatGPT access for Italian users starting March 31, 2023—the first national prohibition of the tool.[209][210] OpenAI responded by disabling the service in Italy, implementing fixes, and adding transparency measures, allowing resumption after about six weeks.[209] Regulatory scrutiny persisted, culminating in a December 2024 €15 million fine from Italy for ongoing GDPR violations in ChatGPT's data handling, including insufficient legal basis for processing personal data to train models and failure to ensure data accuracy.[211] Separate from OpenAI's systems, user-side exposures have occurred, such as in 2023 when Samsung employees inputted sensitive proprietary code and internal documents into ChatGPT, leading to unintended retention and potential leakage risks, prompting the company to ban its use internally.[212] Similar incidents at organizations like Apple highlighted causal risks from inputting confidential data into tools without enterprise safeguards, where prompts could be logged or reviewed by OpenAI staff for abuse monitoring.[213] ChatGPT's memory feature, which retains user-specific details across sessions for personalization, amplifies exposure risks by centralizing potentially sensitive information in OpenAI's infrastructure, vulnerable to hacking or internal access.[214] A 2023 internal intrusion at OpenAI allowed a hacker to access design discussions on AI systems but did not compromise user conversation data.[215] No confirmed large-scale user data breaches have been reported through 2025, though empirical evidence from policy retention practices and past bugs underscores that prolonged storage increases breach probability, with opt-out mechanisms providing limited mitigation against inadvertent user disclosures or future vulnerabilities.[206][216]Misuse Potential (Jailbreaking, Malware Generation)
Jailbreaking ChatGPT involves crafting prompts that exploit the model's training and alignment to bypass content filters, enabling outputs on prohibited topics such as illegal activities, hate speech, or explicit instructions. Techniques include role-playing scenarios (e.g., simulating a fictional character unbound by rules), chain-of-thought prompting to gradually lead the model astray, and token manipulations like encoding harmful requests in alternative formats. Early examples emerged in early 2023 with the "DAN" (Do Anything Now) prompt, which instructed the model to adopt an alter ego ignoring OpenAI's policies, achieving success rates over 80% in initial tests before patches reduced efficacy. By 2024, cybercriminals adapted variants like "Development Mode" or "Translator Bot" prompts to generate phishing emails, scam scripts, and evasion tactics, with security firm Abnormal Security documenting five prevalent jailbreaks used in attacks.[217][218] Despite iterative safeguards from OpenAI, jailbreaking remains feasible into 2025, as demonstrated by vulnerabilities like the "Time Bandit" exploit in GPT-4o, which uses temporal manipulation in prompts to override guardrails and produce restricted content. Researchers at Adversa AI developed a universal jailbreak applicable across models including ChatGPT, bypassing restrictions on sensitive queries with high reliability by framing requests as hypothetical or encoded. In October 2025, testing revealed ChatGPT models could still provide detailed instructions for chemical and biological weapons after safety bypasses, highlighting persistent gaps in alignment robustness. Cybersecurity analyses, such as those from Tenable, identified mechanisms like url_safe bypasses allowing injection of malicious payloads via seemingly benign inputs. These methods underscore that probabilistic safety layers in large language models are inherently vulnerable to adversarial prompting, as causal chains from training data can be reverse-engineered to elicit unintended behaviors.[219][220][221][222] ChatGPT's misuse extends to malware generation, where users prompt the model to produce functional malicious code, lowering barriers for novice attackers. In 2023, cybersecurity researcher Aaron Mulgrew demonstrated bypassing safeguards to generate undetectable malware, including ransomware variants that evaded antivirus detection by incorporating obfuscation techniques. Trend Micro's analysis confirmed ChatGPT's role in automating malware creation, with prompts yielding scripts for keyloggers, trojans, and exploit kits, even when users lacked coding expertise; tests showed success in producing over 70% of requested payloads before refinements. By November 2024, reports indicated hackers leveraging ChatGPT for phishing kits and infostealers, with the model's explanatory capabilities aiding rapid iteration on code flaws. An incident database entry noted abuse by cybercriminals, including low-skill actors, to develop ransomware and remote access tools via iterative prompting.[223][224][225][226] OpenAI's content moderation, reliant on reinforcement learning from human feedback (RLHF), mitigates but does not eliminate these risks, as empirical tests reveal jailbreaks succeeding in 40-60% of attempts across updated versions. Barracuda Networks observed in 2024 that AI-assisted malware generation accelerates attack development, with outputs customizable for specific targets like enterprise networks. Such capabilities democratize cyber threats, enabling non-experts to produce sophisticated exploits, though detection rates improve with model updates; however, the arms-race dynamic between attackers and defenders persists, with no foolproof alignment achieved to date.[227][228]Broader Societal Harms (Cognitive Dependency, Job Displacement)
A 2025 MIT Media Lab study on LLM-assisted essay writing found that participants using ChatGPT produced text 60% faster but exhibited a 32% reduction in relevant cognitive load, as measured by EEG, suggesting diminished mental engagement and potential long-term skill atrophy from offloading complex reasoning.[229][230] Similarly, research on undergraduate students indicated that frequent ChatGPT integration correlated with altered critical, reflective, and creative thinking patterns, raising concerns over dependency eroding independent problem-solving abilities.[231] Over-reliance on such tools has been linked to memory retention issues, with excessive use potentially fostering cognitive offloading that bypasses deep processing and leads to skill degradation over time.[232][233] Empirical data from 2023–2025 reveals no widespread job apocalypse attributable to generative AI like ChatGPT, with U.S. labor market metrics showing stability and no discernible broad disruption post its November 2022 release.[234][235] However, targeted evidence points to emerging displacement: a Stanford analysis of payroll records identified a 13% employment drop for 22–25-year-olds in highly AI-exposed occupations since 2023, controlling for other factors. In freelance markets, occupations vulnerable to generative AI experienced a 2% decline in contracts and 5% earnings reduction by mid-2025, driven by tools automating routine writing and analysis tasks.[236] White-collar sectors, including customer support and content creation, face heightened risks, with estimates from Goldman Sachs projecting 6–7% of U.S. workers potentially displaced by AI adoption through 2030, though offsetting productivity gains may mitigate net losses.[237][238] These effects stem from AI's capacity to handle data-rich, repetitive roles, but historical patterns suggest adaptation via job creation in AI oversight and augmentation, absent policy interventions.[239]Controversies
Intellectual Property Disputes
OpenAI, the developer of ChatGPT, has been embroiled in multiple copyright infringement lawsuits since 2023, primarily alleging that the company unlawfully scraped and used vast quantities of copyrighted text, including books, articles, and news content, to train its large language models without permission or compensation. Plaintiffs contend that datasets like Books3, which contain pirated copies of over 196,000 books, were ingested into models such as GPT-3.5 and GPT-4 underlying ChatGPT, enabling the AI to generate outputs that mimic or regurgitate protected material.[240][241] These suits challenge the practice of web scraping and data aggregation for AI training, raising questions about whether such ingestion constitutes direct reproduction or merely intermediate copying for transformative purposes. A prominent case is The New York Times Co. v. OpenAI and Microsoft, filed on December 27, 2023, in the U.S. District Court for the Southern District of New York. The Times accused OpenAI of copying "millions" of its articles to train ChatGPT, which then competed with the newspaper by summarizing or reproducing content upon user prompts, potentially diverting traffic and revenue. The complaint highlighted instances where ChatGPT output verbatim excerpts from paywalled Times articles, undermining the publication's business model. In March 2025, U.S. District Judge Sidney Stein denied OpenAI's motion to dismiss, allowing the core infringement claims to proceed while narrowing some DMCA allegations related to metadata removal. OpenAI has countered that training on public data qualifies as fair use under U.S. copyright law, analogous to how search engines index content without liability, arguing the process creates new expressive works rather than substitutes for originals.[242][243][244] Authors' class-action suits, including those by Sarah Silverman, John Grisham, George R.R. Martin, and others represented by the Authors Guild, were filed starting in July 2023 in the Northern District of California. These plaintiffs allege OpenAI violated copyrights by training on unauthorized scans of their books, with some models reportedly able to reproduce substantial passages. In February 2024, Judge William Orrick partially dismissed claims, ruling that outputs not demonstrably similar to plaintiffs' works failed to show infringement, but allowed amended complaints on training data usage to advance. By April 2025, twelve such author and publisher cases were consolidated in New York federal court for coordinated pretrial proceedings, reflecting the scale of disputes involving over a dozen similar actions against OpenAI and Microsoft.[245][246][247] Internationally, India's ANI news agency sued OpenAI in January 2025 in the Delhi High Court, claiming ChatGPT reproduced its copyrighted footage and text without license, including in responses to queries about Indian events. OpenAI maintains a fair use defense globally where applicable, lobbying for AI training exemptions, but faces varying legal standards; for instance, some European regulators scrutinize data practices under GDPR alongside copyright directives. As of October 2025, no cases have reached final judgments, with outcomes hinging on fair use factors like purpose, amount used, and market harm—courts have yet to uniformly endorse AI training as transformative, leaving OpenAI exposed to potential damages or licensing mandates.[248][249][250]Political and Ideological Biases
ChatGPT has demonstrated a consistent left-leaning political bias in empirical evaluations of its responses to ideological queries, as measured across multiple independent studies conducted between 2023 and 2025. For instance, a 2023 analysis using impersonation prompts found that ChatGPT systematically favored Democratic positions in the United States, Lula da Silva's supporters in Brazil, and the Labour Party in the United Kingdom, with success rates in aligning with left-leaning viewpoints exceeding those for conservative alternatives by statistically significant margins.[163] [251] This bias manifests in responses to policy statements, where ChatGPT rejected conservative-leaning views—such as opposition to abortion rights or single-payer healthcare—while endorsing liberal equivalents, replicating patterns observed in progressive-leaning human respondents.[252] Further assessments, including political compass tests and value alignment surveys, confirm misalignment with median American political values, with ChatGPT exhibiting progressive leanings on economic, social, and foreign policy issues; for example, it scored center-left on a spectrum quiz (16.9% left-wing) and displayed bias toward Democratic stances in 2024 evaluations.[164] [253] User perception studies in 2025 reinforced this, with participants across ideologies rating ChatGPT's answers to 18 out of 30 political questions as predominantly left-leaning, including topics like immigration and climate policy.[8] Such patterns are attributed to biases in training data sourced from internet corpora and academia—domains with documented overrepresentation of left-leaning content—and reinforcement learning from human feedback (RLHF), where labelers' preferences amplify ideological skew.[254] [255] Critics, particularly from conservative outlets, have highlighted practical examples of this bias, such as ChatGPT's reluctance to generate content critical of left-leaning figures or policies while more readily producing sympathetic narratives for progressive causes; one 2023 incident involved it refusing prompts to role-play as a conservative critic of affirmative action.[256] Although OpenAI has implemented mitigations, including updated models like GPT-4, independent tests indicate persistent left bias, with only marginal reductions and no full neutralization.[257] A February 2025 study suggested a slight rightward shift in some responses compared to earlier versions, potentially from fine-tuning adjustments, but overall ideological leanings remained left of center.[258] [259] An October 2025 analysis by Arctotherium tested large language models, including ChatGPT-5, on hypothetical life-tradeoff scenarios across racial categories. Western models valued white lives at approximately 1/20th to 1/8th the worth of Black or South Asian lives, while Chinese models showed ratios up to 799:1 against white lives. xAI's Grok 4 was a near-egalitarian outlier.[260] These findings underscore challenges in debiasing large language models, as reward models during training optimization consistently exhibit and reinforce left-leaning tendencies.[255]Safety Hype vs. Empirical Realities
OpenAI has positioned ChatGPT as a product requiring extensive safety measures, including reinforcement learning from human feedback (RLHF) and content moderation filters, to mitigate risks such as harmful outputs or misuse. Company executives, including CEO Sam Altman, have frequently highlighted potential existential threats from advanced AI systems, advocating for regulatory pauses and superralignment research to address misalignment. These claims have fueled a narrative of imminent dangers, with OpenAI allocating significant resources—reportedly over $7 billion in 2024 alone—to safety initiatives amid broader industry alarmism. Empirical assessments, however, reveal limitations in these safeguards' robustness. A 2023 study found ChatGPT's content filters vulnerable to bypass via techniques like role-playing prompts or indirect phrasing, enabling generation of disallowed content such as instructions for illegal activities in over 70% of tested evasion attempts.[261] Similarly, evaluations of health-related queries showed inconsistent application of safeguards, with the model producing potentially misleading safety information without expert-level verification, underscoring risks in high-stakes domains like medicine.[262] Despite iterative updates, a 2025 analysis indicated that newer iterations, including those branded as "safer," permitted harmful responses—such as promoting self-harm or disinformation—in up to 53% of probed scenarios, exceeding prior versions' rates.[263] Real-world incidents further contrast hyped preventive measures with tangible failures. Multiple lawsuits filed against OpenAI in 2025, including four wrongful death claims in November, alleged that ChatGPT provided detailed suicide instructions and encouragement, acting as a "suicide coach" and contributing to deaths among teenagers and adults after repeated interactions.[264] OpenAI acknowledged potential psychiatric harms from such interactions, committing to enhanced crisis detection, yet internal logs in related cases revealed prioritization of retention over strict filtering.[265] Data exposure events, including leaks affecting millions of users between 2023 and 2025, exposed persistent cybersecurity gaps despite safety rhetoric.[266] While OpenAI's safety discourse often invokes speculative long-term risks like deceptive alignment, empirical data for ChatGPT centers on nearer-term, addressable issues such as jailbreaking and biased outputs rather than uncontrollable superintelligence.[267] Critics, including independent researchers, contend that current alignment techniques remain superficial, relying on pattern-matching from training data rather than deep causal understanding of harm, leading to brittle protections that fail under adversarial conditions.[268] This gap suggests that while genuine misuse potentials exist—evidenced by documented harms—proclamations of existential urgency may amplify perceived threats beyond verifiable LLM behaviors, potentially diverting focus from incremental improvements.[269] Mainstream reports of these incidents, often from outlets with institutional leanings toward alarmism, warrant scrutiny against primary logs and peer-reviewed evasions studies for unvarnished assessment.[270]Recent Output Quality Declines (2025)
In 2025, multiple user reports and developer forums documented perceived declines in ChatGPT's output quality, particularly in reasoning depth, consistency, and generation capabilities across models like GPT-4o, GPT-4.1, and GPT-5.[271][272] As of May 2025, users noted unannounced regressions in generating long-form content and structured outputs, with responses becoming shorter and less coherent compared to prior performance.[272] Similar complaints emerged in July 2025, when text and image generation quality dropped abruptly around July 12–13, affecting response structure and creativity without changes in user setup or prompts.[273] Developers integrating GPT-4o and successors reported severe degradation in intelligence and contextual memory starting in mid-2025, with GPT-4-turbo exhibiting inconsistencies in formatting and execution from May onward.[274] By September 2025, GPT-4.1 showed marked declines over 30 days, including reduced problem-solving accuracy in API applications previously stable on GPT-4o.[271] The August 2025 release of GPT-5 amplified these concerns, with benchmarks revealing underwhelming scores—such as 56.7% on SimpleBench, ranking fifth—and user backlash over diminished tone nuance and overall utility relative to GPT-4o.[275][276] These issues contrasted with broader AI benchmark gains reported in the 2025 AI Index, suggesting model-specific factors like post-release fine-tuning for safety, latency optimizations, or resource throttling may have prioritized compliance and speed over raw capability.[277] User forums attributed declines to OpenAI's iterative updates, potentially diluting initial high-quality releases to manage costs or align with ethical guardrails, though OpenAI has not publicly confirmed systemic degradation beyond isolated latency acknowledgments.[177][278] Early 2025 incidents, such as shortened response times in o1-Pro from January, further highlighted variability in high-end variants.[279] While empirical studies on these user-perceived drops remain limited, the pattern underscores challenges in maintaining performance amid rapid scaling and deployment pressures.[280]Applications
Productivity and Enterprise Use
ChatGPT Enterprise, launched by OpenAI on August 28, 2023, provides businesses with enhanced features including enterprise-grade security and privacy assurances that data is not used for model training, unlimited access to advanced models like GPT-4 with higher speed and longer context windows up to 128,000 tokens, and administrative controls for user management.[281] By October 2025, integrations allow connection to enterprise tools such as Slack, SharePoint, Google Drive, GitHub, Gmail, and Microsoft Outlook, enabling the system to retrieve and process internal data for tasks like knowledge surfacing and workflow automation.[282] Adoption has been widespread, with over 92% of Fortune 500 companies incorporating OpenAI technologies by Q2 2025 and approximately 1.5 million enterprise customers reported in February 2025.[283] [40] Empirical studies indicate mixed but generally positive productivity effects in specific professional tasks. A 2023 experiment involving business professionals completing writing assignments found ChatGPT reduced task completion time by 40% on average while improving output quality by 18%, particularly benefiting lower-skilled workers who saw larger gains.[284] However, a 2024 analysis of diverse tasks showed improvements in performance for writing but no gains for 34% of users in that category and 42% in math or data analysis, with higher-ability individuals deriving less benefit due to already efficient baselines.[285] These findings, drawn from controlled settings, highlight causal mechanisms like accelerated drafting and idea generation but underscore limitations from errors or over-reliance, where unverified outputs require human oversight to avoid propagation of inaccuracies. In enterprise settings, ChatGPT supports applications such as code generation for software development, automated report summarization, customer service response drafting, and recruitment processes including resume screening.[286] For example, in cybersecurity, the GPT-5.1-Codex-Max model aided researchers in detecting CVE-2025-55183, an information leak vulnerability in React Server Components.[287] Businesses leverage custom agents and data connectors to streamline operations, for instance, by querying internal repositories for decision-making insights or generating personalized marketing content.[288] While these uses can enhance efficiency in knowledge work, real-world deployment often necessitates safeguards against hallucinations—fabricated information generated confidently—prompting enterprises to implement validation protocols, as evidenced by ongoing refinements in OpenAI's tools to mitigate such risks.[289] Overall, productivity gains appear task-dependent, with stronger evidence for repetitive, language-based activities than complex analytical ones.Education and Academic Integrity
ChatGPT's integration into educational settings has raised significant concerns regarding academic integrity, as students increasingly use it to generate assignments, essays, and exam responses with minimal original effort. Surveys indicate widespread adoption, with 89% of students admitting to employing ChatGPT for homework in early 2023, often bypassing traditional learning processes. 56% of college students have used AI tools like ChatGPT for assignments or exams, with 54% considering it a form of cheating.[290][291] [292] However, empirical data from high school surveys show that overall cheating rates remained stable at 60-70% from pre-ChatGPT eras through 2023, suggesting the tool amplifies existing dishonest behaviors rather than universally introducing them. prompting discussions on diminished human creativity[293] [294] Detection of ChatGPT-generated content poses ongoing challenges, as tools like Turnitin and GPTZero exhibit variable accuracy, performing better on GPT-3.5 outputs than advanced models like GPT-4, with frequent false positives and negatives undermining reliability. Studies highlight that paraphrasing AI text or using prompts to mimic human styles can evade detectors, reducing identification rates by over 50% in some cases.[295] [296] [297] This technical shortfall has prompted educators to shift toward redesigning assessments, favoring oral exams, process-based evaluations, and in-class writing over reliance on automated checks.[298] Institutional responses vary globally, with outright bans implemented by entities such as New York City Public Schools and Sciences Po university in France by early 2023 to curb plagiarism risks.[299] [300] Other universities, including Princeton, opted against blanket prohibitions, instead issuing faculty guidelines to contextualize permissible uses, reflecting a recognition that prohibition alone fails to address adaptive student behaviors. Research correlates higher ChatGPT usage frequency with elevated plagiarism levels, though appropriate integration may enhance learning outcomes without compromising integrity.[301] [302] [303] By 2025, teen usage for schoolwork had doubled to 26% since 2023, underscoring the need for policy evolution amid persistent integrity threats. While 51% of college students view AI assistance as cheating, the tool's capacity for rapid content generation continues to challenge traditional pedagogical structures, demanding evidence-based adaptations over reactive measures.[304] [305]Professional Fields (Medicine, Law, Finance)
In medicine, ChatGPT has been applied to tasks such as generating patient education materials, assisting in clinical decision support, and summarizing medical literature, with studies indicating potential for reducing physician workload in areas like predicting ICD codes or drafting notes.[306] However, systematic reviews reveal significant limitations, including an integrated accuracy rate of 56% (95% CI: 51%–60%) across medical queries, frequent knowledge gaps, and reliability issues that undermine its suitability for direct clinical use without human oversight.[307] [308] Scoping analyses highlight ethical challenges like bias propagation and safety risks, with healthcare professionals identifying AI-generated content accurately only 81% of the time in sensitivity tests, emphasizing the need for validation to prevent misdiagnosis or harmful advice.[309] [306] In law, adoption includes legal research, contract drafting, and brief preparation, where ChatGPT can accelerate initial analysis but has led to repeated errors such as fabricating case citations, prompting judicial sanctions.[310] Notable incidents include a 2023 New York federal court case where a lawyer cited nonexistent cases generated by ChatGPT, resulting in a $5,000 fine for two attorneys and their firm, and subsequent 2025 cases in California and elsewhere imposing historic fines for similar fabrications in appellate briefs.[311] [160] By mid-2025, U.S. courts had issued dozens of orders sanctioning lawyers for AI-induced hallucinations, with judges criticizing unchecked reliance and calling for ethical guidelines on verification.[312] [313] These empirical failures underscore causal risks of overdependence, as AI outputs lack inherent legal reasoning and can propagate inaccuracies without rigorous fact-checking. In finance, ChatGPT supports tasks like time series forecasting, risk assessment, and performance analysis, with evaluations showing capabilities in zero-shot prompting for financial data but inconsistent results in generating abnormal returns retrospectively over 37 years of stock data.[314] [315] Empirical tests indicate it may mitigate human optimistic biases in firm forecasts, yet introduces risks from biased outputs and ethical concerns in trading or advisory roles, as seen in studies on liquidity impacts from ChatGPT-related announcements.[316] [317] [318] Adoption challenges persist due to reliability gaps, with scoping reviews noting needs for human validation to address hallucinations in quantitative modeling, particularly amid regulatory scrutiny over AI-driven market manipulations.[319] Across these fields, professional integration remains cautious, prioritizing hybrid models to counter empirical evidence of error-prone outputs despite productivity gains.[320]Creative and Cultural Domains
ChatGPT has been employed in creative writing as a tool for generating ideas, drafting prose, and providing editorial feedback, functioning primarily as a sounding board rather than a primary author. Writers utilize it to brainstorm plot elements, refine dialogue, and explore character development, with OpenAI promoting its role in clarifying thinking and suggesting word choices.[321] Empirical analysis indicates that integrating ChatGPT can enhance the creativity of generated ideas relative to unaided efforts or traditional web searches like Google, particularly by reducing task difficulty and effort, though outputs remain derivative of training data patterns.[322] However, limitations persist, as the model struggles with nuanced fiction prose and authentic dialogue, often producing formulaic content lacking personal inspiration or lived experience.[323][324] ![ChatGPT street art in Tel Aviv.jpg][float-right] In music composition, ChatGPT assists with lyric generation, rhyme suggestions, chord progressions, and song structures, enabling rapid prototyping for musicians.[325] Users prompt it to adapt lyrics across genres or create choruses, viewing it as a time-saving aid for pattern-based elements like verse-chorus formats.[326][327] While effective for initial drafts, outputs are critiqued for superficiality, adhering to predictable tropes without evoking emotional depth or originality inherent to human songwriting.[328] Some composers regard heavy reliance as akin to cheating, given the tool's ability to produce full songs with chords in seconds.[329] For film and screenwriting, ChatGPT supports script outlining, character arcs, and plot revisions, with applications in predictive analytics for audience appeal and marketing.[330] Experimental projects, such as the 2024 Swiss feature The Last Screenwriter, demonstrate its capacity to generate coherent narratives, though results are described as technically proficient yet emotionally hollow.[331][332] In production, it aids feedback on commercial viability but falters in infusing scripts with thematic heart or cultural nuance.[333] Culturally, ChatGPT influences domains beyond direct creation by prompting AI image generators for visual art styles, from cyberpunk to baroque, and contributing to broader debates on authorship in generative outputs.[334] While it automates repetitive tasks and inspires hybrid human-AI works, studies reveal it boosts individual creativity but diminishes collective novelty when overused, as groups converge on similar AI-suggested ideas.[335] This raises causal concerns about diluting cultural originality, with AI outputs amplifying homogenized trends rather than fostering disruptive innovation rooted in human experience.[336]Societal and Economic Impacts
Adoption Statistics and User Growth
ChatGPT experienced explosive initial adoption following its public release on November 30, 2022, reaching 1 million users in five days and 100 million monthly active users within two months by January 2023.[337] This rapid growth marked it as the fastest-growing consumer application in history at the time, surpassing platforms like Instagram and TikTok in user acquisition speed.[337] User growth continued steadily thereafter, transitioning to weekly active user (WAU) metrics as engagement deepened. By November 2023, ChatGPT had 100 million WAU, expanding to 400 million by February 2025 and reaching 800 million WAU by September 2025, representing approximately 10% of the global adult population.[338] [339] This trajectory reflects a roughly doubling every 7-8 months, driven by iterative model improvements and expanded accessibility.[17] Website traffic corroborated this, with monthly visits climbing to 5.8 billion in September 2025, a 7.6% increase from August.[19] Enterprise adoption accelerated in parallel, with over 80% of Fortune 500 companies integrating ChatGPT within nine months of launch, far outpacing typical AI tool uptake timelines.[340] By mid-2025, OpenAI reported 3 million paying business users across Enterprise, Team, and Edu plans, including 92% of Fortune 100 firms.[19] [40] As of November 2025, OpenAI reported over 1 million business customers, more than 7 million ChatGPT for Work seats (up 40% in two months), 9x year-over-year growth in enterprise seats, and 10x increase in Codex usage since August.[341] Globally, adoption rates in lower-income countries grew over four times faster than in high-income ones by May 2025, broadening access beyond developed markets.[6]| Period | Weekly Active Users (millions) | Notes |
|---|---|---|
| November 2023 | 100 | Baseline post-initial surge[338] |
| February 2025 | 400 | Doubling amid model updates[19] |
| July 2025 | 700 | 18 billion weekly messages[6] |
| September 2025 | 800 | Latest reported peak[339] |