Hugging Face
Hugging Face is an American-French artificial intelligence company and open-source platform that facilitates collaboration in machine learning, particularly through its Hugging Face Hub, a repository for sharing models, datasets, and applications across modalities like text, image, audio, and video.[1] Founded in 2016 by Clément Delangue, Julien Chaumond, and Thomas Wolf in New York City, it initially launched as a chatbot app aimed at teenagers but pivoted in 2018 to building machine learning infrastructure after recognizing the need for accessible AI tools.[2][3] The company's stated mission is to "democratize good machine learning, one commit at a time," emphasizing open-source development to make advanced AI accessible to developers, researchers, and organizations worldwide.[4]
At the core of Hugging Face's offerings is the Transformers library, a Python package that provides state-of-the-art pretrained models for natural language processing, computer vision, audio, and multimodal tasks, supporting both training and inference with frameworks like PyTorch and TensorFlow.[5] Complementing this are the Datasets library for efficient data loading and processing, and the Hub, which as of October 2025 hosts over 2 million models—more than quadruple the number from early 2024—along with over 500,000 datasets used for tasks ranging from translation to speech recognition.[6][7][8] These tools have fostered a vibrant community, with more than 50,000 organizations actively using the platform for AI development and deployment.[1]
Hugging Face has grown rapidly, achieving a valuation of $4.5 billion following a $235 million Series D funding round in 2023, backed by investors including Google, Amazon, Nvidia, and Salesforce Ventures.[9] By 2025, the company employs around 250 people and generates approximately $130 million in annualized revenue as of 2024, primarily from enterprise features like private hubs, compute resources, and inference APIs, while maintaining free access to its core open-source ecosystem.[10] This blend of community-driven innovation and commercial scalability positions Hugging Face as a pivotal force in advancing open AI, enabling rapid prototyping and deployment of models like BERT and GPT variants.[5]
History
Founding and Early Development
Hugging Face was founded in 2016 in New York City by French entrepreneurs Clément Delangue (CEO), Julien Chaumond (CTO), and Thomas Wolf (Chief Science Officer).[3][11][12] The company originated from the founders' shared interest in advancing conversational AI, with Delangue bringing product and marketing expertise, Chaumond contributing engineering and mathematical skills, and Wolf offering scientific and legal insights in AI applications.[11]
The initial product was a mobile chatbot application targeted at teenagers, branded as an "AI best friend forever (BFF)" to provide emotional support, entertainment, and interactive companionship beyond traditional productivity tools like Siri.[3][11] This app leveraged early natural language processing (NLP) techniques to enable open-domain conversations, aiming to foster engaging interactions through humor and personalization.[11] However, the startup encountered significant early challenges, particularly in sustaining user engagement, as the chatbot struggled to maintain long-term interest among its young audience amid the limitations of nascent deep learning models at the time.[3]
These hurdles coincided with the founders' relocation from France to the United States, a move made to access a larger talent pool and market opportunities in New York City and to establish a stronger foothold in the American tech ecosystem.[3][11] To address the technical demands of improving the chatbot, the initial team structure emphasized NLP experimentation, with early hires focused on developing and iterating on conversational algorithms using available datasets and models.[11] This small, specialized group enabled rapid prototyping of features, laying the groundwork for deeper exploration into AI-driven dialogue systems despite the engagement obstacles.[3]
Pivot to Machine Learning
In 2018, Hugging Face made a strategic decision to pivot from its initial chatbot application to the development and release of open-source natural language processing (NLP) tools, driven by the transformative potential of the transformer architecture introduced in the 2017 paper "Attention Is All You Need."[13] This shift was further catalyzed by the rapid adoption of models like Google's BERT, released in October 2018, which highlighted the need for accessible implementations in popular frameworks such as PyTorch.[14][15]
A pivotal moment came when co-founder Thomas Wolf ported BERT to PyTorch over a single weekend and shared it on GitHub, where it drew immediate enthusiasm from the machine learning community, gathering over 1,000 stars and outside contributions.[13] This led to the official release of the first version of the Transformers library in late 2018, establishing Hugging Face as a provider of pre-trained models and tools for state-of-the-art NLP tasks.[16] The library quickly gained traction as an open-source resource, reflecting the company's new focus on democratizing AI through collaborative development.[15]
Early community feedback played a crucial role in shaping the library, with users contributing bug fixes, new model integrations, and documentation improvements that drove iterative updates.[15] Hosted on GitHub from its inception, the project benefited from the platform's ecosystem, enabling seamless collaboration and version control that accelerated its evolution into a robust toolkit.[13] By 2019, Hugging Face expanded this foundation to include datasets and model sharing capabilities, fostering a collaborative environment for AI practitioners to exchange resources and build upon shared innovations.[15]
Funding, Growth, and Acquisitions
Hugging Face's funding trajectory began to accelerate in late 2019 with a Series A round of $15 million led by Lux Capital, enabling expansion of its open-source natural language processing tools.[17] This was followed by a $40 million Series B in March 2021, led by Addition with participation from Amazon and Nvidia, which supported scaling the Transformers library and community platform.[18] The company's valuation reached $500 million post-Series B, reflecting growing adoption in machine learning development.[19]
In May 2022, Hugging Face raised $100 million in a Series C round led by Lux Capital, with key investments from Sequoia Capital and Coatue Management, achieving a $2 billion valuation.[20] A subsequent $235 million Series D in August 2023, led by Salesforce Ventures and including Google and Nvidia, brought total funding to approximately $396 million by 2025.[21] Lux Capital, which led both the Series A and Series C, has backed the company's focus on collaborative AI infrastructure across multiple rounds.[22] These investments fueled rapid growth, with headcount rising to around 160 by 2023 and approximately 250 by 2025, alongside a valuation climbing to $4.5 billion.[23]
Strategic acquisitions have complemented this expansion. In December 2021, Hugging Face acquired Gradio, a Python library for creating customizable user interfaces for machine learning models.[24] In June 2024, it acquired Argilla, a platform for collecting and managing human feedback in AI development.[25] In August 2024, Hugging Face acquired XetHub, a Seattle-based startup specializing in scalable data storage for AI models, to enhance collaboration on large datasets.[26] The most notable move came in April 2025 with the acquisition of Pollen Robotics, a French humanoid robotics firm, for an undisclosed amount, aimed at integrating open-source hardware with AI software.[27] This deal enabled the release of the SO-101, a 3D-printable robotic arm starting at $100, designed for accessible experimentation in AI-driven robotics.[28]
Core Technologies
Transformers Library
The Transformers library is an open-source Python library developed by Hugging Face that serves as a unified framework for accessing, loading, and utilizing state-of-the-art transformer-based machine learning models across domains such as natural language processing, computer vision, audio, video, and multimodal tasks.[5] It emphasizes ease of use by providing model definitions that are compatible with major deep learning frameworks, including PyTorch as the primary backend, alongside TensorFlow and JAX through dedicated support and converters.[29] Initially released on November 17, 2018, the library has undergone continuous development, reaching version 4.57.1 by October 2025, with regular updates incorporating new architectures and optimizations.[16]
A core strength of the Transformers library lies in its pipeline API, which abstracts complex model loading and inference into simple, task-oriented interfaces for applications like text classification, machine translation, question answering, and image segmentation. This enables users to perform high-level operations with minimal code, automatically handling preprocessing, model execution, and postprocessing. The library supports over 300 distinct architectures, encompassing encoder-only models like BERT for bidirectional text representation, decoder-only models such as GPT variants for autoregressive generation, encoder-decoder setups for sequence-to-sequence tasks, and multimodal extensions including CLIP for cross-modal alignment of text and images, as well as Vision Transformers for patch-based visual feature extraction.[30][31] Internally, it manages transformer-specific components like tokenization via fast Rust-based preprocessors tailored to each architecture and efficient attention mechanisms, ensuring compatibility and performance across models.[5]
To illustrate practical usage, the library allows quick instantiation of pre-trained models for inference, as shown in the following example for sentiment analysis:
```python
from transformers import pipeline

classifier = pipeline("sentiment-analysis")
result = classifier("I love using Hugging Face!")
print(result)  # Outputs: [{'label': 'POSITIVE', 'score': 0.9998}]
```
This code loads a default pre-trained model, processes input text, and returns predictions with confidence scores, leveraging automatic tokenization and model execution under the hood.
Since its inception, the Transformers library has evolved to include robust fine-tuning tools, such as the Trainer class, which streamlines supervised learning workflows with built-in support for distributed training, gradient accumulation, and evaluation metrics. Optimizations have been integrated for transformer-specific challenges, including accelerated attention computations via FlashAttention to reduce memory usage and computation time during both training and inference, as well as tokenizer configurations that adapt to diverse languages and modalities.[5] These enhancements have made the library practical for fine-tuning large models on standard hardware, as sketched below.
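A minimal fine-tuning sketch using the Trainer class follows; the checkpoint, dataset slice, and hyperparameters are illustrative choices rather than a recommended configuration:

```python
from datasets import load_dataset
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

# Illustrative setup: a small IMDB slice and a DistilBERT checkpoint.
dataset = load_dataset("imdb", split="train[:1000]")
tokenizer = AutoTokenizer.from_pretrained("distilbert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained(
    "distilbert-base-uncased", num_labels=2
)

def tokenize(batch):
    # Truncate/pad reviews to a fixed length for batching.
    return tokenizer(batch["text"], truncation=True, padding="max_length")

dataset = dataset.map(tokenize, batched=True)

args = TrainingArguments(
    output_dir="out",
    per_device_train_batch_size=8,
    num_train_epochs=1,
)
trainer = Trainer(model=model, args=args, train_dataset=dataset)
trainer.train()  # handles the training loop, logging, and checkpointing
```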
In terms of performance, the library incorporates model parallelism (data parallelism, tensor parallelism, and pipeline parallelism) to distribute computation across multiple devices, enabling the training and inference of models too large for a single GPU. Integration with tools like DeepSpeed for ZeRO optimization can yield 2-10x reductions in memory footprint and training time for billion-parameter models compared to baseline PyTorch implementations, depending on scale and hardware configuration. Such capabilities underscore the library's role in making high-performance transformer models broadly accessible.
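As a sketch of how this integration is typically wired up, TrainingArguments accepts a DeepSpeed configuration as a file path or an inline dict; the stage and "auto" values below are illustrative assumptions, not tuned settings:

```python
from transformers import TrainingArguments

# Minimal ZeRO stage-2 configuration passed inline; "auto" lets the
# Trainer fill in values from its own arguments at runtime.
ds_config = {
    "zero_optimization": {"stage": 2},
    "train_micro_batch_size_per_gpu": "auto",
    "gradient_accumulation_steps": "auto",
}

args = TrainingArguments(
    output_dir="out",
    per_device_train_batch_size=8,
    deepspeed=ds_config,  # enables DeepSpeed ZeRO inside the Trainer
)
```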
Supporting Libraries
The Hugging Face ecosystem includes several supporting libraries that facilitate data preparation, tokenization, distributed training, efficient fine-tuning, and specialized model handling, enabling seamless machine learning workflows beyond core model inference. These libraries are designed to integrate tightly with the broader platform, allowing users to load datasets, preprocess inputs, scale training across hardware, and apply advanced techniques like parameter-efficient adaptation, all while leveraging the Hugging Face Hub for sharing resources.
The Datasets library provides tools for easily loading, processing, and sharing AI datasets across natural language processing, computer vision, and audio tasks. It supports streaming large datasets directly from the Hub, which is particularly useful for handling multi-terabyte collections without full downloads, as demonstrated in recent optimizations for prefetching and buffering introduced in late 2025. By November 2025, the library enables access to over 544,000 datasets hosted on the Hub, including multimodal examples like the FineVision dataset with 24 million image-text pairs for vision-language model training. Key features include built-in data augmentation, such as random cropping or text perturbations, and support for multimodal data formats that combine text, images, and audio for diverse applications.
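A short sketch of streaming access is shown below; the dataset identifier is an illustrative example, and take simply limits how many records are pulled from the stream:

```python
from datasets import load_dataset

# streaming=True iterates over the remote dataset without downloading it.
stream = load_dataset("allenai/c4", "en", split="train", streaming=True)

for example in stream.take(3):  # read only the first three records
    print(example["text"][:80])
```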
The Tokenizers library offers fast, customizable tokenization algorithms tailored for various languages and model architectures. It implements efficient methods like Byte-Pair Encoding (BPE), which merges frequent character pairs to build subword vocabularies, reducing out-of-vocabulary issues in multilingual settings. This library processes text into tensor inputs optimized for transformer models, with Rust-based backends ensuring high performance even on large corpora.
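The sketch below trains a small BPE tokenizer from scratch; the corpus file, vocabulary size, and special tokens are placeholder assumptions:

```python
from tokenizers import Tokenizer
from tokenizers.models import BPE
from tokenizers.pre_tokenizers import Whitespace
from tokenizers.trainers import BpeTrainer

# Build a BPE tokenizer that splits raw text on whitespace before merging.
tokenizer = Tokenizer(BPE(unk_token="[UNK]"))
tokenizer.pre_tokenizer = Whitespace()

trainer = BpeTrainer(vocab_size=5000, special_tokens=["[UNK]", "[PAD]"])
tokenizer.train(files=["corpus.txt"], trainer=trainer)  # corpus.txt is a placeholder

print(tokenizer.encode("Hugging Face tokenizers").tokens)
```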
Other prominent libraries include Accelerate, which simplifies distributed training by allowing the same PyTorch code to run across single GPUs, multiple GPUs, TPUs, or clusters with minimal modifications—typically just four lines of code for setup. PEFT (Parameter-Efficient Fine-Tuning) enables methods like Low-Rank Adaptation (LoRA), which fine-tunes large models by updating only a small subset of parameters, drastically reducing memory and compute needs while maintaining performance. Diffusers specializes in pretrained diffusion models for generating images, videos, and audio, providing pipelines for tasks like text-to-image synthesis with easy customization.
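As an example of parameter-efficient fine-tuning, the sketch below wraps a GPT-2 checkpoint with a LoRA adapter via PEFT; the rank, scaling factor, and target modules are illustrative values:

```python
from peft import LoraConfig, get_peft_model
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained("gpt2")

# Inject low-rank adapters into GPT-2's fused attention projection.
config = LoraConfig(
    r=8,                       # rank of the low-rank update matrices
    lora_alpha=16,             # scaling factor applied to the update
    target_modules=["c_attn"],
    lora_dropout=0.05,
)
model = get_peft_model(model, config)
model.print_trainable_parameters()  # typically well under 1% of all weights
```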
These libraries interoperate closely with the Transformers library; for instance, the Datasets library can stream and preprocess data directly into training loops managed by Accelerate, while PEFT adapters apply to models loaded via Transformers for efficient fine-tuning. This integration streamlines end-to-end workflows, from data ingestion to optimized training.
By 2025, recent additions have expanded support for advanced techniques, including the GRPO (Group Relative Policy Optimization) trainer in the TRL (Transformer Reinforcement Learning) library, which facilitates reinforcement learning from human feedback (RLHF) through online iterative improvements using self-generated data. Additionally, enhancements in Datasets and Diffusers have bolstered tools for audio and vision tasks, such as multimodal streaming for vision-language datasets and diffusion-based audio generation pipelines.
Safetensors
Safetensors is a lightweight library developed by Hugging Face that provides a secure and efficient serialization format for machine learning model weights, serving as a safer alternative to PyTorch's pickle format to mitigate vulnerabilities such as arbitrary code execution during model loading.[32][33] This format addresses critical security risks in shared model repositories, where malicious code embedded in pickle files could compromise user systems upon deserialization.[33]
Key features of Safetensors include zero-copy deserialization, which allows tensors to be loaded directly into memory without intermediate copying, enabling faster inference startup times.[32] It supports tensors from multiple frameworks, including NumPy, PyTorch, JAX, and TensorFlow, through Python and Rust bindings that facilitate seamless integration.[33] The file format consists of a compact 8-byte header indicating the size of the metadata, followed by a JSON-encoded header containing tensor details such as names, data types (e.g., bfloat16, fp8), shapes, and byte offsets, and then the raw binary tensor data stored in little-endian, row-major order without striding.[33] This structure supports sharded files for large models, avoiding file size limits and enabling lazy loading in distributed environments.[33]
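A minimal round-trip with the PyTorch bindings looks like the following; the tensor name and shape are arbitrary:

```python
import torch
from safetensors.torch import load_file, save_file

# Serialize a dict of named tensors, then load it back without pickle.
tensors = {"embedding.weight": torch.randn(1024, 768)}
save_file(tensors, "model.safetensors")

loaded = load_file("model.safetensors")
print(loaded["embedding.weight"].shape)  # torch.Size([1024, 768])
```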
Safetensors was released in September 2022 and quickly integrated into the Hugging Face Transformers library and Hub, becoming the recommended standard for uploading models to prevent security risks associated with legacy formats.[34] By 2025, nearly all new models on the Hugging Face Hub, including major releases like Llama, Gemma, and Stable Diffusion, are stored in the Safetensors format.[35]
Performance benchmarks demonstrate Safetensors' efficiency: for the BLOOM model, loading times were reduced from 10 minutes using PyTorch pickle to 45 seconds on 8 GPUs, representing over 13x speedup in this case.[36] On CPU, loading is extremely fast compared to pickle, while GPU loading matches or exceeds PyTorch equivalents, with general improvements of 2-5x for typical models like GPT-2.[37][33]
In 2025, Safetensors received enhancements for better support of quantized models, including compatibility with formats like GPTQ and AWQ for reduced precision weights, and improved sharding for multi-GPU deployments.[38] These updates also facilitate integration with enterprise security protocols, such as secure model catalogs that scan for vulnerabilities in distributed AI environments.[39]
Hugging Face Hub
The Hugging Face Hub is a central collaborative platform for the machine learning community, functioning as a Git-based repository that enables hosting, discovery, and versioning of resources such as models and datasets. Launched in 2019, it has grown significantly, hosting over 2 million models, more than 500,000 datasets, and over 1 million interactive demos called Spaces as of 2025.[40][8] This infrastructure democratizes access to pre-trained models and data, allowing users to share and build upon open-source contributions without proprietary barriers.
Key features of the Hub include model cards, which provide comprehensive metadata for each hosted model, such as usage instructions, supported tasks, languages, ethical considerations, potential biases, and limitations.[41] Similarly, dataset viewers facilitate exploration through Dataset Cards and the Data Studio, enabling interactive previews and analysis of structured data. Version control is powered by Git, with support for Git LFS to handle large files efficiently, allowing users to track changes via commit histories, diffs, and branches.[42][40]
Collaboration is streamlined through familiar tools like forking repositories, submitting pull requests for contributions, and participating in community discussions directly on the platform. The Hub integrates with GitHub, enabling seamless synchronization of repositories and broader code-sharing workflows.[40] For search and discovery, users can apply filters by task (e.g., text classification or image generation), supported library (e.g., Transformers), and language, while trending sections highlight popular and recently updated resources to aid navigation across the vast collection.[40]
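Repositories can also be accessed programmatically through the companion huggingface_hub client library; a minimal sketch (the repository ID is illustrative) is:

```python
from huggingface_hub import hf_hub_download, snapshot_download

# Fetch a single file, or mirror an entire repository locally.
config_path = hf_hub_download(repo_id="bert-base-uncased", filename="config.json")
local_dir = snapshot_download(repo_id="bert-base-uncased")

print(config_path, local_dir)
```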
Spaces extend the Hub's utility by offering no-code hosting for interactive machine learning applications, primarily built using Gradio or Streamlit SDKs. These allow creators to deploy demos for diverse tasks, such as building chatbots for natural language interaction or tools for image generation and editing, with over 1 million public Spaces available for experimentation and reuse.[40][43]
Inference and Deployment
Hugging Face provides a suite of tools designed to facilitate the inference and deployment of machine learning models in production environments, enabling developers to run models at scale without managing underlying infrastructure. These tools bridge the gap between model development on the Hugging Face Hub and real-world applications, supporting everything from quick prototyping to high-throughput serving. Central to this ecosystem is the emphasis on accessibility, optimization, and integration with major cloud platforms.
The Inference API offers a serverless solution for rapid model testing, allowing users to perform inference via simple HTTP endpoints on thousands of models hosted on the Hugging Face Hub without any setup or infrastructure management. It includes a free tier suitable for experimentation, with rate limits that scale for PRO subscribers, and supports tasks such as text generation, image classification, and audio processing through a unified Python or JavaScript client. This API is particularly useful for validating model performance in low-stakes scenarios, powering interactive playgrounds where users can query models directly in the browser.[44][45]
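A sketch of querying the serverless API through the Python client follows; the model ID and token are placeholders, and rate limits depend on the account tier:

```python
from huggingface_hub import InferenceClient

# Token and model are placeholders; inference runs on remote serverless workers.
client = InferenceClient(model="gpt2", token="hf_xxx")
print(client.text_generation("The Hugging Face Hub is", max_new_tokens=20))
```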
For production-grade deployment, Inference Endpoints enable the hosting of dedicated, scalable instances of models on GPU, CPU, or accelerator hardware, with pay-as-you-go pricing starting at $0.033 per hour for basic CPU cores and $0.50 per hour for entry-level GPUs like NVIDIA T4 as of November 2025. Users can configure auto-scaling by setting minimum and maximum replicas to handle variable loads, and select custom hardware options across providers such as AWS, Google Cloud, and Azure, including advanced instances like NVIDIA A100 GPUs or AWS Inferentia2 chips. This service ensures low-latency responses and secure, isolated environments, billed per minute of active compute usage.[46]
Complementing these deployment options, the Optimum library extends the Transformers framework to optimize models specifically for efficient inference, incorporating techniques like ONNX Runtime export for cross-platform compatibility and quantization methods that reduce model size and accelerate execution on diverse hardware. For instance, 8-bit or 4-bit quantization can yield up to 4x speedups in latency while maintaining accuracy, making it ideal for resource-constrained settings. Optimum integrates seamlessly with pipelines for tasks like question answering or summarization, allowing developers to export and run optimized models via a single API call.[47][48]
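The sketch below exports a sentiment model to ONNX Runtime via Optimum and runs it through a standard pipeline; the checkpoint is an illustrative choice:

```python
from optimum.onnxruntime import ORTModelForSequenceClassification
from transformers import AutoTokenizer, pipeline

model_id = "distilbert-base-uncased-finetuned-sst-2-english"

# export=True converts the PyTorch checkpoint to ONNX on the fly.
model = ORTModelForSequenceClassification.from_pretrained(model_id, export=True)
tokenizer = AutoTokenizer.from_pretrained(model_id)

clf = pipeline("text-classification", model=model, tokenizer=tokenizer)
print(clf("Optimum makes inference faster."))
```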
Hugging Face's tools integrate natively with leading cloud providers to simplify scaling and serverless deployment. On AWS, models can be deployed via Amazon SageMaker endpoints using dedicated SDK extensions that handle containerization and monitoring automatically. Similarly, Google Cloud integration supports deployment on Kubernetes Engine (GKE) or Vertex AI for managed inference, enabling low-latency applications through serverless options like Cloud Run. These integrations allow for hybrid setups, where models from the Hub are pulled directly into cloud workflows for seamless orchestration.[49][50]
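As an illustration of the AWS path, the sketch below deploys a Hub model to a SageMaker endpoint using the sagemaker SDK's Hugging Face integration; the IAM role is a placeholder, and the version pins follow a documented pairing that may need updating for current container images:

```python
from sagemaker.huggingface import HuggingFaceModel

# Pull a Hub model into a managed SageMaker endpoint.
model = HuggingFaceModel(
    env={
        "HF_MODEL_ID": "distilbert-base-uncased-finetuned-sst-2-english",
        "HF_TASK": "text-classification",
    },
    role="arn:aws:iam::123456789012:role/SageMakerRole",  # placeholder role
    transformers_version="4.26.0",  # versions from a documented pairing
    pytorch_version="1.13.1",
    py_version="py39",
)

predictor = model.deploy(initial_instance_count=1, instance_type="ml.m5.xlarge")
print(predictor.predict({"inputs": "Deploying from the Hub is simple."}))
```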
In 2025, Hugging Face enhanced its inference capabilities with a focus on edge deployment for mobile, IoT, and robotics applications, bolstered by the April acquisition of Pollen Robotics. This move integrated open-source hardware like the Reachy 2 humanoid robot, featuring a mobile base with LiDAR for navigation, into the LeRobot platform, which provides PyTorch-based tools for on-device model training and inference in real-world embodied AI scenarios. These advancements lower barriers for deploying optimized models on edge devices, tying software optimizations from Optimum to physical hardware for applications in autonomous systems and teleoperated robotics.[51][52]
Enterprise Offerings
Hugging Face provides enterprise-grade solutions through its Enterprise Hub, which enables organizations to privately host and collaborate on AI models, datasets, and applications with enhanced security and management tools. Key features include unlimited private repositories, role-based access controls via Resource Groups, and integration with Single Sign-On (SSO) protocols such as SAML and SCIM for user provisioning. Pricing for the Enterprise Hub starts at $50 per user per month, with options for annual commitments and managed billing to support scalable team deployments.[53][54]
Complementing the Hub, AutoTrain offers a no-code platform for fine-tuning custom machine learning models, supporting supervised tasks like classification and question answering, as well as unsupervised tasks such as clustering. Enterprise users can leverage AutoTrain Spaces within the Hub for seamless, GPU-accelerated training without infrastructure management, making it suitable for rapid prototyping and deployment of tailored AI solutions. This service abstracts complex training pipelines, allowing businesses to iterate on models using their proprietary data while maintaining privacy.[55][56]
Hugging Face's professional services include dedicated expert support for model customization and optimization consulting, helping enterprises integrate AI into production workflows. These services facilitate partnerships with major players like IBM and Salesforce, enabling collaborative development of customized large language models and deployment strategies. For instance, integrations with IBM's watsonx and Salesforce's Einstein platforms allow for secure, scalable AI applications built on open-source foundations.[57][58]
Security is a cornerstone of these offerings, with the Enterprise Hub achieving SOC 2 Type 2 compliance and GDPR adherence to ensure data protection and auditability. Features encompass audit logs for tracking model usage, malware scanning on uploads, and private endpoints for Inference Endpoints to isolate sensitive computations. These measures support regulatory requirements and mitigate risks in enterprise AI deployments.[59][60]
In 2025, following the April acquisition of Pollen Robotics, Hugging Face expanded its enterprise services to include hardware integration for robotics and edge AI applications. This move introduces support for deploying open-source AI models on humanoid robots like Reachy 2, enabling businesses to customize edge deployments with optimized hardware-software stacks for real-world automation tasks.[27]
Community and Impact
Open-Source Ecosystem
Hugging Face's open-source ecosystem is built around a vast collaborative community, comprising over five million registered users as of 2025, who actively contribute to the development and refinement of AI models, datasets, and applications.[61] This scale is evidenced by more than two million public models hosted on the platform, alongside hundreds of thousands of datasets and spaces created by contributors worldwide.[8] The community engages through regular events, such as Community Weeks focused on specific technologies like JAX and Flax for natural language processing and computer vision tasks, fostering hands-on collaboration and knowledge sharing among participants.[62]
Contributions operate under an open governance model primarily hosted on GitHub, where repositories like Transformers encourage pull requests, issue discussions, and code reviews from global developers to iteratively improve libraries and models. To incentivize high-impact work, Hugging Face offers bounties via GitHub issues and grants through programs like the Fellowship, which supports early-career researchers in advancing open AI projects.[63] Key initiatives underscore this collaborative spirit; for instance, the BigScience workshop from 2021 to 2022 united over 1,000 researchers to develop the BLOOM multilingual language model, emphasizing transparent training processes and resource allocation.[64] Complementing such efforts, ethical AI guidelines are integrated into model cards, requiring creators to document intended uses, biases, limitations, and societal impacts to promote responsible development.[41]
Collaboration is facilitated by built-in tools like discussion forums for peer feedback and leaderboards that benchmark model performance on standards such as GLUE and SuperGLUE, enabling competitive yet cooperative advancements in natural language understanding.[65] These features allow users to compare results, share insights, and build upon each other's work without proprietary barriers. To address inclusivity, Hugging Face runs diversity-focused programs, including the AI Research Residency and Fellowship initiatives, which prioritize applicants from underrepresented groups in AI to broaden participation and perspectives in the ecosystem.[66]
Adoption and Broader Influence
Hugging Face's tools and platform have achieved broad industry adoption, powering AI initiatives for over 50,000 organizations worldwide, including major enterprises in technology, finance, and healthcare.[1] In natural language processing, companies deploy Hugging Face models to build intelligent chatbots that handle customer interactions with high accuracy and scalability, while in computer vision, they enable applications like object detection in manufacturing quality control. Generative AI use cases, such as content creation and image synthesis, further demonstrate its versatility, with businesses fine-tuning models like Stable Diffusion for customized creative workflows.[67][12]
A key example of enterprise integration is Hugging Face's partnership with IBM, where models from the Hub are seamlessly incorporated into the watsonx.ai platform to support scalable deployments in business analytics and decision-making.[68] For sentiment analysis at scale, organizations fine-tune BERT-based models to process vast customer feedback datasets, improving market insights without requiring extensive in-house expertise. These applications highlight how Hugging Face reduces development time and costs, allowing teams to focus on innovation rather than foundational infrastructure.[69]
The platform's broader influence stems from its role in democratizing AI, providing free access to pre-trained models, datasets, and tutorials that lower barriers for developers and researchers globally.[70] This accessibility has accelerated AI research, with the Transformers library serving as a foundation for numerous state-of-the-art natural language processing advancements, evidenced by over 20 billion downloads of top models on the Hub.[71] By fostering an open ecosystem, Hugging Face has influenced ethical AI practices through transparent model sharing via model cards, which document biases, limitations, and usage guidelines to promote responsible deployment.[72] However, the platform has faced challenges with security, including the identification of over 100 malicious models in early 2025 that exploited pickle file vulnerabilities for potential code execution; Hugging Face responded swiftly by removing the models and improving scanning tools like Picklescan.[73]
In emerging areas, Hugging Face's April 2025 acquisition of Pollen Robotics marks a significant push into AI-enabled robotics, open-sourcing designs for humanoid robots like Reachy 2 to integrate large language models with physical actions.[51] This initiative includes hardware innovations such as 3D-printed arms, enabling customizable, affordable robotics for research and applications in automation and human-robot interaction. Following the acquisition, Hugging Face launched the Reachy Mini, an open-source desktop humanoid robot in July 2025, priced starting at $299 for the lite version, to facilitate broader experimentation with AI-driven robotics.[27][74] Overall, these efforts address key challenges by making advanced AI and robotics accessible to non-experts, while emphasizing transparency to mitigate ethical risks in deployment.[75]