
Amazon S3

Amazon Simple Storage Service (Amazon S3) is a cloud-based object storage service provided by Amazon Web Services (AWS) that enables users to store and retrieve any amount of data at any time from anywhere on the web. Designed for scalability without the need to provision storage capacity, Amazon S3 organizes data as objects within containers called buckets, allowing for virtually unlimited storage and automatic scaling to handle high volumes of requests. It supports key features such as data versioning to preserve multiple variants of objects; fine-grained access controls through bucket policies, AWS Identity and Access Management (IAM), and access control lists (ACLs); and multiple storage classes tailored to different access patterns and cost requirements. Additionally, S3 provides encryption at rest by default, server-side and client-side encryption options, and comprehensive auditing tools to support data security and compliance. Among its notable benefits, Amazon S3 delivers 99.999999999% (11 9's) durability over a given year by automatically replicating data across multiple devices and facilities within an AWS Region, alongside 99.99% designed availability for the S3 Standard storage class. Its pay-as-you-go model eliminates upfront costs, making it cost-effective for diverse workloads, while its performance supports an average of over 150 million requests per second globally as of December 2024. These attributes have made S3 a foundational service for building data lakes, enabling backup and restore operations, disaster recovery, archiving, and powering generative AI applications, as used by organizations such as NASCAR, the BBC, Ancestry, Netflix, and Airbnb. As of March 2025, Amazon S3 stores over 400 trillion objects and manages exabytes of data, underscoring its role in supporting cloud-native applications, mobile apps, and big data analytics.

Overview

Introduction

Amazon Simple Storage Service (Amazon S3) is a scalable object storage service offered by Amazon Web Services (AWS) that allows users to store and retrieve any amount of data from anywhere on the web using a simple web services interface. The service is designed for developers and IT teams to upload, organize, and access data as discrete objects within storage containers called buckets, with each object identified by a unique key, eliminating the need to manage complex infrastructure. Launched by AWS on March 14, 2006, Amazon S3 pioneered cloud-based object storage, providing a foundational building block for modern applications. As of 2025, Amazon S3 stores over 400 trillion objects comprising exabytes of data and processes an average of 150 million requests per second. It also supports up to 1 petabyte per second of peak throughput to handle massive demands.

Key Characteristics

Amazon S3 is designed for elastic scalability, automatically expanding and contracting to accommodate unlimited amounts of data without the need for users to provision storage capacity in advance. This capability ensures seamless handling of varying workloads, from small datasets to petabyte-scale storage, as the service manages resource allocation dynamically behind the scenes. A core attribute of Amazon S3 is its pay-as-you-go pricing model, which charges users solely for the resources consumed, including storage volume, requests, data retrievals, and outbound data transfer, with no minimum fees or long-term commitments required. This approach aligns costs directly with usage patterns, making it economical for both intermittent and continuous data storage needs.

Amazon S3 provides high performance through low-latency data access, facilitated by integration with AWS's global edge locations for optimized content delivery and multi-AZ replication that ensures consistent availability across Availability Zones. These features enable rapid read and write operations, supporting applications that demand quick response times without performance degradation at scale.

Data in Amazon S3 is organized using buckets as top-level logical containers, each identified by a globally unique name and serving as a namespace for storing objects, which are the fundamental units of data with a maximum size of 5 terabytes. Objects are addressed via unique keys within a flat namespace, allowing flexible organization through prefixes that mimic hierarchical structures without imposing a true directory hierarchy, as illustrated in the sketch below. Lifecycle management in Amazon S3 enables automated policies that transition objects between storage tiers based on predefined rules, such as object age or access frequency, to optimize costs and efficiency over time. These rules can also handle object expiration, ensuring data is retained only as long as necessary while complying with retention requirements. Complementing these characteristics, Amazon S3 is engineered for exceptional durability, targeting 99.999999999% (11 nines) over a given year through redundant storage across multiple facilities.
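As a brief illustration of this flat key model, the following sketch uses the AWS SDK for Python (boto3) with hypothetical bucket and key names; the slash-delimited key simulates a folder path purely by prefix convention:

```python
import boto3

s3 = boto3.client("s3", region_name="us-east-1")

# Keys are flat strings; slashes are a naming convention, not real directories.
s3.put_object(
    Bucket="example-logs-bucket",        # hypothetical bucket name
    Key="2025/11/01/app.log",            # the prefix "2025/11/01/" mimics a folder
    Body=b"request served in 12 ms\n",
)

# Listing by prefix retrieves only the keys that share the simulated "folder".
response = s3.list_objects_v2(Bucket="example-logs-bucket", Prefix="2025/11/")
for obj in response.get("Contents", []):
    print(obj["Key"], obj["Size"])
```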

Technical Architecture

Design Principles

Amazon S3 employs an object-based storage model, where data is stored as discrete, immutable objects rather than files within a traditional file system. Each object consists of a key (a unique identifier within its bucket), the data itself (up to 5 terabytes in size), and associated metadata in the form of name-value pairs that describe the object for management and retrieval purposes. This flat design eliminates the need for directories or folders, using key prefixes to simulate hierarchy if desired, which simplifies scaling and avoids the complexities of file system management. Objects are immutable, meaning any modification requires uploading a new object with an updated key or version, ensuring consistency in a distributed environment.

The architecture of Amazon S3 is fundamentally distributed to achieve high availability and reliability, with data automatically replicated across multiple devices within a single facility and further across multiple Availability Zones (AZs) for redundancy. Availability Zones are isolated locations engineered with independent power, cooling, and networking to minimize correlated failures, and S3 spreads objects across at least three AZs (except for one-zone storage classes) to protect against facility-wide outages. An elastic repair mechanism proactively detects and mitigates failures, such as disk errors, by re-replicating data to healthy storage, scaling operations proportionally to the total data volume stored. This cell-based design confines potential issues, like software updates or hardware faults, to small partitions of the system, limiting the blast radius and maintaining overall service availability.

Amazon S3 provides a RESTful interface for all operations, leveraging standard HTTP methods to ensure simplicity, interoperability, and ease of integration with web-based applications and tools. Core operations include PUT for uploading objects, GET for retrieving them, and DELETE for removal, all authenticated via AWS Signature Version 4 to secure requests over HTTPS. This design adheres to REST principles, treating buckets and objects as resources addressable via URLs, which enables stateless interactions and compatibility with a wide range of clients without requiring proprietary protocols.

As a dedicated storage service, Amazon S3 intentionally avoids server-side processing capabilities, focusing exclusively on durable storage and retrieval while delegating any computational needs to complementary AWS services. This allows S3 to optimize for efficiency and scalability, integrating seamlessly with services like AWS Lambda for event-driven processing or Amazon EC2 for custom compute workloads triggered by S3 events.

Since December 2020, Amazon S3 has implemented a strong read-after-write consistency model across all operations, ensuring that any subsequent read immediately reflects the results of a successful write, overwrite, delete, or metadata update without requiring application changes. This upgrade from the prior model, which was eventually consistent for overwrites and deletes, provides predictable behavior for applications, particularly those involving real-time data access or bucket listings, while preserving the service's high performance and availability. A short sketch below illustrates this guarantee.
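The sketch (boto3, with a hypothetical bucket and key) overwrites an object and immediately reads it back; under the consistency model described above, the GET reflects the PUT with no retry logic:

```python
import boto3

s3 = boto3.client("s3")
bucket, key = "example-consistency-demo", "config/settings.json"  # hypothetical names

# Overwrite the object with new content.
s3.put_object(Bucket=bucket, Key=key, Body=b'{"version": 2}')

# A subsequent read immediately reflects the successful write: there is no
# eventual-consistency window, so no polling or retries are required.
body = s3.get_object(Bucket=bucket, Key=key)["Body"].read()
assert body == b'{"version": 2}'
```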

Storage Classes

Amazon S3 provides multiple storage classes tailored to different access frequencies and performance requirements, allowing users to balance cost efficiency with retrieval needs while maintaining consistent durability across all classes at 99.999999999% (11 nines) over a given year. These classes support data redundancy across multiple Availability Zones (AZs) except for single-AZ options, and most enable seamless transitions via S3 Lifecycle policies. The following table summarizes the key characteristics of each storage class:
| Storage Class | Primary Access Patterns | Retrieval Time | Designed Availability | SLA Availability | Key Features and Notes |
|---|---|---|---|---|---|
| S3 Standard | Frequently accessed data | Milliseconds | 99.99% | 99.9% | Low-latency, high-throughput performance; data stored across at least 3 AZs; supports lifecycle transitions. |
| S3 Intelligent-Tiering | Unknown or changing patterns | Milliseconds (frequent tiers); varies for infrequent/archive | 99.9% | 99% | Automatically moves objects between frequent, infrequent, and archive instant access tiers after 30, 90, or 180 days of no access; no retrieval fees; monitoring and automation fee applies; stored across at least 3 AZs; supports lifecycle transitions. |
| S3 Express One Zone | Latency-sensitive, frequently accessed data in a single AZ | Single-digit milliseconds | 99.95% | 99.9% | High-performance storage for demanding workloads; supports up to millions of requests per second; uses directory buckets; single AZ only; no support for lifecycle transitions; introduced in 2023. |
| S3 Standard-Infrequent Access (IA) | Infrequently accessed data needing quick access | Milliseconds | 99.9% | 99% | Suitable for objects larger than 128 KB stored for at least 30 days; retrieval fees apply; stored across at least 3 AZs; supports lifecycle transitions. |
| S3 One Zone-Infrequent Access (IA) | Infrequently accessed, re-creatable data | Milliseconds | 99.5% | 99% | Lower redundancy in a single AZ for cost savings; suitable for objects larger than 128 KB; retrieval fees apply; supports lifecycle transitions. |
| S3 Glacier Instant Retrieval | Rarely accessed data requiring immediate access | Milliseconds | 99.9% | 99% | Archival option with low cost; minimum object size of 128 KB; 90-day minimum storage duration; stored across at least 3 AZs; supports lifecycle transitions. |
| S3 Glacier Flexible Retrieval | Rarely accessed data for backup or archival | Minutes to hours (expedited, standard, bulk options) | 99.99% | 99.9% | Retrieval flexibility with free bulk retrievals; 90-day minimum storage duration; stored across at least 3 AZs; supports lifecycle transitions. |
| S3 Glacier Deep Archive | Very rarely accessed long-term archival data | 12–48 hours (standard); 48–72 hours (bulk) | 99.99% | 99.9% | Lowest-cost storage for compliance or long-term retention; 180-day minimum storage duration; stored across at least 3 AZs; supports lifecycle transitions. |
S3 Lifecycle policies enable automated management by defining rules to transition objects between storage classes based on age, access patterns, or other criteria, such as moving from S3 Standard to S3 Glacier Deep Archive after 365 days of storage. These policies apply to buckets and can include expiration rules to delete objects after a specified period, optimizing storage without manual intervention.
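As a sketch of how such a rule is expressed in practice (boto3; the bucket name and prefix are hypothetical), the following applies the transition described above, moving objects under a prefix to S3 Glacier Deep Archive after 365 days and expiring them after roughly ten years:

```python
import boto3

s3 = boto3.client("s3")

s3.put_bucket_lifecycle_configuration(
    Bucket="example-archive-bucket",  # hypothetical bucket
    LifecycleConfiguration={
        "Rules": [
            {
                "ID": "archive-then-expire",
                "Status": "Enabled",
                "Filter": {"Prefix": "reports/"},  # rule applies only to this prefix
                "Transitions": [
                    {"Days": 365, "StorageClass": "DEEP_ARCHIVE"}
                ],
                "Expiration": {"Days": 3650},  # delete about 10 years after creation
            }
        ]
    },
)
```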

Limits and Scalability

Amazon S3 imposes specific limits on object sizes to ensure efficient storage and retrieval. Individual objects can range from 0 bytes up to a maximum of 5 tebibytes (TiB), with multipart uploads enabling the handling of large files by dividing them into parts ranging from 5 mebibytes (MiB) to 5 gibibytes (GiB), up to a total of 10,000 parts per upload. Bucket creation is limited by default to 10,000 general purpose buckets per AWS account, though this quota can be increased upon request, with support for up to 1 million buckets. Each bucket can store an unlimited number of objects, allowing for virtually boundless data accumulation without predefined caps on object count.

Request rates are designed for high throughput, with Amazon S3 supporting at least 3,500 PUT, COPY, POST, or DELETE requests per second and 5,500 GET or HEAD requests per second per prefix in a bucket. These rates scale horizontally by distributing requests across multiple prefixes, enabling applications to achieve significantly higher aggregate throughput, such as 55,000 GET requests per second across 10 prefixes, without fixed upper bounds.

At a global level, Amazon S3 handles massive scale through features like cross-region replication for copying data across multiple AWS Regions and integration with Amazon CloudFront for edge caching, which reduces latency for worldwide access. The service processes an average of over 150 million requests per second while storing more than 400 trillion objects, demonstrating its elasticity, which automatically adjusts to varying workloads. To maintain performance at scale, Amazon S3 employs automatic partitioning strategies, including sharding of the object keyspace into prefixes for even load distribution across underlying partitions. This approach ensures balanced request handling and prevents bottlenecks, with gradual partition scaling that may involve temporary throttling via HTTP 503 (Slow Down) errors during traffic spikes.
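In practice the SDK transfer helpers manage these multipart limits automatically; the boto3 sketch below (hypothetical bucket and file names) configures 100 MiB parts, within the allowed 5 MiB to 5 GiB range, and uploads them in parallel:

```python
import boto3
from boto3.s3.transfer import TransferConfig

s3 = boto3.client("s3")

# Upload in 100 MiB parts with up to 10 parts in flight at a time.
config = TransferConfig(
    multipart_threshold=100 * 1024 * 1024,  # switch to multipart above 100 MiB
    multipart_chunksize=100 * 1024 * 1024,  # each part is 100 MiB
    max_concurrency=10,
)

# upload_file drives CreateMultipartUpload / UploadPart / CompleteMultipartUpload.
s3.upload_file(
    Filename="backup.tar",             # hypothetical local file
    Bucket="example-backups-bucket",   # hypothetical bucket
    Key="nightly/backup.tar",
    Config=config,
)
```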

Features and Capabilities

Durability and Availability

Amazon S3 achieves exceptional data durability through its architecture, which is designed to deliver 99.999999999% (11 9's) durability of objects over a given year by automatically storing data redundantly across multiple devices and at least three distinct Availability Zones (AZs) within an AWS Region. This multi-fold replication ensures that the annual risk of data loss due to hardware failure, errors, or disasters is extraordinarily low, with the system engineered to sustain the concurrent loss of multiple facilities without losing data.

To maintain this durability, Amazon S3 employs advanced mechanisms, including automatic error correction and verification using checksums to detect and repair issues such as bit rot or data corruption. These checksums are computed on upload and used to validate stored data, enabling proactive repairs to restore redundancy when degradation is identified. Additionally, options like S3 Cross-Region Replication (CRR) allow users to further enhance resilience by asynchronously copying objects to a different AWS Region for disaster recovery.

Availability in Amazon S3 varies by storage class but is optimized for high uptime; for example, the S3 Standard class is designed for 99.99% availability over a year, meaning objects are accessible for requests with minimal interruption. In contrast, classes like S3 One Zone-IA, which store data within a single AZ, offer lower designed availability of 99.5% to balance cost and performance needs. These guarantees are backed by the Amazon S3 Service Level Agreement (SLA), which commits to a monthly uptime percentage of at least 99.9% for S3 Standard and similar classes, with service credits provided as compensation: 10% of monthly fees for uptime below 99.9% but at or above 99.0%, 25% for below 99.0% but at or above 95.0%, and 100% for below 95.0%. For classes like S3 One Zone-IA, the SLA threshold is 99.0%, reflecting their single-AZ design. Uptime is calculated from error rates in 5-minute intervals, excluding factors like customer-induced issues or force majeure events.

Users can monitor object integrity and replication status through built-in features such as S3 Versioning, which preserves multiple versions of objects to enable recovery from overwrites or deletions, and replication metrics available via Amazon CloudWatch for tracking completion and errors in replication jobs. These tools provide visibility into data persistence without requiring manual intervention.
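Checksums are also exposed to clients. As a small boto3 sketch (hypothetical bucket and key), an upload can request a SHA-256 checksum that S3 computes and stores, letting the client verify end-to-end integrity against a local computation:

```python
import base64
import hashlib
import boto3

s3 = boto3.client("s3")
data = b"important payload"

# Ask S3 to compute and validate a SHA-256 checksum for the object on upload.
put = s3.put_object(
    Bucket="example-integrity-bucket",  # hypothetical bucket
    Key="payload.bin",
    Body=data,
    ChecksumAlgorithm="SHA256",
)

# Compare S3's stored checksum (base64-encoded) with one computed locally.
local = base64.b64encode(hashlib.sha256(data).digest()).decode()
assert put["ChecksumSHA256"] == local
```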

Security and Compliance

Amazon S3 provides robust security features to protect data at rest, in transit, and during access, including encryption, fine-grained access controls, and comprehensive auditing mechanisms. These features are designed to help users meet organizational security requirements and regulatory standards while leveraging AWS-managed infrastructure.

Encryption

Amazon S3 supports multiple encryption options to secure data, ensuring confidentiality against unauthorized access. Server-side encryption (SSE) is applied automatically to objects upon upload, with three primary variants: SSE-S3 uses keys managed by Amazon S3, SSE-KMS integrates with AWS Key Management Service (AWS KMS) for customer-managed keys with additional control and auditing, and SSE-C allows users to provide their own keys for each operation. Client-side encryption, where users encrypt data before upload using tools like the Amazon S3 Encryption Client or the AWS Encryption SDK, offers further flexibility for sensitive workloads. Since January 2023, all new S3 buckets have default server-side encryption enabled with SSE-S3 to establish a baseline level of protection without additional configuration. For advanced scenarios, dual-layer server-side encryption with AWS KMS keys (DSSE-KMS) combines S3-managed encryption with a second layer using customer- or AWS-managed keys, enhancing protection for high-stakes applications. In the context of emerging workloads like vector data storage in S3 Vectors, data protection incorporates multiple layers of controls for data at rest and in transit, including automatic encryption with AWS-managed keys.
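A minimal boto3 sketch (hypothetical bucket name and KMS key ARN) showing how a single upload selects its server-side encryption variant; omitting the parameters falls back to the bucket default, which is SSE-S3 on buckets created since January 2023:

```python
import boto3

s3 = boto3.client("s3")

# SSE-KMS: encrypt with a customer-managed KMS key for auditable key usage.
s3.put_object(
    Bucket="example-secure-bucket",  # hypothetical bucket
    Key="records/customer.json",
    Body=b"{}",
    ServerSideEncryption="aws:kms",
    SSEKMSKeyId="arn:aws:kms:us-east-1:111122223333:key/EXAMPLE",  # hypothetical ARN
)

# SSE-S3: let S3 manage the keys entirely (also the default for new buckets).
s3.put_object(
    Bucket="example-secure-bucket",
    Key="records/summary.json",
    Body=b"{}",
    ServerSideEncryption="AES256",
)
```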

Access Controls

Access to S3 resources is managed through a combination of identity and policy-based mechanisms to enforce least-privilege principles. AWS Identity and Access Management (IAM) policies allow users to define permissions for principals like users, roles, and services, specifying actions such as read, write, or delete on buckets and objects. Bucket policies provide resource-level controls directly on S3 buckets, enabling conditions like IP restrictions or time-based access, while access control lists (ACLs) offer legacy object and bucket-level permissions, though AWS recommends transitioning to policies for finer granularity. Note that support for creating new Email Grantee ACLs ended on October 1, 2025. To prevent accidental public exposure, the S3 Block Public Access feature blocks public access at the account, bucket, and access point levels; since April 2023, it is enabled by default for all new buckets, and ACLs are disabled to simplify ownership and reduce misconfiguration risks.
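The boto3 sketch below (bucket name and VPC endpoint ID are hypothetical) combines two of these mechanisms: enforcing Block Public Access on a bucket and attaching a bucket policy that denies object reads from anywhere except a specific VPC endpoint:

```python
import json
import boto3

s3 = boto3.client("s3")
bucket = "example-private-bucket"  # hypothetical bucket

# Enforce Block Public Access (the default for new buckets since April 2023).
s3.put_public_access_block(
    Bucket=bucket,
    PublicAccessBlockConfiguration={
        "BlockPublicAcls": True,
        "IgnorePublicAcls": True,
        "BlockPublicPolicy": True,
        "RestrictPublicBuckets": True,
    },
)

# Deny GetObject unless the request arrives through one VPC endpoint.
policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Sid": "AllowReadsFromVpcEndpointOnly",
        "Effect": "Deny",
        "Principal": "*",
        "Action": "s3:GetObject",
        "Resource": f"arn:aws:s3:::{bucket}/*",
        "Condition": {"StringNotEquals": {"aws:SourceVpce": "vpce-0123456789abcdef0"}},
    }],
}
s3.put_bucket_policy(Bucket=bucket, Policy=json.dumps(policy))
```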

Auditing and Logging

Amazon S3 offers detailed logging capabilities to track access and operations for security monitoring and incident response. S3 server access logs capture detailed records of requests to buckets and objects, including requester identity, bucket name, request time, and response status, which can be delivered to another S3 bucket for analysis. For API-level auditing, integration with AWS CloudTrail logs management events (like bucket creation) by default and optional data events (like object-level Get or Put requests), providing a comprehensive audit trail of who performed actions, when, and from where. These logs support compliance requirements by enabling forensic analysis and anomaly detection when combined with tools like Amazon Athena for querying.
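As a brief sketch (boto3; both bucket names are hypothetical, and the target bucket must already permit the S3 logging service to write to it), enabling server access logging looks like this:

```python
import boto3

s3 = boto3.client("s3")

# Deliver access logs for the source bucket into a separate log bucket under a prefix.
s3.put_bucket_logging(
    Bucket="example-app-bucket",  # hypothetical source bucket
    BucketLoggingStatus={
        "LoggingEnabled": {
            "TargetBucket": "example-log-bucket",  # hypothetical log destination
            "TargetPrefix": "access-logs/app/",    # groups logs by source bucket
        }
    },
)
```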

Compliance Certifications

Amazon S3 adheres to numerous industry standards and regulations through third-party audits and built-in features that facilitate compliance. It holds certifications including SOC 1, SOC 2, and SOC 3 for controls relevant to financial reporting and security, PCI DSS for payment card data handling, HIPAA/HITECH for protected health information, and support for GDPR through data residency and processing controls. To enable write-once-read-many (WORM) storage for retention policies, S3 Object Lock allows users to lock objects for a specified retention period or indefinitely, preventing deletion or modification and helping meet requirements for immutable records in regulations like SEC Rule 17a-4; a sketch appears below.
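A minimal boto3 sketch of the WORM pattern described above (the bucket name is hypothetical, Object Lock must be enabled at bucket creation, and the example assumes the us-east-1 default Region): every new object is locked in compliance mode for seven years:

```python
import boto3

s3 = boto3.client("s3")

# Object Lock can only be used on buckets created with it enabled.
s3.create_bucket(
    Bucket="example-worm-bucket",  # hypothetical bucket
    ObjectLockEnabledForBucket=True,
)

# Default rule: lock every new object in COMPLIANCE mode for 7 years, so it
# cannot be deleted or overwritten by any user until the retention period ends.
s3.put_object_lock_configuration(
    Bucket="example-worm-bucket",
    ObjectLockConfiguration={
        "ObjectLockEnabled": "Enabled",
        "Rule": {"DefaultRetention": {"Mode": "COMPLIANCE", "Years": 7}},
    },
)
```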

Recent Enhancements

In 2025, Amazon S3 introduced S3 Metadata, a fully managed feature that automatically generates and maintains queryable tables of metadata for all objects in a bucket, enhancing visibility for security assessments, inventory management, and compliance audits by tracking attributes like size, tags, and encryption status without manual processing. This feature supports security use cases such as identifying unprotected objects or monitoring changes over time. In July 2025, Amazon introduced S3 Vectors (preview), the first cloud object store with native vector support for storing large vector datasets with subsecond query performance, optimized for AI applications.

Pricing Model

Amazon S3 operates on a pay-as-you-go model, charging users only for the resources they consume, without minimum fees or long-term commitments. Costs are determined by factors such as the volume and class of storage, number of requests, management operations, and outbound data transfers. Pricing varies by AWS Region, with the US East (N. Virginia) Region serving as a common reference point.

Storage costs are tiered based on the selected storage class and volume stored, billed per GB per month; a worked example follows below. For instance, S3 Standard storage costs $0.023 per GB for the first 50 TB, $0.022 per GB for the next 450 TB, and $0.021 per GB for volumes over 500 TB (as of November 2025), while S3 Glacier Deep Archive offers lower rates at $0.00099 per GB for the first 50 TB. S3 Intelligent-Tiering includes monitoring and automation fees of $0.0025 per 1,000 objects per month in addition to tier-specific storage rates starting at $0.023 per GB for frequent access. These classes, which balance cost and access needs, are detailed further in the storage classes section.

Request fees apply to operations like reading or writing objects, with GET requests charged at $0.0004 per 1,000 for S3 Standard and PUT, COPY, POST, or LIST requests at $0.005 per 1,000. Data transfer fees primarily affect outbound traffic, where the first 100 GB per month to the internet is free, followed by $0.09 per GB for the next 10 TB (with tiered reductions for larger volumes). Additional charges include retrieval fees for infrequent or archival storage classes to account for the higher operational costs of accessing less frequently used data. For example, S3 Standard-Infrequent Access incurs $0.01 per GB retrieved, S3 Glacier Flexible Retrieval charges $0.01 per GB for standard retrieval and $0.0025 per GB for bulk, and S3 Glacier Deep Archive retrieval is $0.02 per GB for standard or $0.0025 per GB for bulk (as of November 2025). Minimum storage duration charges may also apply, enforcing 30 days for Standard-IA, 90 days for Glacier Flexible Retrieval, and 180 days for Glacier Deep Archive to discourage short-term use of low-cost tiers.

To optimize costs, Amazon S3 provides tools such as S3 Storage Lens, which offers free basic metrics and customizable dashboards for analyzing storage usage and identifying savings opportunities across buckets and Regions. AWS Savings Plans allow eligible customers to commit to usage for discounted rates on S3 requests and data transfers, potentially reducing expenses by up to 72% compared to on-demand pricing. New AWS accounts include a free tier for S3, providing 5 GB of storage, 20,000 GET requests, 2,000 PUT/COPY/POST/LIST requests, 100 DELETE requests, and 100 GB of data transfer out to the internet per month for the first 12 months.
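As a worked example of the tiered storage arithmetic (using only the US East (N. Virginia) S3 Standard rates quoted above; request, retrieval, and transfer fees are omitted for brevity), a small sketch can estimate a monthly storage bill:

```python
# Tiered S3 Standard storage rates per GB-month, per the figures quoted above.
TIERS = [
    (50 * 1024, 0.023),    # first 50 TB (expressed in GB)
    (450 * 1024, 0.022),   # next 450 TB
    (float("inf"), 0.021), # everything over 500 TB
]

def monthly_storage_cost(gb: float) -> float:
    """Estimate the monthly S3 Standard storage cost for `gb` gigabytes."""
    cost, remaining = 0.0, gb
    for tier_size, rate in TIERS:
        used = min(remaining, tier_size)
        cost += used * rate
        remaining -= used
        if remaining <= 0:
            break
    return cost

# 600 TB: 50 TB at $0.023 + 450 TB at $0.022 + 100 TB at $0.021 per GB-month.
print(f"${monthly_storage_cost(600 * 1024):,.2f} per month")
```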

Use Cases and Applications

Common Use Cases

Amazon S3 serves as a reliable platform for backup and restore operations, providing offsite storage with built-in versioning that enables recovery from accidental deletions or modifications. This supports disaster recovery by allowing users to replicate data across Regions and integrate with AWS Backup for automated policies that meet recovery time objectives (RTO) and recovery point objectives (RPO). Organizations leverage S3's 99.999999999% (11 9's) durability to safeguard critical data against hardware failures or site disasters, ensuring minimal downtime during restoration processes.

In data lakes and analytics, S3 functions as a centralized repository for storing vast amounts of structured and unstructured data at petabyte scale, facilitating querying and analysis without upfront schema definitions. It supports tools like Amazon Athena for serverless SQL queries directly on S3 data and Amazon Redshift for data warehousing, enabling cost-effective processing of logs, streams, and application data. With features like S3 Select for in-storage filtering, users can reduce data transfer costs and accelerate insights from diverse datasets; a sketch of this appears below.

For archiving and compliance, S3 offers long-term retention through storage classes like S3 Glacier Flexible Retrieval and S3 Glacier Deep Archive, which provide retrieval times ranging from minutes to hours at significantly lower costs than standard storage. S3 Object Lock implements write-once-read-many (WORM) policies to prevent alterations or deletions, helping ensure compliance with regulations such as SEC Rule 17a-4, FINRA Rule 4511, and CFTC Rule 1.31. This setup allows organizations to retain data for 7 to 10 years or longer while optimizing costs via lifecycle transitions based on access patterns.

Media and content distribution represent another core application, where S3 hosts static websites and serves as scalable origin storage for images, videos, and audio files. By enabling public bucket policies and integrating with Amazon CloudFront for global edge caching, S3 delivers low-latency content to end users, supporting high-traffic scenarios like video streaming or e-commerce assets. Its ability to handle millions of requests per second ensures reliable performance for dynamic content delivery without managing servers.

In artificial intelligence and machine learning workloads, S3 stores training datasets for models, including vector embeddings via S3 Vectors, which provide native support for high-dimensional similarity queries with sub-second latency. It accommodates generative AI applications by hosting large-scale datasets and enabling efficient access for frameworks like PyTorch or TensorFlow. Recent innovations like Amazon S3 Tables, introduced in 2024, optimize tabular storage with Apache Iceberg integration, improving query performance for analytics and AI pipelines by up to 3x through automated compaction. S3's range of storage classes helps tailor these uses to access patterns for cost efficiency.
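For the in-storage filtering mentioned above, the following boto3 sketch of S3 Select (the bucket, key, and CSV columns are hypothetical) retrieves only the matching rows of a CSV object instead of downloading it whole:

```python
import boto3

s3 = boto3.client("s3")

# Filter a CSV object server-side so only matching rows cross the network.
resp = s3.select_object_content(
    Bucket="example-data-lake",        # hypothetical bucket
    Key="logs/2025/requests.csv",      # hypothetical CSV with path/status columns
    ExpressionType="SQL",
    Expression="SELECT s.path, s.status FROM S3Object s WHERE s.status = '500'",
    InputSerialization={"CSV": {"FileHeaderInfo": "USE"}},
    OutputSerialization={"JSON": {}},
)

# The response is an event stream; Records events carry the filtered payload.
for event in resp["Payload"]:
    if "Records" in event:
        print(event["Records"]["Payload"].decode())
```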

Notable Users and Examples

NASCAR utilizes Amazon S3 to store and manage its extensive media library, which includes race videos, audio, and images accumulated over decades of motorsport events. The organization migrated a 15-petabyte archive from legacy LTO tapes to S3 in just over one year, leveraging storage classes such as S3 Standard for active high-resolution mezzanine files, S3 Glacier Instant Retrieval for frequently accessed content, and S3 Glacier Deep Archive for long-term retention of proxy files. This setup handles annual growth of 1.5 to 2 petabytes, enabling cost-effective scalability and rapid retrieval for fan engagement and production needs.

The British Broadcasting Corporation (BBC) employed Amazon S3 Glacier to digitize and centralize its 100-year archive of broadcasting content, transitioning from tape-based systems to cloud storage for improved preservation and accessibility. In a 10-month project, the BBC migrated 25 petabytes of data, averaging 120 terabytes per day, to S3 Glacier Instant Retrieval and S3 Intelligent-Tiering, retiring half of its physical infrastructure while reducing operational costs and enhancing data durability. This migration supported the archival of diverse media assets, ensuring long-term integrity without the vulnerabilities of physical tapes.

Ancestry leverages Amazon S3 Glacier to efficiently restore and process vast collections of historical images, facilitating AI-driven enhancements for genealogy research. The company handles hundreds of terabytes of such images, using S3 Glacier's improved throughput to complete restorations in hours rather than days, which accelerates the training of models for tasks like image enhancement on digitized records. This capability has enabled Ancestry to deliver higher-quality, searchable historical photos to millions of users, transforming faded or damaged artifacts into accessible family history resources.

Netflix relies on Amazon S3 as a foundational component of its global data and analytics infrastructure, managing exabyte-scale data lakes to support personalized streaming recommendations and performance optimization. S3 stores petabytes of video assets and user interaction logs, enabling the processing of billions of hours of monthly content delivery across devices while powering real-time analytics on viewer behavior. This architecture allows Netflix to scale storage elastically, handling daily ingestions that contribute to its massive data footprint for machine learning-driven personalization.

Airbnb employs Amazon S3 for robust backup and storage of operational data, including static files and system logs essential for platform reliability and analysis. The company maintains 10 terabytes of user pictures and other static files in S3, alongside daily processing of 50 gigabytes of log data via integrated services like Amazon EMR, ensuring durable retention for recovery and analytics. This implementation supports Airbnb's high-traffic environment by providing scalable, low-latency access to backups without managing on-premises hardware.

Integrations and Ecosystem

AWS Integrations

Amazon S3 integrates closely with AWS compute services to enable efficient data access and processing. Amazon Elastic Compute Cloud (EC2) instances can directly access S3 buckets by attaching IAM roles that grant the necessary permissions, allowing applications to store and retrieve data without embedding credentials. This setup supports use cases like hosting static websites or running data-intensive workloads on EC2. AWS Lambda extends this capability through serverless execution, where S3 event notifications, such as object uploads or deletions, trigger Lambda functions to process data automatically, facilitating real-time transformations without managing servers; a handler sketch follows below.

For analytics workloads, S3 serves as a foundational data lake storage layer integrated with services like Amazon Athena and Amazon EMR. Athena enables interactive querying of S3 data using standard SQL, eliminating the need for ETL preprocessing or infrastructure management, and supports features like federated queries across data sources. Amazon EMR, on the other hand, treats S3 as a scalable file system via the S3A connector, allowing users to run Apache Hadoop, Spark, and other frameworks directly on S3-stored data for large-scale processing tasks like ETL and machine learning model training.

Backup and management integrations enhance S3's operational resilience and efficiency. AWS Backup provides centralized, policy-based protection for S3 buckets, supporting continuous backups for point-in-time recovery and periodic backups for cost-optimized archival, with seamless coordination across other AWS services. Complementing this, S3 Batch Operations allow bulk execution of actions on billions of objects, such as copying, tagging, or invoking Lambda functions, streamlining large-scale data management without custom scripting.

Networking features ensure secure and performant connectivity to S3. VPC endpoints, specifically gateway endpoints for S3, enable private access from resources within a virtual private cloud (VPC) without traversing the public internet or incurring data transfer fees, improving security and latency. For hybrid environments, AWS Direct Connect facilitates dedicated, private connections from on-premises data centers to S3, bypassing the internet for consistent, high-bandwidth data transfers.

A notable recent advancement is Amazon S3 Tables, launched in 2024, which optimizes S3 for tabular data using the open Apache Iceberg format and integrates natively with AWS Glue for metadata cataloging and schema evolution, as well as Amazon SageMaker for building and deploying models on Iceberg tables stored in S3. This integration automates tasks like compaction and snapshot management, enabling analytics engines to query S3 data as managed tables. Access to these integrations is governed by AWS Identity and Access Management (IAM) policies, ensuring fine-grained control over permissions.

In July 2025, Amazon announced Amazon S3 Vectors in preview, the first cloud object store with native support for storing and querying large-scale vector datasets for AI applications. It integrates with Amazon Bedrock Knowledge Bases for cost-effective Retrieval-Augmented Generation (RAG), Amazon SageMaker Unified Studio for building generative AI apps, and Amazon OpenSearch Service for low-latency vector searches, reducing costs by up to 90% compared to general-purpose storage.
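The handler sketch referenced above is a minimal Python Lambda function (the bucket wiring and function configuration are assumed to be set up separately) that reads each newly created object named in an S3 event notification:

```python
import urllib.parse
import boto3

s3 = boto3.client("s3")

def handler(event, context):
    """Triggered by S3 ObjectCreated notifications; inspects each new object."""
    for record in event["Records"]:
        bucket = record["s3"]["bucket"]["name"]
        # Object keys in event notifications arrive URL-encoded.
        key = urllib.parse.unquote_plus(record["s3"]["object"]["key"])
        obj = s3.get_object(Bucket=bucket, Key=key)
        print(f"New object s3://{bucket}/{key} ({obj['ContentLength']} bytes)")
```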

Third-Party Compatibility

Amazon S3's API serves as a de facto industry standard for object storage, enabling compatibility with various third-party solutions for on-premises and hybrid deployments. MinIO, an open-source object storage system, implements the S3 API to provide high-performance, scalable storage that mimics S3's behavior for cloud-native applications. Similarly, Ceph's Object Gateway (RGW) supports a RESTful interface compatible with the core data access model of the Amazon S3 API, allowing seamless integration for distributed storage environments; a sketch of this endpoint-swap pattern appears below.

Developers can interact with S3 using official AWS SDKs available in multiple languages, facilitating integration into diverse applications without proprietary dependencies. The AWS SDK for Java offers APIs for S3 operations, enabling Java-based applications to handle uploads, downloads, and bucket management efficiently. For Python, the Boto3 library provides a high-level interface to S3, supporting tasks like object manipulation and multipart uploads. The AWS SDK for .NET similarly equips .NET developers with libraries for S3 interactions, including asynchronous operations and error handling. Additionally, the AWS Command Line Interface (CLI) allows command-line access to S3 for scripting and automation, such as listing objects or syncing directories.

S3 integrates with third-party content management systems to serve as a backend for file storage and delivery. Salesforce data can reach S3 through connectors like Amazon AppFlow, which transfers records from Salesforce to S3 buckets for analytics and archiving. Adobe Experience Platform uses S3 as a source and destination for data ingestion, supporting authentication via access keys or assumed roles to manage files in workflows.

For large-scale data imports, S3 supports migration tools that bridge external environments to AWS storage. AWS Snowball devices enable physical shipment of petabyte-scale data to S3, ideal for offline transfers where network bandwidth is limited. AWS Transfer Family provides managed file transfer protocols (SFTP, FTPS, FTP) directly to S3, securing imports from on-premises or legacy systems.

S3's support for open table formats enhances interoperability with data analytics ecosystems, particularly through Apache Iceberg. In 2025, S3 introduced sort and z-order compaction strategies for Iceberg tables, optimizing query performance by reorganizing data within partitions in both S3 Tables and general-purpose buckets via AWS Glue. These enhancements, building on the December 2024 launch of built-in Iceberg support in S3 Tables, allow automatic maintenance to reduce scan times and storage costs in open data lakes.
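Because these systems implement the same API, pointing an S3 client at them is typically just an endpoint change. A boto3 sketch against a hypothetical local MinIO deployment (the URL and credentials are placeholders; `minioadmin` is MinIO's well-known default):

```python
import boto3

# The standard boto3 client works against S3-compatible stores by overriding the endpoint.
minio = boto3.client(
    "s3",
    endpoint_url="http://localhost:9000",  # hypothetical local MinIO endpoint
    aws_access_key_id="minioadmin",        # placeholder credentials
    aws_secret_access_key="minioadmin",
)

minio.create_bucket(Bucket="local-test")
minio.put_object(Bucket="local-test", Key="hello.txt", Body=b"hello from MinIO")
print(minio.get_object(Bucket="local-test", Key="hello.txt")["Body"].read())
```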

S3 API

API Overview

The Amazon S3 API is a RESTful interface that enables developers to interact with S3 through standard HTTP methods such as GET, PUT, POST, and DELETE, using regional endpoints formatted as <bucket-name>.s3.<region>.amazonaws.com for virtual-hosted-style requests or s3.<region>.amazonaws.com/<bucket-name> for path-style requests. Path-style requests remain supported but are legacy and scheduled for future discontinuation. This structure supports operations across buckets and objects, with key actions including ListBuckets to retrieve a list of all buckets owned by the authenticated user and GetObject to retrieve the content and metadata of a specified object from a bucket. Developers typically access the API via AWS SDKs, CLI tools, or direct HTTP requests, with recommendations to use SDKs for handling complexities like request signing and error management.

Authentication for S3 API requests relies on AWS Signature Version 4, which signs requests using access keys and includes elements like the request timestamp, payload hash, and canonicalized resource to ensure integrity and authenticity. For scenarios requiring temporary access without sharing credentials, presigned URLs can be generated, embedding the signature in query parameters to grant time-limited permissions for operations like uploading or downloading objects, valid for up to seven days. This mechanism allows secure delegation of access, such as enabling client-side uploads directly to S3 buckets; a short sketch follows below.

Advanced features in the S3 API include multipart uploads, which break large objects into parts for parallel uploading, initiated via CreateMultipartUpload, followed by individual part uploads and completion with CompleteMultipartUpload, supporting objects up to 5 terabytes. Additionally, Amazon S3 Select, introduced in 2018, allows in-place querying of objects in CSV, JSON, or Apache Parquet formats using SQL-like expressions through the SelectObjectContent operation, reducing data transfer costs by retrieving only relevant subsets without full downloads.

The API supports versioning through operations like PutObject with versioning enabled on the bucket, automatically assigning unique version IDs to objects for preserving multiple iterations and enabling retrieval via GetObject with a versionId parameter. Tagging is managed via dedicated calls such as PutObjectTagging to add key-value tags to objects for categorization and cost allocation, with limits of up to 10 tags per object and retrieval through GetObjectTagging. In 2025, enhancements to S3 Batch Operations expanded support for processing up to 20 billion objects in jobs for actions like copying, tagging, and invoking Lambda functions, facilitated by on-demand manifest generation for targeted large-scale operations.

Further updates in 2025 include the discontinuation of support for Email Grantee Access Control Lists (ACLs) as of October 1, 2025; the limitation of S3 Object Lambda access to existing customers only, effective November 7, 2025; the introduction of Amazon S3 Vectors in preview (announced July 15, 2025) for native storage and querying of vector datasets with subsecond performance for AI applications; and the planned removal of the Owner.DisplayName field from API responses starting November 21, 2025, requiring applications to use canonical user IDs instead.
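A short boto3 sketch (hypothetical bucket and key) generating the kind of presigned GET URL described above, valid for one hour:

```python
import boto3

s3 = boto3.client("s3")

# Create a time-limited URL that grants GET access without sharing credentials.
url = s3.generate_presigned_url(
    ClientMethod="get_object",
    Params={"Bucket": "example-shared-bucket", "Key": "reports/q3.pdf"},  # hypothetical
    ExpiresIn=3600,  # seconds; presigned URLs can last at most seven days
)
print(url)  # anyone holding this URL can download the object until it expires
```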

Competing Services

Google Cloud Storage serves as a primary competitor to Amazon S3, offering similar storage classes such as Standard for frequently accessed data and Nearline for infrequently accessed data with retrieval fees. It supports multi-region replication for high availability and integrates seamlessly with Google Cloud Platform's services, including Vertex AI for document summarization and analytics workflows. This integration enables AI-driven data processing directly within the Google ecosystem, providing an alternative for users prioritizing such applications.

Microsoft Azure Blob Storage competes with S3 through its tiered structure, including Hot for active data, Cool for less frequent access, and Archive for long-term retention. It features strong integration with Microsoft Entra ID (formerly Azure Active Directory) for identity and authentication, enhancing enterprise security in hybrid environments. Azure also offers lower egress fees compared to S3 in certain scenarios, appealing to data transfer-heavy workloads.

Open-source alternatives provide self-hosted options compatible with the S3 API standard. MinIO is an S3-compatible object storage system designed for high-performance, on-premises deployments, supporting distributed architectures without vendor lock-in. Ceph offers distributed storage with S3 compatibility via its Object Gateway, enabling scalable, software-defined storage across clusters for block, file, and object needs.

Other notable services include Backblaze B2, which emphasizes low-cost storage without API request fees, making it suitable for budget-conscious backups and archiving. Wasabi provides zero-egress cloud storage, eliminating data transfer costs out of the platform while maintaining S3 API compatibility for common use cases. Amazon S3 differentiates itself through its extensive ecosystem depth, including deep integrations with AWS services like Lambda and EC2, which surpass competitors in breadth for complex cloud-native applications, whereas rivals often excel in pricing simplicity or specialized features like zero egress.

Development History

Launch and Early Years

Amazon Simple Storage Service (S3) was publicly launched on March 14, 2006, marking it as the first major infrastructure service offered by Amazon Web Services (AWS) and establishing the foundation for cloud-based object storage. The service entered a testing phase earlier that year, allowing select developers to experiment with its capabilities before general availability. At launch, S3 provided basic object storage operations designed for scalability, with a focus on durability rated at 99.999999999% (11 nines) and availability of 99.99%, enabling users to store and retrieve unlimited amounts of data via a simple web services interface without upfront infrastructure management. Initially, there was no graphical user interface (GUI); interactions were limited to API calls until the AWS Management Console's introduction in 2009 provided the first web-based access for easier management.

Early adoption was driven by developers seeking cost-effective, on-demand storage for internet-scale applications, with S3 quickly integrating into the AWS ecosystem following the August 2006 launch of Amazon Elastic Compute Cloud (EC2). This integration allowed EC2 instances to directly access S3 buckets for data persistence and transfer, accelerating the shift toward cloud-native architectures and enabling startups to build without owning physical servers. By mid-2006, initial users included early cloud pioneers testing web applications, though adoption grew steadily as reliability improved.

The service faced significant challenges during its early years, including major outages in February and July 2008 that disrupted access for hours and affected dependent websites and applications. These incidents, attributed to software bugs and network issues in the US East region, highlighted vulnerabilities in the nascent cloud infrastructure and prompted AWS to enhance monitoring, error detection, and recovery mechanisms to bolster resilience. Despite these setbacks, S3's growth was remarkable; by the end of the third quarter of 2009, it stored over 82 billion objects, processing peak request rates exceeding 100,000 per second and demonstrating its scalability for diverse workloads.

Major Updates and Milestones

In 2011, Amazon S3 introduced server-side encryption (SSE), enabling automatic encryption of data at rest using AES-256 without requiring changes to application code or key management by users. This feature, initially supporting SSE-S3 with AWS-managed keys, addressed growing demands for data protection in the cloud. The following year, in 2012, AWS launched Amazon Glacier as a low-cost archival service integrated with S3 via lifecycle policies, allowing seamless transitions of infrequently accessed data to durable, retrieval-optimized storage at approximately one-tenth the cost of standard S3. Glacier provided 99.999999999% (11 9's) durability and retrieval options ranging from minutes to hours, fundamentally expanding S3's utility for long-term archiving.

By 2015, S3 enhanced its consistency model to support read-after-write consistency for new objects in the US East (N. Virginia) Region, reducing the eventual consistency window for PUT and DELETE operations and improving reliability for applications requiring immediate data visibility post-upload. This update laid the groundwork for broader consistency guarantees, benefiting workloads like content distribution and real-time analytics. In December 2020, Amazon S3 advanced to strong read-after-write consistency across all AWS Regions, covering new writes, overwrites, deletes, and list operations, eliminating the need for retry logic in many applications and further enhancing reliability for global workloads.

In 2018, S3 Select became generally available, allowing users to query and retrieve specific data from objects in place using SQL expressions on CSV, JSON, or Parquet formats, which reduced data transfer costs and latency by up to 99% for targeted extractions without full object downloads. This serverless query capability integrated with tools like Amazon Athena, enabling efficient in-storage processing for large-scale analytics. Advancing performance for high-throughput applications, S3 Express One Zone was introduced in 2023 as a single-Availability Zone storage class optimized for low-latency access, delivering up to 10x faster throughput than S3 Standard with consistent single-digit millisecond latencies for reads and writes. Designed for workloads like machine learning training and real-time analytics, it uses directory buckets to support millions of requests per second per bucket while maintaining S3's durability.

From 2024 to 2025, S3 introduced S3 Tables in December 2024, providing native support for managed tabular data storage built on Apache Iceberg and optimized for analytics, with built-in features for schema evolution, time travel, and transactions on petabyte-scale datasets. Complementing this, full visibility via S3 Metadata launched in preview in late 2024 and reached general availability in January 2025, offering near real-time, queryable metadata for all objects, including existing ones by mid-2025, across over 20 attributes like size, tags, and encryption status to accelerate data discovery. In June 2025, S3 Tables added Iceberg compaction with sort and z-order strategies, clustering data by columns to improve query performance by up to 3x through better file pruning and reduced scan volumes. Concurrently, S3 Batch Operations scaled to handle up to 20 billion objects per job, supporting bulk actions like copying, tagging, and Lambda invocations across vast datasets for efficient large-scale management.
In July 2025, Amazon S3 Vectors entered preview as the first cloud object storage with native support for storing and querying large-scale vector datasets, enabling sub-second similarity searches for AI applications like retrieval-augmented generation (RAG) at lower costs. Also in July, S3 Tables reduced compaction processing fees by up to 90%, further optimizing costs for analytics workloads. In October 2025, S3 Batch Operations introduced on-demand manifest generation, accelerating the creation of manifests for jobs processing up to 20 billion objects without waiting for scheduled inventory reports. On November 6, 2025, S3 added support for tags on S3 Tables, enabling attribute-based access control (ABAC) and improved cost allocation. Starting November 7, 2025, S3 Object Lambda access was limited to existing customers only.

References

  1. [1]
    Amazon S3 - Cloud Object Storage - AWS
    Amazon Simple Storage Service (Amazon S3) is an object storage service offering industry-leading scalability, data availability, security, and performance.
  2. [2]
    Amazon S3 Features - Cloud Object Storage - AWS
    Security. Amazon S3 offers flexible security features to block unauthorized users from accessing your data. Use VPC endpoints to connect to S3 resources from ...<|separator|>
  3. [3]
    Amazon S3 FAQs - Cloud Object Storage - AWS
    S3 Transfer Acceleration communicates with clients over standard TCP and does not require firewall changes.
  4. [4]
    What is Amazon S3? - Amazon Simple Storage Service
    Store data in the cloud and learn the core concepts of buckets and objects with the Amazon S3 web service.
  5. [5]
    Announcing Amazon S3 - Simple Storage Service - AWS
    Mar 13, 2006 · Amazon S3 provides a simple web services interface that can be used to store and retrieve any amount of data, at any time, from anywhere on the web.
  6. [6]
    AWS Pi Day 2025: Data foundation for analytics and AI
    Mar 14, 2025 · S3 tables are specifically optimized for analytics workloads, resulting in up to threefold faster query throughput and up to tenfold higher ...Missing: bandwidth | Show results with:bandwidth
  7. [7]
    How AWS S3 serves 1 petabyte per second on top of slow HDDs
    Sep 24, 2025 · Learn how Amazon built the backbone of the modern web that scales to 1 PB/s and 150M QPS on commodity hard drives.
  8. [8]
    S3 Pricing - Amazon AWS
    Pay only for what you use. There is no minimum charge. Amazon S3 cost components are storage pricing, request and data retrieval pricing, data transfer and ...S3 features · Amazon S3 Tables · Amazon S3 storage classes · S3 Intelligent-Tiering
  9. [9]
    Performance guidelines for Amazon S3
    When building applications that upload and retrieve objects from Amazon S3, follow our best practices guidelines to optimize performance.
  10. [10]
    AWS Global Infrastructure
    Using 9+ million kilometers of fiber optic cabling, AWS's global network backbone enables faster data transfer, reduced latency, and enhanced application ...AWS Services by Region · Regions and Availability Zones · Local Zones
  11. [11]
    Amazon S3 objects overview - Amazon Simple Storage Service
    Amazon S3 offers object storage service with scalability, availability, security, and performance. Manage storage classes, lifecycle policies, access ...
  12. [12]
    Naming Amazon S3 objects - Amazon Simple Storage Service
    The object key (or key name) uniquely identifies the object in an Amazon S3 bucket. When you create an object, you specify the key name.
  13. [13]
    Managing the lifecycle of objects - Amazon Simple Storage Service
    S3 Lifecycle helps you store objects cost effectively throughout their lifecycle by transitioning them to lower-cost storage classes, or, deleting expired ...Expiring objects · Examples of S3 Lifecycle · AWS S3 Storage classes
  14. [14]
    Data protection in Amazon S3 - Amazon Simple Storage Service
    30-day returnsBacked with the Amazon S3 Service Level Agreement. · Designed to provide 99.999999999% durability and 99.99% availability of objects over a given year.
  15. [15]
    Architecting for high availability on Amazon S3 | AWS Storage Blog
    Mar 15, 2021 · I give an inside look on how S3 is architected for high availability, and actionable takeaways that you can implement into your environment to achieve high ...Architecting For High... · Measuring Availability · Operational Best Practices
  16. [16]
    Making requests using the REST API - AWS Documentation
    When using the REST API, you can directly access a dual-stack endpoint by using a virtual hosted–style or a path style endpoint name (URI). All Amazon S3 dual- ...
  17. [17]
    Making requests - Amazon Simple Storage Service
    Amazon S3 is a REST service. You can send requests to Amazon S3 using the REST API or the AWS SDK (see Sample Code and Libraries ) wrapper libraries that ...
  18. [18]
    Amazon S3 Strong Consistency
    Amazon S3 delivers strong read-after-write consistency automatically for all applications, without changes to performance or availability.
  19. [19]
    Amazon S3 Update – Strong Read-After-Write Consistency
    Dec 1, 2020 · All S3 GET, PUT, and LIST operations, as well as operations that change object tags, ACLs, or metadata, are now strongly consistent.
  20. [20]
    Object Storage Classes – Amazon S3
    Our One Zone storage classes use similar engineering designs as our Regional storage classes to protect objects from independent disk, host, and rack-level ...
  21. [21]
    Amazon S3 Intelligent-Tiering Storage Class | AWS
    The Amazon S3 Intelligent-Tiering storage class is designed to optimize storage costs by automatically moving data to the most cost-effective access tier when ...
  22. [22]
    Understanding and managing Amazon S3 storage classes
    By default, objects in S3 are stored in the S3 Standard storage class, however Amazon S3 offers a range of other storage classes for the objects that you store.<|control11|><|separator|>
  23. [23]
    Amazon S3 Glacier storage classes
    Long-term, secure, durable Amazon S3 object storage classes for data archiving, starting at $1 per terabyte per month.
  24. [24]
    Transitioning objects using Amazon S3 Lifecycle
    In an S3 Lifecycle configuration, you can define rules to transition objects from one storage class to another to save on storage costs.
  25. [25]
    Amazon S3 multipart upload limits - Amazon Simple Storage Service
    Amazon S3 multipart upload limits ; Maximum number of parts per upload, 10,000 ; Part numbers, 1 to 10,000 (inclusive) ; Part size, 5 MiB to 5 GiB. There is no ...
  26. [26]
    Amazon Simple Storage Service endpoints and quotas
    Service quotas, also referred to as limits, are the maximum number of service resources or operations for your AWS account. For more information, see AWS ...
  27. [27]
    General purpose bucket quotas, limitations, and restrictions
    Restrictions for using general purpose buckets in Amazon S3, including the number of buckets per account and bucket naming guidelines.
  28. [28]
    Amazon S3 now supports up to 1 million buckets per AWS account
    Nov 14, 2024 · Amazon S3 has increased the default bucket quota from 100 to 10,000 per AWS account. Additionally, any customer can request a quota increase ...
  29. [29]
    Best practices design patterns: optimizing Amazon S3 performance
    The following topics describe best practice guidelines and design patterns for optimizing performance for applications that use Amazon S3.Organizing objects using... · Performance guidelines · Performance design patterns
  30. [30]
    Checking object integrity in Amazon S3 - AWS Documentation
    With Amazon S3, you can use checksum values to verify the integrity of the data that you upload or download. In addition, you can request that another checksum ...
  31. [31]
    Replicating objects within and across Regions - AWS Documentation
    You can use replication to enable automatic, asynchronous copying of objects across Amazon S3 buckets. Buckets that are configured for object replication ...What does Amazon S3... · Troubleshooting replication · Setting up live replication
  32. [32]
    Amazon S3 Service Level Agreement
    Nov 28, 2023 · This Amazon S3 Service Level Agreement (SLA) is a policy governing the use of Amazon S3 and applies separately to each account using Amazon S3.Amazon S3 Service Level... · Service Credits · Amazon S3 Sla Exclusions
  33. [33]
    How S3 Versioning works - Amazon Simple Storage Service
    You can use S3 Versioning to keep multiple versions of an object in one bucket so that you can restore objects that are accidentally deleted or overwritten.
  34. [34]
    Monitoring replication with metrics, event notifications, and statuses
    You can monitor your live replication configurations and your S3 Batch Replication jobs through the following mechanisms.
  35. [35]
    Getting replication status information - AWS Documentation
    In the Amazon S3 console, you can view the replication status for an object on the object's details page. Sign in to the AWS Management Console and open the ...
  36. [36]
    Security in Amazon S3 - Amazon Simple Storage Service
    Configure Amazon S3 to meet your security and compliance objectives, and learn how to use other AWS services that can help you secure your Amazon S3 ...
  37. [37]
    Amazon S3 Security Features
    S3 maintains compliance programs, such as PCI-DSS, HIPAA/HITECH, FedRAMP, EU Data Protection Directive, and FISMA, to help you meet regulatory requirements. AWS ...AWS Security BlogAmazon S3 controlsAWS News BlogS3AWS Storage Blog
  38. [38]
    Client-side and server-side encryption - AWS Documentation
    The Amazon S3 Encryption Client supports client-side encryption, where you encrypt your objects before you send them to Amazon S3.
  39. [39]
    Using dual-layer server-side encryption with AWS KMS keys (DSSE ...
    Use dual-layer server-side encryption with AWS Key Management Service (AWS KMS) keys (DSSE-KMS) so that Amazon S3 manages encryption and decryption for you.Missing: enhancements | Show results with:enhancements
  40. [40]
    Data protection and encryption in S3 Vectors - AWS Documentation
    Data protection in S3 Vectors encompasses multiple layers of security controls designed to protect your vector data both at rest and in transit.
  41. [41]
    Access control in Amazon S3 - Amazon Simple Storage Service
    To manage access, the bucket owner uses policies or another access management tool, excluding ACLs. ... When creating a new Amazon S3 bucket, the Block Public ...
  42. [42]
    Access control list (ACL) overview - Amazon Simple Storage Service
    Amazon S3 access control lists (ACLs) enable you to manage access to buckets and objects. Each bucket and object has an ACL attached to it as a subresource.
  43. [43]
    Examples of Amazon S3 bucket policies
    With Amazon S3 bucket policies, you can secure access to objects in your buckets, so that only users with the appropriate permissions can access them.
  44. [44]
    Blocking public access to your Amazon S3 storage
    The Amazon S3 Block Public Access feature provides settings for access points, buckets, and accounts to help you manage public access to Amazon S3 resources.Block public access settings · Using IAM Access Analyzer for...
  45. [45]
    Logging Amazon S3 API calls using AWS CloudTrail
    CloudTrail logs provide you with detailed API tracking for Amazon S3 bucket-level and object-level operations. Server access logs for Amazon S3 provide you with ...Enabling CloudTrail event... · Amazon S3 CloudTrail events
  46. [46]
    Logging options for Amazon S3 - Amazon Simple Storage Service
    To do this, you can use server-access logging, AWS CloudTrail logging, or a combination of both. We recommend that you use CloudTrail for logging bucket-level ...
  47. [47]
    Cloud Compliance - Amazon Web Services (AWS)
    AWS supports 143 security standards and compliance certifications, including PCI-DSS, HIPAA/HITECH, FedRAMP, GDPR, FIPS 140-3, and NIST 800-171.Compliance Programs · SOC · AWS Services in Scope · Compliance ResourcesMissing: object lock WORM
  48. [48]
    Compliance validation for Amazon S3
    The security and compliance of Amazon S3 is assessed by third-party auditors as part of multiple AWS compliance programs.Missing: GDPR | Show results with:GDPR
  49. [49]
    Locking objects with Object Lock - Amazon Simple Storage Service
    S3 Object Lock can help prevent Amazon S3 objects from being deleted or overwritten for a fixed amount of time or indefinitely.Missing: PCI DSS HIPAA GDPR
  50. [50]
    Amazon S3 Metadata now supports metadata for all your S3 objects
    Jul 15, 2025 · Amazon S3 Metadata now provides complete visibility into all your existing objects in your Amazon Simple Storage Service (Amazon S3) buckets, ...
  51. [51]
    S3 Metadata journal tables schema - Amazon Simple Storage Service
    You can use the journal table for security, auditing, and compliance use cases to track uploaded, deleted, and changed objects in the bucket. For example, you ...
  52. [52]
  53. [53]
    Free Cloud Computing Services - AWS Free Tier
    Details of the AWS Free Tier for Amazon S3.
  54. [54]
    Amazon S3 Vectors
    Amazon S3 Vectors is the first cloud object store with native support to store and query vectors, delivering purpose-built, cost-optimized vector storage.
  55. [55]
    Announcing Amazon S3 Tables – Fully managed Apache Iceberg ...
    Amazon S3 Tables deliver the first cloud object store with built-in Apache Iceberg support, and the easiest way to store tabular data at scale.
  56. [56]
    Tabular Data Storage At Scale - Amazon S3 Tables - AWS
    Amazon S3 Tables deliver an efficient way to store tabular data at scale, helping you optimize query performance and costs as your data lake grows.
  57. [57]
  58. [58]
    The BBC Preserves 100 Years of History Using Amazon S3
    Learn how the BBC, a UK public service broadcaster, safely migrated its flagship archive using Amazon S3 Glacier Instant Retrieval.
  59. [59]
    Ancestry (AWS customer case study)
  60. [60]
    Netflix's Apache Iceberg Data Lake Migration - Amazon AWS
    Nov 22, 2024 · Netflix engineers share journey of modernizing exabyte-scale data lake using Apache Iceberg at AWS re:Invent 2023.
  61. [61]
    Optimizing data warehouse storage | by Netflix Technology Blog
    Dec 21, 2020 · At Netflix, our current data warehouse contains hundreds of Petabytes of data stored in AWS S3, and each day we ingest and create additional ...
  62. [62]
    airbnb-case-study - Amazon AWS
    Airbnb is also using Amazon Simple Storage Service (Amazon S3) to house backups and static files, including 10 terabytes of user pictures. To monitor all of ...
  63. [63]
    Use Amazon S3 with Amazon EC2 instances - AWS Documentation
    You can use Amazon S3 to store and retrieve any amount of data for a range of use cases, such as data lakes, websites, backups, and big data analytics.
  64. [64]
    Process Amazon S3 event notifications with Lambda
    Amazon S3 can send an event to a Lambda function when an object is created or deleted. You configure notification settings on a bucket, and grant Amazon S3 ...
  65. [65]
    What is Amazon Athena? - Amazon Athena - AWS Documentation
    Amazon Athena is an interactive query service that makes it easy to analyze data directly in Amazon Simple Storage Service (Amazon S3) using standard SQL.
  66. [66]
    Working with storage and file systems with Amazon EMR
    Amazon EMR uses HDFS, S3A, and local file systems. HDFS is for caching, S3A for S3 integration, and local for temporary data.
  67. [67]
    Amazon S3 backups - AWS Documentation
    AWS Backup supports centralized backup and restore of applications storing data in S3 alone or alongside other AWS services for database, storage, and compute.
  68. [68]
    Performing object operations in bulk with Batch Operations
    You can use S3 Batch Operations to perform large-scale batch operations on Amazon S3 objects. S3 Batch Operations can run a single operation or action on lists ...
  69. [69]
    Gateway endpoints for Amazon S3 - Amazon Virtual Private Cloud
    With a gateway endpoint, you can access Amazon S3 from your VPC, without requiring an internet gateway or NAT device for your VPC, and with no additional cost.
  70. [70]
    Access your Amazon S3 bucket over Direct Connect | AWS re:Post
    Use a private IP address with an interface VPC endpoint to access your S3 bucket over Direct Connect. To connect to an Amazon S3 bucket with a private IP ...
  71. [71]
    Amazon S3 Tables integration with Amazon SageMaker Lakehouse ...
    Mar 13, 2025 · Amazon S3 Tables integration with Amazon SageMaker Lakehouse is now generally available, by Channy Yun (윤석찬).
  72. [72]
    AWS S3 Compatible Object Storage - MinIO
    MinIO offers high-performance, AWS S3 compatible object storage for cloud-native applications, ideal for scalable and secure data management.
  73. [73]
    Ceph Object Gateway S3 API
    Ceph supports a RESTful API that is compatible with the basic data access model of the Amazon S3 API.
  74. [74]
    AWS SDK for Java
    The AWS SDK for Java provides Java APIs for each AWS service. Using the SDK, you can build Java applications that work with Amazon S3, Amazon EC2, Amazon ...
  75. [75]
    AWS SDK for Python (Boto3)
    The AWS SDK for Python provides Python APIs for each AWS service. Using the SDK, you can build Python applications that work with Amazon S3, Amazon EC2, Amazon ...
  76. [76]
    AWS SDK for .NET
    The AWS SDK for .NET helps develop and deploy applications by calling AWS services using .NET APIs, simplifying use with consistent libraries.
  77. [77]
    Amazon AppFlow | SaaS Integrations List | Amazon Web Services
    With AppFlow, you can seamlessly transfer data from Salesforce, Google Analytics, Zendesk, and other supported data sources to an Amazon S3 bucket integrated ...
  78. [78]
    Amazon S3 Source Connector Overview | Adobe Experience Platform
    Apr 1, 2025 · Learn how to connect Amazon S3 to Adobe Experience Platform using APIs or the user interface.
  79. [79]
  80. [80]
    Amazon S3 now supports sort and z-order compaction for Apache ...
    Jun 24, 2025 · Amazon S3 now supports sort and z-order compaction for Apache Iceberg tables, available both in Amazon S3 Tables and general purpose S3 buckets using AWS Glue ...
  81. [81]
    Welcome - Amazon Simple Storage Service - AWS Documentation
    Explains the Amazon S3 API operations, related request and response structures, and error codes to enable you to store data in the cloud.
  82. [82]
    Authenticating Requests (AWS Signature Version 4)
    You can use presigned URLs to embed clickable links, which can be valid for up to seven days, in HTML. For more information, see Authenticating Requests: Using ...
  83. [83]
    Download and upload objects with presigned URLs
    Use a presigned URL to share or upload objects in Amazon S3 without requiring AWS security credentials or permissions. Grant time-limited download and ...
  84. [84]
    Uploading and copying objects using multipart upload in Amazon S3
    Multipart upload uploads an object as parts, in any order, with a three-step process: initiate, upload parts, and complete. It's best for large objects.
  85. [85]
    Querying data in place with Amazon S3 Select - AWS Documentation
    Amazon S3 Select only allows you to query one object at a time. It works on an object stored in CSV, JSON, or Apache Parquet format. It also works with an ...
  86. [86]
    Document History - Amazon Simple Storage Service
    For more information, see Storage Classes in the Amazon Simple Storage Service User Guide. April 4, 2018. Amazon S3 Select. Amazon S3 Select is now generally ...
  87. [87]
    Retaining multiple versions of objects with S3 Versioning
    You can use the S3 Versioning feature to preserve, retrieve, and restore every version of every object stored in your buckets.
  88. [88]
    GetObjectTagging - Amazon Simple Storage Service
    For a versioned bucket, you can have multiple versions of an object in your bucket. To retrieve tags of any other version, use the versionId query parameter.
  89. [89]
    Accelerating Amazon S3 Batch Operations at scale with on-demand ...
    Oct 13, 2025 · The S3 Batch Operations manifest generator is a powerful feature that enables you to create batch jobs on demand without pre-generating a ...
  90. [90]
  91. [91]
    Cloud Storage
    Summary of Google Cloud Storage features.
  92. [92]
    Azure Blob Storage | Microsoft Azure
    Summary of Azure Blob Storage details.
  93. [93]
    Ceph.io — Technology
    Applications which use S3 or Swift object storage can take advantage of Ceph's scalability and performance within a single data center, or federate multiple ...
  94. [94]
    Amazon S3 Standard vs. Backblaze B2 Cloud Storage
    Backblaze B2 is object storage built to open workflows and ease budgets. Amazon S3 Standard is AWS's legacy storage service.
  95. [95]
    Cloudflare R2 vs AWS S3 | Review Pricing & Features
    With zero egress fees, Cloudflare R2 beats out Amazon S3 as the most cost-effective object storage solution. Compare R2 pricing and features to S3.
  96. [96]
    2025-26 DCIG TOP 5 AWS S3 Alternatives Enterprise Edition Report ...
    Jun 10, 2025 · Complimentary Report Provides Organizations with Guidance on the Best AWS S3 Alternatives to Choose from Today · Google Cloud Storage · IBM ...
  97. [97]
    Happy 15th Birthday Amazon S3 -- the service that started it all
    Mar 23, 2021 · Amazon S3 was launched 15 years ago on March 14, 2006, and set off a revolution in IT services.
  98. [98]
    The earliest AWS customers who helped build the cloud
    Mar 14, 2021 · The year was 2006, and Alvarez was one of the very first people in the world to see, and test out, Amazon Simple Storage Service (S3), at that ...
  99. [99]
    The AWS Blog: The First Five Years
    Nov 9, 2009 · The next step was Amazon S3 in the spring of 2006. That service has grown and grown (and grown) and now holds a remarkable 82 billion objects.
  100. [100]
    Amazon explains its S3 outage - ZDNET
    Feb 16, 2008 · Here's Amazon's explanation of the S3 outage, which wreaked havoc on startups and other enterprises relying on Amazon's cloud.
  101. [101]
    Amazon Web Services Goes Down, Takes Many Startup Sites With It
    Feb 15, 2008 · Amazon Web Services suffered a major outage this morning, affecting the thousands of Websites that rely on its storage (S3) and cloud ...
  102. [102]
    82 Billion Objects in Amazon S3 - All Things Distributed
    Nov 9, 2009 · At the end of Q3 2009 we counted over 82 billion objects in Amazon S3. Congrats to the team for providing such a rock solid service!
  103. [103]
    Amazon S3 announces Server Side Encryption Support - AWS
    Oct 4, 2011 · Amazon S3 Server Side Encryption (SSE) enables you to easily encrypt data stored at rest in Amazon S3. Using Amazon S3 SSE, you can encrypt data ...
  104. [104]
    Happy 10th Anniversary, Amazon S3 Glacier – A Decade of Cold ...
    Aug 22, 2022 · Ten years ago, on August 20, 2012, AWS announced the general availability of Amazon Glacier, secure, reliable, and extremely low-cost storage designed for data ...
  105. [105]
    Document history - Amazon Simple Storage Service
    S3 Object Lambda will no longer be open to new customers starting November 7, 2025. If you would like to use S3 Object Lambda, sign up prior to that date.
  106. [106]
    New Storage Class and General Availability of S3 Select
    Apr 4, 2018 · This new storage class stores data in a single AWS Availability Zone and is designed to provide eleven 9's (99.999999999%) of data durability.
  107. [107]
    Announcing the new Amazon S3 Express One Zone high ...
    Nov 28, 2023 · The new Amazon S3 Express One Zone storage class is designed to deliver up to 10x better performance than the S3 Standard storage class.
  108. [108]
    Amazon S3 Metadata is now generally available - AWS
    Jan 27, 2025 · AWS announces the general availability of Amazon S3 Metadata, the easiest and fastest way to discover and understand your Amazon S3 data.