
NetApp FAS

NetApp FAS (Fabric-Attached Storage) is a line of hybrid flash storage arrays developed by NetApp, designed to deliver efficient, scalable, and secure solutions primarily for secondary workloads such as data tiering, backup, and cyber vaulting, powered by the ONTAP data management operating system. These systems support unified storage protocols including block (iSCSI, FC) and file (NFS, SMB), enabling seamless integration across on-premises, hybrid cloud, and multi-cloud environments while providing features like inline data compression, deduplication, and automatic tiering to optimize capacity and performance. Introduced as part of NetApp's evolution from early NFS filers in the 1990s, the FAS platform has grown to emphasize cost-effective hybrid flash configurations, with models scaling up to 24 nodes and 14.7 PB of raw capacity per high-availability pair. Key benefits of NetApp FAS include its low total cost of ownership through shared infrastructure management and automation, support for a simplified backup strategy, and built-in ransomware protection with immutable snapshots and guaranteed recovery times. Current models, such as the FAS2750, FAS2820, FAS50, FAS70, and FAS90, cater to midrange and enterprise needs, balancing flash performance for active data with cost-efficient HDDs for archival storage, and they integrate natively with popular cloud providers and services like AWS, Microsoft Azure, and Google Cloud. This architecture ensures security and resilience via role-based access controls, encryption, and non-disruptive upgrades, making FAS a trusted choice for organizations managing large-scale data protection and disaster recovery.

System Overview

Definition and Purpose

NetApp FAS, or Fabric-Attached Storage, is primarily a hybrid storage platform that integrates hard disk drives (HDDs) and solid-state drives (SSDs) to deliver scalable capacity at lower costs while maintaining performance for enterprise workloads. The primary purpose of FAS systems is to provide unified storage capabilities through the ONTAP operating system, supporting file protocols such as NFS and SMB, and block protocols including iSCSI and Fibre Channel (FC), for diverse applications. This enables efficient handling of mixed workloads, including backups, data analytics, primary data storage, and cyber vaulting in enterprise environments. Key benefits include simplified management via shared tools, built-in features like inline deduplication and compression to optimize utilization, and seamless integration with hybrid cloud infrastructures for tiering and disaster recovery. The platform originated from Network Appliance's filer technology developed in the 1990s, evolving from early NFS servers like the 1992 "FAServer" prototype, and was rebranded under NetApp following the company's name change in 2008.

Variants and Types

NetApp's ONTAP-powered arrays include several primary types: the hybrid FAS, the All-Flash FAS (AFF), and the All-SAN Array (ASA), each tailored to distinct requirements while sharing the underlying operating system for unified management. The FAS type emphasizes cost efficiency through mixed HDD and SSD configurations, enabling tiered storage for capacity-intensive operations. In contrast, AFF systems employ exclusively SSDs to deliver superior performance for latency-sensitive applications, with controllers optimized for all-flash environments. ASA systems, meanwhile, focus on block storage in SAN setups, streamlining protocols like Fibre Channel (FC) and iSCSI to minimize overhead associated with file services.

Design distinctions among these types center on media support and optimization. Hybrid FAS allows flexible mixing of HDDs for bulk storage with SSDs for caching and tiering, facilitating automatic data placement based on access patterns to balance cost and performance. AFF types, built solely on SSDs including NVMe options, incorporate specialized hardware accelerations for write-intensive workloads, ensuring consistent low-latency responses without the need for tiering. ASA designs eliminate file-serving layers, reducing complexity for pure block storage and enhancing efficiency in FC/iSCSI environments by prioritizing direct I/O paths.

These types address specific use cases aligned with enterprise needs. FAS excels in secondary storage scenarios such as backups, archival, and disaster recovery, where high capacity at lower cost is paramount. AFF systems are ideal for I/O-intensive tasks like databases, virtualization platforms, and analytics workloads requiring rapid data access and scalability. ASA targets mission-critical block storage in enterprise infrastructures, supporting high-availability applications such as databases and virtualized environments with guaranteed data availability. The evolution of these types reflects NetApp's adaptation to flash technology and workload shifts.
AFF was introduced in 2014 to accelerate the adoption of all-flash arrays, building on the FAS architecture to provide enterprise-grade performance without sacrificing data management features. ASA emerged in 2023 as a dedicated SAN solution, simplifying deployment for block-only customers and extending ONTAP's capabilities to pure block environments amid growing demand for resilient, high-IOPS storage; as of 2025, the ASA line expanded with entry-level models (A20, A30, A50) in February and capacity-optimized C-Series models in May.

Hardware Platforms

FAS Hybrid Models

NetApp FAS hybrid models represent the current lineup of scalable, cost-optimized storage arrays designed for enterprise secondary workloads, such as data tiering, backups, and disaster recovery, combining HDDs with SSD caching for balanced performance and capacity. As of 2025, these systems run on the ONTAP operating system and support high-availability (HA) configurations with dual controllers per pair, enabling seamless scale-out architectures. They emphasize hybrid media mixes to deliver economical storage while maintaining enterprise-grade reliability and efficiency.

The FAS hybrid portfolio includes high-end, mid-range, and entry-level models tailored to varying scales of deployment. High-end systems like the FAS90 and FAS70 provide robust scalability in a 4U chassis, supporting up to 24 nodes (12 HA pairs) and 1440 drives per pair for a maximum raw capacity of 14.7 PB per pair. Mid-range options, such as the FAS50, target entry-to-mid deployments with a 2U chassis, up to 8 nodes (4 HA pairs), 480 drives per pair, and 10.6 PB raw capacity per pair. For smaller or branch environments, the FAS2820 and FAS2750 offer compact 2U designs with up to 24 nodes (12 HA pairs), 144 drives per pair, and maximum raw capacities of 2.3 PB (FAS2820) or 1.2 PB (FAS2750) per pair.
Model | Form Factor | Max Nodes (HA Pairs) | Max Drives per HA Pair | Max Raw Capacity per HA Pair
FAS90 | 4U | 24 (12) | 1440 | 14.7 PB
FAS70 | 4U | 24 (12) | 1440 | 14.7 PB
FAS50 | 2U | 8 (4) | 480 | 10.6 PB
FAS2820 | 2U | 24 (12) | 144 | 2.3 PB
FAS2750 | 2U | 24 (12) | 144 | 1.2 PB
These models feature dual controllers per HA pair, equipped with Intel Xeon processors to handle demanding workloads, and support memory configurations starting from 128 GB up to several terabytes per system depending on the model and scale-out configuration. They accommodate both 2.5-inch and 3.5-inch HDDs and SSDs for hybrid setups, allowing flexible media mixes to optimize cost and performance. Connectivity options include Ethernet ports at speeds of 1/10/25/100 GbE (with support for up to 200 Gbps in select configurations) and Fibre Channel at 16/32/64 Gb, enabling integration with diverse network fabrics. Expansion is achieved through compatible drive shelves, such as the DS2246, a 2U enclosure supporting 24 2.5-inch drives for high-density scaling. Overall scalability reaches up to 8 PB of effective capacity per HA pair when leveraging efficiency features like compression and deduplication, making these systems suitable for growing data estates. For scenarios demanding higher performance and lower latency, NetApp's AFF all-flash models serve as a performance-oriented alternative.

AFF All-Flash Models

The AFF All-Flash models, part of NetApp's ONTAP-based family, serve as high-performance storage arrays exclusively configured with SSDs, extending the unified storage capabilities of FAS systems while eliminating HDD support to focus on speed and efficiency for mission-critical workloads. These systems leverage ONTAP software for block, file, and object protocols, with hardware tailored for low-latency operations in database, virtualization, and AI environments. Unlike hybrid FAS models, which balance cost and capacity through mixed HDD/SSD setups, AFF prioritizes all-flash architecture for superior IOPS and throughput.

As of 2025, the AFF lineup includes enterprise-grade models like the AFF A1K and AFF A90, supporting up to 24 nodes (12 HA pairs) and a maximum raw SSD capacity of 14.7 PB per pair, ideal for large-scale deployments. Midrange options such as the AFF A70 offer up to 14.7 PB raw capacity, while entry-level systems like the AFF A400 and AFF A250 provide up to 14.7 PB in a compact form factor, suitable for smaller data centers. Recent releases, including the edge-oriented AFF A20, A30, and A50, extend capabilities to distributed sites with support for up to 576 drives, emphasizing resilience in remote or branch locations. Complementing these, the capacity-optimized AFF C-Series—featuring the C250, C400, and C800—delivers effective capacities up to 707 TB using high-density NVMe QLC SSDs, targeting environments needing dense storage without sacrificing performance.

AFF hardware incorporates advanced controllers with NVMe-attached SSD shelves via PCIe Gen4 interfaces, enabling seamless integration with shelves like the NS224 (a 2U enclosure holding 24 NVMe SSDs). Memory configurations scale up to 2 TB per controller in top-tier models, facilitating robust caching and processing for intensive tasks. Firmware optimizations handle flash-specific functions, including wear-leveling, garbage collection, and inline data reduction, ensuring longevity and efficiency in all-SSD environments.
Performance metrics highlight AFF's design for demanding applications, achieving up to 40 million IOPS and sub-millisecond latency in unified configurations, with native support for NVMe/FC protocols to maximize throughput in AI, virtualization, and database scenarios. This all-flash focus differentiates AFF from broader FAS variants by tuning the entire stack—from controllers to storage shelves—for SSD-exclusive operations, avoiding the mechanical limitations of disk-based systems.

ASA All-SAN Models

The NetApp ASA All-SAN Models represent a line of streamlined, all-flash storage arrays designed exclusively for block-based storage in storage area network (SAN) environments, optimizing performance for mission-critical applications such as databases and virtualized infrastructure. These systems eliminate support for file protocols like NFS and SMB, reducing operational complexity by focusing solely on SAN workloads via Fibre Channel (FC), iSCSI, NVMe/FC, and NVMe/TCP protocols. Built on the proven architecture of NetApp's AFF platforms, ASA models leverage the same NVMe SSD shelves but incorporate SAN-optimized configurations of ONTAP software to deliver sub-millisecond latency and consistent high throughput without the overhead of unified storage capabilities.

As of 2025, the lineup includes the flagship ASA A900 and mid-range ASA A400, among other variants, forming a simplified portfolio tailored for dedicated block storage without hybrid media options. The ASA A900, in a high-availability (HA) pair configuration occupying an 8U chassis, supports up to 14.7 PB of raw capacity per HA pair, scalable to 88.2 PB across a 12-node (6 HA pairs) cluster, with 2 TB of DRAM per HA pair to handle demanding I/O patterns. In contrast, the ASA A400 offers a more compact 4U HA pair with up to 14.7 PB of raw capacity per HA pair and 256 GB of DRAM, making it suitable for mid-tier deployments requiring efficient scaling. Both models utilize PCIe Gen4 NVMe SSDs for end-to-end NVMe connectivity, ensuring low-latency performance up to 1 million IOPS per system.

Introduced in 2023, ASA models prioritize reduced complexity through single-purpose controllers that omit file-serving stacks, enabling faster provisioning—often in seconds—and 100% data availability guarantees via ONTAP's high-availability features. They integrate seamlessly with ONTAP management tools for unified administration, data protection, and efficiency features like inline compression and deduplication, while maintaining compatibility with existing AFF ecosystems for shared infrastructure elements.
This SAN-focused design distinguishes ASA from broader unified storage solutions like AFF, providing cost-effective modernization for block-only environments.

Legacy Systems

The legacy FAS6200 series, introduced in the early 2010s, comprised mid-range systems optimized for cost-effective scaling in shared IT infrastructures supporting virtualized and enterprise application workloads. These systems utilized a dual-controller high-availability (HA) configuration in a 6U chassis and enabled cluster-scale capacities exceeding 69 PB through integration of flash caching via the Virtual Storage Tier for enhanced performance and efficiency. The FAS6200 series reached end-of-support in December 2018. Later models, including the FAS8040 and FAS9000 from 2014 to 2020, expanded capabilities with maximum raw capacities reaching 11.52 PB per HA pair for the FAS8040 (end-of-support January 2023) and up to 172 PB in scaled-out FAS9000 configurations (supported as of 2025), while introducing flash acceleration options through Flash Pool and NVMe-based Flash Cache. The FAS8040 featured a 6U dual-controller chassis supporting up to 1,440 drives, whereas the FAS9000 allowed scale-out to 24 nodes with a maximum of 17,280 drives across HDDs and SSDs. The FAS2500 series, suited for branch offices and midsized deployments (end-of-support January 2023), offered compact 2U or 4U chassis variants like the FAS2520 and FAS2554, accommodating up to 576 drives and 2.3 PB raw capacity per HA pair for unified file and block storage in distributed environments. Historical architectures in these systems relied on older multiprocessor chipsets, emphasized 10 GbE connectivity with onboard ports for file and block protocols, and constrained memory to levels like 192 GB maximum in the FAS6200 or 144 GB in the FAS2554, far below modern capacities.

Internal Architecture

Controller and Processing

NetApp FAS systems employ a dual-controller high-availability (HA) design, where two identical controllers operate in an active-active mode to provide redundancy and seamless data access. In this configuration, both controllers actively process input/output (I/O) operations independently, sharing the workload across supported protocols such as NFS, SMB, iSCSI, and Fibre Channel (FC), while maintaining synchronized state information via a high-speed interconnect. This setup ensures that if one controller fails, the partner assumes its responsibilities without data loss, committing any uncommitted writes from non-volatile random-access memory (NVRAM) to disk for consistency. Failover in FAS HA pairs typically completes in under 60 seconds, minimizing disruption to ongoing operations and enabling rapid recovery in enterprise environments. Each controller manages its own I/O paths, utilizing dedicated resources to handle protocol processing, storage operations, and caching, which optimizes throughput and isolates failures. The controllers integrate with NVRAM for write buffering to accelerate acknowledgments and protect against power or hardware issues. At the core of each FAS controller are multi-socket processors, typically featuring 2 to 8 cores per socket depending on the model, with dedicated cores allocated for specific tasks: protocol handling (e.g., network and SAN traffic), storage management (e.g., data layout and RAID operations), and overall system administration. For instance, mid-range models like the FAS8300 use dual-socket configurations to balance compute demands across diverse workloads. These processors support scalable performance, enabling FAS systems to handle thousands of I/O operations per second while integrating with ONTAP software for efficient resource utilization.
I/O connectivity in FAS controllers relies on high-bandwidth PCIe lanes—often Gen3 or Gen4 with 8 to 16 lanes per controller—for connecting to expansion shelves and host networks, ensuring low-latency data transfer to SAS or NVMe drives. Onboard application-specific integrated circuits (ASICs) offload protocol processing, such as TCP/IP for Ethernet-based iSCSI and NFS, and FC for block storage, reducing CPU overhead and improving throughput. This allows controllers to sustain multi-gigabit-per-second speeds across multiple ports, with configurations supporting up to 32 host connections per HA pair. Power and cooling systems in FAS controllers emphasize reliability through redundant, hot-swappable power supply units (PSUs), typically dual 800W to 1200W modules per controller that provide redundancy to prevent single points of failure. These PSUs draw from separate circuits, with typical power consumption for an HA pair ranging from 500W to 2000W depending on model, load, and drive count—for example, approximately 354W (1209 BTU/hr) for a base FAS2750 pair under typical operation. Cooling is managed by redundant fans integrated into the chassis, maintaining optimal temperatures for components during continuous 24/7 use, with automatic failover to backup fans if needed. All components, including controllers and PSUs, support hot-swapping to enable non-disruptive maintenance.

NVRAM and Cache Mechanisms

NetApp FAS systems employ Non-Volatile Random Access Memory (NVRAM) as a critical component for ensuring data durability during write operations. NVRAM consists of battery-backed dynamic random-access memory (DRAM), typically ranging from 512 MB in early models to up to 8 GB in NVRAM5 and NVRAM6 implementations found in mid-range FAS platforms like the FAS31xx and FAS32xx series. This memory logs incoming write requests without storing the full data blocks, instead recording intent logs—metadata such as block addresses and transaction details—to enable rapid replay and recovery. In high-availability (HA) configurations, the intent log is mirrored to the partner node's NVRAM before acknowledging the write to the client, guaranteeing zero data loss even in the event of a power failure or node crash. Upon system reboot following an outage, the controller replays the intent log from NVRAM to destage the pending writes to stable disk storage, restoring filesystem consistency without lengthy consistency checks. This process, managed by the Consistency Point (CP) mechanism, flushes buffered data from system memory to disk in periodic intervals, clearing the log once operations are committed. In newer models, such as those from the FAS8000 series onward, NVRAM has evolved to NVRAM9 with capacities up to 32 GB per controller (64 GB per HA pair), supporting higher throughput. By the 2010s, traditional battery-backed NVRAM transitioned to integrated NVMEM using PCIe modules, eliminating the need for batteries while maintaining similar logging and destaging functions through supercapacitors for short-term holdup. In current models such as the FAS2750 and FAS2820, traditional NVRAM has been replaced by NVMEM, a PCIe-based module with supercapacitor backup, providing similar capabilities without batteries. Caching mechanisms in FAS systems complement NVRAM by optimizing read and write performance.
The primary read cache utilizes system DRAM, providing fast access to recently referenced data and metadata, with capacities scaling up to several terabytes depending on the controller's total memory allocation—often 256 GB to 2 TB per node in modern FAS configurations. Write caching is handled exclusively through NVRAM's intent logging, buffering operations until safe destaging to disk. For extended read caching, FAS supports optional Flash Cache modules, which are PCIe-based SSD accelerators that act as a second-level cache, storing hot user data and filesystem metadata to reduce latency for random read-intensive workloads; these modules are available in sizes from 256 GB to 2 TB per controller. Additionally, Flash Pool extends caching at the aggregate level by incorporating SSDs into HDD aggregates, creating an intelligent tiered cache for both reads and writes. SSDs in a Flash Pool serve as a high-performance layer, automatically promoting frequently accessed blocks while retaining cold data on HDDs, with support for up to 20 SSDs per aggregate in systems running Data ONTAP 8.2 or later. This hardware integrates seamlessly with the controller's I/O processing to handle data placement without software reconfiguration. The evolution from basic 1 GB NVRAM in legacy FAS models (e.g., FAS2020) to these flash-integrated solutions in the 2010s reflects NetApp's focus on enhancing reliability and scalability for enterprise storage.
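The intent-logging scheme described above can be sketched as a toy model: writes are logged locally, mirrored to the HA partner before acknowledgment, and replayed by the survivor after a failure. This is an illustrative simplification (the `Controller` and `NVLog` names are invented for the sketch, not NetApp code), assuming a single mirrored log and an in-memory "disk":

```python
# Illustrative model of NVRAM-style intent logging with HA mirroring:
# a write is acknowledged only after it is logged locally AND mirrored
# to the partner; after a crash, the surviving mirror is replayed.

class NVLog:
    def __init__(self):
        self.entries = []          # pending intent-log records

    def append(self, block_addr, data):
        self.entries.append((block_addr, data))

    def clear(self):
        self.entries = []          # cleared once a consistency point commits

class Controller:
    def __init__(self, name):
        self.name = name
        self.local_log = NVLog()   # this node's NVRAM
        self.partner_log = NVLog() # mirror of the partner's NVRAM
        self.disk = {}             # stable storage: block_addr -> data

    def write(self, partner, block_addr, data):
        """Log locally, mirror to the partner, then acknowledge."""
        self.local_log.append(block_addr, data)
        partner.partner_log.append(block_addr, data)  # HA mirroring
        return "ack"               # client sees the ack only after both logs

    def consistency_point(self, partner):
        """Flush buffered writes to disk and clear both log copies."""
        for addr, data in self.local_log.entries:
            self.disk[addr] = data
        self.local_log.clear()
        partner.partner_log.clear()

    def takeover_replay(self):
        """On partner failure, replay its mirrored log to stable storage."""
        for addr, data in self.partner_log.entries:
            self.disk[addr] = data
        self.partner_log.clear()

a, b = Controller("A"), Controller("B")
a.write(b, 100, "blk-1")           # acknowledged once mirrored to B
# node A "fails" before reaching its consistency point; B takes over
b.takeover_replay()                # the acknowledged write survives
```

The key ordering property is that the acknowledgment happens only after the mirror append, which is why a single-node failure cannot lose an acknowledged write.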

Storage Subsystem

Drives and Capacities

NetApp FAS systems support a variety of certified drive types to accommodate hybrid storage configurations, combining hard disk drives (HDDs) for capacity with solid-state drives (SSDs) for performance. HDDs are available in 3.5-inch form factors using SAS or SATA interfaces, with capacities up to 22 TB at 7,200 RPM, suitable for bulk storage needs in FAS models. SSDs, provided in 2.5-inch form factors, support NVMe and QLC technologies with capacities up to 15.3 TB, enabling mixed-drive setups where SSDs accelerate access to frequently used data while HDDs handle archival workloads. All drives must be NetApp-certified to ensure compatibility and optimal performance within the ONTAP operating system.

FAS systems utilize modular disk shelves to expand capacity, with configurations designed for flexibility in drive mixing and scaling. The DS2246 shelf, a 2U unit with 24 slots, supports mixed populations of 2.5-inch SSDs and performance-oriented 2.5-inch HDDs, ideal for FAS deployments requiring balanced capacity and speed. For HDD-focused expansion, the DS4246 shelf offers a 4U chassis with 24 slots optimized for 3.5-inch high-capacity drives, providing dense storage in enterprise environments. Higher-density options like the DS460C shelf deliver up to 60 drives in a 4U chassis using 3.5-inch HDDs, maximizing raw capacity per rack unit. Shelf stacking is limited to 10 shelves deep per stack path to maintain signal integrity and manageability, with support for up to 1440 drives per high-availability (HA) pair in top-tier models such as the FAS90.

Capacity scaling in FAS systems varies by model, emphasizing raw physical limits while leveraging ONTAP features for efficiency. The entry-level FAS2750 supports up to 144 drives per HA pair, yielding a maximum raw capacity of 1.2 PB, while the FAS2820 achieves up to 2.3 PB with the same drive count when fully populated with high-capacity HDDs (as of 2025). Mid-range FAS50 configurations scale to 480 drives per pair, achieving up to 10.6 PB raw.
High-end FAS70 and FAS90 systems extend this to 1440 drives per pair, providing a maximum raw capacity of 14.7 PB, with effective capacities exceeding 100 PB possible through inline deduplication, compression, and compaction that can deliver savings ratios of 4:1 or higher depending on workload characteristics. These limits apply to hybrid drive mixes, where RAID protection aggregates drives into resilient pools, though detailed configurations are addressed separately.
Shelf Model | Form Factor | Slot Count | Supported Drive Types | Typical Use in FAS
DS2246 | 2U | 24 | 2.5" SSDs (up to 15.3 TB NVMe/QLC), 2.5" SAS/SATA HDDs | Mixed hybrid for performance and capacity balance
DS4246 | 4U | 24 | 3.5" SAS/SATA HDDs (up to 22 TB) | High-capacity HDD-focused expansion
DS460C | 4U | 60 | 3.5" SAS/SATA HDDs (up to 22 TB) | Maximum density for bulk storage
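The capacity figures above can be sanity-checked with back-of-the-envelope arithmetic: drive count times drive size gives unconstrained raw capacity, the platform's supported maximum caps it, and an efficiency ratio yields effective capacity. The helper names below are illustrative, not a NetApp sizing tool:

```python
# Rough capacity math for a hybrid HA pair, using figures quoted in the
# text (1440 drives, 22 TB HDDs, 14.7 PB platform cap, 4:1 savings).

def raw_capacity_tb(drive_count, drive_size_tb):
    """Unconstrained raw capacity if every slot held the same drive."""
    return drive_count * drive_size_tb

def effective_capacity_tb(raw_tb, savings_ratio):
    """Apply an efficiency ratio such as 4:1 from dedup/compression."""
    return raw_tb * savings_ratio

raw = raw_capacity_tb(1440, 22)            # 31,680 TB if unconstrained
supported_raw = min(raw, 14_700)           # platform limit: 14.7 PB
effective = effective_capacity_tb(supported_raw, 4)   # with 4:1 savings
```

Note that the drive-slot arithmetic exceeds the platform cap, so the supported maximum (14.7 PB) governs; efficiency ratios then apply on top of that, which is how effective capacities far above raw limits are quoted.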

Data Protection and RAID

NetApp FAS systems employ software-based RAID implementations within the ONTAP operating system to ensure data integrity and protection against drive failures, integrating seamlessly with supported drive types such as HDDs and SSDs in hybrid or all-flash configurations. Unlike traditional hardware RAID controllers, ONTAP handles parity calculations, reconstruction, and management at the software level, allowing for flexible aggregate configurations across FAS platforms. The primary RAID scheme in FAS is RAID-DP, a double-parity mechanism equivalent to RAID 6 that dedicates two drives for parity within each RAID group, enabling tolerance of up to two simultaneous drive failures without data loss. RAID-DP is the default policy for aggregates with more than six drives, optimizing storage efficiency while maintaining performance comparable to single-parity schemes. It requires a minimum of three disks per group (one data, two parity) but typically operates with 12 to 20 HDDs or 20 to 28 SSDs to balance capacity and rebuild reliability. For environments with high-capacity HDDs, RAID-TEC extends protection with triple parity, tolerating up to three drive failures and serving as the default for disks of 6 TB or larger. Introduced in ONTAP 9 to address growing drive sizes and failure risks in large-scale deployments, RAID-TEC requires at least four disks per group (one data, three parity) and supports group sizes of 15 to 20 drives for optimal efficiency. Conversion between RAID-DP and RAID-TEC is possible on existing aggregates with sufficient disks, allowing administrators to adapt protection levels as storage needs evolve. Legacy support includes RAID-4, a single-parity option that protects against one drive failure but is rarely used in modern FAS setups due to its limited redundancy. For SSD-based aggregates, SyncMirror mirroring configurations provide an alternative to parity schemes, offering exact data duplication across plexes for enhanced availability in performance-sensitive scenarios.
NetApp FAS does not implement traditional RAID-0 (striping without parity), RAID-1 (mirroring without parity integration), or RAID-5 (distributed single parity), relying instead on these custom schemes for superior resilience. ONTAP's RAID implementation features automatic rebuild processes upon drive failure, utilizing hot spare disks to initiate reconstruction without manual intervention, thereby minimizing downtime. Rebuild operations prioritize data integrity over performance, with typical reconstruction times ranging from 1 to 3 hours per terabyte, influenced by factors such as RAID group size, drive type, and system load. This software-driven approach ensures consistent protection across FAS hybrid, all-flash, and legacy systems while supporting up to three hot spares for concurrent failures in advanced configurations.
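The core idea behind parity-based reconstruction can be shown with XOR row parity, the same mechanism underlying RAID-4 and the row-parity half of RAID-DP (RAID-DP adds a second, diagonal parity drive on top of this to survive two failures; only the single-failure half is sketched here):

```python
# XOR row parity: the parity block is the XOR of all data blocks in a
# stripe, so any one missing block equals parity XOR the survivors.

def row_parity(blocks):
    """XOR all data blocks in a stripe to produce the parity block."""
    parity = 0
    for block in blocks:
        parity ^= block
    return parity

def rebuild(surviving_blocks, parity):
    """Recover the one missing block from parity plus the survivors."""
    missing = parity
    for block in surviving_blocks:
        missing ^= block
    return missing

data = [0b1010, 0b0110, 0b1111]      # one stripe across three data drives
p = row_parity(data)                  # block stored on the parity drive
# drive 1 fails; rebuild its block from parity and the two survivors
recovered = rebuild([data[0], data[2]], p)
```

Because XOR is its own inverse, reconstruction is exact regardless of which single drive fails; double-parity schemes like RAID-DP extend this by solving two such equations (row and diagonal) to recover two lost drives.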

Caching and Tiering

NetApp FAS systems employ advanced caching mechanisms to accelerate data access, particularly in hybrid configurations combining HDDs with SSDs. Flash Cache, a legacy read-only caching technology, utilizes dedicated SSD accelerator cards installed in controllers to cache frequently accessed read data and metadata across all aggregates. This approach improves random read-intensive workloads by reducing latency without requiring changes to aggregates. Supporting up to 4 TB per controller, Flash Cache operates transparently at the controller level, promoting it as a cost-effective performance booster for legacy FAS deployments. In contrast, Flash Pool represents a more integrated caching solution, leveraging SSDs as a read/write cache within hybrid aggregates composed primarily of HDDs. By partitioning SSDs into cache layers, Flash Pool dynamically promotes hot data—frequently read or written blocks—to SSD storage, achieving hit rates approaching 100% for active datasets in suitable workloads. This hardware-assisted caching enhances throughput and lowers response times for mixed I/O patterns, such as those in virtualization or database environments, while utilizing the underlying HDD capacity for bulk storage. Administrators can configure caching policies per volume to prioritize reads, writes, or both, ensuring efficient use of SSD resources across the aggregate. Tiering in FAS extends caching benefits through automated data movement to optimize capacity and cost. Auto-tiering, integrated into ONTAP, applies policy-based rules to relocate inactive (cold) data from performance tiers to lower-cost tiers, including cloud object stores, while keeping new writes inline on high-speed local media. This process occurs without application disruption, using temperature tracking to identify data inactivity over defined cooling periods. For instance, policies like "auto" tier user data and snapshots after 31 days of inactivity, freeing local space for active workloads.
FabricPool, a key enabler of hybrid tiering, combines these capabilities by pairing local FAS aggregates with remote object tiers, such as AWS S3. It automatically offloads cold data blocks to the cloud tier, promoting them back on demand for access, which results in significant capacity savings on-premises by reserving flash or HDD for hot data. This policy-driven mechanism supports both file and block protocols, enhancing scalability for FAS environments with growing data footprints.
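The cooling-period logic behind this kind of tiering can be sketched in a few lines: a block untouched for longer than the cooling window is marked for the cloud tier, others stay local. The 31-day window mirrors the default mentioned above; the `Block` and `decide_tier` names are invented for the sketch and are not ONTAP APIs:

```python
# Toy cooling-period tiering policy: blocks not accessed within the
# window are candidates for the capacity (cloud) tier.

from dataclasses import dataclass
from datetime import datetime, timedelta

COOLING_PERIOD = timedelta(days=31)   # matches the default quoted above

@dataclass
class Block:
    addr: int
    last_access: datetime

def decide_tier(block, now):
    """Return 'cloud' for cold blocks, 'local' for hot ones."""
    if now - block.last_access > COOLING_PERIOD:
        return "cloud"
    return "local"

now = datetime(2025, 9, 1)
hot = Block(1, now - timedelta(days=2))    # recently read: stays local
cold = Block(2, now - timedelta(days=45))  # past the window: tiers out
placements = {b.addr: decide_tier(b, now) for b in (hot, cold)}
```

A real implementation tracks access temperature continuously and moves whole 4 KB blocks asynchronously, but the admission decision reduces to this kind of threshold test.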

Virtualization Features

NetApp FAS systems leverage ONTAP's storage virtualization capabilities to abstract physical storage resources, enabling efficient management and scalability beyond hardware limitations. Storage virtual machines (SVMs) partition a cluster into secure logical entities, while FlexArray technology virtualizes third-party arrays—such as those from Hitachi Data Systems (HDS) or EMC—presenting them as native ONTAP storage pools. This integration allows FAS controllers to act as initiators, mounting LUNs from external arrays and incorporating them into ONTAP aggregates for seamless data management. At the core of this abstraction is aggregate-level pooling, where physical disks across the cluster form shared aggregates that can be dynamically assigned to volumes, decoupling logical storage from specific hardware. Thin provisioning enhances this by allocating space on-demand rather than pre-reserving it, optimizing capacity utilization in volumes and LUNs without upfront commitment. Volume cloning, via FlexClone technology, creates space-efficient, writable point-in-time copies of FlexVol volumes that initially share data blocks with the parent, reducing storage overhead for testing or development environments. FlexVol volumes represent the foundational unit of virtualization in ONTAP, functioning as flexible, logical containers independent of underlying physical disks. These volumes support online resizing—expanding or shrinking without downtime—allowing administrators to adapt to changing workloads dynamically. For instance, a FlexVol volume can be grown by drawing on free space in its containing aggregate, maintaining data availability throughout the process. ONTAP's scale-out architecture further extends virtualization by supporting non-disruptive cluster expansion up to 24 nodes for NAS protocols, enabling linear growth in capacity and performance as nodes are added. This allows FAS systems to pool resources cluster-wide, with workloads transparently redistributed across nodes without interrupting access.
Caching mechanisms complement these features by accelerating virtualized I/O, though the primary benefits stem from the abstraction layer itself.
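The block-sharing behavior of a FlexClone-style clone can be illustrated with a minimal copy-on-write model: the clone shares every block with its parent at creation time and allocates its own block only when written. The `Volume` class here is invented for the sketch, not ONTAP code:

```python
# Minimal copy-on-write cloning: a clone owns no blocks at creation and
# falls through to its parent on reads; writes allocate private blocks.

class Volume:
    def __init__(self, blocks=None, parent=None):
        self.own = dict(blocks or {})   # blocks this volume owns
        self.parent = parent            # shared baseline, if a clone

    def read(self, addr):
        if addr in self.own:
            return self.own[addr]       # diverged block
        return self.parent.read(addr) if self.parent else None

    def write(self, addr, data):
        self.own[addr] = data           # copy-on-write: divergence only here

    def clone(self):
        return Volume(parent=self)      # zero extra blocks at creation time

base = Volume({0: "cfg-v1", 1: "data"})
dev = base.clone()                      # space-efficient writable copy
dev.write(0, "cfg-v2")                  # only this one block diverges
```

This is why a clone is near-free to create and consumes space only in proportion to how much it diverges from the parent, which makes the pattern attractive for test and development copies.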

Data Security and Availability

Storage Encryption

NetApp Storage Encryption (NSE) provides hardware-based protection for data at rest in FAS systems by leveraging self-encrypting drives (SEDs) that perform inline encryption as data is written to the drives. These SEDs comply with the Trusted Computing Group (TCG) Enterprise standard, ensuring robust cryptographic operations at the drive level without requiring software intervention for encryption processes. NSE supports full-disk encryption across both HDDs and SSDs, enabling comprehensive coverage of all storage media in an array. Key management in NSE is handled through ONTAP, which includes an onboard key manager (OKM) for internal operations and support for external key managers using the Key Management Interoperability Protocol (KMIP). Administrators can enable the OKM with the security key-manager onboard enable command, while external integration allows for centralized key control across multiple systems, including features introduced in ONTAP 9.6. This setup ensures that keys are securely stored and accessible only to authorized entities, with up to 100% drive coverage achievable in FAS configurations using compatible SEDs. NSE implementation incurs no performance overhead, as encryption occurs transparently at the drive level and does not impact ONTAP's storage efficiency features such as deduplication, compression, or compaction. The solution supports SEDs that are FIPS 140-2 certified (Certificate #4144) and, as of 2024, FIPS 140-3 validated (e.g., CryptoMod 3.0, Certificate #4731), providing cryptographic modules suitable for regulated environments. For enhanced security, NSE integrates with ONTAP's immutable snapshots via SnapLock, which create indelible copies of encrypted data to defend against ransomware by preventing modification or deletion of backups. Additionally, audit logging capabilities, including key backup tracking via the security key-manager backup show command, enable traceability of encryption-related administrative actions.
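In an ONTAP administrative session, the two key-manager commands named above would be invoked along these lines (a CLI fragment for orientation only; prompts and output formatting vary by ONTAP release):

```shell
# Enable the onboard key manager; ONTAP prompts for a cluster-wide
# passphrase used to protect the stored keys.
security key-manager onboard enable

# Display key backup information for audit and recovery purposes.
security key-manager backup show
```

Both commands are run from the cluster shell; the passphrase entered during enablement must be retained, since it is required to restore keys after certain recovery operations.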

MetroCluster Configurations

NetApp MetroCluster provides high-availability clustering for FAS systems by enabling synchronous data replication between two physically separate sites, ensuring continuous data availability and disaster recovery capabilities. This configuration leverages ONTAP's SyncMirror technology to mirror data blocks in real-time across sites, achieving zero recovery point objective (RPO) and near-zero recovery time objective (RTO) for applications requiring uninterrupted access. MetroCluster supports both Fibre Channel-based (MCC-FC) and IP-based (MCC-IP) implementations, allowing FAS deployments to extend over metropolitan distances without data loss during site failures. In MCC-FC configurations, synchronous replication occurs over Fibre Channel fabrics, supporting stretch clusters up to 300 km between sites using dedicated FC switches and inter-switch links (ISLs). MCC-IP, in contrast, uses Ethernet-based fabrics for replication, extending distances up to 700 km while maintaining the same zero RPO/RTO guarantees through NVLog mirroring and storage replication over dedicated Ethernet interfaces. Both variants support up to eight nodes in configurations with two disaster recovery (DR) groups, where each site hosts an HA pair or multiple pairs peered across the replication links. These setups require mirrored root and data aggregates on each node, with configuration replication ensuring consistent cluster states. Key components include the ONTAP Mediator or MetroCluster Tiebreaker software, deployed at a third site to act as a tie-breaker during network partitions, preventing split-brain scenarios and facilitating automated switchover decisions. These utilities monitor site health and cluster connectivity, triggering switchovers or failovers with minimal disruption, while mediators provide additional arbitration for partitioned fabrics. MetroCluster configurations also support non-disruptive upgrades, allowing rolling updates of controllers and shelves without interrupting replication or client access.
MetroCluster IP was introduced with ONTAP 9.3 in 2017, enabling Ethernet-based replication on systems without FC infrastructure, which simplified deployments in IP-centric environments. By 2025, enhancements in ONTAP 9.15 and later versions expanded support for 100 GbE adapters and switches in MCC-IP setups, improving bandwidth for high-throughput replication of mirrored data.

Operating System

ONTAP Fundamentals

NetApp ONTAP is a scale-out storage operating system designed for hybrid cloud environments, serving as the software foundation for FAS systems by enabling unified management of file, block, and object storage. It supports multiprotocol access, including NFS, SMB, iSCSI, and FC, allowing seamless integration of NAS and SAN workloads across on-premises, virtual, and cloud deployments. The clustered architecture, introduced in Data ONTAP 8.0 in 2010, replaced the legacy 7-Mode single-head system, providing horizontal scaling by adding nodes to clusters without downtime. Current releases, such as ONTAP 9.17.1 as of September 2025, incorporate optimizations for AI workloads through integration with NetApp AFX, enhancing performance for high-throughput data pipelines. Core features of ONTAP emphasize reliability and efficiency, including non-disruptive operations (NDO) that support upgrades, maintenance, and configuration changes without interrupting data access or application availability. Quality of Service (QoS) policies enable I/O throttling by assigning limits on IOPS or throughput to workloads, preventing resource contention in shared environments and maintaining consistent performance. Snapshot copies provide point-in-time data protection, with support for up to 1,023 copies per FlexVol volume starting in ONTAP 9.4, facilitating rapid recovery and space-efficient backups. ONTAP management is facilitated through multiple interfaces: System Manager, a web-based GUI for intuitive cluster administration; the command-line interface (CLI) for scripted and advanced operations; and REST APIs for automation and integration with orchestration tools. Cluster peering establishes secure relationships between clusters for asynchronous replication, disaster recovery, and data mobility, using intercluster LIFs to exchange data and synchronize volumes. This unified approach, built on the WAFL file system for underlying data layout, ensures consistent policy enforcement across diverse storage protocols.
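The QoS ceiling behavior described above can be sketched as a simple per-second admission counter; the class and method names here are illustrative, not ONTAP APIs:

```python
# Conceptual sketch of a QoS ceiling: admit at most `limit_iops`
# operations per one-second window, the way a QoS policy group caps a
# workload's I/O rate to prevent it from starving its neighbors.

class QosCeiling:
    def __init__(self, limit_iops):
        self.limit = limit_iops
        self.window = None
        self.used = 0

    def admit(self, now_sec):
        if self.window != int(now_sec):      # a new one-second window opens
            self.window, self.used = int(now_sec), 0
        if self.used < self.limit:
            self.used += 1
            return True                      # I/O proceeds
        return False                         # I/O is queued / throttled

qos = QosCeiling(limit_iops=2)
results = [qos.admit(0.0), qos.admit(0.1), qos.admit(0.2), qos.admit(1.0)]
print(results)  # [True, True, False, True]
```

Real ONTAP QoS also supports throughput floors and adaptive (size-scaled) limits, but the ceiling case is this same admit-or-throttle decision applied per policy group.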

WAFL File System

The Write Anywhere File Layout (WAFL) is a copy-on-write file system integral to NetApp's ONTAP operating system, designed specifically for high-performance network file serving in FAS storage systems. WAFL employs a block-based layout using fixed 4 KB blocks, which are addressed via volume block numbers (VBNs) and integrated directly with RAID structures within storage aggregates to optimize data placement and redundancy. This write-out-of-place mechanism avoids in-place updates, preventing fragmentation by always allocating new blocks for modified data, thereby maintaining consistent performance across mixed read-write workloads. Key operational concepts in WAFL include its treatment of all data and metadata as files, with inode files serving as the primary structure to track metadata and block pointers. For larger files exceeding 64 KB, indirect and doubly indirect blocks extend the inode's reach, forming a tree-like layout rooted at the root inode to efficiently map data locations. Consistency points (CPs) provide atomic checkpoints of the file system state, occurring roughly every 10 seconds to flush dirty buffers from nonvolatile RAM (NVRAM) to persistent storage, ensuring rapid recovery and crash consistency without full file system checks. WAFL's design enables several advantages, particularly in data efficiency and management. Its copy-on-write approach facilitates efficient snapshots by simply duplicating the root inode pointer, allowing point-in-time copies with minimal overhead since unchanged blocks remain shared. Block-level deduplication leverages the 4 KB granularity to identify and eliminate duplicate blocks across volumes, significantly reducing storage consumption in environments with redundant data patterns. Overall, WAFL's architecture supports versatile handling of mixed workloads, from NFS file serving to block protocols, by balancing metadata overhead with scalable block allocation.
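A minimal copy-on-write sketch (a hypothetical class, not WAFL internals) shows how duplicating the root pointer yields a snapshot while writes go out of place:

```python
# Snapshot = copy of the root pointer only. Writes always allocate new
# blocks, so the snapshot keeps sharing every unchanged block with the
# live file system -- the core of WAFL-style copy-on-write.

class ToyCowFs:
    def __init__(self):
        self.blocks = {}   # block number -> data
        self.next = 0
        self.root = {}     # filename -> block number (stand-in for root inode)

    def write(self, name, data):
        self.blocks[self.next] = data          # write out of place: new block
        self.root = {**self.root, name: self.next}
        self.next += 1

    def snapshot(self):
        return dict(self.root)                 # duplicate the root pointer only

fs = ToyCowFs()
fs.write("a.txt", b"v1")
snap = fs.snapshot()
fs.write("a.txt", b"v2")                       # live copy moves to a new block
print(fs.blocks[snap["a.txt"]])                # b'v1' -- snapshot sees old block
print(fs.blocks[fs.root["a.txt"]])             # b'v2' -- live view sees new block
```

Because the snapshot is only a pointer copy, creating it is effectively free; space is consumed only as live data diverges from the snapshotted blocks.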
Early versions of WAFL faced potential write amplification from frequent small-block allocations, but this is mitigated through NVRAM buffering, which logs operations in memory for batched writes during consistency points, achieving high throughput by batching over 1,000 NFS operations in NVRAM between flushes. This buffering defers persistent disk writes, reducing amplification and enabling quick client acknowledgments while preserving consistency.
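The batching effect can be sketched as follows, assuming a simplified log-then-flush model rather than real NVRAM semantics:

```python
# Incoming operations are logged (and acknowledged) immediately; one
# batched flush at each consistency point amortizes the cost of
# persistent writes across many logged operations.

class ToyNvLog:
    def __init__(self):
        self.log = []
        self.disk = {}
        self.disk_flushes = 0

    def write(self, key, value):
        self.log.append((key, value))   # acknowledged as soon as it's logged
        return "ack"

    def consistency_point(self):
        for key, value in self.log:     # one batched flush per CP (~10 s apart)
            self.disk[key] = value
        self.log.clear()
        self.disk_flushes += 1

nv = ToyNvLog()
for i in range(1000):
    nv.write(f"op{i}", i)
nv.consistency_point()
print(nv.disk_flushes, len(nv.disk))  # 1 1000 -- one flush covers 1,000 ops
```

The client sees low latency because acknowledgment happens at log time; durability is preserved because the log itself is nonvolatile and replayed after a crash.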

Migration Tools

The 7-Mode Transition Tool (7-MTT) automates the lift-and-shift migration of data and configurations from legacy 7-Mode FAS systems to clustered ONTAP environments, supporting stand-alone volumes, volume SnapMirror relationships, LUNs, and system configurations such as quotas and exports. This tool facilitates a structured transition process, including assessment of source systems, baseline data copying, and cutover, while preserving features like Snapshot copies where possible. The migration process supported by 7-MTT includes both copy-based (offline) and copy-free (online for high-availability pairs) methods, with data transfer in copy-based scenarios relying on SnapMirror for initial baseline copies followed by incremental updates to minimize resynchronization time. Pre-transition preparation involves running 7-MTT's Collect and Assess feature to inventory 7-Mode controllers, hosts, and applications, ensuring compatibility with clustered ONTAP. Following the end of limited support for 7-Mode ONTAP 8.2 on December 31, 2022, migrations using 7-MTT remain viable for supported target versions up to ONTAP 9.11.1. For current clustered ONTAP environments, tools like SnapMirror and Cloud Sync facilitate data mobility and migrations without 7-Mode dependencies. For transitions involving third-party legacy storage, NetApp's Foreign LUN Import (FLI) enables migrations to FAS systems running ONTAP, supporting online workflows where hosts remain active during data import from foreign arrays over block protocols such as Fibre Channel. FLI handles LUN discovery, relationship creation, block-level data import with verification, and post-migration cleanup, applicable to environments such as Windows, ESXi, and RHEL hosts. Offline FLI variants are used when zero downtime is not feasible, such as in MetroCluster setups.
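The baseline-plus-incremental pattern that keeps cutover windows short can be sketched like this (illustrative functions, not the SnapMirror API):

```python
# The baseline moves everything once while hosts stay online; each
# subsequent update transfers only blocks changed since the last
# transfer, so the final cutover waits only on a small delta.

def baseline(source):
    return dict(source), len(source)            # full copy, counts blocks moved

def incremental(source, target):
    changed = {k: v for k, v in source.items() if target.get(k) != v}
    target.update(changed)
    return target, len(changed)                 # only the delta moves

src = {0: b"a", 1: b"b", 2: b"c"}
dst, moved = baseline(src)
src[1] = b"B"                                   # one block changes before cutover
dst, moved2 = incremental(src, dst)
print(moved, moved2)  # 3 1 -- cutover only waits on the small final delta
```

Repeating the incremental step just before cutover is what pushes the disruptive window down toward the sub-hour figures cited below for small-scale setups.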
SnapCenter complements these migrations by providing backup and verification capabilities during resource transitions to ONTAP storage, allowing pre-migration Database Consistency Checks (DBCC) for SQL workloads and post-migration integrity validation to ensure data reliability. It orchestrates snapshots and clones for nondisruptive operations, integrating with ONTAP to protect databases and filesystems before cutover. Best practices for these migrations emphasize thorough pre-upgrade assessments, including verification of host OS interoperability via the NetApp Interoperability Matrix Tool, conversion of 32-bit aggregates to 64-bit, and workload characterization to size the target cluster appropriately. To achieve minimal downtime (typically under one hour for small-scale setups), administrators should prioritize copy-free transitions for HA pairs or incremental SnapMirror updates, followed by nondisruptive volume movements using ONTAP's DataMotion for Volumes. Engaging NetApp Professional Services is recommended for complex environments to plan scoping, testing, and validation.

Performance and Optimization

Key Metrics

NetApp FAS systems deliver performance tailored to secondary workloads such as tiering, backup, and cyber vaulting, with capabilities varying based on configurations. Efficiency metrics underscore FAS's data optimization features. NetApp guarantees a 4:1 storage efficiency ratio through inline deduplication and compression, ensuring effective capacity utilization without performance penalties. In ransomware recovery scenarios, FAS systems with immutable snapshots enable workload restoration within minutes, minimizing downtime compared to industry averages of days or weeks. Performance in hybrid FAS systems depends on the mix of HDDs and SSD caching via Flash Pool, making them suitable for cost-sensitive, capacity-heavy use cases. Current models like the FAS2750 and FAS2820 provide up to 1.2 PB raw capacity per HA pair, with NAS scale-out to 24 nodes, emphasizing efficient data management over peak IOPS.

Tuning and Best Practices

To optimize NetApp FAS performance, aggregates should be sized with RAID groups containing 12 to 20 disks for HDD-based configurations, keeping group sizes similar within an aggregate to maintain balanced I/O distribution. This approach supports capacities up to several petabytes per node while avoiding fragmentation, with best practices recommending that aggregates remain below 80-85% utilization to allow for rebalancing and growth. Quality of Service (QoS) policies enable workload isolation by applying throughput ceilings to limit IOPS or MBps for competing volumes, LUNs, or SVMs, preventing one workload from dominating system resources. For critical applications, throughput floors guarantee minimum performance levels, with non-shared policy groups recommended for strict isolation in multi-tenant environments. Key best practices include enabling inline compression on volumes in Flash Pool or HDD aggregates, which reduces write I/O to disk and provides immediate space savings without significant CPU overhead on FAS systems. Deploying Flash Pool (adding SSDs to HDD aggregates) improves read latency and can deliver substantial IOPS gains, such as up to 283% in hybrid HDD+SSD configurations compared to all-HDD setups, particularly for random read-heavy workloads. Ongoing monitoring via Active IQ Unified Manager is essential, using its Scale Monitor for alerts on disk utilization, resource pressure, and performance thresholds to proactively tune resources before issues arise. For troubleshooting, Adaptive QoS addresses bursty workloads by dynamically scaling limits based on volume size (for example, maintaining ratios like 6,144 IOPS/TB for expected performance), with absolute minimum settings ensuring baseline throughput during spikes. To achieve even I/O distribution, perform aggregate-level reallocation during off-peak hours, adding new RAID groups of equal size to existing ones and forcing data redistribution to balance load across shelves and disks.
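The adaptive QoS arithmetic above reduces to a one-line calculation; the 500-IOPS absolute minimum below is an assumed example value, not an ONTAP default:

```python
# Adaptive QoS scales the I/O ceiling with volume size (e.g., 6,144
# IOPS per TB), while an absolute minimum keeps a usable floor for
# small volumes during bursts.

IOPS_PER_TB = 6144

def adaptive_qos_limit(volume_tb, absolute_min_iops=500):
    # absolute_min_iops is an illustrative value, not an ONTAP default
    return max(absolute_min_iops, int(volume_tb * IOPS_PER_TB))

print(adaptive_qos_limit(4))      # 24576 -- limit scales with size
print(adaptive_qos_limit(0.05))   # 500   -- floor protects tiny volumes
```

Because the limit tracks allocated (or used) size, a volume that grows automatically earns a proportionally higher ceiling without an administrator editing the policy.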

Historical Development

Model Evolution

NetApp's FAS (Fabric-Attached Storage) systems originated in the early 1990s as part of the company's early focus on NFS filers. Founded in 1992, Network Appliance introduced its inaugural NFS server, dubbed the "Toaster," alongside the Data ONTAP operating system, establishing the foundational architecture for scalable file services that would evolve into the FAS line. These initial filers emphasized simplicity and efficiency for NAS environments, setting the stage for unified storage solutions. By the late 2000s, following the company's 2008 rebranding from Network Appliance, Inc. to NetApp, the mid-range lineup expanded with the FAS3000 series, including models like the FAS3160, which provided enhanced performance and scalability for enterprise workloads through modular designs supporting up to hundreds of terabytes. This period marked a shift toward broader adoption in data centers, influenced by growing demands for cost-effective, expandable storage after the 2008 economic downturn and NetApp's strategic acquisitions. A pivotal milestone occurred in 2010 with the release of clustered Data ONTAP 8.0, enabling FAS systems to scale out across multiple nodes in a non-disruptive manner, transforming standalone filers into cluster-capable architectures for larger deployments. In 2012, hybrid flash integration arrived via Flash Pool technology, which used SSDs as intelligent caching layers on HDD-based FAS arrays to boost I/O performance without full array replacement, addressing the rising need for accelerated mixed workloads. The FAS8000 series debuted in 2014, unifying SAN and NAS protocols with up to 2x the throughput of prior generations through multiprocessor Intel chipsets and NVRAM-accelerated designs, while introducing All-Flash FAS (AFF) variants to capitalize on the flash market surge. This era saw naming conventions evolve from distinct FAS (hybrid) and V-Series (virtualized third-party integration) lines to a consolidated FAS/AFF portfolio, simplifying management across hybrid and all-flash configurations.
The move away from 7-Mode culminated in Data ONTAP 8.3, announced in 2014, which dropped support for the legacy 7-Mode entirely and fully embraced clustered architectures. Further advancements in 2018 included end-to-end NVMe support with the AFF A800 array, delivering over 1.3 million IOPS and reducing latency for high-performance applications, in response to shifts toward faster protocols. In 2023, the ASA (All SAN Array) A-Series launched, optimizing block storage for mission-critical environments with guaranteed 99.9999% availability and simplified deployment for databases and virtualized workloads. In 2025, NetApp introduced the FAS50 for hybrid edge and secondary storage, offering up to 10.6 PB raw capacity in a compact 2U form factor, alongside the entry-level A20 in the ASA line, scaling to 734 TB for cost-sensitive block workloads. Throughout this progression, FAS capacity has expanded from approximately 100 TB in early models to 14.7 PB raw per high-availability pair in current flagships like the FAS90, driven by clustering and density improvements. These developments reflect broader market influences, including the proliferation of NVMe flash for speed, cloud-hybrid integrations for flexibility, and built-in security enhancements like ransomware protection to meet evolving data protection needs.

Limitations and Improvements

Early versions of NetApp FAS systems running Data ONTAP 7-Mode were constrained by limited scalability, supporting a maximum of two controllers configured as a high-availability (HA) pair, which restricted environments to effectively one active node with failover capability. Additionally, the initial implementation of the Write Anywhere File Layout (WAFL) file system incurred high write penalties due to parity calculations and short-write handling, often resulting in a 4-to-1 overhead for partial block writes. Flash integration, while introduced with Flash Cache for read acceleration in 2010 and Flash Pool for automated tiering in 2012, was initially limited in scope before 2014 and relied primarily on hard disk drives (HDDs) for primary storage, which constrained performance for random I/O workloads. These limitations were addressed through key enhancements in ONTAP. The introduction of clustered Data ONTAP 8 in 2010 enabled scale-out architectures, allowing up to 24 nodes in NAS configurations by merging the previous 7-Mode and GX cluster capabilities into a unified platform. In 2012, Flash Pool was launched as part of the Virtual Storage Tier, combining SSDs for caching with HDDs to accelerate access to frequently used data and reduce latency without full system replacement. Ransomware protection advanced with ONTAP 9.10.1 in 2021, incorporating Autonomous Ransomware Protection (ARP), which uses machine learning to detect anomalies in file activity and entropy and automatically creates protective snapshots. Ongoing developments continue to refine FAS resilience. Introduced in 2024, ONTAP's cyber vault feature provides immutable and indelible backups through air-gapped WORM (write once, read many) snapshots, ensuring data isolation from cyber threats with multi-administrator verification and strict access controls.
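The entropy signal ARP relies on can be illustrated with a simplified Shannon-entropy check (a sketch only; real ARP combines entropy with activity-rate and file-extension analytics):

```python
import math

# Encrypted output looks like random bytes (entropy approaching 8 bits
# per byte), while typical text or structured data scores much lower,
# so a sudden jump in write entropy is a ransomware indicator.

def shannon_entropy(data: bytes) -> float:
    counts = {}
    for b in data:
        counts[b] = counts.get(b, 0) + 1
    n = len(data)
    return -sum(c / n * math.log2(c / n) for c in counts.values())

def looks_encrypted(data: bytes, threshold=7.0) -> bool:
    return shannon_entropy(data) > threshold  # near 8 bits/byte is suspicious

plain = b"the quick brown fox jumps over the lazy dog " * 100
random_like = bytes(range(256)) * 16          # stand-in for ciphertext
print(looks_encrypted(plain))        # False
print(looks_encrypted(random_like))  # True -- trigger a protective snapshot
```

On a detection like the `True` case above, the protective action is cheap precisely because of WAFL's copy-on-write snapshots: capturing a pre-attack restore point is only a pointer copy.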
Single-site HA limitations, such as dependency on Fibre Channel infrastructure for cross-site redundancy, have been resolved via MetroCluster IP (MCC-IP), which uses Ethernet for synchronous replication and local failover in HA pairs, supporting up to two sites with simplified cabling. FAS hybrid systems balance cost and performance trade-offs compared to all-flash arrays, offering lower upfront expenses through HDD-SSD mixes but with potential I/O bottlenecks for intensive workloads; these are mitigated by ONTAP's intelligent tiering, which automatically moves cold data to lower-cost tiers or cloud object storage.

    NetApp uses AI-powered detection, autonomous protection, end-to-end encryption, immutable copies, and guaranteed data recovery to combat ransomware.Ransomware Detection To... · Netapp Firsts · Best-Protected Storage...Missing: NSE | Show results with:NSE
  77. [77]
    [PDF] TR-4705 NetApp MetroCluster: Solution architecture and design
    ... system ID (NVRAM ID) order. System ID is hardcoded and not changeable. You ... FAS system. The maximum number of. SSDs that can be RD2 partitioned is 48 ...
  78. [78]
    Required MetroCluster IP configuration components and naming ...
    Sep 18, 2025 · Existing MetroCluster IP configurations on FAS systems can be upgraded to ONTAP 9.4. Beginning with ONTAP 9.5, new MetroCluster IP ...
  79. [79]
    Expand a MetroCluster IP configuration - NetApp Docs
    Oct 10, 2025 · You can add four new nodes to the MetroCluster IP configuration as a second DR group. This creates an eight-node MetroCluster configuration.Creating Your File · Supported Platform... · Configuring Intercluster...
  80. [80]
    Configure ONTAP clusters in a MetroCluster IP ... - NetApp Docs
    Sep 10, 2025 · You must peer the clusters, mirror the root aggregates, create a mirrored data aggregate, and then issue the command to implement the MetroCluster operations.Creating Your File · Peering The Clusters · Creating A Cluster Peer...Missing: FAS | Show results with:FAS
  81. [81]
    Set up MetroCluster Tiebreaker or ONTAP Mediator for a ...
    Jul 31, 2025 · You can download and install on a third site either the MetroCluster Tiebreaker software, or, beginning with ONTAP 9.7, the ONTAP Mediator.
  82. [82]
    Overview of the Tiebreaker software - NetApp Docs
    Jun 1, 2023 · The NetApp MetroCluster Tiebreaker software checks the reachability of the nodes in a MetroCluster configuration and the cluster to determine whether a site ...
  83. [83]
    Refresh a four-node or an eight-node MetroCluster IP configuration ...
    Oct 31, 2025 · Beginning with ONTAP 9.13.1, you can upgrade the controllers and storage in an eight-node MetroCluster IP configuration by expanding the ...Missing: date | Show results with:date
  84. [84]
    [PDF] TR-4689: MetroCluster IP Solution Architecture and Design
    MetroCluster IP was introduced in ONTAP 9.3. With the ... ONTAP 9.7 adds support for MetroCluster IP without NetApp validated switches on some platforms.
  85. [85]
    [PDF] MetroCluster IP Solution architecture and design - NetApp
    • AFF A150, AFF A250, AFF C250, and FAS2750 utilize onboard ports. • AFF A400, AFF A700, FAS8300 and FAS9000 utilize a single network adapter. • AFF A800 ...<|separator|>
  86. [86]
    ONTAP Data Management Software - NetApp
    AFF C-Series. Capacity-optimized all-flash storage offering the industry's lowest-cost entry in capacity flash for unified storage.The Platform For The New... · Speed Apps And... · Unified Data Storage For...Missing: ASIC | Show results with:ASIC
  87. [87]
    TR-4872: NetApp ONTAP 9.8 Feature Overview
    Aug 9, 2024 · NetApp ONTAP 9.8 supports NAS/SAN protocols, S3 access, data protection, storage efficiencies, high availability, and security features.
  88. [88]
    ONTAP 9 release support - NetApp Docs
    Apr 11, 2024 · 9.11.1. July 2022 ; 9.10.1. January 2022 ; 9.9.1. June 2021 ; Note, If you are running an ONTAP version prior to 9.10.1, it is likely on Limited ...
  89. [89]
    What's new in ONTAP 9.17.1 - NetApp Docs
    Oct 13, 2025 · Learn about the new capabilities available in ONTAP 9.17.1. For details about known issues, limitations, and upgrade cautions in recent ONTAP 9 ...
  90. [90]
    NetApp Introduces Comprehensive Enterprise-Grade Data Platform ...
    Oct 14, 2025 · The combination of NetApp AFX with AI Data Engine provides the enterprise resilience and performance built and proven over decades by NetApp ...
  91. [91]
    Nondisruptive Operations with Clustered Data ONTAP
    Aug 12, 2013 · Clustered Data ONTAP enables nondisruptive operations by being resilient to failures and by allowing you to change your storage infrastructure without ...
  92. [92]
    Guarantee throughput with QoS overview in ONTAP - NetApp Docs
    Sep 5, 2025 · Beginning with ONTAP 9.5, you can specify an I/O block size for your application that enables a throughput limit to be expressed in both IOPS ...
  93. [93]
    What is Quality of Service (QoS) in ONTAP?
    Jul 28, 2025 · Quality-of-Service (QoS) is a way to throttle storage objects, but also includes performance monitoring of volumes with the QoS statistics commands.
  94. [94]
    Learn about managing local ONTAP snapshots - NetApp Docs
    Jul 30, 2025 · In ONTAP 9.4 and later, a FlexVol volume can contain up to 1023 snapshots. In ONTAP 9.3 and earlier, a volume can contain up to 255 snapshots.
  95. [95]
    Use System Manager to access an ONTAP cluster - NetApp Docs
    Oct 6, 2025 · You can use a cluster management network interface (LIF) or node management network interface (LIF) to access System Manager.
  96. [96]
    Learn about the ONTAP command-line interface - NetApp Docs
    Apr 14, 2025 · The ONTAP CLI is a command-based interface where commands are entered at the storage system prompt, represented as cluster_name::> and results ...
  97. [97]
    Create ONTAP cluster peer relationships - NetApp Docs
    Jul 10, 2025 · Create peer relationships by adding intercluster network interfaces, generating a passphrase, and initiating peering in the local cluster using ...
  98. [98]
    None
    ### Summary of WAFL Features from File System Design PDF
  99. [99]
    [PDF] Scalable Write Allocation in the WAFL File System - NetApp
    The WAFL write allocator, which is responsible for assigning blocks on persistent storage to dirty data in a way that maximizes write throughput to the storage ...
  100. [100]
    Deduplication - NetApp Docs
    Jun 27, 2024 · Deduplication reduces the amount of physical storage required for a volume (or all the volumes in an AFF aggregate) by discarding duplicate blocks.<|control11|><|separator|>
  101. [101]
    7-Mode Transition Documentation - NetApp Docs
    The 7-Mode Transition documentation includes information to help you install and setup the 7-Mode Transition Tool, transition data using SnapMirror commands.
  102. [102]
    7-Mode Transition Tool - NetApp Support Site
    The 7-Mode Transition Tool enables you to transition 7-Mode stand-alone volumes, volume SnapMirror relationships, and configurations to clustered Data ONTAP ...
  103. [103]
    Migrating data and configuration from 7-Mode volumes - NetApp Docs
    Mar 25, 2021 · To migrate volumes or a volume SnapMirror relationship by using the 7-Mode Transition Tool, you must first configure projects, start a baseline copy, and ...
  104. [104]
    Transition overview - NetApp Docs
    Aug 19, 2021 · Be sure to consult the current 7-Mode Transition Tool Release Notes for the latest information about supported target releases and known issues.
  105. [105]
    [PDF] TR-4052 Successfully Transitioning to Clustered Data ONTAP
    The 7-Mode Transition Tool (7MTT) enables customers to migrate their 7-Mode data from the. NetApp Data ONTAP® operating system to clustered Data ONTAP 8.3.2 and ...
  106. [106]
    What are the ONTAP Software Version Support dates
    Aug 18, 2025 · Answer ; ONTAP · 9.12.0 ; ONTAP · 9.11.1, 31-Jul-2025, 31-Jul-2027, 31-Jul-2030.Missing: Series | Show results with:Series
  107. [107]
    ONTAP FLI online migration workflow summary - NetApp Docs
    Jul 23, 2025 · To perform an FLI online migration, you should prepare your host, create a LUN import relationship, map the foreign LUN to your ONTAP storage ...
  108. [108]
    Migration types supported by Foreign LUN Import - NetApp Docs
    Dec 6, 2021 · FLI supports four main types of migration workflows: online, offline, transition, and automated. Your choice of which workflow to use depends on your ...
  109. [109]
    Best practices for Foreign LUN Import migration - NetApp Docs
    Jul 20, 2021 · NetApp strongly recommends a professional services or partner professional services, engagement to scope and plan the migration as well as to ...
  110. [110]
    Migrate resources to NetApp storage system
    Apr 3, 2024 · You can migrate your resources to the NetApp storage system or from one NetApp LUN to another NetApp LUN using either the SnapCenter graphical user interface ( ...<|separator|>
  111. [111]
    Tech refresh of storage system - NetApp Docs
    Feb 20, 2025 · In SnapCenter, create a backup of the resources whose storage is migrated. A new backup is necessary for SnapCenter to identify the latest ...
  112. [112]
    AFF A900 All-Flash Storage Array, NetApp AFF A-Series
    Maximum SSDs, 5,760. Maximum effective capacity, 702.7PB. Per-system specifications (active-active dual controller). Controller form factor, 8U. PCIe expansion ...
  113. [113]
    [PDF] Efficiency Guarantee - NetApp
    The most effective guarantee in the industry. Period. What is the Efficiency Guarantee? • A risk-free approach to more storage, more value, and more efficiency.
  114. [114]
    Ransomware Protection Myths Debunked - NetApp
    NetApp enables you to restore workloads within minutes – not days or weeks – so your business stays on track with minimal disruption. Even in the face of the ...<|separator|>
  115. [115]
    All-Flash Storage: Accelerated Performance, Yes, but Is It Enterprise ...
    Aug 5, 2015 · In a recent SPC-1 benchmark, the All Flash FAS8080EX delivered 685,000 SPC-1 IOPS, ranking in the top 5.
  116. [116]
    NVMe configuration, support, and limitations - NetApp Docs
    Jul 15, 2025 · FC-NVMe uses the same physical setup and zoning practice as traditional FC networks but allows for greater bandwidth, increased IOPs and reduced latency than ...
  117. [117]
    Boost performance with NetApp AFF A-Series and C-Series
    Nov 11, 2024 · We introduced the AFF A70, AFF A90, and AFF A1K systems last May, and now we're adding the new NetApp AFF A20, AFF A30, and AFF A50 systems.
  118. [118]
    Solved: aggregate recommended free space? - NetApp Community
    Netapp would typically recommend that an Aggregate is not more than 80% - 85% full. You are able resize Flexvols on the fly if the space is not in use.
  119. [119]
    Enable inline data compaction for FAS systems - NetApp Docs
    Mar 19, 2025 · Enable inline data compaction on FAS systems with Flash Pool (hybrid) aggregates or HDD aggregates at the volume or aggregate level.
  120. [120]
    [PDF] Demartek Evaluation of NetApp Hybrid Array with Flash Pool ...
    May 1, 2014 · The goal of this test is to show the improvements in latency while maintaining a steady I/O rate in terms of IOPS and bandwidth. The second test ...
  121. [121]
    [PDF] TR-4621: Active IQ Unified Manager Best Practices Guide - NetApp
    May 3, 2021 · NetApp Active IQ Unified Manager is the most comprehensive product for managing and monitoring performance, capacity, and health in ONTAP ...
  122. [122]
    Use adaptive QoS policy groups in ONTAP - NetApp Docs
    Jan 17, 2025 · You can use an adaptive QoS policy group to automatically scale a throughput ceiling or floor to volume size, maintaining the ratio of IOPS to TBs|GBs as the ...
  123. [123]
    What are the best practices for adding disks to an existing aggregate?
    Jan 28, 2025 · For best performance, it is advisable to add a new RAID group of equal size to existing RAID groups. A forced reallocate must be done to evenly distribute data.
  124. [124]
    Mid-line FAS 3000 from NetApp - The Register
    Tue 4 Nov 2008 // 13:34 UTC. NetApp is introducing a FAS 3160 in the middle of its mid-range FAS 3000 storage array line. As rumoured back in ...
  125. [125]
    Get Future Ready with Data ONTAP 8 - NetApp Community
    Oct 1, 2010 · It was introduced in September 2009 and reached general availability (GA) status in March 2010. The latest version, Data ONTAP 8.0.1, was ...Missing: date | Show results with:date
  126. [126]
    NetApp dips a toe into Flash Pool - The Register
    Apr 20, 2012 · NetApp has revealed plans to enhance its flash offerings and offer a single tier, flash cached array design with a sexy new name.
  127. [127]
    NetApp slims product family with launch of FAS8000 series
    Feb 19, 2014 · The FAS8000 range of NetApp filers will eventually replace the FAS3000 and FAS6000, leaving just the FAS8000 and the entry-level FAS2000 series.
  128. [128]
    Broadcom, NetApp & SUSE Announce Production Availability of
    May 15, 2018 · NetApp's new A800 platform provides NVMe-attached solid ... support for NVMe over Fibre Channel to all of the workloads that we support.
  129. [129]
    NetApp adds cheaper options to all-flash block storage line
    Feb 11, 2025 · The A20 is now the smallest model in storage capacity and cost for the ASA A-Series. It scales up to 734 TB in raw capacity and starts at ...
  130. [130]
  131. [131]
    01 03 7 Mode Hardware Architecture | PDF - Scribd
    7‐Mode Scalability Limitations. A maximum of two FAS controllers can be configured as a High Availability HA Pair and managed as a paired system. Disks owned ...
  132. [132]
    [PDF] The Write-Anywhere-File- Layout (WAFL) - NetApp Community
    With a typical mix of NFS operations, WAFL can store more than 1000 operations per megabyte of NVRAM.Missing: advantages | Show results with:advantages
  133. [133]
    [PDF] Flash Pool Design and Implementation Guide - NetApp Community
    A Flash Pool is built from a Data ONTAP aggregate in a two-step process, described in section 3,1, "7-. Mode CLI." Essentially it is the addition of SSDs into ...
  134. [134]
    BLOG: What You Need to Know About NetApp Anti-Ransomware Suite
    Mar 15, 2022 · With Autonomous Ransomware Protection, ONTAP 9.10 users can take advantage of file system analytics and a file activity entropy calculator for ...
  135. [135]
    Cyber vault frequently asked questions - NetApp Docs
    Jul 31, 2025 · Backup copies of your data are both immutable and indelible. Strict and separate access controls, multi-administrator verification and multi ...
  136. [136]
    Install a MetroCluster IP configuration - NetApp Docs
    Sep 24, 2025 · Each MetroCluster site consists of storage controllers configured as an HA pair. This allows local redundancy so that if one storage controller ...