
iSCSI

iSCSI (Internet Small Computer Systems Interface) is a transport protocol for the Small Computer System Interface (SCSI) that enables the encapsulation and transmission of SCSI commands, data, and status over TCP/IP networks, allowing block-level access to storage devices using standard IP infrastructure such as Ethernet. Developed to provide an interoperable solution for storage area networks (SANs), iSCSI maps the SCSI architecture model onto TCP, supporting reliable delivery of I/O operations between initiators (client systems) and targets (storage servers or devices). This facilitates cost-effective, scalable storage connectivity over existing networks without requiring specialized hardware like Fibre Channel. Standardized by the Internet Engineering Task Force (IETF), iSCSI was initially defined in RFC 3720 in April 2004 and later consolidated and updated in RFC 7143 in April 2014 to incorporate errata, clarifications, and enhancements while maintaining backward compatibility. Key features include a login phase for parameter negotiation, support for multiple connections per session for performance and redundancy, error detection and recovery mechanisms aligned with SCSI standards, and optional security extensions like CHAP authentication and IPsec. Widely adopted in enterprise and cloud environments, iSCSI enables storage consolidation, disaster recovery, and remote replication by leveraging high-speed Ethernet advancements up to 100 Gbps and beyond.

Fundamentals

Definition and Purpose

iSCSI, or Internet Small Computer Systems Interface, is a transport protocol for the Small Computer System Interface (SCSI) that operates on top of TCP/IP networks. It was originally defined in RFC 3720 as a proposed standard by the Internet Engineering Task Force (IETF) in April 2004 and later consolidated and updated in RFC 7143 in April 2014. The protocol encapsulates SCSI commands, data, and status within iSCSI protocol data units (PDUs) for transmission over standard IP networks, ensuring compatibility with the SCSI Architecture Model. The primary purpose of iSCSI is to enable initiators, such as servers, to access remote storage targets as if they were locally attached block devices, thereby facilitating block-level storage networking over Ethernet without requiring dedicated storage area network (SAN) hardware. This approach contrasts with traditional SCSI, which relies on direct physical connections or specialized transports like Fibre Channel, by leveraging ubiquitous TCP/IP to extend storage access across local area networks (LANs) or wide area networks (WANs). Key benefits of iSCSI include its cost-effectiveness, as it utilizes existing Ethernet infrastructure and avoids the expense of Fibre Channel switches and host bus adapters, making it suitable for small to medium-sized enterprises. It also offers scalability for large data centers by supporting high-speed Ethernet links and integration with virtualized environments, where multiple virtual machines can share remote storage resources efficiently. Historically, iSCSI originated from a proof of concept developed by IBM in 1998, with the initial draft submitted to the IETF in 2000 and approved as a proposed standard in February 2003.

Protocol Architecture

The iSCSI protocol employs a layered architecture that maps SCSI operations onto TCP/IP networks, enabling block-level access over IP. At the core is the SCSI layer, which generates and processes Command Descriptor Blocks (CDBs) for commands and responses in compliance with the SCSI Architecture Model as defined in SAM-2. The iSCSI layer then encapsulates these elements into Protocol Data Units (PDUs) suitable for transmission, handling tasks such as session management, command sequencing, and error recovery. Underlying this is the TCP layer, which provides reliable, connection-oriented delivery of the PDUs without awareness of their boundaries or iSCSI semantics. This layering ensures that iSCSI maintains SCSI semantics while leveraging the ubiquity of TCP/IP for network transport.

Central to the iSCSI layer are PDUs, which structure all communications between initiators and targets. Each PDU begins with a Basic Header Segment (BHS) of 48 bytes, including key fields such as the opcode (specifying the PDU type, e.g., 0x01 for SCSI Command or 0x21 for SCSI Response) and the Initiator Task Tag (a unique identifier for tracking individual tasks across the session). Following the BHS are optional Additional Header Segments (AHS) for extended information and one or more Data Segments, which carry data payloads, text parameters, or other content, always padded to 4-byte boundaries for alignment. For login-related PDUs, the structure incorporates specific formats during phases like security negotiation (for authentication parameters) and operational negotiation (for session settings such as maximum connections or error recovery levels).

Session establishment in iSCSI occurs through a multi-phase process on TCP connections, distinguishing between normal sessions for full operations and discovery sessions limited to target discovery. Normal sessions begin with a leading login connection using a Target Session Identifying Handle (TSIH) of 0, progressing through three phases: security negotiation to authenticate the parties and establish security parameters, login operational negotiation to agree on session-wide settings, and the full feature phase to enable command execution. Discovery sessions, by contrast, restrict operations to SendTargets commands and similar discovery functions, omitting full data transfer capabilities. Initiators and targets collaboratively negotiate these phases to form a session, which may span multiple connections for enhanced reliability.

Error handling in iSCSI emphasizes detection and recovery at the protocol level, with support for optional digests to verify PDU components. Header and data digests use CRC32C (or none) to detect corruption during transit, applied independently to the BHS/AHS and data segments. Recovery mechanisms address connection failures and PDU loss through task reassignment (transferring active tasks to a new connection), Selective Negative Acknowledgment (SNACK) requests for retransmitting lost PDUs, and hierarchical recovery levels including within-command recovery for partial errors and session-wide recovery via logout. These features ensure robust operation over potentially unreliable networks while preserving task integrity.
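To make the BHS layout concrete, the following minimal Python sketch packs the 48-byte Basic Header Segment of a SCSI Command PDU; the field offsets follow RFC 7143, while the helper name and example values are illustrative only.

```python
import struct

def scsi_command_bhs(lun: int, itt: int, cmdsn: int, expstatsn: int,
                     cdb: bytes, edtl: int, read: bool = True) -> bytes:
    """Pack the 48-byte Basic Header Segment of a SCSI Command PDU (opcode 0x01)."""
    opcode = 0x01                                   # SCSI Command
    flags = 0x80 | (0x40 if read else 0x20) | 0x01  # F bit, R or W bit, ATTR=Simple
    return struct.pack(
        ">BBHB3s8sIIII16s",
        opcode, flags, 0,                  # opcode, flags, reserved bytes 2-3
        0,                                 # TotalAHSLength: no AHS in this sketch
        (0).to_bytes(3, "big"),            # DataSegmentLength: no immediate data
        lun.to_bytes(8, "big"),            # 64-bit LUN (bytes 8-15)
        itt,                               # Initiator Task Tag (bytes 16-19)
        edtl,                              # Expected Data Transfer Length
        cmdsn, expstatsn,                  # command and expected status sequence numbers
        cdb.ljust(16, b"\x00"),            # SCSI CDB, zero-padded to 16 bytes
    )

# Example: a READ(10) command for one 512-byte block from LUN 0.
bhs = scsi_command_bhs(lun=0, itt=1, cmdsn=1, expstatsn=1,
                       cdb=bytes([0x28, 0, 0, 0, 0, 0, 0, 0, 1, 0]), edtl=512)
assert len(bhs) == 48
```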

Core Components

Initiators

iSCSI initiators serve as client-side agents on host servers that originate SCSI commands to remote targets, encapsulating them within TCP/IP packets to access storage over IP networks. These components map iSCSI logical units (LUNs) presented by targets as local block devices, such as /dev/sdX in Linux systems, enabling applications to treat remote storage as if it were directly attached. By establishing sessions via a login phase, initiators facilitate reliable data transfer while handling identification through unique iSCSI names and initiator session identifiers (ISIDs).

Initiators are available in software and hardware forms, with software variants integrated into operating systems to leverage host CPU resources for protocol processing, making them the most common deployment method. Hardware initiators, typically implemented as host bus adapters (HBAs) or TCP offload engines (TOEs), offload iSCSI and TCP/IP processing from the CPU to dedicated silicon for reduced overhead. A prominent example of a software initiator is the Microsoft iSCSI Initiator service, a built-in Windows component that manages connections to iSCSI targets without requiring additional software.

In operation, initiators issue SCSI read and write commands via Command Descriptor Blocks (CDBs) embedded in SCSI-Command Protocol Data Units (PDUs), using initiator task tags and command sequence numbers (CmdSN) to ensure ordered delivery. They manage sessions by negotiating parameters during login, such as maximum burst length (default 262144 bytes), and support multiple connections per session for enhanced throughput and redundancy through multi-path I/O (MPIO). Error recovery spans hierarchical levels from basic session restart (level 0) to full connection recovery (level 2), incorporating mechanisms like selective negative acknowledgments (SNACKs) for retransmissions, task reassignment, and failover across portal groups to maintain session continuity during network disruptions.

Performance of initiators varies by type: software implementations introduce CPU overhead for encapsulation and error handling, potentially consuming 10-20% of CPU cycles at high throughput, whereas hardware offloads minimize this to under 10% while providing lower latency for latency-sensitive workloads. To optimize availability and performance, initiators integrate with multipathing frameworks, such as Device Mapper Multipath in Linux, which aggregates multiple paths into a single logical device for load balancing and failover.
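As a toy illustration of the sequencing described above—not any real initiator's API—the sketch below shows how an initiator-side session might allocate the Initiator Task Tag and CmdSN values stamped into each outgoing SCSI-Command PDU.

```python
import itertools

class InitiatorSession:
    """Toy model of initiator-side command sequencing (no real I/O)."""

    def __init__(self):
        self._itt = itertools.count(1)   # unique Initiator Task Tag per task
        self.cmd_sn = 1                  # session-wide, advanced per command

    def allocate(self) -> tuple[int, int]:
        """Return the (ITT, CmdSN) pair for the next non-immediate command."""
        itt, sn = next(self._itt), self.cmd_sn
        self.cmd_sn = (self.cmd_sn + 1) % 2**32   # 32-bit serial arithmetic
        return itt, sn

session = InitiatorSession()
print(session.allocate())   # (1, 1)
print(session.allocate())   # (2, 2)
```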

Targets and Logical Units

In iSCSI, a target serves as the server-side endpoint that exposes storage resources to initiators over IP networks using TCP connections. It receives SCSI commands encapsulated within iSCSI Protocol Data Units (PDUs), executes the associated I/O operations on underlying storage, and returns responses or status information to the initiator. Targets operate primarily in the Full Feature Phase following successful login negotiation, managing tasks such as command ordering via Command Sequence Numbers (CmdSN) and ensuring connection allegiance where related PDUs stay on the same connection. Each target is uniquely identified by an iSCSI Qualified Name (IQN), a globally unique string formatted according to RFC 3721, such as iqn.2001-04.com.example:storage:diskarrays-sn-a8675309, which combines a registration date, naming authority, and vendor-specific identifier. Targets support multiple network portals—combinations of IP addresses and TCP ports—grouped into portal groups to enable load balancing, failover, and multi-connection sessions for improved performance and reliability.

iSCSI targets can be implemented in hardware or software configurations. Hardware targets are typically integrated into enterprise storage area network (SAN) arrays, where dedicated controllers handle protocol processing and storage exposure at high speeds. Software targets, in contrast, run on general-purpose servers using operating system tools to emulate storage providers; for example, the targetcli administration shell in Linux environments allows configuration of iSCSI targets backed by local devices or file I/O on commodity hardware. These implementations process incoming SCSI-Command PDUs (opcode 0x01), which include details like the Expected Data Transfer Length for I/O size, and respond with Data-In PDUs for reads or Ready to Transfer (R2T) PDUs to solicit data for writes.

Logical units (LUs) represent the fundamental addressable storage entities within an iSCSI target, corresponding to SCSI logical units that appear as block devices to initiators. Each LU is identified by a 64-bit Logical Unit Number (LUN), formatted per the SCSI Architecture Model (SAM) and included in PDUs such as the SCSI Command PDU (bytes 8-15) to specify the target LU for operations. LUNs are mapped to physical or virtual storage volumes on the target, enabling abstraction of underlying hardware like disks or arrays, and access is scoped to the target's IQN combined with the LUN, as in iqn.1993-08.org.debian:01:abc123/lun/0. LUN masking restricts visibility and access to authorized initiators based on their IQNs, while mapping associates LUNs with specific backend storage resources to control data placement and availability.

Target operations center on handling I/O workflows initiated by commands from iSCSI initiators. Upon receiving a SCSI command, the target processes it in CmdSN order, transferring data bidirectionally—sending output via Data-In PDUs for reads or requesting input via R2T PDUs for writes—before concluding with a SCSI Response PDU containing status, such as GOOD or CHECK CONDITION. Task management functions, like ABORT TASK or CLEAR TASK SET, allow termination of specific LUN operations, with the LUN field specifying the affected unit. Many iSCSI targets support advanced features at the LUN level, including thin provisioning to allocate storage on demand for efficient capacity utilization and snapshots to create point-in-time copies for backup or recovery. These capabilities enhance flexibility in environments like virtualized data centers, where LUNs may represent thinly provisioned volumes over-allocated relative to physical backing store.
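The following sketch models LUN masking as a plain lookup table; the IQNs, paths, and helper are hypothetical, but the rule it encodes—an initiator sees only the LUNs whose access list names its IQN—matches the masking behavior described above.

```python
# Hypothetical target configuration: LUNs keyed by target IQN, each with a
# backing store and the set of initiator IQNs allowed to access it.
TARGETS = {
    "iqn.2001-04.com.example:storage:disk1": {
        0: {"backing": "/var/lib/iscsi/lun0.img",
            "allowed": {"iqn.1993-08.org.debian:01:abc123"}},
        1: {"backing": "/var/lib/iscsi/lun1.img",
            "allowed": {"iqn.1991-05.com.microsoft:host42"}},
    },
}

def visible_luns(target_iqn: str, initiator_iqn: str) -> list[int]:
    """Return the LUN numbers the given initiator may see on the target."""
    return sorted(lun for lun, cfg in TARGETS.get(target_iqn, {}).items()
                  if initiator_iqn in cfg["allowed"])

print(visible_luns("iqn.2001-04.com.example:storage:disk1",
                   "iqn.1993-08.org.debian:01:abc123"))   # [0]
```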

Discovery and Connectivity

Addressing Mechanisms

iSCSI employs standardized naming conventions to uniquely identify initiators and targets across IP networks, ensuring persistent and location-independent identification. The primary format is the iSCSI Qualified Name (IQN), structured as iqn.yyyy-mm.reversed-domain:unique-id, where yyyy-mm denotes the year and month of domain registration, the reversed domain follows standard DNS conventions (e.g., com.example), and the unique identifier is vendor-specific (e.g., iqn.2001-04.com.example:storage:diskarrays-sn-a8675309). Alternatively, the EUI-64 format uses eui. followed by a 16-hex-digit IEEE EUI-64 identifier (e.g., eui.02004567A425678D), providing a compact alias for nodes based on hardware or software identifiers. These names are globally unique, permanent, and not tied to specific hardware, with optional aliases for human-readable reference.

Portal addressing facilitates endpoint connectivity by specifying targets via IP address and port, with the default port being 3260 for iSCSI sessions. The TargetAddress parameter in login operations supports formats such as domain names (e.g., example.com:3260,1), IPv4 addresses (e.g., 10.0.1.1:3260,1), or bracketed IPv6 addresses (e.g., [2001:db8::1]:3260,1), optionally including a comma-separated portal group tag for session coordination. This addressing scheme enables initiators to establish TCP connections to targets over standard IP networks, abstracting SCSI commands into iSCSI Protocol Data Units (PDUs).

Connection management in iSCSI organizes multiple IP-address/port combinations into portal groups, identified by a 16-bit portal group tag (0-65535), allowing sessions to span several network portals while maintaining consistent SCSI logical unit access. During login, initiators select routes based on discovered or configured target addresses, with targets returning the servicing portal group tag in the initial response to ensure session affinity. Redirection occurs if a target issues a login response with status class 0101h (Redirect), providing an alternative TargetAddress (e.g., omitting the portal group tag in redirects) to guide the initiator to another portal for load balancing or failover. This mechanism supports multiple connections per session within the same portal group, enhancing reliability without requiring hardware-specific adaptations.

Initial addressing security integrates with authentication protocols during login, where CHAP provides in-band verification of initiator and target identities using directional secrets, ensuring secure name resolution and connection establishment. For broader protection, IKEv2 enables IPsec encapsulation, supporting identification types like ID_IPV6_ADDR to secure addressing in dual-stack environments. iSCSI supports both IPv4 and IPv6 natively over TCP, with dual-stack compatibility allowing seamless transitions in addressing formats from early specifications. RFC 3720 laid the foundation for IP-agnostic transport, while RFC 7143 refined IPv6 integration, mandating bracketed notation in TargetAddress and IKE identification for modern networks, evolving from IPv4-centric examples to full IPv6 interoperability without protocol changes.
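A small sketch, assuming Python and the TargetAddress formats quoted above, of how an initiator might split a received value into host, port, and portal group tag (bracketed IPv6 literals included); the function name is illustrative.

```python
def parse_target_address(value: str) -> tuple[str, int, int | None]:
    """Split 'host[:port][,tag]' into (host, port, portal_group_tag)."""
    addr, _, tag = value.partition(",")
    if addr.startswith("["):                      # bracketed IPv6 literal
        host, _, rest = addr[1:].partition("]")
        port = int(rest.lstrip(":") or 3260)      # default iSCSI port
    else:
        host, _, port_s = addr.partition(":")
        port = int(port_s or 3260)
    return host, port, (int(tag) if tag else None)

for example in ("example.com:3260,1", "10.0.1.1:3260,1", "[2001:db8::1]:3260,1"):
    print(parse_target_address(example))
```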

iSNS Protocol

The Internet Storage Name Service (iSNS) is an IETF standard defined in RFC 4171 that serves as a discovery and registration service for iSCSI and related storage devices on IP networks, enabling automated discovery and management akin to the Domain Name System (DNS) but tailored for storage resources. It allows initiators to locate available targets dynamically without prior manual configuration of all device details, facilitating integration of iSCSI initiators, targets, and management nodes into a centralized database. As a client-server protocol, iSNS operates over TCP (mandatory) or UDP (optional), using the default port 3205 for communications between iSNS servers and clients.

Key functions of iSNS include registration, where targets register their iSCSI Qualified Names (IQNs) and portal addresses (IP address and port combinations) with the iSNS server using Device Attribute Registration (DevAttrReg) messages; discovery, where initiators query the server via Device Attribute Query (DevAttrQry) or Device Get Next (DevGetNext) messages to retrieve lists of available targets and their attributes; and state change notifications (SCNs), which alert registered clients to dynamic events such as target availability changes or failover scenarios through SCN messages. These notifications support timely updates, with message attributes encoded in a Tag-Length-Value (TLV) format covering items like entity identifiers and portal details. For example, an SCN might notify clients of an object addition or removal, enabling seamless session management in storage networks.

iSNS offers benefits such as reduced manual configuration in large-scale environments by centralizing device information and automating target discovery, which simplifies deployment compared to static setups. However, it is optional for iSCSI implementations, with alternatives including the Service Location Protocol (SLP) per RFC 2608, static configuration of target addresses, or the SendTargets method for direct queries. Security considerations in iSNS, as outlined in RFC 4171, address threats like unauthorized access and message replay through recommended IPsec ESP (a SHOULD per RFC 4171) for integrity and authentication (with optional confidentiality), timestamps in messages, and support for digital signatures or certificates in stronger deployments. As of 2025, while iSNS remains in use in various storage systems such as NetApp ONTAP and other storage fabrics, Microsoft has deprecated iSNS support as of Windows Server 2025, recommending the Server Message Block (SMB) feature as an alternative for similar functionality.
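As an illustration of the TLV encoding mentioned above, this sketch packs a single iSNS attribute; the 4-byte tag and length fields and the 4-byte value padding follow RFC 4171, while the tag shown (32, the iSCSI Name attribute) and the IQN are example data.

```python
import struct

def isns_tlv(tag: int, value: bytes) -> bytes:
    """Encode one iSNS attribute as Tag-Length-Value, value padded to 4 bytes."""
    padded = value + b"\x00" * (-len(value) % 4)  # pad to a 4-byte boundary
    return struct.pack(">II", tag, len(padded)) + padded

# iSCSI Name attribute (tag 32 in RFC 4171) carrying a null-terminated IQN.
attr = isns_tlv(32, b"iqn.2001-04.com.example:storage:disk1\x00")
print(len(attr), attr[:8].hex())
```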

Deployment Features

Network Booting

Network booting with iSCSI allows diskless clients to load and execute operating systems from remote storage devices over an IP network, treating the remote iSCSI logical unit number (LUN) as a local block device during the boot sequence. This process integrates with standard network boot mechanisms like the Preboot Execution Environment (PXE), where the client's firmware—either BIOS or UEFI—initiates the connection to an iSCSI target. The initiator, embedded in the firmware or loaded via PXE, establishes an iSCSI session to access the bootable LUN containing the operating system image.

The boot process begins with the client broadcasting a DHCP request to obtain network configuration, including the IP address of the boot server and details for locating the iSCSI target. In PXE-enabled setups, the DHCP server responds with option 67 specifying the boot file name, which may chainload an enhanced boot loader like iPXE to handle iSCSI-specific operations. Once network parameters are acquired, the client uses additional DHCP options—such as vendor-specific option 43 or the iSCSI root path in option 17 (format: iscsi:<servername>:<protocol>:<port>:<LUN>:<targetname>)—to identify the iSCSI target. If details are incomplete, the client queries a discovery service like iSNS or SLP to resolve the target name to an IP address and port.

Following discovery, the iSCSI initiator in the firmware logs into the target using the obtained credentials, establishing a session over TCP. The firmware then reads the LUN as a block device, loading the boot loader and mounting the root filesystem to continue the operating system startup. For UEFI systems, the process aligns with EFI boot services, while BIOS uses INT13h extensions to present the remote disk; multiple paths can be handled by prioritizing interfaces based on configuration, allowing failover if the primary path fails. In advanced setups, chainloading via iPXE enables scripting for dynamic target selection or authentication, such as CHAP, before passing control to the OS loader.

Key requirements include firmware support for iSCSI, such as Intel iSCSI Remote Boot integrated into the network adapter or UEFI firmware, and an iSCSI target exporting a bootable LUN formatted with a compatible partition scheme (e.g., GPT for UEFI). The network must provide reliable connectivity, with the target configured to allow initiator access; no local storage is needed on the client, though fallback options may be provisioned.

Common use cases include stateless computing environments, where multiple clients boot identical OS images from a central target for simplified management and rapid deployment, and diskless workstations in educational or lab settings to reduce hardware costs and enable centralized updates. For instance, a single 40 GB master image can boot hundreds of clients using differencing virtual hard disks, saving over 90% on storage compared to local duplicates.

Challenges arise in wide area network (WAN) booting due to increased latency from geographic distance and potential packet loss, which can prolong the initial session establishment and OS loading. This is mitigated by enabling jumbo frames (MTU up to 9000 bytes) end-to-end to reduce overhead and improve throughput, though all network components must support consistent MTU sizes to avoid fragmentation. Local area network (LAN) deployments with 10 GbE or higher typically avoid these issues, emphasizing dedicated iSCSI VLANs for optimal performance.
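A minimal sketch, assuming Python, of parsing the DHCP option 17 root path format quoted above (RFC 4173); empty fields fall back to the defaults the RFC specifies, the LUN field is hexadecimal, and IPv6 server names containing colons are not handled here.

```python
def parse_iscsi_root_path(root_path: str) -> dict:
    """Parse 'iscsi:<servername>:<protocol>:<port>:<LUN>:<targetname>'."""
    assert root_path.startswith("iscsi:")
    server, proto, port, lun, target = root_path[len("iscsi:"):].split(":", 4)
    return {
        "server": server,
        "protocol": int(proto or 6),    # 6 = TCP, the default per RFC 4173
        "port": int(port or 3260),      # default iSCSI port
        "lun": int(lun or "0", 16),     # LUN is hexadecimal in the root path
        "target": target,               # IQN; may itself contain colons
    }

print(parse_iscsi_root_path(
    "iscsi:192.0.2.10::3260:0:iqn.2001-04.com.example:boot"))
```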

Configuration Basics

Configuring an iSCSI initiator and target begins with assigning unique iSCSI Qualified Names (IQNs) to each node, defining network portals as IP address and TCP port combinations (typically port 3260), and setting up authentication secrets using the Challenge-Handshake Authentication Protocol (CHAP). On the target side, administrators create IQNs and associate them with portal groups, which manage access to logical unit numbers (LUNs), while enabling CHAP by specifying usernames and secrets (at least 96 bits recommended for security without IPsec, with implementations required to support up to 128 bits and potentially longer) for one-way or mutual authentication. For the initiator, the IQN is defined in a configuration file such as /etc/iscsi/initiatorname.iscsi, and CHAP credentials are entered in /etc/iscsi/iscsid.conf to match the target's settings.

Practical setup on Linux systems utilizes the iscsiadm utility for discovery and login operations. Discovery employs the SendTargets method via commands like iscsiadm --mode discoverydb --type sendtargets --portal <target-ip>:3260, which queries the target for available IQNs and portals without establishing a full session. Subsequent login is performed with iscsiadm -m node -T <target-iqn> -p <target-ip>:3260 --login, establishing a persistent session that can be automated at boot by marking the node for automatic startup in the open-iscsi database.

During the login phase, iSCSI negotiates session parameters using text key-value pairs to ensure interoperability and optimal performance; a sketch of the resolution rules appears below. Key parameters include MaxConnections, which specifies the maximum number of concurrent connections per session (default 1, range 1-65535, negotiated to the minimum value); HeaderDigest and DataDigest, which enable optional CRC32C checksumming for integrity verification (default None, negotiated per connection); and ErrorRecoveryLevel, which defines recovery capabilities (0 for none, 1 for within-connection recovery like retransmissions, and 2 for full session-level task reassignment across connections).

iSCSI supports multipathing through extensions like Multi-Path I/O (MPIO) to enhance redundancy and performance across multiple network paths. In Linux environments, the Device Mapper Multipath (DM-Multipath) subsystem aggregates paths to an iSCSI LUN into a single device, using policies such as round-robin for load balancing I/O across active paths. Persistent bindings ensure consistent LUN mapping by configuring multipath.conf with device-specific aliases, WWIDs, and path priorities, preventing device name changes on reboot and enabling failover without data disruption.

Monitoring iSCSI sessions involves tools to track performance metrics and diagnose issues. The iscsiadm command provides session statistics with --stats, reporting transmitted and received byte counts, digest errors, and timeout counts for active connections. For troubleshooting portal failures, administrators use iscsiadm -m session to verify connection states and manually trigger failovers with --logout and --login on alternate portals, ensuring quick recovery in multipathed setups.
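A toy sketch of the text-key resolution rules for a few of the parameters above: MaxConnections and MaxBurstLength resolve to the minimum of the two offers, InitialR2T to logical OR, and ImmediateData to logical AND, per the result functions RFC 7143 assigns to these keys; the offer dictionaries are hypothetical.

```python
def negotiate(initiator: dict, target: dict) -> dict:
    """Resolve a few common session keys from the two sides' offers."""
    out = {key: min(initiator[key], target[key])        # numeric keys: minimum
           for key in ("MaxConnections", "MaxBurstLength")}
    out["InitialR2T"] = initiator["InitialR2T"] or target["InitialR2T"]
    out["ImmediateData"] = initiator["ImmediateData"] and target["ImmediateData"]
    return out

print(negotiate(
    {"MaxConnections": 4, "MaxBurstLength": 1048576,
     "InitialR2T": False, "ImmediateData": True},
    {"MaxConnections": 1, "MaxBurstLength": 262144,
     "InitialR2T": True, "ImmediateData": True},
))
# {'MaxConnections': 1, 'MaxBurstLength': 262144,
#  'InitialR2T': True, 'ImmediateData': True}
```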

Security Measures

Authentication and Authorization

In iSCSI, authentication primarily occurs during the login phase to verify the identities of the initiator and target using the Challenge-Handshake Authentication Protocol (CHAP), which employs cryptographic hashes for secure challenge-response exchanges. CHAP supports bidirectional verification, where the target challenges the initiator, and the initiator computes a response based on a shared secret, a nonce challenge, and the MD5 algorithm (mandatory as algorithm 5 per RFC 3720). This process uses keys such as CHAP_N (name), CHAP_I (identifier), CHAP_C (challenge), and CHAP_R (response), all limited to 1024 bytes in binary form, ensuring the parties authenticate each other before proceeding to operational negotiation. Mutual CHAP extends this to full bidirectional authentication, allowing the initiator to challenge the target after initial verification, using separate secrets and identities for each direction to prevent reflection attacks. CHAP secrets must be at least 96 bits long or the TCP connection must be encrypted (e.g., via IPsec), randomly generated, and unique per peer.

Advanced authentication options include integration with RADIUS for centralized management of CHAP secrets, enabling a single credential store across multiple initiators while the RADIUS server verifies responses. Additionally, IKEv2 combined with IPsec provides robust confidentiality and integrity protection for the entire connection, recommended for high-security environments per RFC 7143, which mandates IPsec support and suggests IKEv2 alongside AES-CBC and HMAC-SHA1-96.

Authorization in iSCSI relies on access control lists (ACLs) implemented at the target to identify initiators by their iSCSI Qualified Name (IQN), restricting login attempts to authorized nodes only. LUN masking further enforces granular control by mapping specific logical unit numbers (LUNs) to initiator groups, denying access to unauthorized LUNs even if authentication succeeds, typically aligned with SCSI standards like SPC-3. The standards for these mechanisms evolved from the basic CHAP in RFC 3720 (2004), which introduced initial authentication but left vulnerabilities like dictionary attacks exposed over unencrypted channels, to the consolidated RFC 7143 (2014), which enhanced security through stronger secret requirements, IKEv2 integration, and removal of obsolete methods like SPKM to address such weaknesses.
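The CHAP exchange reduces to one hash computation: CHAP_R = MD5(CHAP_I || secret || CHAP_C), per RFC 1994 as adopted by iSCSI. The sketch below (hypothetical secret and helper name) shows a target issuing a challenge and checking the initiator's response.

```python
import hashlib
import os

def chap_response(chap_i: int, secret: bytes, chap_c: bytes) -> bytes:
    """CHAP_R = MD5(identifier octet || shared secret || challenge)."""
    return hashlib.md5(bytes([chap_i & 0xFF]) + secret + chap_c).digest()

# Target side: issue an identifier and a random nonce challenge.
chap_i = 1
chap_c = os.urandom(16)
secret = b"example-secret-of-at-least-96-bits"   # shared out of band

# Initiator side computes CHAP_R; the target recomputes and compares.
chap_r = chap_response(chap_i, secret, chap_c)
assert chap_r == chap_response(chap_i, secret, chap_c)
```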

Network Isolation Techniques

Network isolation techniques in iSCSI deployments are essential for enhancing security by limiting unauthorized access and improving performance by preventing interference from other traffic types. These methods segregate iSCSI traffic from general network communications, reducing exposure and ensuring reliable block-level transfers over Ethernet. Logical and physical isolation approaches, often combined, address vulnerabilities inherent to running iSCSI on shared IP networks, where storage traffic could otherwise be exposed to broader threats.

Logical isolation employs VLAN tagging as defined in IEEE 802.1Q to create virtual subnetworks that separate iSCSI traffic from general network flows. This technique allows multiple logical networks to coexist on the same physical infrastructure, with iSCSI assigned to a dedicated VLAN—such as VLAN 8—to prevent intermingling with application or management traffic. Additionally, Quality of Service (QoS) mechanisms using Differentiated Services (DiffServ) enable prioritization of iSCSI packets by marking them with specific Differentiated Services Code Point (DSCP) values, ensuring higher bandwidth allocation during congestion and minimizing delays for storage I/O operations; a socket-level marking sketch appears below.

Physical isolation involves deploying dedicated Ethernet switches and separate Network Interface Cards (NICs) exclusively for iSCSI traffic, avoiding shared networks that could lead to congestion or bottlenecks. For instance, servers may use distinct 10GbE NICs bonded for iSCSI, connected to specialized switches like those supporting iSCSI offload, while general traffic routes through separate hardware. This approach eliminates contention from non-storage workloads, providing a dedicated pathway that enhances throughput and redundancy through configurations like Link Aggregation Groups (LAGs). The Storage Networking Industry Association (SNIA) recommends such segregation to maintain predictable performance in iSCSI environments.

Best practices for iSCSI isolation include enabling jumbo frames with an MTU greater than 1500 bytes—typically 9000 bytes—across the entire iSCSI path to reduce overhead and improve efficiency for large block transfers. However, this must be paired with isolation techniques to mitigate risks like broadcast storms, which can propagate rapidly in shared domains and overwhelm the network; spanning tree (with portfast on edge ports) is advised on switches to prevent loops. Integration with software-defined networking (SDN) allows for dynamic segmentation, where policies automatically adjust VLAN assignments or traffic flows based on real-time needs, further optimizing isolation in virtualized setups. Authentication methods, such as CHAP, serve as a prerequisite for secure logins within these isolated segments.

These techniques primarily address risks such as eavesdropping on shared LANs, where unencrypted iSCSI sessions could be intercepted, and Denial-of-Service (DoS) attacks from non-storage traffic flooding the network. By adhering to SNIA guidelines, organizations can implement access controls, traffic filtering, and separate subnets to minimize these threats, ensuring iSCSI operates as a secure, high-performance storage transport.
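As one concrete form of the DiffServ marking described above, the sketch below sets a DSCP value on an initiator-side TCP socket before connecting to a portal; the address and the choice of code point (AF31 here) are assumptions, and on Linux the DSCP occupies the upper six bits of the IP_TOS byte.

```python
import socket

DSCP_AF31 = 26                       # example per-hop behavior for storage traffic

s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
s.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, DSCP_AF31 << 2)  # DSCP -> TOS byte
s.connect(("192.0.2.10", 3260))      # hypothetical target portal, default port
```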

Implementations and Extensions

Operating System Integration

Linux offers comprehensive iSCSI support through the Open-iSCSI project, which provides a high-performance, transport-independent initiator implementation compliant with RFC 3720. The Open-iSCSI package includes user-space utilities for configuration and management, along with essential kernel modules such as iscsi_tcp for TCP/IP transport, libiscsi, and scsi_transport_iscsi. Kernel-level iSCSI initiator functionality has been integrated since Linux 2.6.11, enabling reliable block storage access over IP networks. For iSCSI targets on Linux, the LIO (Linux-IO) framework serves as the standard in-kernel target implementation, introduced in kernel version 2.6.38 and replacing earlier user-space options like STGT. LIO supports multiple protocols including iSCSI and is configured using tools such as targetcli, a command shell for creating and managing storage targets. This setup allows Linux systems to export local devices as iSCSI targets efficiently within the kernel space.

Windows integrates iSCSI support natively through the Microsoft iSCSI Initiator, available as a software download since Windows Server 2003 and built-in starting with Windows Server 2008. The initiator enables clients to connect to remote iSCSI targets for storage access, with features like session management and CHAP authentication handled via the Microsoft iSCSI service. For targets, the Microsoft iSCSI Target Server role service, introduced in Windows Server 2012, allows servers to expose local volumes over iSCSI, supporting up to multiple terabytes of shared storage. This target integrates seamlessly with Windows Storage Spaces, enabling pooled storage resiliency and tiering for iSCSI-connected volumes.

Other operating systems provide varying levels of iSCSI integration. FreeBSD includes a native, kernel-based iSCSI initiator and target since version 10.0, using the iscsid daemon for initiator connections and ctld for target configuration, allowing direct SCSI command tunneling over IP. VMware ESXi offers built-in software iSCSI adapter support, enabling vSphere hosts to discover and mount iSCSI LUNs as datastores for virtual machine storage, with multipathing options via the vSphere client. In contrast, macOS lacks native iSCSI support and relies on third-party solutions, such as the globalSAN iSCSI Initiator for reliable connections to targets or KernSafe iSCSI Initiator X for basic client functionality.

As of 2025, recent kernel developments enhance iSCSI reliability in the Linux 6.x series through improved integration with Device Mapper Multipath (DM-Multipath) for load balancing and failover across multiple network paths, building on the Open-iSCSI stack, which does not support multiple connections per session (MC/S). Windows Server 2025 advances storage optimizations, including a native NVMe-oF initiator that complements iSCSI workflows in hybrid environments, delivering up to 60% higher IOPS on NVMe storage compared to prior versions while maintaining iSCSI compatibility.

Target and Bridge Solutions

iSCSI target solutions encompass both hardware appliances and software implementations that expose block storage over Ethernet networks. Hardware appliances, such as NetApp storage arrays, provide unified storage with native iSCSI support, allowing seamless integration into existing IP infrastructures for midrange enterprise environments. Similarly, many enterprise SAN systems function as iSCSI targets, enabling storage provisioning in environments where targets present logical unit numbers (LUNs) to initiators via Ethernet.

Open-source and freeware targets offer cost-effective alternatives for smaller deployments. TrueNAS, derived from FreeNAS, includes built-in iSCSI target capabilities that allow users to create and manage targets, extents, and LUNs through its web interface, supporting features like CHAP authentication for secure access. StarWind provides a free version of its Virtual SAN iSCSI software, which simplifies target creation and high-availability configurations for hyperconverged setups, leveraging iSCSI for efficient VM storage.

Specialized software targets extend beyond native operating system features. QNAP NAS systems incorporate iSCSI target services within their QTS or QuTS hero operating systems, enabling the creation of targets and LUNs for block-level access, often used in hybrid NAS/SAN scenarios. Microsoft's iSCSI Target Server, integrated into Windows Server as a role service under File and Storage Services, allows servers to export local or clustered storage as iSCSI targets, supporting scalability up to thousands of connections in virtualized environments.

Bridge and converter devices facilitate interoperability between iSCSI and legacy or alternative protocols. FC-to-iSCSI gateways, such as the Brocade iSCSI Gateway, enable migration from Fibre Channel by converting FC LUNs to iSCSI targets, preserving investments in existing arrays while transitioning to Ethernet-based networks. Cisco MDS switches with iSCSI gateway functionality route requests between hosts in an IP network and FC storage devices, supporting seamless protocol translation for hybrid fabrics. Ethernet-to-FCoE converters, including Marvell QLogic 2670 series adapters, encapsulate FC frames over Ethernet while supporting iSCSI offload, bridging traditional FC workflows to converged networks.

As of 2025, modern iSCSI targets increasingly integrate with NVMe/TCP for enhanced performance, where targets expose NVMe namespaces over TCP alongside iSCSI LUNs, reducing protocol overhead compared to traditional SCSI command handling in high-throughput scenarios. Cloud-based targets, such as AWS Storage Gateway's Volume Gateway mode, export iSCSI volumes backed by Amazon S3 or EBS, providing hybrid cloud storage with local caching for on-premises initiators.
