
Iperf

Iperf is an open-source tool for actively measuring the maximum achievable bandwidth on IP networks, supporting TCP, UDP, and SCTP protocols over both IPv4 and IPv6. It operates in client-server mode, where one instance acts as a server to receive traffic and the other as a client to send it, reporting metrics such as throughput, packet loss, jitter, and out-of-order delivery. Originally developed by the National Laboratory for Applied Network Research (NLANR)/Distributed Applications Support Team (DAST) as part of efforts to evaluate network performance in the late 1990s and early 2000s, Iperf has evolved into a widely used utility for network diagnostics and optimization. The original version, often referred to as Iperf2, was maintained by contributors including Jon Dugan and John Estabrook, but development stalled after the NLANR funding ended around 2006. In response to limitations in Iperf2, such as code complexity and lack of active maintenance, the Energy Sciences Network (ESnet) at Lawrence Berkeley National Laboratory initiated Iperf3 in 2010 as a complete redesign with no shared codebase, focusing on simplicity, portability, and modern features. Iperf3, the current primary version, is released under a three-clause BSD license and is principally developed by ESnet, with ongoing contributions from the community via its repository. Key capabilities include tunable parameters for buffers, algorithms, and test durations, enabling precise simulations of real-world scenarios like bulk transfers or latency-sensitive applications. While Iperf2 remains available for legacy use with limited community support through forums, Iperf3 is recommended for new deployments due to its JSON output for automation, its support for bidirectional testing, and its ongoing maintenance. The tool is commonly employed in enterprise networks, research environments, and cloud deployments to verify link capacities, identify bottlenecks, and validate expected performance.

Introduction

Purpose and Functionality

Iperf is an open-source, cross-platform command-line tool designed for active measurements of the maximum achievable bandwidth on IP networks. The original version was developed by the National Laboratory for Applied Network Research (NLANR)/Distributed Applications Support Team (DAST), while the modern version, Iperf3, is developed primarily by the Energy Sciences Network (ESnet) at Lawrence Berkeley National Laboratory. It enables network administrators and researchers to evaluate network performance by generating controlled traffic between endpoints, providing insights into potential bottlenecks and capacity limits. The tool's primary functionality involves creating synthetic traffic to assess the performance of the TCP, UDP, and SCTP protocols. For TCP tests, Iperf focuses on throughput, measuring the rate of data transfer while accounting for protocol overhead. In UDP mode, it additionally reports packet-loss percentages and jitter in milliseconds, which are critical for evaluating applications sensitive to delay variations. SCTP tests similarly emphasize throughput, supporting the multi-streaming capabilities inherent to the protocol. These metrics help diagnose issues such as congestion, insufficient bandwidth, or configuration problems without requiring specialized hardware. Iperf operates in a client-server model, where one instance runs as a server to listen on a specified port, and another acts as a client to initiate transmission or reception. This setup allows bidirectional testing and customization of parameters like test duration, buffer sizes, and traffic direction to simulate various network scenarios. Output includes interval-based reports and summaries, with throughput typically expressed in bits per second (e.g., Mbits/sec using base-1000 scaling, consistent with networking conventions). When configured for byte-based reporting (via the -f flag with uppercase units like M for megabytes), Iperf applies base-1024 scaling (1 MByte = 1,048,576 bytes) to align with binary memory conventions, specifically for payload throughput calculations. The tool supports a range of platforms, including Linux, Unix variants such as FreeBSD and macOS, and Windows (primarily through the iperf2 version). While iperf3 development emphasizes Linux as the primary platform, with official support for Ubuntu Linux, FreeBSD, and macOS, community reports indicate successful operation on additional systems such as Windows (via Cygwin or native builds) and other Unix-like operating systems. This broad compatibility makes Iperf suitable for diverse environments, from data centers to edge devices.
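
To illustrate these unit conventions with a worked example (the figures are hypothetical, not drawn from a real test): a run that delivers 125,000,000 payload bytes in 10 seconds corresponds to (125,000,000 × 8) / 10 = 100,000,000 bits per second, reported as 100 Mbits/sec under base-1000 scaling, whereas the same volume expressed in byte units is 125,000,000 / 1,048,576 ≈ 119.2 MBytes under base-1024 scaling.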

Basic Architecture

Iperf operates on a client-server paradigm, where one instance functions as the server and another as the client to measure performance between them. In server mode, invoked with the -s flag, the process binds to a specified port and listens passively for incoming connections from clients. The client mode, activated via the -c flag followed by the server's address and optionally its port, establishes the connection and initiates the data transfer, allowing for controlled testing of throughput and other metrics between endpoints. The operational flow involves the client generating and sending streams of packets to the server over TCP or UDP, with the server either absorbing the traffic (in standard mode) or echoing it back as needed. By default, the client transmits to the server for a duration of 10 seconds, configurable through options like -t for time or -n for byte count, enabling precise control over the test scope. This unidirectional flow can be extended to bidirectional testing: the -R flag reverses the direction so the server sends to the client, while the -d flag enables dual testing where both directions occur simultaneously, providing insights into asymmetric network behaviors. Iperf utilizes port 5201 by default for both control and data communications, which can be customized using the -p option to accommodate firewall rules or specific deployment requirements. To optimize throughput on multi-core systems, the architecture supports parallel streams via the -P flag, which launches multiple concurrent connections (defaulting to one) to better utilize available CPU resources and saturate higher-bandwidth links. While earlier versions like Iperf2 employ multi-threading for handling parallel streams, Iperf3 versions prior to 3.16 were largely single-threaded per process, with parallelism achieved through multiple processes; starting from version 3.16 (released December 2023), Iperf3 supports multi-threading with one thread per stream to improve throughput on modern hardware.
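
A minimal sketch of this model, with <server_ip> standing in for the server's address: the receiving host runs iperf3 -s (optionally with -p to select a non-default port), and the sending host runs iperf3 -c <server_ip> -t 10 -P 4 for a 10-second test across four parallel streams; appending -R to the client command instead has the server transmit toward the client.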

History and Development

Origins and Early Development

Iperf originated as a project of the Distributed Applications Support Team (DAST) within the National Laboratory for Applied Network Research (NLANR), a collaborative effort involving academic institutions and supported by the National Science Foundation to advance high-speed networking research. The tool was conceived as an improved reimplementation of the longstanding ttcp (Test TCP) utility, which had been widely used since the 1980s for basic throughput measurements but suffered from limitations such as obscure command-line options and insufficient support for modern protocol tuning. The primary motivation was to create a more robust, user-friendly open-source instrument for evaluating TCP and UDP performance in research and academic environments, where proprietary network testing tools were often inaccessible or restrictive for collaborative experimentation. The initial development was led by Mark Gates and Alex Warshavsky at NLANR, with the first public release, version 1.1.1, occurring in February 2000. Written in C++, Iperf was designed from the outset to compile across multiple platforms, including Linux, other Unix variants, and AIX, emphasizing portability for diverse research infrastructures. It adopted a permissive BSD-style license under the copyright of the University of Illinois (with certain components under the GNU Library General Public License), which facilitated broad dissemination and modification within the open-source community without restrictive clauses. This licensing approach aligned with NLANR's goals of promoting freely available tools for network optimization in non-commercial settings. From its early days, Iperf saw rapid adoption in high-performance computing (HPC) and grid computing initiatives, where it served as a standard tool for establishing baseline network assessments in distributed systems. For instance, by 2001, it was employed in projects evaluating wide-area network performance for scientific workloads, such as bulk data transfers in research collaborations, due to its ability to isolate protocol-specific bottlenecks without the overhead of application-layer simulations. This integration into grid environments, including early Globus Toolkit deployments, underscored Iperf's role in enabling reproducible measurements for tuning the high-latency, high-bandwidth links essential to emerging computational grids.

Evolution of Versions

Iperf2, the continuation of the original Iperf tool developed by NLANR/DAST, saw its stable versions emerge around 2003 and continued development through the mid-2000s to early 2010s, with maintenance hosted on SourceForge. The project experienced periods of stagnation, particularly after version 2.0.5 in 2010, leading to unresolved bugs and limited updates. Its most recent stable release, version 2.2.1, arrived in November 2024, incorporating minor fixes such as man page updates and alpha-level support for UDP L4S on Linux. The transition to Iperf3 began in 2009, when ESnet (Energy Sciences Network) initiated development due to Iperf2's stagnation and the need for a reliable tool to measure performance on emerging high-speed networks exceeding 10 Gbps. This effort represented a handover from the original NLANR maintainers to ESnet at Lawrence Berkeley National Laboratory, aiming for a cleaner codebase to facilitate broader contributions. The first Iperf3 release arrived on January 26, 2014, marking a complete redesign that is not backward-compatible with Iperf2. Key drivers for Iperf3's evolution included addressing Iperf2's multi-threading limitations, which struggled with scalability and efficiency in high-speed environments despite supporting multiple streams via threads. In contrast, early Iperf3 emphasized a single-threaded design for simplicity and reliability, later enhanced with multi-threading per stream in 2023, alongside features like JSON output for automated parsing. The project also introduced the libiperf library to enable programmatic integration into other applications. Iperf3's milestones include its stable release 3.18 on December 13, 2024, followed by 3.19 in May 2025, 3.19.1 on July 25, 2025, and the latest 3.20 on November 14, 2025, with improvements in build compatibility and new protocol support like MPTCPv1. Development shifted to the GitHub repository under ESnet, fostering ongoing community contributions through pull requests and issue tracking, while Iperf2 remains separately maintained on SourceForge.

Iperf2

Key Features

Iperf2 is the original version of the tool, providing core bandwidth measurement capabilities with support for TCP, UDP, and SCTP protocols over IPv4 and IPv6. It measures throughput, one-way latency, round-trip time (RTT), and other metrics such as packet loss and jitter, particularly in UDP mode. The tool uses a multi-threaded model to handle parallel streams via the -P option, allowing multiple concurrent connections for simulating high-load scenarios and better utilizing multi-core systems. Enhanced reporting is available with the -e flag, offering detailed statistics including bandwidth per interval, lost packets, and round-trip times, beyond the standard output. Unlike Iperf3, Iperf2 includes native support for multicast testing, including source-specific multicast (SSM), enabling bandwidth assessments for group communications without additional setup. Iperf2 supports full-duplex and alternating bidirectional testing with the -d and -r options, respectively, to evaluate simultaneous and sequential two-way performance. It also provides a zero-copy send mode with -Z to reduce CPU overhead in high-throughput tests. Advanced options include tuning the read/write buffer size (-l), reporting intervals (-i), and bandwidth limits (-b for UDP). SCTP testing is enabled with the --sctp flag on supported platforms such as Linux and FreeBSD. The latest release, version 2.2.1, was made available on November 4, 2024, incorporating bug fixes and improvements for compatibility across platforms, including Windows and Unix-like systems.
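
As a hedged illustration combining several of these options (the address, rate, and sizes are placeholders): starting the server with iperf -s -u -e -i 1 yields enhanced per-second UDP reports, while a client invoked as iperf -c <server_ip> -u -b 50M -l 1200 -t 30 -e sends 1200-byte datagrams at a 50 Mbit/s target rate for 30 seconds with the same enhanced statistics.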

Usage and Configuration

Iperf2 operates in client-server mode, with the server invoked using iperf -s on the receiving host, binding to the default TCP/UDP port 5001. The client connects with iperf -c <server_ip>, running a 10-second throughput test by default and reporting the average bandwidth. Test duration can be adjusted with -t <seconds>, e.g., iperf -c <server_ip> -t 30. For UDP testing, add the -u flag to both server and client commands. The server runs iperf -s -u, while the client specifies a target bandwidth to prevent network saturation, such as iperf -c <server_ip> -u -b 100M for 100 Mbit/s. UDP mode reports additional metrics like loss percentage and jitter. To enable parallel streams, use -P <n> on the client, e.g., iperf -c <server_ip> -P 4 to create four streams and aggregate results. Reverse mode (-R) measures server-to-client throughput, useful for asymmetric links. Enhanced output for detailed parsing is activated with -e, such as iperf -c <server_ip> -e -i 1 for per-second reports. Reporting intervals are set with -i <seconds>, defaulting to 1 second; use -i 0 to disable periodic reports. Packet length is configurable via -l <bytes>, defaulting to 128 KB for TCP and 1470 bytes for UDP. For bidirectional testing, -d runs full-duplex (simultaneous), while -r uses half-duplex (alternating). Multicast UDP tests require the client to send to the group address, e.g., iperf -c <multicast_group> -u -T <ttl>, while the server joins the group by binding to it with -B <multicast_group>. Server options include running as a daemon with -D (iperf -s -D) and binding to a specific port with -p <port>. Output interpretation includes bandwidth in bits/sec (e.g., "100 Mbits/sec"), transfer bytes, and for UDP, loss details (e.g., "0.1% 10/10000"). Jitter and RTT are reported where applicable, providing insights into link capacity and stability.
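
For a persistent deployment sketch (the port and address values are placeholders): iperf -s -D -p 5999 starts a background server listening on port 5999, and iperf -c <server_ip> -p 5999 -t 30 -i 5 then runs a 30-second TCP test against it with reports every five seconds.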

Iperf3

Improvements and Differences

Iperf3 represents a complete redesign from iperf2, lacking backwards compatibility due to changes in command-line syntax and internal architecture, such as the use of -B for binding to a specific interface instead of -b, which in iperf2 controls the target bandwidth for UDP tests. This incompatibility necessitates separate installations and configurations when migrating from iperf2, as iperf3 servers cannot accept connections from iperf2 clients and vice versa. Unlike iperf2's multi-threaded approach, early versions of iperf3 (prior to 3.16) operated single-threaded, relying on CPU affinity settings via the -A flag to optimize performance on multi-core systems by pinning processes to specific cores. However, starting with 3.16 in late 2023, iperf3 introduced multi-threading with one thread per stream, addressing limitations in handling parallel connections more efficiently than its initial design. Performance enhancements in iperf3 focus on high-speed networks exceeding 10 Gbps, achieving up to 160 Gbps on 200 Gbps paths through features like zero-copy mode (-Z), which uses sendfile() for reduced CPU overhead, and improved buffer management. It also offers greater accuracy in measuring jitter and packet loss, particularly for UDP tests, by incorporating larger socket buffers (e.g., via -w2M) and refined timing mechanisms that mitigate issues on lossy networks, outperforming iperf2's earlier buggy UDP implementations in versions before 2.0.8. These improvements stem from iperf3's simpler, smaller codebase, which reduces overhead and enhances reliability for modern, high-bandwidth environments. Iperf3 introduces JSON output support with the -J flag, enabling structured data for scripting, automation, and integration into larger monitoring systems, a capability absent in iperf2. Complementing this, the libiperf library provides an API for embedding iperf3 functionality into custom applications, allowing extensible reporting and programmatic control without invoking the standalone tool. On the security front, iperf3 adds optional RSA-based authentication using public-private keypairs to encrypt credentials, configurable via dedicated flags for private and public key files, along with controlled test modes like reverse direction (-R) and file input simulation (-F) to limit exposure in sensitive deployments; version 3.17 in 2024 addressed a related vulnerability in this feature. Official support for iperf3 is primarily limited to Ubuntu Linux, FreeBSD, and macOS, while Windows requires third-party ports or Cygwin environments due to the absence of native maintenance. Since taking over development in 2014, ESnet at Lawrence Berkeley National Laboratory has driven iperf3's evolution under a BSD license, with regular releases, including version 3.19 on May 16, 2025, which added multi-path TCP support, and version 3.20 on November 14, 2025, featuring output improvements and bug fixes. This shift from iperf2's broader but less focused platform compatibility emphasizes iperf3's role as a streamlined tool for research and production networks.
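
A hedged sketch of a high-throughput invocation drawing on these options (core numbers, window size, and address are illustrative placeholders): iperf3 -s -A 2 pins the server to CPU core 2, while iperf3 -c <server_ip> -Z -w 2M -A 3 -t 30 runs a 30-second zero-copy test with a 2 MB socket buffer from a client pinned to core 3.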

Key Features

Iperf3 distinguishes itself through a range of standalone capabilities optimized for contemporary environments, emphasizing efficiency, flexibility, and integration. Its threading model, updated in version 3.16 to support multi-threading, allocates one thread per test stream, enabling efficient utilization of multiple CPU cores during parallel tests via the -P option. This approach, combined with CPU pinning through the -A flag (available on Linux and FreeBSD), allows users to bind processes to specific cores, reducing context-switching overhead and improving performance on multi-core systems. Reporting features are comprehensive, providing detailed interval-based statistics (-i option) on throughput, transferred data, and retransmissions, alongside CPU utilization metrics for both sender and receiver. The tool supports reverse mode (-R) for distinct sender-receiver separation and zero-copy operations (-Z) to eliminate unnecessary memory copies, thereby lowering CPU overhead in high-throughput scenarios. JSON output (-J) further facilitates automated parsing and integration with monitoring systems. For programmatic use, iperf3 includes the libiperf library (built as both shared and static variants), offering a C API that allows developers to embed testing directly into applications for custom bidirectional or unidirectional tests without invoking the standalone executable. IPv6 support encompasses native addressing via the -6 flag, ensuring compatibility with dual-stack environments. SCTP testing is likewise handled natively on supported platforms, enabling multi-streamed measurements without additional configuration (multicast, by contrast, remains an iperf2-only capability). Advanced test controls permit precise tuning of TCP parameters, including the maximum segment size (-M, e.g., to match MTU constraints), the IP type-of-service field (-S for QoS prioritization), and congestion control algorithms (-C, such as cubic or bbr on supported platforms). Version 3.18, released on December 13, 2024, included bug fixes for SCTP (such as adding details to JSON output and resolving compilation issues on systems without SCTP support) and addressed threading-related segmentation faults and signal handling. Subsequent releases, including 3.19 (May 2025) with multi-path TCP support and 3.20 (November 2025) with further improvements, have continued to advance the tool.
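
As a hedged configuration sketch exercising these controls together (all values are placeholders chosen for illustration): iperf3 -c <server_ip> -P 4 -A 2,3 -M 1460 -S 0x10 -C bbr -J launches four parallel streams, pins the client and server sides to cores 2 and 3, caps the maximum segment size at 1460 bytes, sets the type-of-service byte, requests the BBR congestion control algorithm where available, and emits JSON for downstream tooling.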

Usage and Configuration

Iperf3 operates in a client-server model, requiring the tool to be invoked separately on two endpoints. To initiate the server, execute iperf3 -s on the listening machine, which binds to the default port 5201 and awaits incoming client connections. On the client, the basic command is iperf3 -c <server_ip>, establishing a TCP connection and measuring throughput for a default duration of 10 seconds; the transmission time can be adjusted with the -t flag, such as iperf3 -c <server_ip> -t 30 to run for 30 seconds. For UDP-based measurements, specify the -u flag on both server and client to switch protocols. The server command becomes iperf3 -s -u, while the client requires a target bandwidth to avoid overwhelming the network, set via -b or --bandwidth, for instance iperf3 -c <server_ip> -u -b 100M to target 100 Mbit/s. UDP tests inherently report additional metrics like packet loss, making this configuration suitable for assessing datagram reliability. Machine-readable output is enabled with the -J or --json flag, producing structured data that can be parsed programmatically; a common usage is iperf3 -c <server_ip> -J > output.json, which captures results including throughput and loss statistics in JSON format for automated analysis. This builds on iperf3's JSON support, allowing integration into scripts or monitoring tools. To simulate multi-stream traffic, use the -P or --parallel flag on the client, such as iperf3 -c <server_ip> -P 4 to launch four concurrent streams and aggregate their throughput. Reverse testing, where the server transmits to the client, is invoked with -R or --reverse, e.g., iperf3 -c <server_ip> -R, useful for evaluating asymmetric network paths. Advanced options refine test precision and behavior. The -i or --interval flag controls reporting frequency, with iperf3 -c <server_ip> --interval 1 generating bandwidth summaries every second (the default interval is 1 second, while 0 disables periodic reports). Packet size is tunable via -l or --len, setting the buffer length to a specified value in bytes (e.g., -l 64K for 64 KB packets, defaulting to 128 KB for TCP and 8 KB for UDP). For fair-queuing-based pacing on Linux systems, --fq-rate limits the rate per stream at the socket level, such as --fq-rate 50M to cap at 50 Mbit/s and reduce bursts. Server configuration options include daemonization with -D or --daemon, running iperf3 -s -D to operate in the background without tying up the terminal. Output can be directed to a file using --logfile <file>, e.g., iperf3 -s --logfile iperf.log, for persistent logging of test results and errors. Interpreting iperf3 output involves parsing the console reports, which detail performance per interval and overall. Key metrics include bandwidth in bits per second (e.g., "10.0 Mbits/sec" indicating sustained throughput), transfer volume in bytes, and for UDP, the number of lost and duplicated datagrams alongside a loss percentage (e.g., "0/1000 (0%)" showing no losses out of 1000 packets sent). Jitter in milliseconds is also reported for UDP to quantify variability, while TCP outputs may include a retransmission count on supported platforms. These elements provide a clear view of network capacity and reliability without requiring external tools.
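
A hedged end-to-end sketch combining several of the options above (the address and rate are placeholders): the server is started persistently with iperf3 -s -D --logfile iperf.log, and the client runs iperf3 -c <server_ip> -u -b 100M -t 30 -i 1 -J > output.json, yielding per-second reports plus a JSON document whose summary fields can be consumed by scripts or monitoring systems.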

Advanced Capabilities

Protocol Support and Options

Iperf supports multiple transport protocols, including TCP, UDP, and SCTP, allowing users to evaluate network performance under different reliability and congestion control behaviors. TCP mode, the default, provides reliable, ordered delivery and measures throughput as the effective data transfer rate excluding protocol headers and retransmissions. As of iPerf3 version 3.19 (released May 2025), support for Multipath TCP (MPTCP) is available on Linux, allowing aggregation of multiple paths for improved throughput and resilience. UDP mode enables unreliable, connectionless transmission to assess bandwidth limits, packet loss, and jitter, while SCTP offers a message-oriented approach with reliable delivery, multi-streaming, and multi-homing capabilities similar to TCP but with UDP-like message boundaries. In TCP mode, key tuning options include the -w or --window flag to adjust the socket buffer size, which influences the congestion window and can optimize throughput based on the bandwidth-delay product; for instance, increasing it to 128 KB or more often improves performance on high-latency links. The -N or --no-delay option disables Nagle's algorithm, reducing latency by sending small packets immediately without coalescing, which is beneficial for interactive applications. Additionally, the -C or --congestion flag specifies the congestion control algorithm, such as Cubic or BBR on Linux, enabling comparisons of different strategies' impact on throughput. UDP mode uses the -u flag to switch protocols and includes the -b or --bitrate option to set a target sending rate, defaulting to 1 Mbit/s, which helps simulate constrained traffic like streaming media. The -l or --length flag controls datagram size, typically 1472 bytes for IPv4 to avoid fragmentation, and Iperf reports metrics such as lost packets as a percentage and one-way jitter in milliseconds, aiding evaluation of real-time applications like VoIP or video conferencing. These reports quantify loss and delay variation without retransmissions, highlighting network stability under bursty, unreliable conditions. SCTP support is invoked via the --sctp flag on compatible systems such as Linux and FreeBSD, providing reliable transport with unordered delivery options and inherent heartbeat mechanisms for path monitoring in multi-homed setups. The --nstreams option enables multi-streaming, allowing up to 65535 independent streams per association to reduce head-of-line blocking, which is advantageous for applications requiring UDP-style independence but with TCP-like reliability, such as telephony signaling. Heartbeat intervals are managed by the underlying SCTP implementation rather than Iperf-specific tuning, ensuring failover detection in redundant paths. Cross-protocol features include -4 or --version4 to restrict tests to IPv4 and -6 or --version6 for IPv6, facilitating dual-stack testing without address resolution issues. For multicast, testing is supported in iPerf2 by binding to group addresses in the 224.0.0.0 to 239.255.255.255 range using the -B flag, with the TTL controlled via -T for scope limitation, enabling group communication benchmarks. Common tuning options span protocols, such as buffer sizes via -w for both TCP and UDP, the -i or --interval flag for periodic reporting every n seconds (default 1), and careful payload sizing with -l to align with the network MTU and prevent fragmentation, ensuring accurate measurement of unfragmented throughput.
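
Hedged examples of these protocol-specific options (addresses, rates, and sizes are placeholders): a TCP test tuned for a high-latency path might use iperf3 -c <server_ip> -w 512K -N -C bbr; a UDP real-time simulation might use iperf3 -c <server_ip> -u -b 5M -l 1472; and an SCTP multi-streaming test on a supported platform might use iperf3 -c <server_ip> --sctp --nstreams 4.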

Performance Measurement Details

Iperf calculates throughput as the total volume of data transferred divided by the elapsed time, expressed in bits per second. Specifically, the bandwidth is determined by the formula: bandwidth = (transferred bytes × 8) / time, where transferred bytes represent the payload data excluding protocol overhead in TCP mode. This measurement focuses on the achievable rate under controlled conditions, providing a baseline for network capacity without accounting for higher-layer encapsulations. For UDP tests, Iperf reports the loss rate as the percentage of datagrams not received, computed as (lost datagrams / total sent datagrams) × 100. Jitter is calculated as the smoothed mean of the absolute differences between consecutive packet inter-arrival times, following the methodology in RFC 1889, and is reported in milliseconds to quantify variations in delay that could impact real-time applications. In TCP mode, Iperf tracks retransmissions as the count of segments resent due to loss or reordering, displayed in the "Retr" column of interval reports, which helps identify congestion or error rates along the path. It also monitors duplicate acknowledgments internally to trigger fast recovery mechanisms but does not report their count explicitly in standard output. Round-trip time (RTT) is not directly measured but can be estimated indirectly through the bandwidth-delay product, derived from observed throughput and window sizes during the stream. Several factors influence the accuracy of Iperf measurements. High CPU load on the host can limit packet processing rates, leading to underreported throughput; this is mitigated by enabling zero-copy mode (-Z option) to reduce data copying overhead. Kernel tuning, such as adjusting socket buffer sizes via sysctl parameters like net.core.rmem_max and net.ipv4.tcp_rmem, allows larger TCP windows (-w option in Iperf) to better handle high bandwidth-delay paths, while network stack overhead from interrupts or offloading features can introduce variability if not optimized. Iperf provides reporting at configurable granularities, with the -i option setting the interval (in seconds) for periodic summaries of metrics like throughput and transfer volume, alongside a final total for the entire test duration. In bidirectional tests (-d for simultaneous or -r for half-duplex), the sender reports outbound throughput while the receiver captures inbound metrics, offering perspectives from both endpoints to detect asymmetries. Iperf internally uses binary prefixes for byte-based units (e.g., MiB/s, where 1 MiB = 1024² bytes) but labels them as MBytes/sec in output for simplicity, contrasting with decimal SI prefixes (MB/s = 1000² bytes) common in user-facing contexts; bit-based units like Mbits/sec always employ decimal scaling (1 Mbit = 10⁶ bits) to align with networking standards.
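
For instance (an illustrative calculation, not measured data): a TCP test that transfers 1,250,000,000 payload bytes in 10 seconds yields (1,250,000,000 × 8) / 10 = 1,000,000,000 bits per second, reported as 1.00 Gbits/sec, while a UDP run that loses 25 of 50,000 datagrams reports a loss rate of (25 / 50,000) × 100 = 0.05%.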

Applications and Use Cases

Network Testing Scenarios

Iperf serves as a primary tool for conducting baseline throughput tests in end-to-end network assessments between hosts, enabling the establishment of overall capacity limits. In local area networks (LANs), these tests typically reveal throughputs approaching hardware maxima, such as gigabit speeds with minimal loss, while wide area network (WAN) evaluations highlight constraints from distance and intermediate hops, often capping performance at lower rates like 45 Mbit/s over a DS3 link without tuning. Such scenarios provide a foundational baseline for network validation, distinguishing local from remote path behaviors. For capacity testing in data centers, Iperf facilitates link saturation through high-bandwidth streams, including multiple parallel connections, to pinpoint bottlenecks in high-throughput environments. By generating traffic that stresses the available capacity, these tests quantify achievable rates, such as increasing from 16.5 Gbit/s with a single stream to 26.1 Gbit/s using multiple streams (with tuning), aiding in infrastructure scaling decisions. This approach is particularly valuable for evaluating aggregate bandwidth in clustered systems, where individual link performance aggregates to overall system limits. In wireless testing, Iperf's UDP mode measures performance for real-time applications like video streaming, focusing on jitter and datagram loss under variable conditions. Tests simulate constant-bit-rate streams to assess how interference or signal degradation impacts delivery, with jitter values like 0.243 ms indicating suitability for latency-sensitive services over 802.11 networks. Cloud environments leverage Iperf for VPC bandwidth testing in platforms such as AWS, where end-to-end measurements between instances or against public servers validate inter-region or intra-VPC capacities. For example, TCP and UDP streams confirm expected throughputs up to instance type limits, such as 10 Gbit/s on high-performance instances (as of 2019), ensuring compliance with provider specifications. Iperf integrates effectively with tools like ping to correlate latency and throughput, using round-trip time (RTT) measurements, such as 42 ms over a path, to calculate the bandwidth-delay product and optimize TCP window parameters for balanced performance. This combination supports comprehensive scenario analysis without relying on isolated metrics.
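
As a hedged illustration of that correlation (the numbers are chosen for arithmetic convenience, not measured): with an RTT of 42 ms and a target rate of 1 Gbit/s, the bandwidth-delay product is 1,000,000,000 bits/s × 0.042 s = 42,000,000 bits, or about 5.25 MB, suggesting a TCP window (-w) of at least that size to keep such a path full.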

Troubleshooting and Diagnostics

Iperf3 facilitates packet-loss diagnosis through UDP mode tests, where the tool sends datagrams at a specified bitrate and reports the percentage of lost packets, helping to identify issues like rate policing or quality-of-service (QoS) policies that drop traffic. For instance, running iperf3 -s -u on the server and iperf3 -c <server> -u -b 2G -t 60 on the client generates a 2 Gbps stream for 60 seconds, revealing consistent low-level loss that may indicate physical problems such as dirty fiber connectors if observed in both directions. This approach is particularly effective for isolating drops caused by intermediate devices enforcing rate limits or prioritization rules, as UDP lacks retransmission mechanisms, unlike TCP. Latency and jitter analysis in Iperf3 leverages UDP tests with short bursts to measure end-to-end variability, which is critical for diagnosing issues in real-time applications like VoIP or video streaming where even minor fluctuations can degrade quality. By specifying interval reporting with -i 1 (e.g., iperf3 -c <server> -u -b 10M -i 1), the tool outputs jitter values in milliseconds alongside lost packets, allowing users to detect variability due to queuing delays or routing inconsistencies. These metrics provide insight into network stability without the confounding effects of TCP retransmissions, though users should note that jitter calculations assume uniform packet spacing and may require adjusted datagram sizes (e.g., -l 1470) for accurate low-bandwidth scenarios. To identify bottlenecks, Iperf3 employs varying numbers of parallel streams via the -P option, enabling differentiation between CPU limitations on the endpoints and actual network constraints. For example, a single-stream test (iperf3 -c <server>) might yield lower throughput due to single-threaded processing in older versions, but increasing to -P 4 can saturate multiple CPU cores and reveal whether the network path supports higher rates, as seen in multi-threaded builds after 3.16 that scale to 160 Gbps on 200 Gbps links. If throughput plateaus despite additional streams, it signals endpoint resource exhaustion rather than path capacity issues. Asymmetric network problems, such as differing upload and download performance due to duplex mismatches or firewall policies, are diagnosed using Iperf3's reverse mode (-R), which inverts the data flow so the server transmits to the client. This is invoked with iperf3 -c <server> -R, allowing direct comparison against standard mode results to highlight directional discrepancies, for instance in environments with TCP Segmentation Offload (TSO) issues where reverse tests succeed while forward ones fail. Such tests are essential for uncovering hidden asymmetries not evident in unidirectional benchmarks. Common diagnostics with Iperf3 include cross-verifying results against external speed tests to rule out tool-specific artifacts and employing log files for detailed post-analysis. Output can be saved via --logfile output.log (e.g., iperf3 -c <server> --logfile output.log), capturing timestamps, interval reports, and summary data for offline review, while --forceflush ensures real-time updates during long runs. This aids in correlating Iperf3 metrics with system logs or packet captures, providing a verifiable record for iterative troubleshooting.
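
A hedged sketch of such a directional comparison (the address and rate are placeholders): run iperf3 -c <server_ip> -u -b 500M -t 30, then repeat the identical command with -R appended; a large gap between the two runs' loss or throughput figures points to an asymmetric path, a duplex mismatch, or a one-directional policy rather than a general capacity limit.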
